Compare commits

...

82 Commits

Author SHA1 Message Date
Lluis Agusti
ee11623735 chore: vercel preview 2025-07-14 16:14:58 +04:00
Lluis Agusti
0bb160e930 chore: generate 2025-07-14 15:38:33 +04:00
Lluis Agusti
81a09738dc chore: CAPTCHA 2025-07-14 15:23:39 +04:00
Lluis Agusti
6feedafd7d Merge 'dev' into 'feat/agent-notifications' 2025-07-14 15:05:36 +04:00
Lluis Agusti
547da633c4 Merge 'dev' into 'feat/agent-notifications' 2025-07-14 14:41:01 +04:00
Ubbe
fde3533943 fix(frontend): logout pages design adjustments (#10342)
## Changes 🏗️

- Put the `Continue with Google` button below the other button on the forms
( _to confirm with design_ )
- Ensure some vertical spacing so the forms don't end up touching the
header on small screens
- Apply style adjustments requested by design to the navbar links

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Check the above

### For configuration changes:

None
2025-07-14 10:28:09 +00:00
Ubbe
a789f87734 fix(frontend): disable Cloudflare on Vercel previews (#10354)
## Changes 🏗️

Disable the Cloudflare check:

<img width="600" height="861" alt="Screenshot 2025-07-11 at 18 51 46"
src="https://github.com/user-attachments/assets/792ecca0-967e-4cef-a562-789125452d2f"
/>

On Vercel previews, so we can use previews for testing frontend-only
changes.

Vercel previews have dynamically generated URLs:
```
https://{branch}-{commit}-significant-gravitas.vercel.app/login
```

So if Cloudflare does not support URL wildcards, we will need to do this
🙇🏽 ( _as an experiment_ )

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] You can login on the preview
  
### For configuration changes:

None
2025-07-14 10:27:56 +00:00
Abhimanyu Yadav
0b6e46d363 fix(frontend): fix my agent count in the library (#10357)
Currently, the my-agents count shows the number of agents initially loaded
in the library, and then grows as more agents are loaded through pagination.

### Changes 🏗️
- I’ve used `total_items` from the pagination response to show the
correct count.

### Demo

https://github.com/user-attachments/assets/b9a2cf18-c9fc-42f8-b0d4-3f8a7ad3cbc5


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manually tested everything, and it works fine.
2025-07-14 10:20:33 +00:00
Muhammad Ehsan
6ffe57c3df fix(docs): Updated Discord Badge in README for Better Visibility (#10360)
### Motivation 💡

The previous Discord badge in the README used `dcbadge.vercel.app`,
which often fails to render correctly and displays an invalid or broken
badge.

### Changes 🛠️

- Replaced the broken badge with a `shields.io` Discord badge that is
visually consistent with the Twitter badge
- Ensures clearer visual guidance and a more professional appearance

### Notes ✏️

This PR only updates the `README.md`; no frontend, backend, or
configuration files are touched. This change improves the aesthetics and
onboarding experience for new contributors.

Screenshot of the issue:
<img width="405" height="47" alt="Screenshot 2025-07-12 175316"
src="https://github.com/user-attachments/assets/41f7355c-f795-4163-855f-3d01f2478dd7"
/>

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Bently <Github@bentlybro.com>
Co-authored-by: Bently <tomnoon9@gmail.com>
2025-07-14 09:56:32 +00:00
Bently
3ca0d04ea0 fix(readme): Removes MIT icon from readme (#10366)
This PR simply removes the MIT icon from the main README.md.
2025-07-14 09:40:29 +00:00
Zamil Majdy
c2eea593c0 fix(backend): Include node execution steps and cost of sub-graph execution (#10328)
## Summary
This PR enhances the node execution stats tracking system to properly
handle nested graph executions and additional cost/step metrics:

- **Add extra_cost and extra_steps fields** to `NodeExecutionStats`
model for tracking additional metrics from sub-graphs
- **Update AgentExecutorBlock** to merge nested execution stats from
sub-graphs into the parent execution
- **Fix stats update mechanism** in `execute_node` to use in-place
updates instead of `model_copy` for better performance
- **Add proper tracking** of extra costs and steps in graph execution
stats aggregation

## Changes Made
- Modified `backend/backend/data/model.py` to add `extra_cost` and
`extra_steps` fields
- Updated `backend/backend/blocks/agent.py` to merge stats from nested
graph executions
- Fixed `backend/backend/executor/manager.py` to properly update
execution stats and aggregate extra metrics
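
A rough sketch of the merge described above, for illustration only ( _the `extra_cost` / `extra_steps` field names are from this PR; the other fields and the helper are simplified stand-ins, not the actual model in `backend/backend/data/model.py`_ ):

```python
from pydantic import BaseModel


class NodeExecutionStats(BaseModel):
    # Illustrative subset of the per-node metrics
    cost: int = 0
    steps: int = 0
    # New fields: metrics bubbled up from nested (sub-graph) executions
    extra_cost: int = 0
    extra_steps: int = 0


def merge_sub_graph_stats(parent: NodeExecutionStats, sub: NodeExecutionStats) -> None:
    """Fold a nested graph execution's totals into the parent's stats,
    updating in place rather than via model_copy."""
    parent.extra_cost += sub.cost + sub.extra_cost
    parent.extra_steps += sub.steps + sub.extra_steps
```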

## Test Plan
- [x] Verify that nested graph executions properly propagate their stats
to parent graphs
- [x] Test that extra costs and steps are correctly tracked and
aggregated
- [x] Ensure debug logging provides useful information for monitoring
- [x] Run existing tests to ensure no regressions
- [x] Test with multi-level nested agent graphs

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-14 09:01:15 +00:00
Lluis Agusti
6d13dfc688 chore: empty state 2025-07-11 19:56:12 +04:00
Reinier van der Leer
36f5f24333 feat(platform/builder): Builder credentials support + UX improvements (#10323)
- Resolves #10313
- Resolves #10333

Before:


https://github.com/user-attachments/assets/a105b2b0-a90b-4bc6-89da-bef3f5a5fa1f
- No credentials input
- Stuttery experience when panning or zooming the viewport

After:


https://github.com/user-attachments/assets/f58d7864-055f-4e1c-a221-57154467c3aa
- Pretty much the same UX as in the Library, with fully-fledged
credentials input support
- Much smoother when moving around the canvas

### Changes 🏗️

Frontend:
- Add credentials input support to Run UX in Builder
  - Pass run inputs instead of storing them on the input nodes
- Re-implement `RunnerInputUI` using `AgentRunDetailsView`; rename to
`RunnerInputDialog`
    - Make `AgentRunDraftView` more flexible
    - Remove `RunnerInputList`, `RunnerInputBlock`
- Make moving around in the Builder *smooooth* by reducing unnecessary
re-renders
  - Clean up and partially re-write bead management logic
- Replace `request*` fire-and-forget methods in `useAgentGraph` with
direct action async callbacks
- Clean up run input UI components
  - Simplify `RunnerUIWrapper`
- Add `isEmpty` utility function in `@/lib/utils` (expanding on
`_.isEmpty`)
- Fix default value handling in `TypeBasedInput` (**Note:** after all
the changes I've made I'm not sure this is still necessary)
- Improve & clean up Builder test implementations

Backend + API:
- Fix front-end `Node`, `GraphMeta`, and `Block` types
- Small refactor of `Graph` to match naming of some `LibraryAgent`
attributes
- Fix typing of `list_graphs`,
`get_graph_meta_by_store_listing_version_id` endpoints
  - Add `GraphMeta` model and `GraphModel.meta()` shortcut
- Move `POST /library/agents/{library_agent_id}/setup-trigger` to `POST
/library/presets/setup-trigger`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Test the new functionality in the Builder:
    - [x] Running an agent with (credentials) inputs from the builder
      - [x] Beads behave correctly
    - [x] Running an agent without any inputs from the builder
    - [x] Scheduling an agent from the builder
    - [x] Adding and searching blocks in the block menu
- [x] Test that all existing `AgentRunDraftView` functionality in the
Library still works the same
    - [x] Run an agent
    - [x] Schedule an agent
    - [x] View past runs
- [x] Run an agent with inputs, then edit the agent's inputs and view
the agent in the Library (should be fine)
2025-07-11 15:46:06 +00:00
Lluis Agusti
d0d498fa66 chore: undo 2025-07-11 19:44:27 +04:00
Lluis Agusti
c843dee317 Merge 'dev' into 'feat/agent-notifications' 2025-07-11 19:36:01 +04:00
Lluis Agusti
db969c1bf8 chore: rename 2025-07-11 19:35:53 +04:00
Lluis Agusti
690fac91e4 chore: lint 2025-07-11 18:58:13 +04:00
Reinier van der Leer
309114a727 Merge commit from fork 2025-07-11 16:43:03 +02:00
Lluis Agusti
5368fdc998 chore: tests 2025-07-11 18:31:44 +04:00
Lluis Agusti
b9d293f181 chore: updates 2025-07-11 18:15:30 +04:00
Lluis Agusti
acbcef77b2 Merge 'dev' into 'feat/agent-notifications' 2025-07-11 17:40:50 +04:00
Zamil Majdy
4ffb99bfb0 feat(backend): Add block error rate monitoring and Discord alerts (#10332)
## Summary

This PR adds a simple block error rate monitoring system that runs every
24 hours (configurable) and sends Discord alerts when blocks exceed the
error rate threshold.

## Changes Made

**Modified Files:**
- `backend/executor/scheduler.py` - Added `report_block_error_rates`
function and scheduled job
- `backend/util/settings.py` - Added configuration options
- `backend/.env.example` - Added environment variable examples
- Refactored scheduled job logic in `scheduler.py` into separate files

## Configuration

```bash
# Block Error Rate Monitoring
BLOCK_ERROR_RATE_THRESHOLD=0.5  # 50% error rate threshold
BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS=86400  # 24 hours
```

## How It Works

1. **Scheduled Job**: Runs every 24 hours (configurable via
`BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS`)
2. **Error Rate Calculation**: Queries last 24 hours of node executions
and calculates error rates per block
3. **Threshold Check**: Alerts on blocks with ≥50% error rate
(configurable via `BLOCK_ERROR_RATE_THRESHOLD`)
4. **Discord Alert**: Sends alert to Discord using existing
`discord_system_alert` function
5. **Manual Execution**: Available via
`execute_report_block_error_rates()` scheduler client method
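
For illustration, the threshold check described above boils down to something like this ( _a minimal sketch; the real job lives in `backend/executor/scheduler.py` and pulls execution counts from the database_ ):

```python
BLOCK_ERROR_RATE_THRESHOLD = 0.5  # 50% error rate threshold
MIN_EXECUTIONS = 10               # ignore low-traffic blocks to avoid noise


def blocks_over_threshold(stats: dict[str, tuple[int, int]]) -> list[str]:
    """stats maps block name -> (failed, total) node executions in the last 24 hours."""
    alerts = []
    for block, (failed, total) in stats.items():
        if total < MIN_EXECUTIONS:
            continue
        rate = failed / total
        if rate >= BLOCK_ERROR_RATE_THRESHOLD:
            alerts.append(
                f"🚨 Block '{block}' has {rate:.1%} error rate "
                f"({failed}/{total}) in the last 24 hours"
            )
    return alerts
```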

## Alert Format

```
Block Error Rate Alert:
🚨 Block 'DeprecatedGPT3Block' has 75.0% error rate (75/100) in the last 24 hours
🚨 Block 'BrokenImageBlock' has 60.0% error rate (30/50) in the last 24 hours
```

## Testing

Can be tested manually via:
```python
from backend.executor.scheduler import SchedulerClient
client = SchedulerClient()
result = client.execute_report_block_error_rates()
```

## Implementation Notes

- Follows the same pattern as `report_late_executions` function
- Only checks blocks with ≥10 executions to avoid noise
- Uses existing Discord notification infrastructure
- Configurable threshold and check interval
- Proper error handling and logging

## Test plan

- [x] Verify configuration loads correctly
- [x] Test error rate calculation with existing database
- [x] Confirm Discord integration works
- [x] Test manual execution via scheduler client
- [x] Verify scheduled job runs correctly

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude AI <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-10 21:56:58 +00:00
Lluis Agusti
e902848e04 chore: fix 2025-07-10 23:06:07 +04:00
Lluis Agusti
cd917ec919 Merge 'dev' into 'feat/agent-notifications' 2025-07-10 22:48:24 +04:00
Ubbe
5741331250 feat(frontend): logged out pages UI updates (#10314)
## Changes 🏗️

<img width="800" alt="Screenshot 2025-07-07 at 13 16 44"
src="https://github.com/user-attachments/assets/0d404958-d4c9-454d-b71a-9dd677fe0fdc"
/>

<img width="800" alt="Screenshot 2025-07-07 at 13 17 08"
src="https://github.com/user-attachments/assets/1142f6d5-a6af-485d-b42e-98afd26de3ed"
/>

Update the UI of the logged-out pages ( _login, signup,
reset-password..._ ) using the new Design System components, so the app
starts to look a bit more cohesive 💆🏽

Some notes:

- I refactored the `<AuthCard />` components a bit to be easier to use
- I split the render from the hook logic on login/signup
- I added a couple of modals to improve the UX when logging in with
Google or using non-whitelisted emails
  -  _see my comments below for more context_ 
- When there are API errors, they are shown in a toast to prevent the
layout of the form from jumping
- When using the components in the UI, there is an issue with
border-radius; see comments for an explanation




## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Logout on the platform
  - [x] Check the updated Login/Signup/Reset password pages
  - [x] The UI looks good and is consistent
  - [x]  The forms work as expected
2025-07-10 18:27:24 +00:00
Ubbe
2fda8dfd32 feat(frontend): new navbar design (#10341)
## Changes 🏗️

<img width="900" height="327" alt="Screenshot 2025-07-10 at 20 12 38"
src="https://github.com/user-attachments/assets/044f00ed-7e05-46b7-a821-ce1cb0ee9298"
/>
<br /><br />

Navbar updated to match the new designs:
- the logo is now centred instead of on the left
- menu items have been updated to a smaller font size and less border-radius
- icons have been updated

I also generated the API files ( _sorry for the noise_ ). I had to do
some border-radius and button updates on the atoms/tokens for it to look
good.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login/logout
  - [x] The new navbar looks good across screens 

### For configuration changes

No config changes
2025-07-10 18:06:12 +00:00
Ubbe
22c76eab61 feat(toast): update styles (#10339)
## Changes 🏗️

Style refinements on Toasts.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Check Storybook toast stories
  - [x] They match Figma 

#### For configuration changes:

None
2025-07-10 15:04:14 +00:00
Swifty
7688a9701e perf(backend/db): Optimize StoreAgent and Creator views with database indexes and materialized views (#10084)
### Summary
Performance optimization for the platform's store and creator
functionality by adding targeted database indexes and implementing
materialized views to reduce query execution time.

### Changes 🏗️

**Database Performance Optimizations:**
- Added strategic database indexes for `StoreListing`,
`StoreListingVersion`, `StoreListingReview`, `AgentGraphExecution`, and
`Profile` tables
- Implemented materialized views (`mv_agent_run_counts`,
`mv_review_stats`) to cache expensive aggregation queries
- Optimized `StoreAgent` and `Creator` views to use materialized views
and improved query patterns
- Added automated refresh function with 15-minute scheduling for
materialized views (when pg_cron extension is available)

**Key Performance Improvements:**
- Filtered indexes on approved store listings to speed up marketplace
queries
- GIN index on categories for faster category-based searches
- Composite indexes for common query patterns (e.g., listing + version
lookups)
- Pre-computed agent run counts and review statistics to eliminate
expensive aggregations

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified migration runs successfully without errors
  - [x] Confirmed materialized views are created and populated correctly
- [x] Tested StoreAgent and Creator view queries return expected results
  - [x] Validated automatic refresh function works properly
  - [x] Confirmed rollback migration successfully removes all changes

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

**Note:** No configuration changes were required as this is purely a
database schema optimization.
2025-07-10 14:57:55 +00:00
Lluis Agusti
8ae37491e4 Merge 'dev' into 'feat/agent-notifications' 2025-07-10 18:41:12 +04:00
Swifty
243400e128 feat(platform): Add Block Development SDK with auto-registration system (#10074)
## Block Development SDK - Simplifying Block Creation

### Problem
Currently, creating a new block requires manual updates to **5+ files**
scattered across the codebase:
- `backend/data/block_cost_config.py` - Manually add block costs
- `backend/integrations/credentials_store.py` - Add default credentials
- `backend/integrations/providers.py` - Register new providers
- `backend/integrations/oauth/__init__.py` - Register OAuth handlers
- `backend/integrations/webhooks/__init__.py` - Register webhook
managers

This creates significant friction for developers, increases the chance
of configuration errors, and makes the platform difficult to scale.

### Solution
This PR introduces a **Block Development SDK** that provides:
- Single import for all block development needs: `from backend.sdk
import *`
- Automatic registration of all block configurations
- Zero external file modifications required
- Provider-based configuration with inheritance

### Changes 🏗️

#### 1. **New SDK Module** (`backend/sdk/`)
- **`__init__.py`**: Unified exports of 68+ block development components
- **`registry.py`**: Central auto-registration system for all block
configurations
- **`builder.py`**: `ProviderBuilder` class for fluent provider
configuration
- **`provider.py`**: Provider configuration management
- **`cost_integration.py`**: Automatic cost application system

#### 2. **Provider Builder Pattern**
```python
# Configure once, use everywhere
my_provider = (
    ProviderBuilder("my-service")
    .with_api_key("MY_SERVICE_API_KEY", "My Service API Key")
    .with_base_cost(5, BlockCostType.RUN)
    .build()
)
```

#### 3. **Automatic Cost System**
- Provider base costs automatically applied to all blocks using that
provider
- Override with `@cost` decorator for block-specific pricing
- Tiered pricing support with cost filters

#### 4. **Dynamic Provider Support**
- Modified `ProviderName` enum to accept any string via `_missing_`
method
- No more manual enum updates for new providers
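
A minimal illustration of the `_missing_` pattern ( _not the actual `ProviderName` implementation; the member names here are made up_ ):

```python
from enum import Enum


class ProviderName(str, Enum):
    OPENAI = "openai"   # illustrative built-in entries
    GITHUB = "github"

    @classmethod
    def _missing_(cls, value):
        # Accept any string as a provider instead of raising ValueError,
        # so new providers don't require editing this enum.
        if isinstance(value, str):
            member = str.__new__(cls, value)
            member._name_ = value.upper().replace("-", "_")
            member._value_ = value
            return member
        return None


# ProviderName("my-service") now returns a usable member instead of raising.
```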

#### 5. **Application Integration**
- Added `sync_all_provider_costs()` to `initialize_blocks()` for
automatic cost registration
- Maintains full backward compatibility with existing blocks

#### 6. **Comprehensive Examples** (`backend/blocks/examples/`)
- `simple_example_block.py` - Basic block structure
- `example_sdk_block.py` - Provider with credentials
- `cost_example_block.py` - Various cost patterns
- `advanced_provider_example.py` - Custom API clients
- `example_webhook_sdk_block.py` - Webhook configuration

#### 7. **Extensive Testing**
- 6 new test modules with 30+ test cases
- Integration tests for all SDK features
- Cost calculation verification
- Provider registration tests

### Before vs After

**Before SDK:**
```python
# 1. Multiple complex imports
from backend.data.block import Block, BlockCategory, BlockOutput
from backend.data.model import SchemaField, CredentialsField
# ... many more imports

# 2. Update block_cost_config.py
BLOCK_COSTS[MyBlock] = [BlockCost(...)]

# 3. Update credentials_store.py
DEFAULT_CREDENTIALS.append(...)

# 4. Update providers.py enum
# 5. Update oauth/__init__.py
# 6. Update webhooks/__init__.py
```

**After SDK:**
```python
from backend.sdk import *

# Everything configured in one place
my_provider = (
    ProviderBuilder("my-service")
    .with_api_key("MY_API_KEY", "My API Key")
    .with_base_cost(10, BlockCostType.RUN)
    .build()
)

class MyBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = my_provider.credentials_field()
        data: String = SchemaField(description="Input data")
    
    class Output(BlockSchema):
        result: String = SchemaField(description="Result")
    
    # That's it! No external files to modify
```

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Created new blocks using SDK pattern with provider configuration
  - [x] Verified automatic cost registration for provider-based blocks
  - [x] Tested cost override with @cost decorator
  - [x] Confirmed custom providers work without enum modifications
  - [x] Verified all example blocks execute correctly
  - [x] Tested backward compatibility with existing blocks
  - [x] Ran all SDK tests (30+ tests, all passing)
  - [x] Created blocks with credentials and verified authentication
  - [x] Tested webhook block configuration
  - [x] Verified application startup with auto-registration

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
(no changes needed)
- [x] `docker-compose.yml` is updated or already compatible with my
changes (no changes needed)
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

### Impact

- **Developer Experience**: Block creation time reduced from hours to
minutes
- **Maintainability**: All block configuration in one place
- **Scalability**: Support hundreds of blocks without enum updates
- **Type Safety**: Full IDE support with proper type hints
- **Testing**: Easier to test blocks in isolation

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-07-10 16:17:55 +02:00
Reinier van der Leer
c77cb1fcfb fix(backend/library): Fix sub_graphs check in LibraryAgent.from_db(..) (#10316)
- Follow-up fix for #10301

The condition that determines whether
`LibraryAgent.credentials_input_schema` gets set handles empty lists of
sub-graphs incorrectly.

### Changes 🏗️

- Check if `sub_graphs is not None` rather than using the boolean
interpretation of `sub_graphs`
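
The distinction in a nutshell ( _illustrative snippet, not code from the PR_ ):

```python
sub_graphs: list | None = []   # sub-graphs were loaded, there just aren't any

if sub_graphs:               # falsy for [] -> schema is incorrectly skipped
    ...

if sub_graphs is not None:   # True for []  -> schema is set as intended
    ...
```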

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Trivial change, no test needed.
2025-07-10 07:48:18 +00:00
Ubbe
b3b5eefe2c feat(frontend): change to use Sonner toast (#10334)
## Changes 🏗️

Switches toasts to [Sonner](https://sonner.emilkowal.ski/)
rather than the [Radix UI
primitive](https://www.radix-ui.com/primitives/docs/components/toast).

<img width="431" alt="Screenshot 2025-07-09 at 15 49 47"
src="https://github.com/user-attachments/assets/c09c3c1e-fd80-44d2-9336-c955c2d4f288"
/>
<img width="444" alt="Screenshot 2025-07-09 at 15 51 05"
src="https://github.com/user-attachments/assets/cc2a3491-7b76-44e2-8bec-3ad0ac917148"
/>
<img width="450" alt="Screenshot 2025-07-09 at 15 51 50"
src="https://github.com/user-attachments/assets/e8ede05d-3488-43f4-aa43-7d3cba92a050"
/>


https://github.com/user-attachments/assets/deb4ce1c-13bb-4f69-890e-9b8680c848e7

<img width="500" alt="Screenshot 2025-07-09 at 15 59 09"
src="https://github.com/user-attachments/assets/5636969d-4c9a-41e6-acd1-afa49b8e70c6"
/>

Sonner is [the one used in
shadcn](https://ui.shadcn.com/docs/components/toast) nowadays, because
it brings great UX on touch devices:
- allows swiping to dismiss
- toasts stack nicely if multiple appear ( see video 📹 )
- when stacked, hovering over them reveals them all nicely ( see video 📹 )

I kept the existing `useToast()` API used on the pages, so I only had to
refactor the hook, not the calls 🏁

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Click around the app and trigger toasts
  - [x] Toasts look good 

### For configuration changes

Nope
2025-07-09 17:09:16 +00:00
Lluis Agusti
f45e5e0d59 chore: prettier 2025-07-09 14:57:33 +04:00
Lluis Agusti
1231236d87 chore: lock 2025-07-09 14:46:43 +04:00
Lluis Agusti
4db0792ade Merge 'dev' into 'feat/agent-notifications' 2025-07-09 14:45:46 +04:00
dependabot[bot]
fe36ba55dd chore(frontend/deps): Bump the production-dependencies group across 1 directory with 12 updates (#10321)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-09 09:39:21 +00:00
dependabot[bot]
45c1ca1ca1 chore(libs/deps-dev): Bump ruff from 0.11.10 to 0.11.13 in /autogpt_platform/autogpt_libs in the development-dependencies group (#10178)
Bumps the development-dependencies group in
/autogpt_platform/autogpt_libs with 1 update:
[ruff](https://github.com/astral-sh/ruff).

Updates `ruff` from 0.11.10 to 0.11.13
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.11.13</h2>
<h2>Release Notes</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Add unsafe fix for module moved cases
(<code>AIR301</code>,<code>AIR311</code>,<code>AIR312</code>,<code>AIR302</code>)
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/18367">#18367</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18366">#18366</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18363">#18363</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18093">#18093</a>)</li>
<li>[<code>refurb</code>] Add coverage of <code>set</code> and
<code>frozenset</code> calls (<code>FURB171</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18035">#18035</a>)</li>
<li>[<code>refurb</code>] Mark <code>FURB180</code> fix unsafe when
class has bases (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18149">#18149</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>perflint</code>] Fix missing parentheses for lambda and
ternary conditions (<code>PERF401</code>, <code>PERF403</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18412">#18412</a>)</li>
<li>[<code>pyupgrade</code>] Apply <code>UP035</code> only on py313+ for
<code>get_type_hints()</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18476">#18476</a>)</li>
<li>[<code>pyupgrade</code>] Make fix unsafe if it deletes comments
(<code>UP004</code>,<code>UP050</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18393">#18393</a>,
<a
href="https://redirect.github.com/astral-sh/ruff/pull/18390">#18390</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>fastapi</code>] Avoid false positive for class dependencies
(<code>FAST003</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18271">#18271</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Update editor setup docs for Neovim and Vim (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18324">#18324</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>Support Python 3.14 template strings (t-strings) in formatter and
parser (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17851">#17851</a>)</li>
</ul>
<h2>Contributors</h2>
<ul>
<li><a
href="https://github.com/AlexWaygood"><code>@​AlexWaygood</code></a></li>
<li><a
href="https://github.com/BurntSushi"><code>@​BurntSushi</code></a></li>
<li><a
href="https://github.com/InSyncWithFoo"><code>@​InSyncWithFoo</code></a></li>
<li><a href="https://github.com/Lee-W"><code>@​Lee-W</code></a></li>
<li><a
href="https://github.com/MatthewMckee4"><code>@​MatthewMckee4</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
<li><a href="https://github.com/Viicos"><code>@​Viicos</code></a></li>
<li><a
href="https://github.com/abhijeetbodas2001"><code>@​abhijeetbodas2001</code></a></li>
<li><a href="https://github.com/carljm"><code>@​carljm</code></a></li>
<li><a
href="https://github.com/chirizxc"><code>@​chirizxc</code></a></li>
<li><a
href="https://github.com/dcreager"><code>@​dcreager</code></a></li>
<li><a
href="https://github.com/dhruvmanila"><code>@​dhruvmanila</code></a></li>
<li><a href="https://github.com/dylwil3"><code>@​dylwil3</code></a></li>
<li><a
href="https://github.com/github-actions"><code>@​github-actions</code></a></li>
<li><a
href="https://github.com/ibraheemdev"><code>@​ibraheemdev</code></a></li>
<li><a
href="https://github.com/lipefree"><code>@​lipefree</code></a></li>
<li><a href="https://github.com/mtshiba"><code>@​mtshiba</code></a></li>
<li><a
href="https://github.com/naslundx"><code>@​naslundx</code></a></li>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
<li><a
href="https://github.com/otakutyrant"><code>@​otakutyrant</code></a></li>
<li><a
href="https://github.com/renovate"><code>@​renovate</code></a></li>
<li><a
href="https://github.com/robsdedude"><code>@​robsdedude</code></a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.11.13</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Add unsafe fix for module moved cases
(<code>AIR301</code>,<code>AIR311</code>,<code>AIR312</code>,<code>AIR302</code>)
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/18367">#18367</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18366">#18366</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18363">#18363</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18093">#18093</a>)</li>
<li>[<code>refurb</code>] Add coverage of <code>set</code> and
<code>frozenset</code> calls (<code>FURB171</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18035">#18035</a>)</li>
<li>[<code>refurb</code>] Mark <code>FURB180</code> fix unsafe when
class has bases (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18149">#18149</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>perflint</code>] Fix missing parentheses for lambda and
ternary conditions (<code>PERF401</code>, <code>PERF403</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18412">#18412</a>)</li>
<li>[<code>pyupgrade</code>] Apply <code>UP035</code> only on py313+ for
<code>get_type_hints()</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18476">#18476</a>)</li>
<li>[<code>pyupgrade</code>] Make fix unsafe if it deletes comments
(<code>UP004</code>,<code>UP050</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18393">#18393</a>,
<a
href="https://redirect.github.com/astral-sh/ruff/pull/18390">#18390</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>fastapi</code>] Avoid false positive for class dependencies
(<code>FAST003</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18271">#18271</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Update editor setup docs for Neovim and Vim (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18324">#18324</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>Support Python 3.14 template strings (t-strings) in formatter and
parser (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17851">#17851</a>)</li>
</ul>
<h2>0.11.12</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Revise fix titles (<code>AIR3</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18215">#18215</a>)</li>
<li>[<code>pylint</code>] Implement <code>missing-maxsplit-arg</code>
(<code>PLC0207</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17454">#17454</a>)</li>
<li>[<code>pyupgrade</code>] New rule <code>UP050</code>
(<code>useless-class-metaclass-type</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18334">#18334</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Replace <code>os.symlink</code>
with <code>Path.symlink_to</code> (<code>PTH211</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18337">#18337</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>flake8-bugbear</code>] Ignore <code>__debug__</code>
attribute in <code>B010</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18357">#18357</a>)</li>
<li>[<code>flake8-async</code>] Fix <code>anyio.sleep</code> argument
name (<code>ASYNC115</code>, <code>ASYNC116</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18262">#18262</a>)</li>
<li>[<code>refurb</code>] Fix <code>FURB129</code> autofix generating
invalid syntax (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18235">#18235</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>flake8-implicit-str-concat</code>] Add autofix for
<code>ISC003</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18256">#18256</a>)</li>
<li>[<code>pycodestyle</code>] Improve the diagnostic message for
<code>E712</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18328">#18328</a>)</li>
<li>[<code>flake8-2020</code>] Fix diagnostic message for
<code>!=</code> comparisons (<code>YTT201</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18293">#18293</a>)</li>
<li>[<code>pyupgrade</code>] Make fix unsafe if it deletes comments
(<code>UP010</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18291">#18291</a>)</li>
</ul>
<h3>Documentation</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="5faf72a4d9"><code>5faf72a</code></a>
Bump 0.11.13 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18484">#18484</a>)</li>
<li><a
href="28dbc5c51e"><code>28dbc5c</code></a>
[ty] Fix completion order in playground (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18480">#18480</a>)</li>
<li><a
href="ce216c79cc"><code>ce216c7</code></a>
Remove <code>Message::to_rule</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18447">#18447</a>)</li>
<li><a
href="33468cc8cc"><code>33468cc</code></a>
[<code>pyupgrade</code>] Apply <code>UP035</code> only on py313+ for
<code>get_type_hints()</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18476">#18476</a>)</li>
<li><a
href="8531f4b3ca"><code>8531f4b</code></a>
[ty] Add infrastructure for AST garbage collection (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18445">#18445</a>)</li>
<li><a
href="55100209c7"><code>5510020</code></a>
[ty] IDE: add support for <code>object.\&lt;CURSOR&gt;</code>
completions (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18468">#18468</a>)</li>
<li><a
href="c0bb83b882"><code>c0bb83b</code></a>
[<code>perflint</code>] fix missing parentheses for lambda and ternary
conditions (PERF4...</li>
<li><a
href="74a4e9af3d"><code>74a4e9a</code></a>
Combine lint and syntax error handling (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18471">#18471</a>)</li>
<li><a
href="8485dbb324"><code>8485dbb</code></a>
[ty] Fix <code>--python</code> argument for Windows, and improve error
messages for bad ...</li>
<li><a
href="0858896bc4"><code>0858896</code></a>
[ty] type narrowing by attribute/subscript assignments (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18041">#18041</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.11.10...0.11.13">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ruff&package-manager=pip&previous-version=0.11.10&new-version=0.11.13)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-09 03:58:04 +00:00
dependabot[bot]
b12507fb21 chore(frontend/deps-dev): Bump the development-dependencies group in /autogpt_platform/frontend with 11 updates (#10322)
Bumps the development-dependencies group in /autogpt_platform/frontend
with 11 updates:

| Package | From | To |
| --- | --- | --- |
| [@playwright/test](https://github.com/microsoft/playwright) | `1.53.1`
| `1.53.2` |
|
[@storybook/addon-a11y](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y)
| `9.0.14` | `9.0.15` |
|
[@storybook/addon-docs](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs)
| `9.0.14` | `9.0.15` |
|
[@storybook/addon-links](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links)
| `9.0.14` | `9.0.15` |
|
[@storybook/addon-onboarding](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding)
| `9.0.14` | `9.0.15` |
|
[@storybook/nextjs](https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs)
| `9.0.14` | `9.0.15` |
|
[@types/lodash](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/lodash)
| `4.17.19` | `4.17.20` |
|
[eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next)
| `15.3.4` | `15.3.5` |
|
[eslint-plugin-storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/eslint-plugin)
| `9.0.14` | `9.0.15` |
| [msw](https://github.com/mswjs/msw) | `2.10.2` | `2.10.3` |
|
[storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/core)
| `9.0.14` | `9.0.15` |

Updates `@playwright/test` from 1.53.1 to 1.53.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/microsoft/playwright/releases"><code>@​playwright/test</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v1.53.2</h2>
<h3>Highlights</h3>
<p><a
href="https://redirect.github.com/microsoft/playwright/issues/36317">microsoft/playwright#36317</a>
- [Regression]: Merging pre-1.53 blob reports loses attachments
<a
href="https://redirect.github.com/microsoft/playwright/pull/36357">microsoft/playwright#36357</a>
- [Regression (Chromium)]: CDP missing trailing slash
<a
href="https://redirect.github.com/microsoft/playwright/issues/36292">microsoft/playwright#36292</a>
- [Bug (MSEdge)]: Edge fails to launch when using
<code>msRelaunchNoCompatLayer</code></p>
<h2>Browser Versions</h2>
<ul>
<li>Chromium 138.0.7204.23</li>
<li>Mozilla Firefox 139.0</li>
<li>WebKit 18.5</li>
</ul>
<p>This version was also tested against the following stable
channels:</p>
<ul>
<li>Google Chrome 137</li>
<li>Microsoft Edge 137</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8c38de4d13"><code>8c38de4</code></a>
chore: mark v1.53.2 (<a
href="https://redirect.github.com/microsoft/playwright/issues/36502">#36502</a>)</li>
<li><a
href="50d76d7910"><code>50d76d7</code></a>
(<a
href="https://redirect.github.com/microsoft/playwright/issues/36462">#36462</a>):
fix(chromium): fix compatibility with Edge msRelaunchNoCompatLayer
...</li>
<li><a
href="48be646aa4"><code>48be646</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/36443">#36443</a>):
fix(blob): correctly type pre-1.53 onTestEnd event for a...</li>
<li><a
href="dc1555648b"><code>dc15556</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/36377">#36377</a>):
chore: follow-up to connectOverCDP fetch logic</li>
<li><a
href="4d0938cb2e"><code>4d0938c</code></a>
cherry-pick(<a
href="https://redirect.github.com/microsoft/playwright/issues/36357">#36357</a>):
fix: adding trialing slash detection logic back in urlTo...</li>
<li>See full diff in <a
href="https://github.com/microsoft/playwright/compare/v1.53.1...v1.53.2">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-a11y` from 9.0.14 to 9.0.15
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-a11y</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.0.15</h2>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/addon-a11y</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d6367d8de4"><code>d6367d8</code></a>
Bump version from &quot;9.0.14&quot; to &quot;9.0.15&quot; [skip
ci]</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.0.15/code/addons/a11y">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-docs` from 9.0.14 to 9.0.15
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-docs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.0.15</h2>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/addon-docs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d6367d8de4"><code>d6367d8</code></a>
Bump version from &quot;9.0.14&quot; to &quot;9.0.15&quot; [skip
ci]</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.0.15/code/addons/docs">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-links` from 9.0.14 to 9.0.15
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-links</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.0.15</h2>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/addon-links</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d6367d8de4"><code>d6367d8</code></a>
Bump version from &quot;9.0.14&quot; to &quot;9.0.15&quot; [skip
ci]</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.0.15/code/addons/links">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/addon-onboarding` from 9.0.14 to 9.0.15
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/addon-onboarding</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.0.15</h2>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/addon-onboarding</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d6367d8de4"><code>d6367d8</code></a>
Bump version from &quot;9.0.14&quot; to &quot;9.0.15&quot; [skip
ci]</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.0.15/code/addons/onboarding">compare
view</a></li>
</ul>
</details>
<br />

Updates `@storybook/nextjs` from 9.0.14 to 9.0.15
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases"><code>@​storybook/nextjs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.0.15</h2>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md"><code>@​storybook/nextjs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d6367d8de4"><code>d6367d8</code></a>
Bump version from &quot;9.0.14&quot; to &quot;9.0.15&quot; [skip
ci]</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.0.15/code/frameworks/nextjs">compare
view</a></li>
</ul>
</details>
<br />

Updates `@types/lodash` from 4.17.19 to 4.17.20
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/lodash">compare
view</a></li>
</ul>
</details>
<br />

Updates `eslint-config-next` from 15.3.4 to 15.3.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vercel/next.js/releases">eslint-config-next's
releases</a>.</em></p>
<blockquote>
<h2>v15.3.5</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>Turbopack: list assert/strict as external (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/80884">#80884</a>)</li>
<li>omit searchParam data from FlightRouterState before transport (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/80734">#80734</a>)</li>
<li>bugfix: propagate staleTime to seeded prefetch entry (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/81263">#81263</a>)</li>
</ul>
<h3>Misc Changes</h3>
<ul>
<li>document turbopack trace viewer (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/78184">#78184</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/ztanner"><code>@​ztanner</code></a>, <a
href="https://github.com/mischnic"><code>@​mischnic</code></a>, and <a
href="https://github.com/bgw"><code>@​bgw</code></a> for helping!</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e678913aec"><code>e678913</code></a>
v15.3.5</li>
<li>See full diff in <a
href="https://github.com/vercel/next.js/commits/v15.3.5/packages/eslint-config-next">compare
view</a></li>
</ul>
</details>
<br />

Updates `eslint-plugin-storybook` from 9.0.14 to 9.0.15
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases">eslint-plugin-storybook's
releases</a>.</em></p>
<blockquote>
<h2>v9.0.15</h2>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md">eslint-plugin-storybook's
changelog</a>.</em></p>
<blockquote>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d6367d8de4"><code>d6367d8</code></a>
Bump version from &quot;9.0.14&quot; to &quot;9.0.15&quot; [skip
ci]</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.0.15/code/lib/eslint-plugin">compare
view</a></li>
</ul>
</details>
<br />

Updates `msw` from 2.10.2 to 2.10.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/mswjs/msw/releases">msw's
releases</a>.</em></p>
<blockquote>
<h2>v2.10.3 (2025-07-04)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>ws:</strong> support <code>resolutionContext</code> on
<code>parse</code> and <code>run</code> (<a
href="https://redirect.github.com/mswjs/msw/issues/2544">#2544</a>)
(024568571990b6068601a0ba9f03e143ccbbfffb) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
<li><strong>getResponse:</strong> support <code>resolutionContext</code>
argument (<a
href="https://redirect.github.com/mswjs/msw/issues/2543">#2543</a>)
(ce3ab1fdd3b353d6a1d8db3c69532bde44483a8a) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="594e91f3b0"><code>594e91f</code></a>
chore(release): v2.10.3</li>
<li><a
href="0245685719"><code>0245685</code></a>
fix(ws): support <code>resolutionContext</code> on <code>parse</code>
and <code>run</code> (<a
href="https://redirect.github.com/mswjs/msw/issues/2544">#2544</a>)</li>
<li><a
href="ce3ab1fdd3"><code>ce3ab1f</code></a>
fix(getResponse): support <code>resolutionContext</code> argument (<a
href="https://redirect.github.com/mswjs/msw/issues/2543">#2543</a>)</li>
<li><a
href="13e52aa154"><code>13e52aa</code></a>
test: add type test for mocked responses without type arguments (<a
href="https://redirect.github.com/mswjs/msw/issues/2538">#2538</a>)</li>
<li>See full diff in <a
href="https://github.com/mswjs/msw/compare/v2.10.2...v2.10.3">compare
view</a></li>
</ul>
</details>
<br />

Updates `storybook` from 9.0.14 to 9.0.15
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/releases">storybook's
releases</a>.</em></p>
<blockquote>
<h2>v9.0.15</h2>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md">storybook's
changelog</a>.</em></p>
<blockquote>
<h2>9.0.15</h2>
<ul>
<li>CLI: Do not fail incompatible package check in doctor if only core
packages used - <a
href="https://redirect.github.com/storybookjs/storybook/pull/31886">#31886</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
<li>React: Bump
<code>@​joshwooding/vite-plugin-react-docgen-typescript</code> to 0.6.1
- <a
href="https://redirect.github.com/storybookjs/storybook/pull/31899">#31899</a>,
thanks <a
href="https://github.com/mrginglymus"><code>@​mrginglymus</code></a>!</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d6367d8de4"><code>d6367d8</code></a>
Bump version from &quot;9.0.14&quot; to &quot;9.0.15&quot; [skip
ci]</li>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v9.0.15/code/core">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-08 21:03:15 +00:00
dependabot[bot]
ab4eb10c3d chore(backend/deps-dev): Bump the development-dependencies group across 1 directory with 4 updates (#10173)
Bumps the development-dependencies group with 4 updates in the
/autogpt_platform/backend directory:
[poethepoet](https://github.com/nat-n/poethepoet),
[pyright](https://github.com/RobertCraigie/pyright-python),
[requests](https://github.com/psf/requests) and
[ruff](https://github.com/astral-sh/ruff).

Updates `poethepoet` from 0.34.0 to 0.35.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/nat-n/poethepoet/releases">poethepoet's
releases</a>.</em></p>
<blockquote>
<h2>0.35.0</h2>
<h2>Enhancements</h2>
<ul>
<li>Support script tasks that run packages with a <code>__main__</code>
module by <a href="https://github.com/nat-n"><code>@​nat-n</code></a> in
<a
href="https://redirect.github.com/nat-n/poethepoet/pull/300">nat-n/poethepoet#300</a></li>
<li>Allow virtualenv location to reference special git related env vars
by <a href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/302">nat-n/poethepoet#302</a></li>
<li>Simplify CLI help page header by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/291">nat-n/poethepoet#291</a></li>
</ul>
<h2>Fixes</h2>
<ul>
<li>Don't register hidden tasks with poetry plugin by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/292">nat-n/poethepoet#292</a></li>
<li>Don't resolve symlinks to poetry in PoetryExecutor by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/293">nat-n/poethepoet#293</a></li>
<li>Crash with invalid help option on task by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/294">nat-n/poethepoet#294</a></li>
<li>Always validate task args when loading config by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/295">nat-n/poethepoet#295</a></li>
<li>Coerce switch case values to string to avoid errors by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/296">nat-n/poethepoet#296</a></li>
<li>Always print help when no arguments provided by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/299">nat-n/poethepoet#299</a></li>
<li>Suppress useless global options in the poetry plugin cli by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/301">nat-n/poethepoet#301</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/nat-n/poethepoet/compare/v0.34.0...v0.35.0">https://github.com/nat-n/poethepoet/compare/v0.34.0...v0.35.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/nat-n/poethepoet/compare/v0.34.0...v0.35.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pyright` from 1.1.401 to 1.1.402
<details>
<summary>Commits</summary>
<ul>
<li><a
href="708a9d4a96"><code>708a9d4</code></a>
Pyright NPM Package update to 1.1.402 (<a
href="https://redirect.github.com/RobertCraigie/pyright-python/issues/349">#349</a>)</li>
<li>See full diff in <a
href="https://github.com/RobertCraigie/pyright-python/compare/v1.1.401...v1.1.402">compare
view</a></li>
</ul>
</details>
<br />

Updates `requests` from 2.32.3 to 2.32.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/psf/requests/releases">requests's
releases</a>.</em></p>
<blockquote>
<h2>v2.32.4</h2>
<h2>2.32.4 (2025-06-10)</h2>
<p><strong>Security</strong></p>
<ul>
<li>CVE-2024-47081 Fixed an issue where a maliciously crafted URL and
trusted
environment will retrieve credentials for the wrong hostname/machine
from a
netrc file. (<a
href="https://redirect.github.com/psf/requests/issues/6965">#6965</a>)</li>
</ul>
<p><strong>Improvements</strong></p>
<ul>
<li>Numerous documentation improvements</li>
</ul>
<p><strong>Deprecations</strong></p>
<ul>
<li>Added support for pypy 3.11 for Linux and macOS. (<a
href="https://redirect.github.com/psf/requests/issues/6926">#6926</a>)</li>
<li>Dropped support for pypy 3.9 following its end of support. (<a
href="https://redirect.github.com/psf/requests/issues/6926">#6926</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's
changelog</a>.</em></p>
<blockquote>
<h2>2.32.4 (2025-06-10)</h2>
<p><strong>Security</strong></p>
<ul>
<li>CVE-2024-47081 Fixed an issue where a maliciously crafted URL and
trusted
environment will retrieve credentials for the wrong hostname/machine
from a
netrc file.</li>
</ul>
<p><strong>Improvements</strong></p>
<ul>
<li>Numerous documentation improvements</li>
</ul>
<p><strong>Deprecations</strong></p>
<ul>
<li>Added support for pypy 3.11 for Linux and macOS.</li>
<li>Dropped support for pypy 3.9 following its end of support.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="021dc729f0"><code>021dc72</code></a>
Polish up release tooling for last manual release</li>
<li><a
href="821770e822"><code>821770e</code></a>
Bump version and add release notes for v2.32.4</li>
<li><a
href="59f8aa2adf"><code>59f8aa2</code></a>
Add netrc file search information to authentication documentation (<a
href="https://redirect.github.com/psf/requests/issues/6876">#6876</a>)</li>
<li><a
href="5b4b64c346"><code>5b4b64c</code></a>
Add more tests to prevent regression of CVE 2024 47081</li>
<li><a
href="7bc45877a8"><code>7bc4587</code></a>
Add new test to check netrc auth leak (<a
href="https://redirect.github.com/psf/requests/issues/6962">#6962</a>)</li>
<li><a
href="96ba401c12"><code>96ba401</code></a>
Only use hostname to do netrc lookup instead of netloc</li>
<li><a
href="7341690e84"><code>7341690</code></a>
Merge pull request <a
href="https://redirect.github.com/psf/requests/issues/6951">#6951</a>
from tswast/patch-1</li>
<li><a
href="6716d7c9f2"><code>6716d7c</code></a>
remove links</li>
<li><a
href="a7e1c745dc"><code>a7e1c74</code></a>
Update docs/conf.py</li>
<li><a
href="c799b8167a"><code>c799b81</code></a>
docs: fix dead links to kenreitz.org</li>
<li>Additional commits viewable in <a
href="https://github.com/psf/requests/compare/v2.32.3...v2.32.4">compare
view</a></li>
</ul>
</details>
<br />

Updates `ruff` from 0.11.12 to 0.11.13
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.11.13</h2>
<h2>Release Notes</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Add unsafe fix for module moved cases
(<code>AIR301</code>,<code>AIR311</code>,<code>AIR312</code>,<code>AIR302</code>)
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/18367">#18367</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18366">#18366</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18363">#18363</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18093">#18093</a>)</li>
<li>[<code>refurb</code>] Add coverage of <code>set</code> and
<code>frozenset</code> calls (<code>FURB171</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18035">#18035</a>)</li>
<li>[<code>refurb</code>] Mark <code>FURB180</code> fix unsafe when
class has bases (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18149">#18149</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>perflint</code>] Fix missing parentheses for lambda and
ternary conditions (<code>PERF401</code>, <code>PERF403</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18412">#18412</a>)</li>
<li>[<code>pyupgrade</code>] Apply <code>UP035</code> only on py313+ for
<code>get_type_hints()</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18476">#18476</a>)</li>
<li>[<code>pyupgrade</code>] Make fix unsafe if it deletes comments
(<code>UP004</code>,<code>UP050</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18393">#18393</a>,
<a
href="https://redirect.github.com/astral-sh/ruff/pull/18390">#18390</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>fastapi</code>] Avoid false positive for class dependencies
(<code>FAST003</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18271">#18271</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Update editor setup docs for Neovim and Vim (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18324">#18324</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>Support Python 3.14 template strings (t-strings) in formatter and
parser (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17851">#17851</a>)</li>
</ul>
<h2>Contributors</h2>
<ul>
<li><a
href="https://github.com/AlexWaygood"><code>@​AlexWaygood</code></a></li>
<li><a
href="https://github.com/BurntSushi"><code>@​BurntSushi</code></a></li>
<li><a
href="https://github.com/InSyncWithFoo"><code>@​InSyncWithFoo</code></a></li>
<li><a href="https://github.com/Lee-W"><code>@​Lee-W</code></a></li>
<li><a
href="https://github.com/MatthewMckee4"><code>@​MatthewMckee4</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
<li><a href="https://github.com/Viicos"><code>@​Viicos</code></a></li>
<li><a
href="https://github.com/abhijeetbodas2001"><code>@​abhijeetbodas2001</code></a></li>
<li><a href="https://github.com/carljm"><code>@​carljm</code></a></li>
<li><a
href="https://github.com/chirizxc"><code>@​chirizxc</code></a></li>
<li><a
href="https://github.com/dcreager"><code>@​dcreager</code></a></li>
<li><a
href="https://github.com/dhruvmanila"><code>@​dhruvmanila</code></a></li>
<li><a href="https://github.com/dylwil3"><code>@​dylwil3</code></a></li>
<li><a
href="https://github.com/github-actions"><code>@​github-actions</code></a></li>
<li><a
href="https://github.com/ibraheemdev"><code>@​ibraheemdev</code></a></li>
<li><a
href="https://github.com/lipefree"><code>@​lipefree</code></a></li>
<li><a href="https://github.com/mtshiba"><code>@​mtshiba</code></a></li>
<li><a
href="https://github.com/naslundx"><code>@​naslundx</code></a></li>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
<li><a
href="https://github.com/otakutyrant"><code>@​otakutyrant</code></a></li>
<li><a
href="https://github.com/renovate"><code>@​renovate</code></a></li>
<li><a
href="https://github.com/robsdedude"><code>@​robsdedude</code></a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.11.13</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Add unsafe fix for module moved cases
(<code>AIR301</code>,<code>AIR311</code>,<code>AIR312</code>,<code>AIR302</code>)
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/18367">#18367</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18366">#18366</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18363">#18363</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18093">#18093</a>)</li>
<li>[<code>refurb</code>] Add coverage of <code>set</code> and
<code>frozenset</code> calls (<code>FURB171</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18035">#18035</a>)</li>
<li>[<code>refurb</code>] Mark <code>FURB180</code> fix unsafe when
class has bases (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18149">#18149</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>perflint</code>] Fix missing parentheses for lambda and
ternary conditions (<code>PERF401</code>, <code>PERF403</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18412">#18412</a>)</li>
<li>[<code>pyupgrade</code>] Apply <code>UP035</code> only on py313+ for
<code>get_type_hints()</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18476">#18476</a>)</li>
<li>[<code>pyupgrade</code>] Make fix unsafe if it deletes comments
(<code>UP004</code>,<code>UP050</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18393">#18393</a>,
<a
href="https://redirect.github.com/astral-sh/ruff/pull/18390">#18390</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>fastapi</code>] Avoid false positive for class dependencies
(<code>FAST003</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18271">#18271</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Update editor setup docs for Neovim and Vim (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18324">#18324</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>Support Python 3.14 template strings (t-strings) in formatter and
parser (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17851">#17851</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="5faf72a4d9"><code>5faf72a</code></a>
Bump 0.11.13 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18484">#18484</a>)</li>
<li><a
href="28dbc5c51e"><code>28dbc5c</code></a>
[ty] Fix completion order in playground (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18480">#18480</a>)</li>
<li><a
href="ce216c79cc"><code>ce216c7</code></a>
Remove <code>Message::to_rule</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18447">#18447</a>)</li>
<li><a
href="33468cc8cc"><code>33468cc</code></a>
[<code>pyupgrade</code>] Apply <code>UP035</code> only on py313+ for
<code>get_type_hints()</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18476">#18476</a>)</li>
<li><a
href="8531f4b3ca"><code>8531f4b</code></a>
[ty] Add infrastructure for AST garbage collection (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18445">#18445</a>)</li>
<li><a
href="55100209c7"><code>5510020</code></a>
[ty] IDE: add support for <code>object.\&lt;CURSOR&gt;</code>
completions (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18468">#18468</a>)</li>
<li><a
href="c0bb83b882"><code>c0bb83b</code></a>
[<code>perflint</code>] fix missing parentheses for lambda and ternary
conditions (PERF4...</li>
<li><a
href="74a4e9af3d"><code>74a4e9a</code></a>
Combine lint and syntax error handling (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18471">#18471</a>)</li>
<li><a
href="8485dbb324"><code>8485dbb</code></a>
[ty] Fix <code>--python</code> argument for Windows, and improve error
messages for bad ...</li>
<li><a
href="0858896bc4"><code>0858896</code></a>
[ty] type narrowing by attribute/subscript assignments (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18041">#18041</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.11.12...0.11.13">compare
view</a></li>
</ul>
</details>
<br />



---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-07-08 19:58:00 +00:00
dependabot[bot]
42e141012f chore(backend/deps): Bump the production-dependencies group across 1 directory with 20 updates (#10242)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-07-08 19:20:28 +00:00
Nicholas Tindle
b7f9dcf419 fix(backend): add back perplexity_llama (#10327)

We flew too close to the sun

### Changes 🏗️
Adds back Perplexity: it has to be removed only after it has already been
migrated, not before, or the system will automatically migrate it to a
different model so that it maps to one that exists.


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] tested locally; no impact since we are simply re-enabling it
2025-07-08 16:14:56 +00:00
Lluis Agusti
81cb6fb1e6 chore: fixes... 2025-07-08 19:45:09 +04:00
Lluis Agusti
c16598eed6 Merge 'dev' into 'feat/agent-notifications' 2025-07-08 19:32:04 +04:00
Toran Bruce Richards
a4ff8402f1 feat(backend): add Perplexity Sonar models (#10326)
Adds the latest Perplexity Sonar models from OpenRouter and removes the
decommissioned Sonar Large model.

### Changes 🏗️
- Added constants for `perplexity/sonar`, `perplexity/sonar-pro`, and
`perplexity/sonar-deep-research` in the `LlmModel` enum​
- Included metadata entries for the new models
- Mapped the new models in the cost configuration with their respective
pricing tiers
- Removed the outdated Sonar Large model

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] `poetry run format`
  - [x] `poetry run test` 

#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-07-08 14:08:06 +00:00
Abhimanyu Yadav
2183c94c58 feat(frontend): update data fetching strategy and restructure dashboard page (#10265)
This PR reorganises the library pages and uses React Query for data
fetching on these pages.

### Changes

- Restructure the component position.
- Divide the component into two parts: one for rendering and the other
for hooks.
- Change data fetching from the normal fetch to an autogenerated React
query.
- Everything is shifted to the client side.

### Important Notes

- I haven’t changed any UI in this. I’ve divided it into sub-parts
because my main focus is on data fetching.
- Edit is not working, so I need to fix it in a follow-up PR. I haven’t
broken it; it was already broken.
- I need to fix prop drilling in further PRs.
- I need to fix loading states.

> I haven’t changed the credit page or integration because I’m getting
errors while setting up Stripe for testing. My card is constantly
declined, and the integration page is attached to the builder page. I’ll
add changes to it when I’m working with the builder.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
   - [x] Tested manually and everything is working perfectly
   - [x] Verified agent listing loads correctly
   - [x] Confirmed delete functionality works
2025-07-08 13:21:22 +00:00
Zamil Majdy
5ff6d2ca56 fix(backend): Fix stop graph response on already stopped graph 2025-07-08 09:49:17 +08:00
Zamil Majdy
02d3b42745 fix(backend;frontend): Add auto-type conversion support for optional types (#10325)
Auto type conversion doesn't work on optional types.

To reproduce:
<img width="981" alt="image"
src="https://github.com/user-attachments/assets/92198d32-bce9-44fd-a9b0-b7b431aec3ba"
/>

Use the AgentNumberInput block and try to pass a string value to the
sub-agent that uses it.


### Changes 🏗️

Added auto conversion support for optional types.
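
Below is a minimal sketch of what optional-aware coercion can look like (illustrative only, using standard `typing` introspection; the helper name and structure are assumptions, not the actual backend code): unwrap `Optional[T]` to its inner type, then apply the normal conversion.

```python
from typing import Optional, Union, get_args, get_origin

def convert(value, target_type):
    """Hypothetical converter sketch: unwrap Optional[T] before coercing."""
    if get_origin(target_type) is Union:
        # Optional[int] is Union[int, None]: drop NoneType and retry with the inner type.
        if value is None:
            return None
        inner = [t for t in get_args(target_type) if t is not type(None)]
        if len(inner) == 1:
            return convert(value, inner[0])
    return target_type(value)

assert convert("5", Optional[int]) == 5   # string -> Optional[int]
assert convert(None, Optional[int]) is None
```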

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Try to convert string to optional[int]
2025-07-08 08:20:13 +07:00
Ubbe
4bf73f63f4 fix(frontend): vulnerability dep (#10319)
## Changes 🏗️

`pbkdf2` should be on `3.1.3` to address [this
alert](https://github.com/Significant-Gravitas/AutoGPT/security/dependabot/343).

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] pnpm installs work

### For configuration changes:

None
2025-07-07 19:03:08 +00:00
Ubbe
189c353c59 fix(frontend): make reading input_schema way more defensive (#10318)
## Changes 🏗️

<img width="800" alt="Screenshot 2025-07-07 at 22 24 07"
src="https://github.com/user-attachments/assets/72551f58-e41d-4a67-839b-98f63c6aad6b"
/>

Looking at the generated types, it looks like `input_schema` for the
agent can be anything:
-
[libraryAgent](https://github.com/Significant-Gravitas/AutoGPT/blob/dev/autogpt_platform/frontend/src/app/api/__generated__/models/libraryAgent.ts#L18-L38)
-
[libraryAgentInputSchema](https://github.com/Significant-Gravitas/AutoGPT/blob/dev/autogpt_platform/frontend/src/app/api/__generated__/models/libraryAgentInputSchema.ts#L9)

But the Front-end is reading it optimistically through the hardcoded
types of the Backend API:

-
[GraphIOSchema](443995d79a/autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts (L324-L329))

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Open agents in library
  - [x] The page does not break  

### For configuration changes:

No configuration changes.
2025-07-07 18:43:55 +00:00
Ubbe
07461a88cf fix(frontend): better proxy error logging (#10305)
## Changes 🏗️

If the proxied API call fails with an error that is not JSON-like, still
expose it to the client so it can be shown.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Try to top up credits
- [x] You see a better failure on the error toast when redirected back
to the app

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-07-07 13:37:51 +00:00
Ubbe
daf05cb7bf fix(frontend): fix agent run details view (#10311)
## Changes 🏗️

Fixes the layout getting very wide if the output of an agent is very
long:


https://github.com/user-attachments/assets/e032f425-ed9a-4a13-925f-1bb444f84ef1

It also makes the library agent code a bit more defensive; I was getting
full-page errors on certain agents in the library:

<img width="800" alt="Screenshot 2025-07-04 at 17 35 46"
src="https://github.com/user-attachments/assets/ff8ae461-3792-4e94-941e-9fdd2ead1c87"
/>



## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login and go to agent in library
  - [x] It loads without errors
- [x] When the execution output is long, it doesn't make the page wider
2025-07-07 13:37:32 +00:00
Abhimanyu Yadav
29bdbf3650 fix(frontend): auth e2e tests (#10312)
This pull request introduces extensive updates to the frontend testing
infrastructure, focusing on Playwright-based testing for user
authentication flows. Key changes include the addition of a global setup
for creating test users, new utilities for managing test user pools, and
expanded test coverage for signup and authentication scenarios.

### Testing Infrastructure Enhancements:

* **Global Setup for Tests**:
- Added `globalSetup` in `playwright.config.ts` to create test users
before all tests run. This ensures consistent test data across test
suites. (`autogpt_platform/frontend/playwright.config.ts`,
[autogpt_platform/frontend/playwright.config.tsR16-R17](diffhunk://#diff-27484f7f20f2eb1aeb289730a440f3a126fa825a7b3fae1f9ed19e217c4f2e40R16-R17))
- Implemented `global-setup.ts` to handle test user creation and save
user pools to the file system. Includes fallback for reusing existing
user pools if available.
(`autogpt_platform/frontend/src/tests/global-setup.ts`,
[autogpt_platform/frontend/src/tests/global-setup.tsR1-R43](diffhunk://#diff-3a8141beba2a6117e0eb721c35b39acc168a8f913ee625ce056c6fab5ac3b192R1-R43))

* **Test User Management Utilities**:
- Added functions in `auth.ts` to create, save, load, and clean up test
users. Supports batch creation and file-based persistence for user
pools. (`autogpt_platform/frontend/src/tests/utils/auth.ts`,
[autogpt_platform/frontend/src/tests/utils/auth.tsR1-R190](diffhunk://#diff-198b5d07aa72d50c153a70ecdfdc4bacc408c2d638c90d858f40d0183549973bR1-R190))
- Enhanced `user-generator.ts` to generate individual or multiple test
users with customizable options.
(`autogpt_platform/frontend/src/tests/utils/user-generator.ts`,
[autogpt_platform/frontend/src/tests/utils/user-generator.tsR2-R41](diffhunk://#diff-a7cb4f403a4cf3605ed1046b0263412205e56e51b16052a9da1e8db9bf34b940R2-R41))

### Expanded Test Coverage:

* **Signup Flow Tests**:
- Added comprehensive tests for signup functionality, including
successful signup, form validation, custom credentials, and duplicate
email handling. (`autogpt_platform/frontend/src/tests/signup.spec.ts`,
[autogpt_platform/frontend/src/tests/signup.spec.tsR1-R113](diffhunk://#diff-d1baa54deff7f3b1eedefd6cec5619ae8edd872d361ef57b6c32998ed22d6661R1-R113))
- Developed `signup.ts` utility functions to automate signup processes
and validate form behavior.
(`autogpt_platform/frontend/src/tests/utils/signup.ts`,
[autogpt_platform/frontend/src/tests/utils/signup.tsR1-R184](diffhunk://#diff-cb05d73a6bd7a129759b0b3382825e90cde561a42fc85b6a25777f6bd2f84511R1-R184))

* **Authentication Utilities**:
- Introduced `SigninUtils` in `signin.ts` for login, logout, and
authentication cycle testing. Provides reusable methods for verifying
user states. (`autogpt_platform/frontend/src/tests/utils/signin.ts`,
[autogpt_platform/frontend/src/tests/utils/signin.tsR1-R94](diffhunk://#diff-7cfec955c159d69f51ba9fcca7d979be090acd6fe246b125551d60192d697d98R1-R94))

### Minor Updates:

* Added environment variable `BROWSER_TYPE` to CI workflow for
browser-specific Playwright tests.
(`.github/workflows/platform-frontend-ci.yml`,
[.github/workflows/platform-frontend-ci.ymlR215-R216](diffhunk://#diff-29396f5dccefac146b71bed295fdbb790b17fda6a5ce2e9f4f8abe80eb14a527R215-R216))

These changes collectively improve the robustness and maintainability of
the frontend testing framework, enabling more reliable and scalable
testing of user authentication features.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Validated all authentication tests, and they are working
2025-07-07 13:04:14 +00:00
Lluis Agusti
7706740308 chore: agent notifications 2025-07-07 13:36:43 +04:00
Zamil Majdy
171deea806 feat(block): Added best-effort support of multiple/parallel tool calls for SmartDecisionMaker Block 2025-07-04 10:19:09 -07:00
Zamil Majdy
149bbd910a feat(block): Introduce GoogleSheetsFindBlock 2025-07-04 09:37:30 -07:00
Zamil Majdy
c6741e7c14 fix(block): Fix Broken SmartDecisionMaker block using Anthropic 2025-07-04 08:39:25 -07:00
Reinier van der Leer
6de1e470d9 fix(frontend/library): Support number values in empty input check (#10308)
- Resolves #10307
- Follow-up fix to #10167

### Changes 🏗️

- Update check for empty/missing inputs in `AgentRunDraftView` to
correctly handle number values

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] In the library, run an agent that requires a number input
2025-07-04 12:23:27 +00:00
Ubbe
67eefdd35c fix(frontend): handle JSON requests without payload (#10310)
## Changes 🏗️

We created a proxy route ( `/api/proxy/...` ) to handle API calls made
in the browser from the legacy `BackendAPI`, ensuring security and
compatibility with server cookies 💆🏽 🍪

However, the code on the proxy was written optimistically, expecting the
payload to be present in the JSON requests... even though many requests,
such as `POST` or `PATCH`, can sometimes be fired without a body.

This fixes the issue we saw where stopping a running agent wasn't
working, because stopping it fires a `PATCH` without a payload.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Checkout and run this locally
  - [x] Login
  - [x] Go to Library
  - [x] Run agent
  - [x] Stop it
  - [x] It works without errors
2025-07-04 12:14:30 +00:00
Ubbe
01950ccc42 fix(frontend): password reset via server callback (#10303)
## Changes 🏗️

### Root Cause

With httpOnly cookies, the Supabase client can't automatically exchange
password reset codes for sessions client-side because it can't access
the secure cookies 🍪 ( _which is a good thing_ ).

Previously, when users clicked email reset links, the Supabase client on
the browser would automatically handle the code exchange, but
with `httpOnly`, this is not possible because the Supabase browser client
does not have access to session info, so it fails silently 🥵

### Solution

Moved password reset code exchange to server-side middleware that can
access `httpOnly` cookies and properly create authenticated sessions.

### Code Changes

**`middleware.ts`**
- intercepts `/reset-password` URLs containing `code` parameter
- uses helper function to exchange code for session server-side
- redirects with error parameters if exchange fails
- moved `getUser()` call to avoid middleware timing issues

**`reset-password/page.tsx`**
- added toast notifications for password reset errors
- checks URL parameters for error messages on page load

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Password reset emails send successfully
  - [x] Valid reset codes exchange for sessions server-side
  - [x] Invalid/expired codes show error messages via toast
  - [x] Successfully authenticated users can change passwords
  - [x] URL parameters are cleaned up after error display
  - [x] Middleware doesn't break normal authentication flows
  
 ### For configuration changes:

For this to work we need to configure Supabase with the new
password-reset redirect URL.
```
/api/auth/callback/reset-password
```
- [x] Already added in Supabase dev
- [ ] We need to add it on Supabase prod
2025-07-04 12:09:13 +00:00
Reinier van der Leer
358ce1d258 fix(backend/library): Include subgraphs in get_library_agent (#10301)
- Resolves #10300
- Follow-up fix to #10167

### Changes 🏗️

- Include sub-graphs in `get_library_agent` endpoint

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Executing agent with sub-graphs that require credentials works
2025-07-04 10:29:53 +00:00
Zamil Majdy
a5691c0e89 feat(block): Add dict append capability for GoogleSheetsAppendBlock 2025-07-03 20:54:59 -07:00
Zamil Majdy
0b35dff1e6 fix(block): Fix failing GoogleSheetsAppendBlock on undefined append range 2025-07-03 17:13:41 -07:00
Zamil Majdy
6cf9136cdd feat(block): Support URL format input instead of ID for Google Sheet blocks 2025-07-03 16:47:33 -07:00
Zamil Majdy
5d91a9c9b9 feat(block): Make RetrieveInformationBlock output static 2025-07-03 14:15:22 -07:00
Zamil Majdy
e3d84d87f8 fix(blocks): restore batching logic in CreateListBlock
During data manipulation refactoring, the CreateListBlock lost its
important batching functionality with max_size and max_tokens parameters.
This restores the original implementation that can yield lists in chunks
based on size or token limits.
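
A rough sketch of the batching idea (the `max_size`/`max_tokens` names follow the commit message; the token counter here is a crude stand-in, not the block's real implementation):

```python
def yield_list_batches(items, max_size=None, max_tokens=None,
                       count_tokens=lambda item: max(1, len(str(item)) // 4)):
    """Sketch: yield `items` in chunks bounded by item count and/or a rough token budget."""
    batch, batch_tokens = [], 0
    for item in items:
        item_tokens = count_tokens(item)
        over_size = max_size is not None and len(batch) >= max_size
        over_tokens = max_tokens is not None and batch and batch_tokens + item_tokens > max_tokens
        if over_size or over_tokens:
            yield batch
            batch, batch_tokens = [], 0
        batch.append(item)
        batch_tokens += item_tokens
    if batch:
        yield batch

print(list(yield_list_batches(range(7), max_size=3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```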

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-03 14:01:22 -07:00
Toran Bruce Richards
9fecbe2a31 feat(blocks): add plural outputs where blocks yield singular values in loops (#10304)
## Summary

This PR adds missing plural output versions to blocks that yield
individual items in loops but don't provide the complete collection,
enabling both individual item access (for iteration) and complete
collection access (for aggregate operations).

## Changes

### GitHub Blocks (existing)
- **GithubListPullRequestsBlock**: Added `pull_requests` output
alongside existing `pull_request`
- **GithubListPRReviewersBlock**: Added `reviewers` output alongside
existing `reviewer`

### Additional Blocks (added in this PR)
- **GetRedditPostsBlock**: Added `posts` output for complete list of
Reddit posts
- **ReadRSSFeedBlock**: Added `entries` output for complete list of RSS
entries
- **AddMemoryBlock**: Added `results` output for complete list of memory
operation results

## Pattern Applied

The pattern ensures blocks provide both:
```python
# Complete collection first
yield "plural_output", all_items

# Then individual items for iteration
for item in all_items:
    yield "singular_output", item
```

## Testing
- Updated test outputs to include plural versions
- All blocks maintain backward compatibility with existing singular
outputs
- `poetry run format` -  Passed
- `poetry run test` -  Blocks validated

## Benefits
- **Iteration**: Users can still iterate over individual items as before
- **Aggregation**: Users can now access complete collections for
operations like counting, filtering, or batch processing
- **Compatibility**: Existing workflows continue to work unchanged

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-03 20:06:31 +00:00
Toran Bruce Richards
4744e0f6b1 feat(blocks): add data manipulation blocks and refactor basic.py (#10261)
### Changes 🏗️

#### New List Operation Blocks
- Implement `GetListItemBlock` for retrieving an element at a specific
index, with negative index support (see the sketch after these lists)
- Introduce `RemoveFromListBlock` to remove or pop items and optionally
return the removed value
- Add `ReplaceListItemBlock` to overwrite an item at a given index and
return the old value
- Provide `ListIsEmptyBlock` for quickly checking if a list has no
elements

#### New Dictionary Operation Blocks (for consistency with list
operations)
- Add `RemoveFromDictionaryBlock` to remove key-value pairs and
optionally return the removed value
- Implement `ReplaceDictionaryValueBlock` to replace values for a
specified key and return the old value
- Provide `DictionaryIsEmptyBlock` for checking if a dictionary has no
elements
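
For illustration, the plain-Python semantics these blocks wrap (a hedged sketch; block names, I/O schemas, and UUIDs are not reproduced here) look roughly like this:

```python
def get_list_item(items: list, index: int):
    # Negative indices count from the end, as in plain Python: -1 is the last item.
    return items[index]

def remove_from_list(items: list, index: int):
    # Remove/pop an item and also return the removed value.
    removed = items.pop(index)
    return items, removed

def replace_dict_value(data: dict, key, new_value):
    # Overwrite the value for a key and return the old value.
    old = data[key]
    data[key] = new_value
    return data, old

print(get_list_item([1, 2, 3], -1))           # 3
print(remove_from_list([1, 2, 3], 0))         # ([2, 3], 1)
print(replace_dict_value({"a": 1}, "a", 2))   # ({'a': 2}, 1)
print(len([]) == 0, len({}) == 0)             # is-empty checks: True True
```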

#### Code Organization & Refactoring
- **Created `data_manipulation.py`**: Moved all dictionary and list
manipulation blocks to a dedicated file to prevent `basic.py` from
becoming too large
- **Refactored `basic.py`**: Now contains only core utility blocks
(FileStore, StoreValue, PrintToConsole, Note, UniversalTypeConverter)
- **Ensured consistency**: Dictionary and list blocks now have
equivalent functionality and follow the same patterns
- **Removed redundancy**: Eliminated duplicate `GetDictionaryValueBlock`
since `FindInDictionaryBlock` already provides comprehensive lookup
functionality
- **Preserved UUIDs**: All existing block UUIDs maintained to ensure no
breaking changes

#### Block Organization Summary
**`basic.py` (core utilities):**
- `FileStoreBlock`, `StoreValueBlock`, `PrintToConsoleBlock`,
`NoteBlock`, `UniversalTypeConverterBlock`

**`data_manipulation.py` (dictionary & list operations):**
- **Dictionary blocks:** Create, Add, Find, Remove, Replace, IsEmpty
- **List blocks:** Create, Add, Find, Get, Remove, Replace, IsEmpty

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description  
- [x] I have made a test plan  
- [x] I have tested my changes according to the test plan:
  - [x] `poetry run format`
  - [x] `poetry run test`
  - [x] `pnpm format`

<details>
  <summary>Example test plan</summary>

  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-07-03 20:04:21 +00:00
Zamil Majdy
24b4ab9864 feat(block): Enhance Mem0 blocks filtering & add more GoogleSheets blocks (#10287)
The block library was missing two key capabilities that keep coming up
in real-world agent flows:

1. **Granular Mem0 filtering.** Agents often work side-by-side for the
same user, so memories must be scoped to a specific run or agent to
avoid crosstalk.
2. **First-class Google Sheets support.** Many community projects (e.g.,
data-collection, lightweight dashboards, no-code workflows) rely on
Sheets, but we only had a brittle REST call block.

This PR adds fine-grained filters to every Mem0 retrieval block and
introduces a complete, OAuth-ready Google Sheets suite so agents can
create, read, write, format, and manage spreadsheets safely.

---

### Changes 🏗️
#### 📚 Mem0 block enhancements  
* Added `categories_filter`, `metadata_filter`, `limit_memory_to_run`,
and `limit_memory_to_agent` inputs to **SearchMemoryBlock**,
**GetAllMemoriesBlock**, and **GetLatestMemoryBlock**.
* Added identical scoping logic to **AddMemoryBlock** so newly-created
memories can be tied to run/agent IDs.

#### 📊 New Google Sheets blocks (`backend/blocks/google/sheets.py`)  
| Block | Purpose |
|-------|---------|
| `GoogleSheetsReadBlock` | Read a range |
| `GoogleSheetsWriteBlock` | Overwrite a range |
| `GoogleSheetsAppendBlock` | Append rows |
| `GoogleSheetsClearBlock` | Clear a range |
| `GoogleSheetsMetadataBlock` | Fetch spreadsheet + sheet info |
| `GoogleSheetsManageSheetBlock` | Create / delete / copy a sheet |
| `GoogleSheetsBatchOperationsBlock` | Batch update / clear |
| `GoogleSheetsFindReplaceBlock` | Find & replace text |
| `GoogleSheetsFormatBlock` | Cell formatting (bg/text colour, bold,
italic, font size) |
| `GoogleSheetsCreateSpreadsheetBlock` | Spin up a new spreadsheet |

* Each block has typed input/output schemas, built-in test mocks, and is
disabled in prod unless Google OAuth is configured.
* Added helper enums (`SheetOperation`, `BatchOperationType`) and
updated **CLAUDE.md** contributor guide with a UUID-generation step.

---

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual E2E run: agent writes chat summary to new spreadsheet,
reads it back, searches memory with run-scoped filter
- [x] Live Google API smoke-tests (read/write/append/clear/format) using
a disposable spreadsheet
2025-07-03 18:01:30 +00:00
Ubbe
04e90da031 fix(frontend): proxy via API route no actions (#10296)
## Changes 🏗️

We noticed that on some pages ( `/build` _mainly_ ) where a lot of API
calls are fired in parallel using the old `BackendAPI` ( _running many
agent executions_ ), performance became worse. That is because the
`BackendAPI` was proxied via [server
actions](https://nextjs.org/docs/14/app/building-your-application/data-fetching/server-actions-and-mutations)
( _to make calls to our Backend on the Next.js server_ ).

Looks like server actions don't run in parallel, and their performance
is also subpar, given that we are not hosted on Vercel (they don't
utilise the edge middleware).

These changes cause all `BackendAPI` calls to be proxied via the Next.js
`/api/` route when executed on the browser; when executed on the server,
they bypass the proxy and directly access the API. Hopefully we gain:

- 🚀 Better Performance - API routes are faster than server actions for
this use case
- 🔧 Less Magic - Direct fetch calls instead of hidden server action
complexity
- ♻️ Code Reuse - Leveraging the existing proxy infrastructure used by
react-query
- 🎯 Cleaner Architecture - Single proxy pattern for all API calls
- 🔒 Same Security - Still uses server-side authentication with httpOnly
cookies

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] E2E tests pass
  - [x] Click through the app, there is no issues
  - [x] Agent executions are fast again in the builder
  - [x] Test file uploads
2025-07-03 15:18:47 +00:00
Zamil Majdy
d4646c249d feat(backend): implement KV data storage blocks 2025-07-03 07:54:50 -07:00
Zamil Majdy
095199bfa6 feat(backend): implement KV data storage blocks (#10294)
This PR introduces key-value storage blocks.

### Changes 🏗️

- **Database Schema**: Add AgentNodeExecutionKeyValueData table with
composite primary key (userId, key)
- **Persistence Blocks**: Create PersistInformationBlock and
RetrieveInformationBlock in persistence.py
- **Scope-based Storage**: Support for within_agent (per agent) vs
across_agents (global user) persistence
- **Key Structure**: Use formal # delimiter for storage keys:
`agent#{graph_id}#{key}` and `global#{key}` (see the sketch after this list)
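
A minimal sketch of that key scheme (the function name is assumed for illustration; the real block code may differ):

```python
from typing import Optional

def build_storage_key(key: str, scope: str, graph_id: Optional[str] = None) -> str:
    """Sketch of the '#'-delimited storage keys described above."""
    if scope == "within_agent":
        if not graph_id:
            raise ValueError("within_agent scope requires a graph_id")
        return f"agent#{graph_id}#{key}"
    if scope == "across_agents":
        return f"global#{key}"
    raise ValueError(f"unknown scope: {scope}")

assert build_storage_key("favorite_color", "within_agent", "graph-123") == "agent#graph-123#favorite_color"
assert build_storage_key("favorite_color", "across_agents") == "global#favorite_color"
```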

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run all 244 block tests - all passing 
  - [x] Test PersistInformationBlock with mock data storage
  - [x] Test RetrieveInformationBlock with mock data retrieval
- [x] Verify scope-based key generation (within_agent vs across_agents)
  - [x] Verify database function integration through all manager classes
  - [x] Run lint and type checking - all passing 
  - [x] Verify database migration is included and valid

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

Note: This change adds database schema and new blocks but doesn't
require environment or docker-compose changes as it uses existing
database infrastructure.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-03 14:24:51 +00:00
Zamil Majdy
90fb223114 fix(frontend): Fix status chip not showing on graph with INCOMPLETE status 2025-07-03 07:25:59 -07:00
Reinier van der Leer
b1f3122243 fix(frontend): Add fallback for NEXT_PUBLIC_FRONTEND_BASE_URL to API proxy (#10299)
- Resolves #10298
- Follow-up to #10270

### Changes 🏗️

Amend two changes from #10270:

- Add fallback for `NEXT_PUBLIC_FRONTEND_BASE_URL` in custom-mutator.ts
- Revert rename of `FRONTEND_BASE_URL` to
`NEXT_PUBLIC_FRONTEND_BASE_URL` in `backend/.env.example`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Don't set `NEXT_PUBLIC_FRONTEND_BASE_URL`
  - Run the platform locally
  - [x] -> `/library` loads normally

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-07-03 12:26:50 +00:00
Zamil Majdy
f1cc2afbda feat(backend): improve stop graph execution reliability (#10293)
## Summary
- Enhanced graph execution cancellation and cleanup mechanisms
- Improved error handling and logging for graph execution lifecycle
- Added timeout handling for graph termination with proper status
updates
- Exposed a new API for stopping graph based on only graph_id or user_id
- Refactored logging metadata structure for better error tracking

## Key Changes
### Backend
- **Graph Execution Management**: Enhanced `stop_graph_execution` with
timeout-based waiting and proper status transitions (see the sketch after this list)
- **Execution Cleanup**: Added proper cancellation waiting with timeout
handling in executor manager
- **Logging Improvements**: Centralized `LogMetadata` class and improved
error logging consistency
- **API Enhancements**: Added bulk graph execution stopping
functionality
- **Error Handling**: Better exception handling and status management
for failed/cancelled executions
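
The timeout-based waiting can be pictured with a small asyncio sketch (illustrative only; `request_cancel`, `wait_for_terminal_state`, `set_status`, and the status strings are assumptions rather than the actual executor API):

```python
import asyncio

async def stop_graph_execution(execution, timeout: float = 15.0) -> str:
    """Sketch: request cancellation, wait (bounded) for a terminal status, force one on timeout."""
    await execution.request_cancel()  # assumed: signals the running executor to stop
    try:
        # assumed helper: resolves once the execution reaches a terminal status
        return await asyncio.wait_for(execution.wait_for_terminal_state(), timeout)
    except asyncio.TimeoutError:
        # Stuck execution: mark it TERMINATED so it does not stay RUNNING forever.
        await execution.set_status("TERMINATED")
        return "TERMINATED"
```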

### Frontend
- **Status Safety**: Added null safety checks for status chips to
prevent runtime errors
- **Execution Control**: Simplified stop execution request handling

## Test Plan
- [x] Verify graph execution can be properly stopped and reaches
terminal state
- [x] Test timeout scenarios for stuck executions
- [x] Validate proper cleanup of running node executions when graph is
cancelled
- [x] Check frontend status chips handle undefined statuses gracefully
- [x] Test bulk execution stopping functionality

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-02 21:21:26 +00:00
Ubbe
f394a0eabb fix(frontend): do not swallow errors on the proxy (#10289)
## Changes 🏗️

Requests to the Backend now happen on the server, given we moved to
server-side cookies 🍪 ... however, the client proxy was not exposing the
API errors to the client correctly. This aims to fix that.

## Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login to the platform
  - [x] Run agents until you encounter an error
  - [x] The error is shown on the toast
2025-07-02 14:18:17 +00:00
Ubbe
311bcc7751 fix(frontend): onboarding runtime error (#10288)
## Changes 🏗️

<img width="800" alt="Screenshot 2025-07-02 at 16 43 08"
src="https://github.com/user-attachments/assets/d7cd0dd7-e671-4c5d-8ed9-6d8f56371ff5"
/>

During logout, the user state gets cleared but the onboarding provider
continues to run and tries to access onboarding.completedSteps on a null
object, causing a runtime error 😬

This mostly happens because onboarding is broken on local and dev (the
onboarding agents don't work), so I manually skip it after creating an
account by navigating to `/marketplace`. That makes me think the
onboarding provider still believes I need to onboard, which is probably
why this error occurs.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create a new user
- [x] Instead of completing onboarding, navigate to `/marketplace` via
browser URL
- [x] Log out, then log in and out again a few times and confirm you don't
see runtime errors
2025-07-02 14:13:22 +00:00
SOUHAILA SERBOUT
e2bd727798 feat: optimize frontend CI with shared setup job (#10286)
# Change details
- **Before**: Each job separately installs dependencies (~4 redundant
installations)
  ### Massive Redundancy in Setup Steps
  Each job repeats these SAME 4 steps:
  - Checkout repository
  - Set up Node.js (version 21) 
  - Enable corepack
  - Install dependencies (pnpm install --frozen-lockfile)
  
  This happens 4+ times across different jobs:

   - lint job
   - type-check job
   - chromatic job
   - test job (runs 2x due to matrix strategy)
    
   ### No Dependency Caching
  No caching strategy - downloads packages fresh every time
   - Every workflow run downloads all packages from scratch
   - No benefit from previous runs, even with identical pnpm-lock.yaml


- **After**: Dependencies installed once in setup job, cached and reused

This optimization maintains all existing CI functionality while
significantly improving pipeline efficiency.
An example workflow run has been dispatched here:
https://github.com/souhailaS/AutoGPT/actions/workflows/platform-frontend-ci.yml


## Additional Context 
We are a team of researchers from the University of Zurich
(https://www.ifi.uzh.ch/en/zest.html) currently **working on automating
energy optimizations in GitHub Actions workflows**. This optimization
maintains full functionality while significantly reducing computational
overhead and energy consumption.

souhaila.serbout@uzh.ch
2025-07-02 10:02:19 +00:00
seer-by-sentry[bot]
47f503f223 feat(backend): Support aiohttp.BasicAuth in make_request (#10283)
Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10284

### Changes 🏗️

- Allows passing an `aiohttp.BasicAuth` object directly to the `auth`
parameter of the `make_request` function.
- Converts tuple-based auth credentials to `aiohttp.BasicAuth` objects
before making the request.

Fixes
[AUTOGPT-SERVER-4AX](https://sentry.io/organizations/significant-gravitas/issues/6709824432/).
The issue was that aiohttp's `ClientSession.request` received a plain
tuple for `auth` instead of an `aiohttp.BasicAuth` object, causing the
OAuth2 token exchange to fail.
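
A hedged, minimal sketch of the tuple-to-`BasicAuth` normalization described
above (the real `make_request` has more parameters; this only illustrates the
conversion):

```python
import aiohttp


async def make_request(
    url: str,
    auth: aiohttp.BasicAuth | tuple[str, str] | None = None,
) -> str:
    # Accept either a ready-made BasicAuth or a (login, password) tuple;
    # aiohttp.ClientSession.request only understands BasicAuth, so convert.
    if isinstance(auth, tuple):
        auth = aiohttp.BasicAuth(*auth)
    async with aiohttp.ClientSession() as session:
        async with session.request("GET", url, auth=auth) as resp:
            return await resp.text()
```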

This fix was generated by Seer in Sentry, triggered by Bently. 👁️ Run
ID: 185767

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6709824432/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
2025-07-01 13:09:54 +00:00
Bently
22d58367ec dx(platform): Add initial setup scripts for linux and windows (#9912)
This PR adds two setup scripts that set up AutoGPT fully: a Windows .bat
script and a Linux/macOS .sh script. For now they are placed in a new
folder called "Installer".

### Note: the installers are meant to be run outside of the AutoGPT repo
folder (e.g. on the Desktop or in a new empty folder), because they clone
the repo into the current directory.

I had to add ``cross-env`` via ``pnpm add cross-env``, because on Windows
the env is set differently in the ``package.json`` build section:
``"build": "cross-env pnpm run generate:api-client &&
SKIP_STORYBOOK_TESTS=true next build"``

Once fully set up, I plan to make it so the installers can be run with the
following commands:

Linux/Mac
```bash
curl -fsSL https://setup.agpt.co/install.sh -o install.sh && bash install.sh
```

Windows cmd/powershell
```bash
powershell -c "iwr https://setup.agpt.co/install.bat -o install.bat; ./install.bat"
```

Currently the commands above don't work, but they will later on!

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have tested the Linux ``install.sh`` on an Ubuntu system and it
set up the platform fully.
- [x] I have tested the Windows ``install.bat`` on my Windows system and
it set up the platform fully.
- [x] I have tested on both OSes with missing prerequisites to check that
the scripts show the expected errors, and they do
2025-07-01 13:09:38 +00:00
dependabot[bot]
d076d0175f chore(frontend/deps-dev): Bump the development-dependencies group in /autogpt_platform/frontend with 9 updates (#10277)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-07-01 06:30:05 +00:00
Ubbe
a33d58dd33 chore(frontend): add generated files/queries to Git (#10281)
## Changes 🏗️

We want to make running the AutoGPT Front-end as easy as possible. For
that, you should be able to run it with the least amount of commands.

We recently added generated queries and types on the Front-end from the
Back-end OpenAPI schema, to make development easier and catch bugs
earlier. However, with the current setup, developers are forced to run
`pnpm generate:api-all` with the Back-end running, which is annoying.

After this PR, the Front-end can be run with just `pnpm i && pnpm dev`.

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the Front-end with just `pnpm dev` and it works

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-07-01 06:01:05 +00:00
Ubbe
254bb6236f fix(frontend): use NEXT_PUBLIC_AGPT_SERVER_URL on proxy route (#10280)
### Changes 🏗️

A new undocumented env var, `NEXT_PUBLIC_AGPT_SERVER_BASE_URL`, was
added to the proxy route for it to work with the new `react-query`
mutator.

I removed it and used the existing `NEXT_PUBLIC_AGPT_SERVER_URL`, so we
have fewer environment variables to manage ( _and this one is already
added to all environments_ ).

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the server locally
  - [x] All pages ( library, marketplace, builder, settings ) work
2025-07-01 05:14:18 +00:00
587 changed files with 39663 additions and 6586 deletions

View File

@@ -18,11 +18,14 @@ defaults:
working-directory: autogpt_platform/frontend
jobs:
lint:
setup:
runs-on: ubuntu-latest
outputs:
cache-key: ${{ steps.cache-key.outputs.key }}
steps:
- uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
@@ -32,6 +35,45 @@ jobs:
- name: Enable corepack
run: corepack enable
- name: Generate cache key
id: cache-key
run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('**/pnpm-lock.yaml') }}" >> $GITHUB_OUTPUT
- name: Cache dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ steps.cache-key.outputs.key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
lint:
runs-on: ubuntu-latest
needs: setup
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
@@ -40,9 +82,11 @@ jobs:
type-check:
runs-on: ubuntu-latest
needs: setup
steps:
- uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
@@ -52,21 +96,29 @@ jobs:
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Generate API client
run: pnpm generate:api-client
- name: Run tsc check
run: pnpm type-check
chromatic:
runs-on: ubuntu-latest
needs: setup
# Only run on dev branch pushes or PRs targeting dev
if: github.ref == 'refs/heads/dev' || github.base_ref == 'dev'
steps:
- uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -78,6 +130,14 @@ jobs:
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
@@ -88,9 +148,11 @@ jobs:
onlyChanged: true
workingDir: autogpt_platform/frontend
token: ${{ secrets.GITHUB_TOKEN }}
exitOnceUploaded: true
test:
runs-on: ubuntu-latest
needs: setup
strategy:
fail-fast: false
matrix:
@@ -128,6 +190,14 @@ jobs:
run: |
docker compose -f ../docker-compose.yml up -d
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
@@ -143,6 +213,8 @@ jobs:
- name: Run Playwright tests
run: pnpm test:no-build --project=${{ matrix.browser }}
env:
BROWSER_TYPE: ${{ matrix.browser }}
- name: Print Final Docker Compose logs
if: always()

3
.gitignore vendored
View File

@@ -177,6 +177,3 @@ autogpt_platform/backend/settings.py
*.ign.*
.test-contents
.claude/settings.local.json
# Auto generated client
autogpt_platform/frontend/src/app/api/__generated__

View File

@@ -1,8 +1,7 @@
# AutoGPT: Build, Deploy, and Run AI Agents
[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt) &ensp;
[![Discord Follow](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fdiscord.com%2Fapi%2Finvites%2Fautogpt%3Fwith_counts%3Dtrue&query=%24.approximate_member_count&label=total%20members&logo=discord&logoColor=white&color=7289da)](https://discord.gg/autogpt) &ensp;
[![Twitter Follow](https://img.shields.io/twitter/follow/Auto_GPT?style=social)](https://twitter.com/Auto_GPT) &ensp;
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
**AutoGPT** is a powerful platform that allows you to create, deploy, and manage continuous AI agents that automate complex workflows.

View File

@@ -1,7 +1,6 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Repository Overview
AutoGPT Platform is a monorepo containing:
@@ -121,6 +120,7 @@ Key models (defined in `/backend/schema.prisma`):
3. Define input/output schemas
4. Implement `run` method
5. Register in block registry
6. Generate the block uuid using `uuid.uuid4()`
**Modifying the API:**
1. Update route in `/backend/backend/server/routers/`
@@ -143,4 +143,4 @@ Key models (defined in `/backend/schema.prisma`):
- Cacheable paths include: static assets (`/static/*`, `/_next/static/*`), health checks, public store pages, documentation
- Prevents sensitive data (auth tokens, API keys, user data) from being cached by browsers/proxies
- To allow caching for a new endpoint, add it to `CACHEABLE_PATHS` in the middleware
- Applied to both main API server and external API applications
- Applied to both main API server and external API applications

View File

@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.1.1 and should not be changed by hand.
[[package]]
name = "aiohappyeyeballs"
@@ -177,7 +177,7 @@ files = [
{file = "async-timeout-4.0.3.tar.gz", hash = "sha256:4640d96be84d82d02ed59ea2b7105a0f7b33abe8703703cd0ab0bf87c427522f"},
{file = "async_timeout-4.0.3-py3-none-any.whl", hash = "sha256:7405140ff1230c310e51dc27b3145b9092d659ce68ff733fb0cefe3ee42be028"},
]
markers = {main = "python_version == \"3.10\"", dev = "python_full_version < \"3.11.3\""}
markers = {main = "python_version < \"3.11\"", dev = "python_full_version < \"3.11.3\""}
[[package]]
name = "attrs"
@@ -390,7 +390,7 @@ description = "Backport of PEP 654 (exception groups)"
optional = false
python-versions = ">=3.7"
groups = ["main"]
markers = "python_version == \"3.10\""
markers = "python_version < \"3.11\""
files = [
{file = "exceptiongroup-1.2.2-py3-none-any.whl", hash = "sha256:3111b9d131c238bec2f8f516e123e14ba243563fb135d3fe885990585aa7795b"},
{file = "exceptiongroup-1.2.2.tar.gz", hash = "sha256:47c2edf7c6738fafb49fd34290706d1a1a2f4d1c6df275526b62cbb4aa5393cc"},
@@ -1667,30 +1667,30 @@ pyasn1 = ">=0.1.3"
[[package]]
name = "ruff"
version = "0.11.10"
version = "0.12.2"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.11.10-py3-none-linux_armv6l.whl", hash = "sha256:859a7bfa7bc8888abbea31ef8a2b411714e6a80f0d173c2a82f9041ed6b50f58"},
{file = "ruff-0.11.10-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:968220a57e09ea5e4fd48ed1c646419961a0570727c7e069842edd018ee8afed"},
{file = "ruff-0.11.10-py3-none-macosx_11_0_arm64.whl", hash = "sha256:1067245bad978e7aa7b22f67113ecc6eb241dca0d9b696144256c3a879663bca"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f4854fd09c7aed5b1590e996a81aeff0c9ff51378b084eb5a0b9cd9518e6cff2"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8b4564e9f99168c0f9195a0fd5fa5928004b33b377137f978055e40008a082c5"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b6a9cc5b62c03cc1fea0044ed8576379dbaf751d5503d718c973d5418483641"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:607ecbb6f03e44c9e0a93aedacb17b4eb4f3563d00e8b474298a201622677947"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7b3a522fa389402cd2137df9ddefe848f727250535c70dafa840badffb56b7a4"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2f071b0deed7e9245d5820dac235cbdd4ef99d7b12ff04c330a241ad3534319f"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4a60e3a0a617eafba1f2e4186d827759d65348fa53708ca547e384db28406a0b"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:da8ec977eaa4b7bf75470fb575bea2cb41a0e07c7ea9d5a0a97d13dbca697bf2"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:ddf8967e08227d1bd95cc0851ef80d2ad9c7c0c5aab1eba31db49cf0a7b99523"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_i686.whl", hash = "sha256:5a94acf798a82db188f6f36575d80609072b032105d114b0f98661e1679c9125"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:3afead355f1d16d95630df28d4ba17fb2cb9c8dfac8d21ced14984121f639bad"},
{file = "ruff-0.11.10-py3-none-win32.whl", hash = "sha256:dc061a98d32a97211af7e7f3fa1d4ca2fcf919fb96c28f39551f35fc55bdbc19"},
{file = "ruff-0.11.10-py3-none-win_amd64.whl", hash = "sha256:5cc725fbb4d25b0f185cb42df07ab6b76c4489b4bfb740a175f3a59c70e8a224"},
{file = "ruff-0.11.10-py3-none-win_arm64.whl", hash = "sha256:ef69637b35fb8b210743926778d0e45e1bffa850a7c61e428c6b971549b5f5d1"},
{file = "ruff-0.11.10.tar.gz", hash = "sha256:d522fb204b4959909ecac47da02830daec102eeb100fb50ea9554818d47a5fa6"},
{file = "ruff-0.12.2-py3-none-linux_armv6l.whl", hash = "sha256:093ea2b221df1d2b8e7ad92fc6ffdca40a2cb10d8564477a987b44fd4008a7be"},
{file = "ruff-0.12.2-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:09e4cf27cc10f96b1708100fa851e0daf21767e9709e1649175355280e0d950e"},
{file = "ruff-0.12.2-py3-none-macosx_11_0_arm64.whl", hash = "sha256:8ae64755b22f4ff85e9c52d1f82644abd0b6b6b6deedceb74bd71f35c24044cc"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3eb3a6b2db4d6e2c77e682f0b988d4d61aff06860158fdb413118ca133d57922"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:73448de992d05517170fc37169cbca857dfeaeaa8c2b9be494d7bcb0d36c8f4b"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3b8b94317cbc2ae4a2771af641739f933934b03555e51515e6e021c64441532d"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:45fc42c3bf1d30d2008023a0a9a0cfb06bf9835b147f11fe0679f21ae86d34b1"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ce48f675c394c37e958bf229fb5c1e843e20945a6d962cf3ea20b7a107dcd9f4"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:793d8859445ea47591272021a81391350205a4af65a9392401f418a95dfb75c9"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6932323db80484dda89153da3d8e58164d01d6da86857c79f1961934354992da"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:6aa7e623a3a11538108f61e859ebf016c4f14a7e6e4eba1980190cacb57714ce"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:2a4a20aeed74671b2def096bdf2eac610c7d8ffcbf4fb0e627c06947a1d7078d"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_i686.whl", hash = "sha256:71a4c550195612f486c9d1f2b045a600aeba851b298c667807ae933478fcef04"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:4987b8f4ceadf597c927beee65a5eaf994c6e2b631df963f86d8ad1bdea99342"},
{file = "ruff-0.12.2-py3-none-win32.whl", hash = "sha256:369ffb69b70cd55b6c3fc453b9492d98aed98062db9fec828cdfd069555f5f1a"},
{file = "ruff-0.12.2-py3-none-win_amd64.whl", hash = "sha256:dca8a3b6d6dc9810ed8f328d406516bf4d660c00caeaef36eb831cf4871b0639"},
{file = "ruff-0.12.2-py3-none-win_arm64.whl", hash = "sha256:48d6c6bfb4761df68bc05ae630e24f506755e702d4fb08f08460be778c7ccb12"},
{file = "ruff-0.12.2.tar.gz", hash = "sha256:d7b4f55cd6f325cb7621244f19c873c565a08aff5a4ba9c69aa7355f3f7afd3e"},
]
[[package]]
@@ -1823,7 +1823,7 @@ description = "A lil' TOML parser"
optional = false
python-versions = ">=3.8"
groups = ["main"]
markers = "python_version == \"3.10\""
markers = "python_version < \"3.11\""
files = [
{file = "tomli-2.1.0-py3-none-any.whl", hash = "sha256:a5c57c3d1c56f5ccdf89f6523458f60ef716e210fc47c4cfb188c5ba473e0391"},
{file = "tomli-2.1.0.tar.gz", hash = "sha256:3f646cae2aec94e17d04973e4249548320197cfabdf130015d023de4b74d8ab8"},
@@ -2176,4 +2176,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<4.0"
content-hash = "d92143928a88ca3a56ac200c335910eafac938940022fed8bd0d17c95040b54f"
content-hash = "574057127b05f28c2ae39f7b11aa0d7c52f857655e9223e23a27c9989b2ac10f"

View File

@@ -23,7 +23,7 @@ uvicorn = "^0.34.3"
[tool.poetry.group.dev.dependencies]
redis = "^5.2.1"
ruff = "^0.11.10"
ruff = "^0.12.2"
[build-system]
requires = ["poetry-core"]

View File

@@ -55,9 +55,9 @@ RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
## GCS bucket is required for marketplace and library functionality
MEDIA_GCS_BUCKET_NAME=
## For local development, you may need to set NEXT_PUBLIC_FRONTEND_BASE_URL for the OAuth flow
## For local development, you may need to set FRONTEND_BASE_URL for the OAuth flow
## for integrations to work. Defaults to the value of PLATFORM_BASE_URL if not set.
# NEXT_PUBLIC_FRONTEND_BASE_URL=http://localhost:3000
# FRONTEND_BASE_URL=http://localhost:3000
## PLATFORM_BASE_URL must be set to a *publicly accessible* URL pointing to your backend
## to use the platform's webhook-related functionality.
@@ -199,9 +199,18 @@ ZEROBOUNCE_API_KEY=
## ===== OPTIONAL API KEYS END ===== ##
# Block Error Rate Monitoring
BLOCK_ERROR_RATE_THRESHOLD=0.5
BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS=86400
# Logging Configuration
LOG_LEVEL=INFO
ENABLE_CLOUD_LOGGING=false
ENABLE_FILE_LOGGING=false
# Use to manually set the log directory
# LOG_DIR=./logs
# Example Blocks Configuration
# Set to true to enable example blocks in development
# These blocks are disabled by default in production
ENABLE_EXAMPLE_BLOCKS=false

View File

@@ -0,0 +1,150 @@
# Test Data Scripts
This directory contains scripts for creating and updating test data in the AutoGPT Platform database, specifically designed to test the materialized views for the store functionality.
## Scripts
### test_data_creator.py
Creates a comprehensive set of test data including:
- Users with profiles
- Agent graphs, nodes, and executions
- Store listings with multiple versions
- Reviews and ratings
- Library agents
- Integration webhooks
- Onboarding data
- Credit transactions
**Image/Video Domains Used:**
- Images: `picsum.photos` (for all image URLs)
- Videos: `youtube.com` (for store listing videos)
### test_data_updater.py
Updates existing test data to simulate real-world changes:
- Adds new agent graph executions
- Creates new store listing reviews
- Updates store listing versions
- Adds credit transactions
- Refreshes materialized views
### check_db.py
Tests and verifies materialized views functionality:
- Checks pg_cron job status (for automatic refresh)
- Displays current materialized view counts
- Adds test data (executions and reviews)
- Creates store listings if none exist
- Manually refreshes materialized views
- Compares before/after counts to verify updates
- Provides a summary of test results
## Materialized Views
The scripts test three key database views:
1. **mv_agent_run_counts**: Tracks execution counts by agent
2. **mv_review_stats**: Tracks review statistics (count, average rating) by store listing
3. **StoreAgent**: A view that combines store listing data with execution counts and ratings for display
The materialized views (mv_agent_run_counts and mv_review_stats) are automatically refreshed every 15 minutes via pg_cron, or can be manually refreshed using the `refresh_store_materialized_views()` function.
## Usage
### Prerequisites
1. Ensure the database is running:
```bash
docker compose up -d
# or for test database:
docker compose -f docker-compose.test.yaml --env-file ../.env up -d
```
2. Run database migrations:
```bash
poetry run prisma migrate deploy
```
### Running the Scripts
#### Option 1: Use the helper script (from backend directory)
```bash
poetry run python run_test_data.py
```
#### Option 2: Run individually
```bash
# From backend/test directory:
# Create initial test data
poetry run python test_data_creator.py
# Update data to test materialized view changes
poetry run python test_data_updater.py
# From backend directory:
# Test materialized views functionality
poetry run python check_db.py
# Check store data status
poetry run python check_store_data.py
```
#### Option 3: Use the shell script (from backend directory)
```bash
./run_test_data_scripts.sh
```
### Manual Materialized View Refresh
To manually refresh the materialized views:
```sql
SELECT refresh_store_materialized_views();
```
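The scripts can also trigger this refresh from Python; a minimal sketch using the Prisma Python client (the exact call in the scripts may differ) looks like:
```python
import asyncio
from prisma import Prisma


async def refresh_views() -> None:
    db = Prisma()
    await db.connect()
    try:
        # Runs the same SQL function used for the manual refresh above.
        await db.execute_raw("SELECT refresh_store_materialized_views();")
    finally:
        await db.disconnect()


if __name__ == "__main__":
    asyncio.run(refresh_views())
```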
## Configuration
The scripts use the database configuration from your `.env` file:
- `DATABASE_URL`: PostgreSQL connection string
- Database should have the platform schema
## Data Generation Limits
Configured in `test_data_creator.py`:
- 100 users
- 100 agent blocks
- 1-5 graphs per user
- 2-5 nodes per graph
- 1-5 presets per user
- 1-10 library agents per user
- 1-20 executions per graph
- 1-5 reviews per store listing version
## Notes
- All image URLs use `picsum.photos` for consistency with Next.js image configuration
- The scripts create realistic relationships between entities
- Materialized views are refreshed at the end of each script
- Data is designed to test both happy paths and edge cases
## Troubleshooting
### Reviews and StoreAgent view showing 0
If `check_db.py` shows that reviews remain at 0 and StoreAgent view shows 0 store agents:
1. **No store listings exist**: The script will automatically create test store listings if none exist
2. **No approved versions**: Store listings need approved versions to appear in the StoreAgent view
3. **Check with `check_store_data.py`**: This script provides detailed information about:
- Total store listings
- Store listing versions by status
- Existing reviews
- StoreAgent view contents
- Agent graph executions
### pg_cron not installed
The warning "pg_cron extension is not installed" is normal in local development environments. The materialized views can still be refreshed manually using the `refresh_store_materialized_views()` function, which all scripts do automatically.
### Common Issues
- **Type errors with None values**: Fixed in the latest version of check_db.py by using `or 0` for nullable numeric fields
- **Missing relations**: Ensure you're using the correct field names (e.g., `StoreListing` not `storeListing` in includes)
- **Column name mismatches**: The database uses camelCase for column names (e.g., `agentGraphId` not `agent_graph_id`)

View File

@@ -14,14 +14,27 @@ T = TypeVar("T")
@functools.cache
def load_all_blocks() -> dict[str, type["Block"]]:
from backend.data.block import Block
from backend.util.settings import Config
# Check if example blocks should be loaded from settings
config = Config()
load_examples = config.enable_example_blocks
# Dynamically load all modules under backend.blocks
current_dir = Path(__file__).parent
modules = [
str(f.relative_to(current_dir))[:-3].replace(os.path.sep, ".")
for f in current_dir.rglob("*.py")
if f.is_file() and f.name != "__init__.py" and not f.name.startswith("test_")
]
modules = []
for f in current_dir.rglob("*.py"):
if not f.is_file() or f.name == "__init__.py" or f.name.startswith("test_"):
continue
# Skip examples directory if not enabled
relative_path = f.relative_to(current_dir)
if not load_examples and relative_path.parts[0] == "examples":
continue
module_path = str(relative_path)[:-3].replace(os.path.sep, ".")
modules.append(module_path)
for module in modules:
if not re.match("^[a-z0-9_.]+$", module):
raise ValueError(

View File

@@ -14,10 +14,10 @@ from backend.data.block import (
get_block,
)
from backend.data.execution import ExecutionStatus
from backend.data.model import SchemaField
from backend.util import json
from backend.data.model import NodeExecutionStats, SchemaField
from backend.util import json, retry
logger = logging.getLogger(__name__)
_logger = logging.getLogger(__name__)
class AgentExecutorBlock(Block):
@@ -77,27 +77,42 @@ class AgentExecutorBlock(Block):
use_db_query=False,
)
logger = execution_utils.LogMetadata(
logger=_logger,
user_id=input_data.user_id,
graph_eid=graph_exec.id,
graph_id=input_data.graph_id,
node_eid="*",
node_id="*",
block_name=self.name,
)
try:
async for name, data in self._run(
graph_id=input_data.graph_id,
graph_version=input_data.graph_version,
graph_exec_id=graph_exec.id,
user_id=input_data.user_id,
logger=logger,
):
yield name, data
except asyncio.CancelledError:
logger.warning(
f"Execution of graph {input_data.graph_id} version {input_data.graph_version} was cancelled."
await self._stop(
graph_exec_id=graph_exec.id,
user_id=input_data.user_id,
logger=logger,
)
await execution_utils.stop_graph_execution(
graph_exec.id, use_db_query=False
logger.warning(
f"Execution of graph {input_data.graph_id}v{input_data.graph_version} was cancelled."
)
except Exception as e:
logger.error(
f"Execution of graph {input_data.graph_id} version {input_data.graph_version} failed: {e}, stopping execution."
await self._stop(
graph_exec_id=graph_exec.id,
user_id=input_data.user_id,
logger=logger,
)
await execution_utils.stop_graph_execution(
graph_exec.id, use_db_query=False
logger.error(
f"Execution of graph {input_data.graph_id}v{input_data.graph_version} failed: {e}, execution is stopped."
)
raise
@@ -107,6 +122,7 @@ class AgentExecutorBlock(Block):
graph_version: int,
graph_exec_id: str,
user_id: str,
logger,
) -> BlockOutput:
from backend.data.execution import ExecutionEventType
@@ -135,6 +151,12 @@ class AgentExecutorBlock(Block):
if event.event_type == ExecutionEventType.GRAPH_EXEC_UPDATE:
# If the graph execution is COMPLETED, TERMINATED, or FAILED,
# we can stop listening for further events.
self.merge_stats(
NodeExecutionStats(
extra_cost=event.stats.cost if event.stats else 0,
extra_steps=event.stats.node_exec_count if event.stats else 0,
)
)
break
logger.debug(
@@ -159,3 +181,25 @@ class AgentExecutorBlock(Block):
f"Execution {log_id} produced {output_name}: {output_data}"
)
yield output_name, output_data
@retry.func_retry
async def _stop(
self,
graph_exec_id: str,
user_id: str,
logger,
) -> None:
from backend.executor import utils as execution_utils
log_id = f"Graph exec-id: {graph_exec_id}"
logger.info(f"Stopping execution of {log_id}")
try:
await execution_utils.stop_graph_execution(
graph_exec_id=graph_exec_id,
user_id=user_id,
use_db_query=False,
)
logger.info(f"Execution {log_id} stopped successfully.")
except Exception as e:
logger.error(f"Failed to stop execution {log_id}: {e}")

View File

@@ -1,12 +1,9 @@
import enum
from typing import Any, List
from typing import Any
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockType
from backend.data.model import SchemaField
from backend.util import json
from backend.util.file import store_media_file
from backend.util.mock import MockObject
from backend.util.prompt import estimate_token_count_str
from backend.util.type import MediaFileType, convert
@@ -121,266 +118,6 @@ class PrintToConsoleBlock(Block):
yield "status", "printed"
class FindInDictionaryBlock(Block):
class Input(BlockSchema):
input: Any = SchemaField(description="Dictionary to lookup from")
key: str | int = SchemaField(description="Key to lookup in the dictionary")
class Output(BlockSchema):
output: Any = SchemaField(description="Value found for the given key")
missing: Any = SchemaField(
description="Value of the input that missing the key"
)
def __init__(self):
super().__init__(
id="0e50422c-6dee-4145-83d6-3a5a392f65de",
description="Lookup the given key in the input dictionary/object/list and return the value.",
input_schema=FindInDictionaryBlock.Input,
output_schema=FindInDictionaryBlock.Output,
test_input=[
{"input": {"apple": 1, "banana": 2, "cherry": 3}, "key": "banana"},
{"input": {"x": 10, "y": 20, "z": 30}, "key": "w"},
{"input": [1, 2, 3], "key": 1},
{"input": [1, 2, 3], "key": 3},
{"input": MockObject(value="!!", key="key"), "key": "key"},
{"input": [{"k1": "v1"}, {"k2": "v2"}, {"k1": "v3"}], "key": "k1"},
],
test_output=[
("output", 2),
("missing", {"x": 10, "y": 20, "z": 30}),
("output", 2),
("missing", [1, 2, 3]),
("output", "key"),
("output", ["v1", "v3"]),
],
categories={BlockCategory.BASIC},
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
obj = input_data.input
key = input_data.key
if isinstance(obj, str):
obj = json.loads(obj)
if isinstance(obj, dict) and key in obj:
yield "output", obj[key]
elif isinstance(obj, list) and isinstance(key, int) and 0 <= key < len(obj):
yield "output", obj[key]
elif isinstance(obj, list) and isinstance(key, str):
if len(obj) == 0:
yield "output", []
elif isinstance(obj[0], dict) and key in obj[0]:
yield "output", [item[key] for item in obj if key in item]
else:
yield "output", [getattr(val, key) for val in obj if hasattr(val, key)]
elif isinstance(obj, object) and isinstance(key, str) and hasattr(obj, key):
yield "output", getattr(obj, key)
else:
yield "missing", input_data.input
class AddToDictionaryBlock(Block):
class Input(BlockSchema):
dictionary: dict[Any, Any] = SchemaField(
default_factory=dict,
description="The dictionary to add the entry to. If not provided, a new dictionary will be created.",
)
key: str = SchemaField(
default="",
description="The key for the new entry.",
placeholder="new_key",
advanced=False,
)
value: Any = SchemaField(
default=None,
description="The value for the new entry.",
placeholder="new_value",
advanced=False,
)
entries: dict[Any, Any] = SchemaField(
default_factory=dict,
description="The entries to add to the dictionary. This is the batch version of the `key` and `value` fields.",
advanced=True,
)
class Output(BlockSchema):
updated_dictionary: dict = SchemaField(
description="The dictionary with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="31d1064e-7446-4693-a7d4-65e5ca1180d1",
description="Adds a new key-value pair to a dictionary. If no dictionary is provided, a new one is created.",
categories={BlockCategory.BASIC},
input_schema=AddToDictionaryBlock.Input,
output_schema=AddToDictionaryBlock.Output,
test_input=[
{
"dictionary": {"existing_key": "existing_value"},
"key": "new_key",
"value": "new_value",
},
{"key": "first_key", "value": "first_value"},
{
"dictionary": {"existing_key": "existing_value"},
"entries": {"new_key": "new_value", "first_key": "first_value"},
},
],
test_output=[
(
"updated_dictionary",
{"existing_key": "existing_value", "new_key": "new_value"},
),
("updated_dictionary", {"first_key": "first_value"}),
(
"updated_dictionary",
{
"existing_key": "existing_value",
"new_key": "new_value",
"first_key": "first_value",
},
),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
updated_dict = input_data.dictionary.copy()
if input_data.value is not None and input_data.key:
updated_dict[input_data.key] = input_data.value
for key, value in input_data.entries.items():
updated_dict[key] = value
yield "updated_dictionary", updated_dict
class AddToListBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(
default_factory=list,
advanced=False,
description="The list to add the entry to. If not provided, a new list will be created.",
)
entry: Any = SchemaField(
description="The entry to add to the list. Can be of any type (string, int, dict, etc.).",
advanced=False,
default=None,
)
entries: List[Any] = SchemaField(
default_factory=lambda: list(),
description="The entries to add to the list. This is the batch version of the `entry` field.",
advanced=True,
)
position: int | None = SchemaField(
default=None,
description="The position to insert the new entry. If not provided, the entry will be appended to the end of the list.",
)
class Output(BlockSchema):
updated_list: List[Any] = SchemaField(
description="The list with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="aeb08fc1-2fc1-4141-bc8e-f758f183a822",
description="Adds a new entry to a list. The entry can be of any type. If no list is provided, a new one is created.",
categories={BlockCategory.BASIC},
input_schema=AddToListBlock.Input,
output_schema=AddToListBlock.Output,
test_input=[
{
"list": [1, "string", {"existing_key": "existing_value"}],
"entry": {"new_key": "new_value"},
"position": 1,
},
{"entry": "first_entry"},
{"list": ["a", "b", "c"], "entry": "d"},
{
"entry": "e",
"entries": ["f", "g"],
"list": ["a", "b"],
"position": 1,
},
],
test_output=[
(
"updated_list",
[
1,
{"new_key": "new_value"},
"string",
{"existing_key": "existing_value"},
],
),
("updated_list", ["first_entry"]),
("updated_list", ["a", "b", "c", "d"]),
("updated_list", ["a", "f", "g", "e", "b"]),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
entries_added = input_data.entries.copy()
if input_data.entry:
entries_added.append(input_data.entry)
updated_list = input_data.list.copy()
if (pos := input_data.position) is not None:
updated_list = updated_list[:pos] + entries_added + updated_list[pos:]
else:
updated_list += entries_added
yield "updated_list", updated_list
class FindInListBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to search in.")
value: Any = SchemaField(description="The value to search for.")
class Output(BlockSchema):
index: int = SchemaField(description="The index of the value in the list.")
found: bool = SchemaField(
description="Whether the value was found in the list."
)
not_found_value: Any = SchemaField(
description="The value that was not found in the list."
)
def __init__(self):
super().__init__(
id="5e2c6d0a-1e37-489f-b1d0-8e1812b23333",
description="Finds the index of the value in the list.",
categories={BlockCategory.BASIC},
input_schema=FindInListBlock.Input,
output_schema=FindInListBlock.Output,
test_input=[
{"list": [1, 2, 3, 4, 5], "value": 3},
{"list": [1, 2, 3, 4, 5], "value": 6},
],
test_output=[
("index", 2),
("found", True),
("found", False),
("not_found_value", 6),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
yield "index", input_data.list.index(input_data.value)
yield "found", True
except ValueError:
yield "found", False
yield "not_found_value", input_data.value
class NoteBlock(Block):
class Input(BlockSchema):
text: str = SchemaField(description="The text to display in the sticky note.")
@@ -406,133 +143,6 @@ class NoteBlock(Block):
yield "output", input_data.text
class CreateDictionaryBlock(Block):
class Input(BlockSchema):
values: dict[str, Any] = SchemaField(
description="Key-value pairs to create the dictionary with",
placeholder="e.g., {'name': 'Alice', 'age': 25}",
)
class Output(BlockSchema):
dictionary: dict[str, Any] = SchemaField(
description="The created dictionary containing the specified key-value pairs"
)
error: str = SchemaField(
description="Error message if dictionary creation failed"
)
def __init__(self):
super().__init__(
id="b924ddf4-de4f-4b56-9a85-358930dcbc91",
description="Creates a dictionary with the specified key-value pairs. Use this when you know all the values you want to add upfront.",
categories={BlockCategory.DATA},
input_schema=CreateDictionaryBlock.Input,
output_schema=CreateDictionaryBlock.Output,
test_input=[
{
"values": {"name": "Alice", "age": 25, "city": "New York"},
},
{
"values": {"numbers": [1, 2, 3], "active": True, "score": 95.5},
},
],
test_output=[
(
"dictionary",
{"name": "Alice", "age": 25, "city": "New York"},
),
(
"dictionary",
{"numbers": [1, 2, 3], "active": True, "score": 95.5},
),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
# The values are already validated by Pydantic schema
yield "dictionary", input_data.values
except Exception as e:
yield "error", f"Failed to create dictionary: {str(e)}"
class CreateListBlock(Block):
class Input(BlockSchema):
values: List[Any] = SchemaField(
description="A list of values to be combined into a new list.",
placeholder="e.g., ['Alice', 25, True]",
)
max_size: int | None = SchemaField(
default=None,
description="Maximum size of the list. If provided, the list will be yielded in chunks of this size.",
advanced=True,
)
max_tokens: int | None = SchemaField(
default=None,
description="Maximum tokens for the list. If provided, the list will be yielded in chunks that fit within this token limit.",
advanced=True,
)
class Output(BlockSchema):
list: List[Any] = SchemaField(
description="The created list containing the specified values."
)
error: str = SchemaField(description="Error message if list creation failed.")
def __init__(self):
super().__init__(
id="a912d5c7-6e00-4542-b2a9-8034136930e4",
description="Creates a list with the specified values. Use this when you know all the values you want to add upfront. This block can also yield the list in batches based on a maximum size or token limit.",
categories={BlockCategory.DATA},
input_schema=CreateListBlock.Input,
output_schema=CreateListBlock.Output,
test_input=[
{
"values": ["Alice", 25, True],
},
{
"values": [1, 2, 3, "four", {"key": "value"}],
},
],
test_output=[
(
"list",
["Alice", 25, True],
),
(
"list",
[1, 2, 3, "four", {"key": "value"}],
),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
chunk = []
cur_tokens, max_tokens = 0, input_data.max_tokens
cur_size, max_size = 0, input_data.max_size
for value in input_data.values:
if max_tokens:
tokens = estimate_token_count_str(value)
else:
tokens = 0
# Check if adding this value would exceed either limit
if (max_tokens and (cur_tokens + tokens > max_tokens)) or (
max_size and (cur_size + 1 > max_size)
):
yield "list", chunk
chunk = [value]
cur_size, cur_tokens = 1, tokens
else:
chunk.append(value)
cur_size, cur_tokens = cur_size + 1, cur_tokens + tokens
# Yield final chunk if any
if chunk:
yield "list", chunk
class TypeOptions(enum.Enum):
STRING = "string"
NUMBER = "number"
@@ -550,6 +160,7 @@ class UniversalTypeConverterBlock(Block):
class Output(BlockSchema):
value: Any = SchemaField(description="The converted value.")
error: str = SchemaField(description="Error message if conversion failed.")
def __init__(self):
super().__init__(

View File

@@ -0,0 +1,683 @@
from typing import Any, List
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.json import loads
from backend.util.mock import MockObject
from backend.util.prompt import estimate_token_count_str
# =============================================================================
# Dictionary Manipulation Blocks
# =============================================================================
class CreateDictionaryBlock(Block):
class Input(BlockSchema):
values: dict[str, Any] = SchemaField(
description="Key-value pairs to create the dictionary with",
placeholder="e.g., {'name': 'Alice', 'age': 25}",
)
class Output(BlockSchema):
dictionary: dict[str, Any] = SchemaField(
description="The created dictionary containing the specified key-value pairs"
)
error: str = SchemaField(
description="Error message if dictionary creation failed"
)
def __init__(self):
super().__init__(
id="b924ddf4-de4f-4b56-9a85-358930dcbc91",
description="Creates a dictionary with the specified key-value pairs. Use this when you know all the values you want to add upfront.",
categories={BlockCategory.DATA},
input_schema=CreateDictionaryBlock.Input,
output_schema=CreateDictionaryBlock.Output,
test_input=[
{
"values": {"name": "Alice", "age": 25, "city": "New York"},
},
{
"values": {"numbers": [1, 2, 3], "active": True, "score": 95.5},
},
],
test_output=[
(
"dictionary",
{"name": "Alice", "age": 25, "city": "New York"},
),
(
"dictionary",
{"numbers": [1, 2, 3], "active": True, "score": 95.5},
),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
# The values are already validated by Pydantic schema
yield "dictionary", input_data.values
except Exception as e:
yield "error", f"Failed to create dictionary: {str(e)}"
class AddToDictionaryBlock(Block):
class Input(BlockSchema):
dictionary: dict[Any, Any] = SchemaField(
default_factory=dict,
description="The dictionary to add the entry to. If not provided, a new dictionary will be created.",
)
key: str = SchemaField(
default="",
description="The key for the new entry.",
placeholder="new_key",
advanced=False,
)
value: Any = SchemaField(
default=None,
description="The value for the new entry.",
placeholder="new_value",
advanced=False,
)
entries: dict[Any, Any] = SchemaField(
default_factory=dict,
description="The entries to add to the dictionary. This is the batch version of the `key` and `value` fields.",
advanced=True,
)
class Output(BlockSchema):
updated_dictionary: dict = SchemaField(
description="The dictionary with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="31d1064e-7446-4693-a7d4-65e5ca1180d1",
description="Adds a new key-value pair to a dictionary. If no dictionary is provided, a new one is created.",
categories={BlockCategory.BASIC},
input_schema=AddToDictionaryBlock.Input,
output_schema=AddToDictionaryBlock.Output,
test_input=[
{
"dictionary": {"existing_key": "existing_value"},
"key": "new_key",
"value": "new_value",
},
{"key": "first_key", "value": "first_value"},
{
"dictionary": {"existing_key": "existing_value"},
"entries": {"new_key": "new_value", "first_key": "first_value"},
},
],
test_output=[
(
"updated_dictionary",
{"existing_key": "existing_value", "new_key": "new_value"},
),
("updated_dictionary", {"first_key": "first_value"}),
(
"updated_dictionary",
{
"existing_key": "existing_value",
"new_key": "new_value",
"first_key": "first_value",
},
),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
updated_dict = input_data.dictionary.copy()
if input_data.value is not None and input_data.key:
updated_dict[input_data.key] = input_data.value
for key, value in input_data.entries.items():
updated_dict[key] = value
yield "updated_dictionary", updated_dict
class FindInDictionaryBlock(Block):
class Input(BlockSchema):
input: Any = SchemaField(description="Dictionary to lookup from")
key: str | int = SchemaField(description="Key to lookup in the dictionary")
class Output(BlockSchema):
output: Any = SchemaField(description="Value found for the given key")
missing: Any = SchemaField(
description="Value of the input that missing the key"
)
def __init__(self):
super().__init__(
id="0e50422c-6dee-4145-83d6-3a5a392f65de",
description="Lookup the given key in the input dictionary/object/list and return the value.",
input_schema=FindInDictionaryBlock.Input,
output_schema=FindInDictionaryBlock.Output,
test_input=[
{"input": {"apple": 1, "banana": 2, "cherry": 3}, "key": "banana"},
{"input": {"x": 10, "y": 20, "z": 30}, "key": "w"},
{"input": [1, 2, 3], "key": 1},
{"input": [1, 2, 3], "key": 3},
{"input": MockObject(value="!!", key="key"), "key": "key"},
{"input": [{"k1": "v1"}, {"k2": "v2"}, {"k1": "v3"}], "key": "k1"},
],
test_output=[
("output", 2),
("missing", {"x": 10, "y": 20, "z": 30}),
("output", 2),
("missing", [1, 2, 3]),
("output", "key"),
("output", ["v1", "v3"]),
],
categories={BlockCategory.BASIC},
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
obj = input_data.input
key = input_data.key
if isinstance(obj, str):
obj = loads(obj)
if isinstance(obj, dict) and key in obj:
yield "output", obj[key]
elif isinstance(obj, list) and isinstance(key, int) and 0 <= key < len(obj):
yield "output", obj[key]
elif isinstance(obj, list) and isinstance(key, str):
if len(obj) == 0:
yield "output", []
elif isinstance(obj[0], dict) and key in obj[0]:
yield "output", [item[key] for item in obj if key in item]
else:
yield "output", [getattr(val, key) for val in obj if hasattr(val, key)]
elif isinstance(obj, object) and isinstance(key, str) and hasattr(obj, key):
yield "output", getattr(obj, key)
else:
yield "missing", input_data.input
class RemoveFromDictionaryBlock(Block):
class Input(BlockSchema):
dictionary: dict[Any, Any] = SchemaField(
description="The dictionary to modify."
)
key: str | int = SchemaField(description="Key to remove from the dictionary.")
return_value: bool = SchemaField(
default=False, description="Whether to return the removed value."
)
class Output(BlockSchema):
updated_dictionary: dict[Any, Any] = SchemaField(
description="The dictionary after removal."
)
removed_value: Any = SchemaField(description="The removed value if requested.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="46afe2ea-c613-43f8-95ff-6692c3ef6876",
description="Removes a key-value pair from a dictionary.",
categories={BlockCategory.BASIC},
input_schema=RemoveFromDictionaryBlock.Input,
output_schema=RemoveFromDictionaryBlock.Output,
test_input=[
{
"dictionary": {"a": 1, "b": 2, "c": 3},
"key": "b",
"return_value": True,
},
{"dictionary": {"x": "hello", "y": "world"}, "key": "x"},
],
test_output=[
("updated_dictionary", {"a": 1, "c": 3}),
("removed_value", 2),
("updated_dictionary", {"y": "world"}),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
updated_dict = input_data.dictionary.copy()
try:
removed_value = updated_dict.pop(input_data.key)
yield "updated_dictionary", updated_dict
if input_data.return_value:
yield "removed_value", removed_value
except KeyError:
yield "error", f"Key '{input_data.key}' not found in dictionary"
class ReplaceDictionaryValueBlock(Block):
class Input(BlockSchema):
dictionary: dict[Any, Any] = SchemaField(
description="The dictionary to modify."
)
key: str | int = SchemaField(description="Key to replace the value for.")
value: Any = SchemaField(description="The new value for the given key.")
class Output(BlockSchema):
updated_dictionary: dict[Any, Any] = SchemaField(
description="The dictionary after replacement."
)
old_value: Any = SchemaField(description="The value that was replaced.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="27e31876-18b6-44f3-ab97-f6226d8b3889",
description="Replaces the value for a specified key in a dictionary.",
categories={BlockCategory.BASIC},
input_schema=ReplaceDictionaryValueBlock.Input,
output_schema=ReplaceDictionaryValueBlock.Output,
test_input=[
{"dictionary": {"a": 1, "b": 2, "c": 3}, "key": "b", "value": 99},
{
"dictionary": {"x": "hello", "y": "world"},
"key": "y",
"value": "universe",
},
],
test_output=[
("updated_dictionary", {"a": 1, "b": 99, "c": 3}),
("old_value", 2),
("updated_dictionary", {"x": "hello", "y": "universe"}),
("old_value", "world"),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
updated_dict = input_data.dictionary.copy()
try:
old_value = updated_dict[input_data.key]
updated_dict[input_data.key] = input_data.value
yield "updated_dictionary", updated_dict
yield "old_value", old_value
except KeyError:
yield "error", f"Key '{input_data.key}' not found in dictionary"
class DictionaryIsEmptyBlock(Block):
class Input(BlockSchema):
dictionary: dict[Any, Any] = SchemaField(description="The dictionary to check.")
class Output(BlockSchema):
is_empty: bool = SchemaField(description="True if the dictionary is empty.")
def __init__(self):
super().__init__(
id="a3cf3f64-6bb9-4cc6-9900-608a0b3359b0",
description="Checks if a dictionary is empty.",
categories={BlockCategory.BASIC},
input_schema=DictionaryIsEmptyBlock.Input,
output_schema=DictionaryIsEmptyBlock.Output,
test_input=[{"dictionary": {}}, {"dictionary": {"a": 1}}],
test_output=[("is_empty", True), ("is_empty", False)],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "is_empty", len(input_data.dictionary) == 0
# =============================================================================
# List Manipulation Blocks
# =============================================================================
class CreateListBlock(Block):
class Input(BlockSchema):
values: List[Any] = SchemaField(
description="A list of values to be combined into a new list.",
placeholder="e.g., ['Alice', 25, True]",
)
max_size: int | None = SchemaField(
default=None,
description="Maximum size of the list. If provided, the list will be yielded in chunks of this size.",
advanced=True,
)
max_tokens: int | None = SchemaField(
default=None,
description="Maximum tokens for the list. If provided, the list will be yielded in chunks that fit within this token limit.",
advanced=True,
)
class Output(BlockSchema):
list: List[Any] = SchemaField(
description="The created list containing the specified values."
)
error: str = SchemaField(description="Error message if list creation failed.")
def __init__(self):
super().__init__(
id="a912d5c7-6e00-4542-b2a9-8034136930e4",
description="Creates a list with the specified values. Use this when you know all the values you want to add upfront. This block can also yield the list in batches based on a maximum size or token limit.",
categories={BlockCategory.DATA},
input_schema=CreateListBlock.Input,
output_schema=CreateListBlock.Output,
test_input=[
{
"values": ["Alice", 25, True],
},
{
"values": [1, 2, 3, "four", {"key": "value"}],
},
],
test_output=[
(
"list",
["Alice", 25, True],
),
(
"list",
[1, 2, 3, "four", {"key": "value"}],
),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
chunk = []
cur_tokens, max_tokens = 0, input_data.max_tokens
cur_size, max_size = 0, input_data.max_size
for value in input_data.values:
if max_tokens:
tokens = estimate_token_count_str(value)
else:
tokens = 0
# Check if adding this value would exceed either limit
if (max_tokens and (cur_tokens + tokens > max_tokens)) or (
max_size and (cur_size + 1 > max_size)
):
yield "list", chunk
chunk = [value]
cur_size, cur_tokens = 1, tokens
else:
chunk.append(value)
cur_size, cur_tokens = cur_size + 1, cur_tokens + tokens
# Yield final chunk if any
if chunk or not input_data.values:
yield "list", chunk
class AddToListBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(
default_factory=list,
advanced=False,
description="The list to add the entry to. If not provided, a new list will be created.",
)
entry: Any = SchemaField(
description="The entry to add to the list. Can be of any type (string, int, dict, etc.).",
advanced=False,
default=None,
)
entries: List[Any] = SchemaField(
default_factory=lambda: list(),
description="The entries to add to the list. This is the batch version of the `entry` field.",
advanced=True,
)
position: int | None = SchemaField(
default=None,
description="The position to insert the new entry. If not provided, the entry will be appended to the end of the list.",
)
class Output(BlockSchema):
updated_list: List[Any] = SchemaField(
description="The list with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="aeb08fc1-2fc1-4141-bc8e-f758f183a822",
description="Adds a new entry to a list. The entry can be of any type. If no list is provided, a new one is created.",
categories={BlockCategory.BASIC},
input_schema=AddToListBlock.Input,
output_schema=AddToListBlock.Output,
test_input=[
{
"list": [1, "string", {"existing_key": "existing_value"}],
"entry": {"new_key": "new_value"},
"position": 1,
},
{"entry": "first_entry"},
{"list": ["a", "b", "c"], "entry": "d"},
{
"entry": "e",
"entries": ["f", "g"],
"list": ["a", "b"],
"position": 1,
},
],
test_output=[
(
"updated_list",
[
1,
{"new_key": "new_value"},
"string",
{"existing_key": "existing_value"},
],
),
("updated_list", ["first_entry"]),
("updated_list", ["a", "b", "c", "d"]),
("updated_list", ["a", "f", "g", "e", "b"]),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
entries_added = input_data.entries.copy()
if input_data.entry:
entries_added.append(input_data.entry)
updated_list = input_data.list.copy()
if (pos := input_data.position) is not None:
updated_list = updated_list[:pos] + entries_added + updated_list[pos:]
else:
updated_list += entries_added
yield "updated_list", updated_list
class FindInListBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to search in.")
value: Any = SchemaField(description="The value to search for.")
class Output(BlockSchema):
index: int = SchemaField(description="The index of the value in the list.")
found: bool = SchemaField(
description="Whether the value was found in the list."
)
not_found_value: Any = SchemaField(
description="The value that was not found in the list."
)
def __init__(self):
super().__init__(
id="5e2c6d0a-1e37-489f-b1d0-8e1812b23333",
description="Finds the index of the value in the list.",
categories={BlockCategory.BASIC},
input_schema=FindInListBlock.Input,
output_schema=FindInListBlock.Output,
test_input=[
{"list": [1, 2, 3, 4, 5], "value": 3},
{"list": [1, 2, 3, 4, 5], "value": 6},
],
test_output=[
("index", 2),
("found", True),
("found", False),
("not_found_value", 6),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
yield "index", input_data.list.index(input_data.value)
yield "found", True
except ValueError:
yield "found", False
yield "not_found_value", input_data.value
class GetListItemBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to get the item from.")
index: int = SchemaField(
description="The 0-based index of the item (supports negative indices)."
)
class Output(BlockSchema):
item: Any = SchemaField(description="The item at the specified index.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="262ca24c-1025-43cf-a578-534e23234e97",
description="Returns the element at the given index.",
categories={BlockCategory.BASIC},
input_schema=GetListItemBlock.Input,
output_schema=GetListItemBlock.Output,
test_input=[
{"list": [1, 2, 3], "index": 1},
{"list": [1, 2, 3], "index": -1},
],
test_output=[
("item", 2),
("item", 3),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
yield "item", input_data.list[input_data.index]
except IndexError:
yield "error", "Index out of range"
class RemoveFromListBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to modify.")
value: Any = SchemaField(
default=None, description="Value to remove from the list."
)
index: int | None = SchemaField(
default=None,
description="Index of the item to pop (supports negative indices).",
)
return_item: bool = SchemaField(
default=False, description="Whether to return the removed item."
)
class Output(BlockSchema):
updated_list: List[Any] = SchemaField(description="The list after removal.")
removed_item: Any = SchemaField(description="The removed item if requested.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="d93c5a93-ac7e-41c1-ae5c-ef67e6e9b826",
description="Removes an item from a list by value or index.",
categories={BlockCategory.BASIC},
input_schema=RemoveFromListBlock.Input,
output_schema=RemoveFromListBlock.Output,
test_input=[
{"list": [1, 2, 3], "index": 1, "return_item": True},
{"list": ["a", "b", "c"], "value": "b"},
],
test_output=[
("updated_list", [1, 3]),
("removed_item", 2),
("updated_list", ["a", "c"]),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
lst = input_data.list.copy()
removed = None
try:
if input_data.index is not None:
removed = lst.pop(input_data.index)
elif input_data.value is not None:
lst.remove(input_data.value)
removed = input_data.value
else:
raise ValueError("No index or value provided for removal")
except (IndexError, ValueError):
yield "error", "Index or value not found"
return
yield "updated_list", lst
if input_data.return_item:
yield "removed_item", removed
class ReplaceListItemBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to modify.")
index: int = SchemaField(
description="Index of the item to replace (supports negative indices)."
)
value: Any = SchemaField(description="The new value for the given index.")
class Output(BlockSchema):
updated_list: List[Any] = SchemaField(description="The list after replacement.")
old_item: Any = SchemaField(description="The item that was replaced.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="fbf62922-bea1-4a3d-8bac-23587f810b38",
description="Replaces an item at the specified index.",
categories={BlockCategory.BASIC},
input_schema=ReplaceListItemBlock.Input,
output_schema=ReplaceListItemBlock.Output,
test_input=[
{"list": [1, 2, 3], "index": 1, "value": 99},
{"list": ["a", "b"], "index": -1, "value": "c"},
],
test_output=[
("updated_list", [1, 99, 3]),
("old_item", 2),
("updated_list", ["a", "c"]),
("old_item", "b"),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
lst = input_data.list.copy()
try:
old = lst[input_data.index]
lst[input_data.index] = input_data.value
except IndexError:
yield "error", "Index out of range"
return
yield "updated_list", lst
yield "old_item", old
class ListIsEmptyBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to check.")
class Output(BlockSchema):
is_empty: bool = SchemaField(description="True if the list is empty.")
def __init__(self):
super().__init__(
id="896ed73b-27d0-41be-813c-c1c1dc856c03",
description="Checks if a list is empty.",
categories={BlockCategory.BASIC},
input_schema=ListIsEmptyBlock.Input,
output_schema=ListIsEmptyBlock.Output,
test_input=[{"list": []}, {"list": [1]}],
test_output=[("is_empty", True), ("is_empty", False)],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "is_empty", len(input_data.list) == 0

View File

@@ -1,32 +0,0 @@
from typing import Literal
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials, CredentialsField, CredentialsMetaInput
from backend.integrations.providers import ProviderName
ExaCredentials = APIKeyCredentials
ExaCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.EXA],
Literal["api_key"],
]
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="exa",
api_key=SecretStr("mock-exa-api-key"),
title="Mock Exa API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
def ExaCredentialsField() -> ExaCredentialsInput:
"""Creates an Exa credentials input on a block."""
return CredentialsField(description="The Exa integration requires an API Key.")

View File

@@ -0,0 +1,16 @@
"""
Shared configuration for all Exa blocks using the new SDK pattern.
"""
from backend.sdk import BlockCostType, ProviderBuilder
from ._webhook import ExaWebhookManager
# Configure the Exa provider once for all blocks
exa = (
ProviderBuilder("exa")
.with_api_key("EXA_API_KEY", "Exa API Key")
.with_webhook_manager(ExaWebhookManager)
.with_base_cost(1, BlockCostType.RUN)
.build()
)
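With the provider built once in `_config.py`, individual blocks only import `exa` and request a pre-wired credentials field from it; the per-run cost and webhook manager are attached at the provider level. A hedged sketch of the consuming side (the block body is illustrative, not part of this diff, and it only runs inside the AutoGPT Platform backend where `backend.sdk` is importable; the same `exa.credentials_field(...)` call appears in the real blocks later in this compare):

```python
# Hypothetical minimal block showing the shared-provider pattern from _config.py.
from backend.sdk import Block, BlockSchema, CredentialsMetaInput, SchemaField

from ._config import exa  # the provider built above


class ExaExampleBlock(Block):
    class Input(BlockSchema):
        # No per-block _auth module: the provider hands out the credentials field.
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="The Exa integration requires an API Key."
        )
        query: str = SchemaField(description="Example input")

    class Output(BlockSchema):
        result: str = SchemaField(description="Example output")
```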

View File

@@ -0,0 +1,134 @@
"""
Exa Webhook Manager implementation.
"""
import hashlib
import hmac
from enum import Enum
from backend.data.model import Credentials
from backend.sdk import (
APIKeyCredentials,
BaseWebhooksManager,
ProviderName,
Requests,
Webhook,
)
class ExaWebhookType(str, Enum):
"""Available webhook types for Exa."""
WEBSET = "webset"
class ExaEventType(str, Enum):
"""Available event types for Exa webhooks."""
WEBSET_CREATED = "webset.created"
WEBSET_DELETED = "webset.deleted"
WEBSET_PAUSED = "webset.paused"
WEBSET_IDLE = "webset.idle"
WEBSET_SEARCH_CREATED = "webset.search.created"
WEBSET_SEARCH_CANCELED = "webset.search.canceled"
WEBSET_SEARCH_COMPLETED = "webset.search.completed"
WEBSET_SEARCH_UPDATED = "webset.search.updated"
IMPORT_CREATED = "import.created"
IMPORT_COMPLETED = "import.completed"
IMPORT_PROCESSING = "import.processing"
WEBSET_ITEM_CREATED = "webset.item.created"
WEBSET_ITEM_ENRICHED = "webset.item.enriched"
WEBSET_EXPORT_CREATED = "webset.export.created"
WEBSET_EXPORT_COMPLETED = "webset.export.completed"
class ExaWebhookManager(BaseWebhooksManager):
"""Webhook manager for Exa API."""
PROVIDER_NAME = ProviderName("exa")
class WebhookType(str, Enum):
WEBSET = "webset"
@classmethod
async def validate_payload(cls, webhook: Webhook, request) -> tuple[dict, str]:
"""Validate incoming webhook payload and signature."""
payload = await request.json()
# Get event type from payload
event_type = payload.get("eventType", "unknown")
# Verify webhook signature if secret is available
if webhook.secret:
signature = request.headers.get("X-Exa-Signature")
if signature:
# Compute expected signature
body = await request.body()
expected_signature = hmac.new(
webhook.secret.encode(), body, hashlib.sha256
).hexdigest()
# Compare signatures
if not hmac.compare_digest(signature, expected_signature):
raise ValueError("Invalid webhook signature")
return payload, event_type
async def _register_webhook(
self,
credentials: Credentials,
webhook_type: str,
resource: str,
events: list[str],
ingress_url: str,
secret: str,
) -> tuple[str, dict]:
"""Register webhook with Exa API."""
if not isinstance(credentials, APIKeyCredentials):
raise ValueError("Exa webhooks require API key credentials")
api_key = credentials.api_key.get_secret_value()
# Create webhook via Exa API
response = await Requests().post(
"https://api.exa.ai/v0/webhooks",
headers={"x-api-key": api_key},
json={
"url": ingress_url,
"events": events,
"metadata": {
"resource": resource,
"webhook_type": webhook_type,
},
},
)
if not response.ok:
error_data = response.json()
raise Exception(f"Failed to create Exa webhook: {error_data}")
webhook_data = response.json()
# Store the secret returned by Exa
return webhook_data["id"], {
"events": events,
"resource": resource,
"exa_secret": webhook_data.get("secret"),
}
async def _deregister_webhook(
self, webhook: Webhook, credentials: Credentials
) -> None:
"""Deregister webhook from Exa API."""
if not isinstance(credentials, APIKeyCredentials):
raise ValueError("Exa webhooks require API key credentials")
api_key = credentials.api_key.get_secret_value()
# Delete webhook via Exa API
response = await Requests().delete(
f"https://api.exa.ai/v0/webhooks/{webhook.provider_webhook_id}",
headers={"x-api-key": api_key},
)
if not response.ok and response.status != 404:
error_data = response.json()
raise Exception(f"Failed to delete Exa webhook: {error_data}")

View File

@@ -0,0 +1,124 @@
from backend.sdk import (
APIKeyCredentials,
BaseModel,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import exa
class CostBreakdown(BaseModel):
keywordSearch: float
neuralSearch: float
contentText: float
contentHighlight: float
contentSummary: float
class SearchBreakdown(BaseModel):
search: float
contents: float
breakdown: CostBreakdown
class PerRequestPrices(BaseModel):
neuralSearch_1_25_results: float
neuralSearch_26_100_results: float
neuralSearch_100_plus_results: float
keywordSearch_1_100_results: float
keywordSearch_100_plus_results: float
class PerPagePrices(BaseModel):
contentText: float
contentHighlight: float
contentSummary: float
class CostDollars(BaseModel):
total: float
breakDown: list[SearchBreakdown]
perRequestPrices: PerRequestPrices
perPagePrices: PerPagePrices
class ExaAnswerBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
query: str = SchemaField(
description="The question or query to answer",
placeholder="What is the latest valuation of SpaceX?",
)
text: bool = SchemaField(
default=False,
description="If true, the response includes full text content in the search results",
advanced=True,
)
model: str = SchemaField(
default="exa",
description="The search model to use (exa or exa-pro)",
placeholder="exa",
advanced=True,
)
class Output(BlockSchema):
answer: str = SchemaField(
description="The generated answer based on search results"
)
citations: list[dict] = SchemaField(
description="Search results used to generate the answer",
default_factory=list,
)
cost_dollars: CostDollars = SchemaField(
description="Cost breakdown of the request"
)
error: str = SchemaField(
description="Error message if the request failed", default=""
)
def __init__(self):
super().__init__(
id="b79ca4cc-9d5e-47d1-9d4f-e3a2d7f28df5",
description="Get an LLM answer to a question informed by Exa search results",
categories={BlockCategory.SEARCH, BlockCategory.AI},
input_schema=ExaAnswerBlock.Input,
output_schema=ExaAnswerBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/answer"
headers = {
"Content-Type": "application/json",
"x-api-key": credentials.api_key.get_secret_value(),
}
# Build the payload
payload = {
"query": input_data.query,
"text": input_data.text,
"model": input_data.model,
}
try:
response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
yield "answer", data.get("answer", "")
yield "citations", data.get("citations", [])
yield "cost_dollars", data.get("costDollars", {})
except Exception as e:
yield "error", str(e)
yield "answer", ""
yield "citations", []
yield "cost_dollars", {}

View File

@@ -1,57 +1,39 @@
from typing import List
from pydantic import BaseModel
from backend.blocks.exa._auth import (
ExaCredentials,
ExaCredentialsField,
ExaCredentialsInput,
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
Requests,
SchemaField,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import Requests
class ContentRetrievalSettings(BaseModel):
text: dict = SchemaField(
description="Text content settings",
default={"maxCharacters": 1000, "includeHtmlTags": False},
advanced=True,
)
highlights: dict = SchemaField(
description="Highlight settings",
default={
"numSentences": 3,
"highlightsPerUrl": 3,
"query": "",
},
advanced=True,
)
summary: dict = SchemaField(
description="Summary settings",
default={"query": ""},
advanced=True,
)
from ._config import exa
from .helpers import ContentSettings
class ExaContentsBlock(Block):
class Input(BlockSchema):
credentials: ExaCredentialsInput = ExaCredentialsField()
ids: List[str] = SchemaField(
description="Array of document IDs obtained from searches",
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
contents: ContentRetrievalSettings = SchemaField(
ids: list[str] = SchemaField(
description="Array of document IDs obtained from searches"
)
contents: ContentSettings = SchemaField(
description="Content retrieval settings",
default=ContentRetrievalSettings(),
default=ContentSettings(),
advanced=True,
)
class Output(BlockSchema):
results: list = SchemaField(
description="List of document contents",
default_factory=list,
description="List of document contents", default_factory=list
)
error: str = SchemaField(
description="Error message if the request failed", default=""
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
super().__init__(
@@ -63,7 +45,7 @@ class ExaContentsBlock(Block):
)
async def run(
self, input_data: Input, *, credentials: ExaCredentials, **kwargs
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/contents"
headers = {
@@ -71,6 +53,7 @@ class ExaContentsBlock(Block):
"x-api-key": credentials.api_key.get_secret_value(),
}
# Convert ContentSettings to API format
payload = {
"ids": input_data.ids,
"text": input_data.contents.text,

View File

@@ -1,8 +1,6 @@
from typing import Optional
from pydantic import BaseModel
from backend.data.model import SchemaField
from backend.sdk import BaseModel, SchemaField
class TextSettings(BaseModel):
@@ -42,13 +40,90 @@ class SummarySettings(BaseModel):
class ContentSettings(BaseModel):
text: TextSettings = SchemaField(
default=TextSettings(),
description="Text content settings",
)
highlights: HighlightSettings = SchemaField(
default=HighlightSettings(),
description="Highlight settings",
)
summary: SummarySettings = SchemaField(
default=SummarySettings(),
description="Summary settings",
)
# Websets Models
class WebsetEntitySettings(BaseModel):
type: Optional[str] = SchemaField(
default=None,
description="Entity type (e.g., 'company', 'person')",
placeholder="company",
)
class WebsetCriterion(BaseModel):
description: str = SchemaField(
description="Description of the criterion",
placeholder="Must be based in the US",
)
success_rate: Optional[int] = SchemaField(
default=None,
description="Success rate percentage",
ge=0,
le=100,
)
class WebsetSearchConfig(BaseModel):
query: str = SchemaField(
description="Search query",
placeholder="Marketing agencies based in the US",
)
count: int = SchemaField(
default=10,
description="Number of results to return",
ge=1,
le=100,
)
entity: Optional[WebsetEntitySettings] = SchemaField(
default=None,
description="Entity settings for the search",
)
criteria: Optional[list[WebsetCriterion]] = SchemaField(
default=None,
description="Search criteria",
)
behavior: Optional[str] = SchemaField(
default="override",
description="Behavior when updating results ('override' or 'append')",
placeholder="override",
)
class EnrichmentOption(BaseModel):
label: str = SchemaField(
description="Label for the enrichment option",
placeholder="Option 1",
)
class WebsetEnrichmentConfig(BaseModel):
title: str = SchemaField(
description="Title of the enrichment",
placeholder="Company Details",
)
description: str = SchemaField(
description="Description of what this enrichment does",
placeholder="Extract company information",
)
format: str = SchemaField(
default="text",
description="Format of the enrichment result",
placeholder="text",
)
instructions: Optional[str] = SchemaField(
default=None,
description="Instructions for the enrichment",
placeholder="Extract key company metrics",
)
options: Optional[list[EnrichmentOption]] = SchemaField(
default=None,
description="Options for the enrichment",
)
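These Pydantic models are what the webset blocks serialize into request payloads with `model_dump(exclude_none=True)`, the Pydantic v2 replacement for the `.dict()` calls that the search and find-similar diffs above also switch away from. A minimal sketch of how a search config collapses to JSON, recreated with plain `pydantic.Field` so it runs outside the repo (the field set mirrors `WebsetSearchConfig` above):

```python
from typing import Optional

from pydantic import BaseModel, Field


class WebsetSearchConfigSketch(BaseModel):
    """Plain-pydantic stand-in for helpers.WebsetSearchConfig, for illustration only."""
    query: str = Field(description="Search query")
    count: int = Field(default=10, ge=1, le=100)
    entity: Optional[dict] = Field(default=None)
    criteria: Optional[list[dict]] = Field(default=None)
    behavior: Optional[str] = Field(default="override")


config = WebsetSearchConfigSketch(query="Marketing agencies based in the US", count=5)
# exclude_none=True drops the unset optional fields, matching how
# ExaCreateWebsetBlock builds its payload later in this compare.
print(config.model_dump(exclude_none=True))
# {'query': 'Marketing agencies based in the US', 'count': 5, 'behavior': 'override'}
```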

View File

@@ -1,71 +1,61 @@
from datetime import datetime
from typing import List
from backend.blocks.exa._auth import (
ExaCredentials,
ExaCredentialsField,
ExaCredentialsInput,
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
Requests,
SchemaField,
)
from backend.blocks.exa.helpers import ContentSettings
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import Requests
from ._config import exa
from .helpers import ContentSettings
class ExaSearchBlock(Block):
class Input(BlockSchema):
credentials: ExaCredentialsInput = ExaCredentialsField()
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
query: str = SchemaField(description="The search query")
use_auto_prompt: bool = SchemaField(
description="Whether to use autoprompt",
default=True,
advanced=True,
)
type: str = SchemaField(
description="Type of search",
default="",
advanced=True,
description="Whether to use autoprompt", default=True, advanced=True
)
type: str = SchemaField(description="Type of search", default="", advanced=True)
category: str = SchemaField(
description="Category to search within",
default="",
advanced=True,
description="Category to search within", default="", advanced=True
)
number_of_results: int = SchemaField(
description="Number of results to return",
default=10,
advanced=True,
description="Number of results to return", default=10, advanced=True
)
include_domains: List[str] = SchemaField(
description="Domains to include in search",
default_factory=list,
include_domains: list[str] = SchemaField(
description="Domains to include in search", default_factory=list
)
exclude_domains: List[str] = SchemaField(
exclude_domains: list[str] = SchemaField(
description="Domains to exclude from search",
default_factory=list,
advanced=True,
)
start_crawl_date: datetime = SchemaField(
description="Start date for crawled content",
description="Start date for crawled content"
)
end_crawl_date: datetime = SchemaField(
description="End date for crawled content",
description="End date for crawled content"
)
start_published_date: datetime = SchemaField(
description="Start date for published content",
description="Start date for published content"
)
end_published_date: datetime = SchemaField(
description="End date for published content",
description="End date for published content"
)
include_text: List[str] = SchemaField(
description="Text patterns to include",
default_factory=list,
advanced=True,
include_text: list[str] = SchemaField(
description="Text patterns to include", default_factory=list, advanced=True
)
exclude_text: List[str] = SchemaField(
description="Text patterns to exclude",
default_factory=list,
advanced=True,
exclude_text: list[str] = SchemaField(
description="Text patterns to exclude", default_factory=list, advanced=True
)
contents: ContentSettings = SchemaField(
description="Content retrieval settings",
@@ -75,8 +65,7 @@ class ExaSearchBlock(Block):
class Output(BlockSchema):
results: list = SchemaField(
description="List of search results",
default_factory=list,
description="List of search results", default_factory=list
)
error: str = SchemaField(
description="Error message if the request failed",
@@ -92,7 +81,7 @@ class ExaSearchBlock(Block):
)
async def run(
self, input_data: Input, *, credentials: ExaCredentials, **kwargs
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/search"
headers = {
@@ -104,7 +93,7 @@ class ExaSearchBlock(Block):
"query": input_data.query,
"useAutoprompt": input_data.use_auto_prompt,
"numResults": input_data.number_of_results,
"contents": input_data.contents.dict(),
"contents": input_data.contents.model_dump(),
}
date_field_mapping = {

View File

@@ -1,57 +1,60 @@
from datetime import datetime
from typing import Any, List
from typing import Any
from backend.blocks.exa._auth import (
ExaCredentials,
ExaCredentialsField,
ExaCredentialsInput,
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
Requests,
SchemaField,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import Requests
from ._config import exa
from .helpers import ContentSettings
class ExaFindSimilarBlock(Block):
class Input(BlockSchema):
credentials: ExaCredentialsInput = ExaCredentialsField()
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
url: str = SchemaField(
description="The url for which you would like to find similar links"
)
number_of_results: int = SchemaField(
description="Number of results to return",
default=10,
advanced=True,
description="Number of results to return", default=10, advanced=True
)
include_domains: List[str] = SchemaField(
include_domains: list[str] = SchemaField(
description="Domains to include in search",
default_factory=list,
advanced=True,
)
exclude_domains: List[str] = SchemaField(
exclude_domains: list[str] = SchemaField(
description="Domains to exclude from search",
default_factory=list,
advanced=True,
)
start_crawl_date: datetime = SchemaField(
description="Start date for crawled content",
description="Start date for crawled content"
)
end_crawl_date: datetime = SchemaField(
description="End date for crawled content",
description="End date for crawled content"
)
start_published_date: datetime = SchemaField(
description="Start date for published content",
description="Start date for published content"
)
end_published_date: datetime = SchemaField(
description="End date for published content",
description="End date for published content"
)
include_text: List[str] = SchemaField(
include_text: list[str] = SchemaField(
description="Text patterns to include (max 1 string, up to 5 words)",
default_factory=list,
advanced=True,
)
exclude_text: List[str] = SchemaField(
exclude_text: list[str] = SchemaField(
description="Text patterns to exclude (max 1 string, up to 5 words)",
default_factory=list,
advanced=True,
@@ -63,11 +66,13 @@ class ExaFindSimilarBlock(Block):
)
class Output(BlockSchema):
results: List[Any] = SchemaField(
results: list[Any] = SchemaField(
description="List of similar documents with title, URL, published date, author, and score",
default_factory=list,
)
error: str = SchemaField(description="Error message if the request failed")
error: str = SchemaField(
description="Error message if the request failed", default=""
)
def __init__(self):
super().__init__(
@@ -79,7 +84,7 @@ class ExaFindSimilarBlock(Block):
)
async def run(
self, input_data: Input, *, credentials: ExaCredentials, **kwargs
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/findSimilar"
headers = {
@@ -90,7 +95,7 @@ class ExaFindSimilarBlock(Block):
payload = {
"url": input_data.url,
"numResults": input_data.number_of_results,
"contents": input_data.contents.dict(),
"contents": input_data.contents.model_dump(),
}
optional_field_mapping = {

View File

@@ -0,0 +1,201 @@
"""
Exa Webhook Blocks
These blocks handle webhook events from Exa's API for websets and related resources.
"""
from backend.sdk import (
BaseModel,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockType,
BlockWebhookConfig,
CredentialsMetaInput,
Field,
ProviderName,
SchemaField,
)
from ._config import exa
from ._webhook import ExaEventType
class WebsetEventFilter(BaseModel):
"""Filter configuration for Exa webset events."""
webset_created: bool = Field(
default=True, description="Receive notifications when websets are created"
)
webset_deleted: bool = Field(
default=False, description="Receive notifications when websets are deleted"
)
webset_paused: bool = Field(
default=False, description="Receive notifications when websets are paused"
)
webset_idle: bool = Field(
default=False, description="Receive notifications when websets become idle"
)
search_created: bool = Field(
default=True,
description="Receive notifications when webset searches are created",
)
search_completed: bool = Field(
default=True, description="Receive notifications when webset searches complete"
)
search_canceled: bool = Field(
default=False,
description="Receive notifications when webset searches are canceled",
)
search_updated: bool = Field(
default=False,
description="Receive notifications when webset searches are updated",
)
item_created: bool = Field(
default=True, description="Receive notifications when webset items are created"
)
item_enriched: bool = Field(
default=True, description="Receive notifications when webset items are enriched"
)
export_created: bool = Field(
default=False,
description="Receive notifications when webset exports are created",
)
export_completed: bool = Field(
default=True, description="Receive notifications when webset exports complete"
)
import_created: bool = Field(
default=False, description="Receive notifications when imports are created"
)
import_completed: bool = Field(
default=True, description="Receive notifications when imports complete"
)
import_processing: bool = Field(
default=False, description="Receive notifications when imports are processing"
)
class ExaWebsetWebhookBlock(Block):
"""
Receives webhook notifications for Exa webset events.
This block allows you to monitor various events related to Exa websets,
including creation, updates, searches, and exports.
"""
class Input(BlockSchema):
credentials: CredentialsMetaInput = exa.credentials_field(
description="Exa API credentials for webhook management"
)
webhook_url: str = SchemaField(
description="URL to receive webhooks (auto-generated)",
default="",
hidden=True,
)
webset_id: str = SchemaField(
description="The webset ID to monitor (optional, monitors all if empty)",
default="",
)
event_filter: WebsetEventFilter = SchemaField(
description="Configure which events to receive", default=WebsetEventFilter()
)
payload: dict = SchemaField(
description="Webhook payload data", default={}, hidden=True
)
class Output(BlockSchema):
event_type: str = SchemaField(description="Type of event that occurred")
event_id: str = SchemaField(description="Unique identifier for this event")
webset_id: str = SchemaField(description="ID of the affected webset")
data: dict = SchemaField(description="Event-specific data")
timestamp: str = SchemaField(description="When the event occurred")
metadata: dict = SchemaField(description="Additional event metadata")
def __init__(self):
super().__init__(
id="d0204ed8-8b81-408d-8b8d-ed087a546228",
description="Receive webhook notifications for Exa webset events",
categories={BlockCategory.INPUT},
input_schema=ExaWebsetWebhookBlock.Input,
output_schema=ExaWebsetWebhookBlock.Output,
block_type=BlockType.WEBHOOK,
webhook_config=BlockWebhookConfig(
provider=ProviderName("exa"),
webhook_type="webset",
event_filter_input="event_filter",
resource_format="{webset_id}",
),
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
"""Process incoming Exa webhook payload."""
try:
payload = input_data.payload
# Extract event details
event_type = payload.get("eventType", "unknown")
event_id = payload.get("eventId", "")
# Get webset ID from payload or input
webset_id = payload.get("websetId", input_data.webset_id)
# Check if we should process this event based on filter
should_process = self._should_process_event(
event_type, input_data.event_filter
)
if not should_process:
# Skip events that don't match our filter
return
# Extract event data
event_data = payload.get("data", {})
timestamp = payload.get("occurredAt", payload.get("createdAt", ""))
metadata = payload.get("metadata", {})
yield "event_type", event_type
yield "event_id", event_id
yield "webset_id", webset_id
yield "data", event_data
yield "timestamp", timestamp
yield "metadata", metadata
except Exception as e:
# Handle errors gracefully
yield "event_type", "error"
yield "event_id", ""
yield "webset_id", input_data.webset_id
yield "data", {"error": str(e)}
yield "timestamp", ""
yield "metadata", {}
def _should_process_event(
self, event_type: str, event_filter: WebsetEventFilter
) -> bool:
"""Check if an event should be processed based on the filter."""
filter_mapping = {
ExaEventType.WEBSET_CREATED: event_filter.webset_created,
ExaEventType.WEBSET_DELETED: event_filter.webset_deleted,
ExaEventType.WEBSET_PAUSED: event_filter.webset_paused,
ExaEventType.WEBSET_IDLE: event_filter.webset_idle,
ExaEventType.WEBSET_SEARCH_CREATED: event_filter.search_created,
ExaEventType.WEBSET_SEARCH_COMPLETED: event_filter.search_completed,
ExaEventType.WEBSET_SEARCH_CANCELED: event_filter.search_canceled,
ExaEventType.WEBSET_SEARCH_UPDATED: event_filter.search_updated,
ExaEventType.WEBSET_ITEM_CREATED: event_filter.item_created,
ExaEventType.WEBSET_ITEM_ENRICHED: event_filter.item_enriched,
ExaEventType.WEBSET_EXPORT_CREATED: event_filter.export_created,
ExaEventType.WEBSET_EXPORT_COMPLETED: event_filter.export_completed,
ExaEventType.IMPORT_CREATED: event_filter.import_created,
ExaEventType.IMPORT_COMPLETED: event_filter.import_completed,
ExaEventType.IMPORT_PROCESSING: event_filter.import_processing,
}
# Try to convert string to ExaEventType enum
try:
event_type_enum = ExaEventType(event_type)
return filter_mapping.get(event_type_enum, True)
except ValueError:
# If event_type is not a valid enum value, process it by default
return True
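`_should_process_event` maps the incoming `eventType` string onto the boolean fields of `WebsetEventFilter`, and unknown event types pass through by default. A standalone sketch of that dispatch with a trimmed-down filter (field and event names taken from the block above):

```python
from enum import Enum

from pydantic import BaseModel


class EventType(str, Enum):
    WEBSET_CREATED = "webset.created"
    WEBSET_DELETED = "webset.deleted"


class EventFilter(BaseModel):
    webset_created: bool = True
    webset_deleted: bool = False


def should_process(event_type: str, event_filter: EventFilter) -> bool:
    mapping = {
        EventType.WEBSET_CREATED: event_filter.webset_created,
        EventType.WEBSET_DELETED: event_filter.webset_deleted,
    }
    try:
        return mapping.get(EventType(event_type), True)
    except ValueError:
        return True  # unknown event types are processed by default


f = EventFilter()
print(should_process("webset.created", f))  # True
print(should_process("webset.deleted", f))  # False
print(should_process("import.created", f))  # True (not in this trimmed enum)
```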

View File

@@ -0,0 +1,456 @@
from typing import Any, Optional
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import exa
from .helpers import WebsetEnrichmentConfig, WebsetSearchConfig
class ExaCreateWebsetBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
search: WebsetSearchConfig = SchemaField(
description="Initial search configuration for the Webset"
)
enrichments: Optional[list[WebsetEnrichmentConfig]] = SchemaField(
default=None,
description="Enrichments to apply to Webset items",
advanced=True,
)
external_id: Optional[str] = SchemaField(
default=None,
description="External identifier for the webset",
placeholder="my-webset-123",
advanced=True,
)
metadata: Optional[dict] = SchemaField(
default=None,
description="Key-value pairs to associate with this webset",
advanced=True,
)
class Output(BlockSchema):
webset_id: str = SchemaField(
description="The unique identifier for the created webset"
)
status: str = SchemaField(description="The status of the webset")
external_id: Optional[str] = SchemaField(
description="The external identifier for the webset", default=None
)
created_at: str = SchemaField(
description="The date and time the webset was created"
)
error: str = SchemaField(
description="Error message if the request failed", default=""
)
def __init__(self):
super().__init__(
id="0cda29ff-c549-4a19-8805-c982b7d4ec34",
description="Create a new Exa Webset for persistent web search collections",
categories={BlockCategory.SEARCH},
input_schema=ExaCreateWebsetBlock.Input,
output_schema=ExaCreateWebsetBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/websets/v0/websets"
headers = {
"Content-Type": "application/json",
"x-api-key": credentials.api_key.get_secret_value(),
}
# Build the payload
payload: dict[str, Any] = {
"search": input_data.search.model_dump(exclude_none=True),
}
# Convert enrichments to API format
if input_data.enrichments:
enrichments_data = []
for enrichment in input_data.enrichments:
enrichments_data.append(enrichment.model_dump(exclude_none=True))
payload["enrichments"] = enrichments_data
if input_data.external_id:
payload["externalId"] = input_data.external_id
if input_data.metadata:
payload["metadata"] = input_data.metadata
try:
response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
yield "webset_id", data.get("id", "")
yield "status", data.get("status", "")
yield "external_id", data.get("externalId")
yield "created_at", data.get("createdAt", "")
except Exception as e:
yield "error", str(e)
yield "webset_id", ""
yield "status", ""
yield "created_at", ""
class ExaUpdateWebsetBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset to update",
placeholder="webset-id-or-external-id",
)
metadata: Optional[dict] = SchemaField(
default=None,
description="Key-value pairs to associate with this webset (set to null to clear)",
)
class Output(BlockSchema):
webset_id: str = SchemaField(description="The unique identifier for the webset")
status: str = SchemaField(description="The status of the webset")
external_id: Optional[str] = SchemaField(
description="The external identifier for the webset", default=None
)
metadata: dict = SchemaField(
description="Updated metadata for the webset", default_factory=dict
)
updated_at: str = SchemaField(
description="The date and time the webset was updated"
)
error: str = SchemaField(
description="Error message if the request failed", default=""
)
def __init__(self):
super().__init__(
id="89ccd99a-3c2b-4fbf-9e25-0ffa398d0314",
description="Update metadata for an existing Webset",
categories={BlockCategory.SEARCH},
input_schema=ExaUpdateWebsetBlock.Input,
output_schema=ExaUpdateWebsetBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}"
headers = {
"Content-Type": "application/json",
"x-api-key": credentials.api_key.get_secret_value(),
}
# Build the payload
payload = {}
if input_data.metadata is not None:
payload["metadata"] = input_data.metadata
try:
response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
yield "webset_id", data.get("id", "")
yield "status", data.get("status", "")
yield "external_id", data.get("externalId")
yield "metadata", data.get("metadata", {})
yield "updated_at", data.get("updatedAt", "")
except Exception as e:
yield "error", str(e)
yield "webset_id", ""
yield "status", ""
yield "metadata", {}
yield "updated_at", ""
class ExaListWebsetsBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
cursor: Optional[str] = SchemaField(
default=None,
description="Cursor for pagination through results",
advanced=True,
)
limit: int = SchemaField(
default=25,
description="Number of websets to return (1-100)",
ge=1,
le=100,
advanced=True,
)
class Output(BlockSchema):
websets: list = SchemaField(description="List of websets", default_factory=list)
has_more: bool = SchemaField(
description="Whether there are more results to paginate through",
default=False,
)
next_cursor: Optional[str] = SchemaField(
description="Cursor for the next page of results", default=None
)
error: str = SchemaField(
description="Error message if the request failed", default=""
)
def __init__(self):
super().__init__(
id="1dcd8fd6-c13f-4e6f-bd4c-654428fa4757",
description="List all Websets with pagination support",
categories={BlockCategory.SEARCH},
input_schema=ExaListWebsetsBlock.Input,
output_schema=ExaListWebsetsBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/websets/v0/websets"
headers = {
"x-api-key": credentials.api_key.get_secret_value(),
}
params: dict[str, Any] = {
"limit": input_data.limit,
}
if input_data.cursor:
params["cursor"] = input_data.cursor
try:
response = await Requests().get(url, headers=headers, params=params)
data = response.json()
yield "websets", data.get("data", [])
yield "has_more", data.get("hasMore", False)
yield "next_cursor", data.get("nextCursor")
except Exception as e:
yield "error", str(e)
yield "websets", []
yield "has_more", False
class ExaGetWebsetBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset to retrieve",
placeholder="webset-id-or-external-id",
)
expand_items: bool = SchemaField(
default=False, description="Include items in the response", advanced=True
)
class Output(BlockSchema):
webset_id: str = SchemaField(description="The unique identifier for the webset")
status: str = SchemaField(description="The status of the webset")
external_id: Optional[str] = SchemaField(
description="The external identifier for the webset", default=None
)
searches: list[dict] = SchemaField(
description="The searches performed on the webset", default_factory=list
)
enrichments: list[dict] = SchemaField(
description="The enrichments applied to the webset", default_factory=list
)
monitors: list[dict] = SchemaField(
description="The monitors for the webset", default_factory=list
)
items: Optional[list[dict]] = SchemaField(
description="The items in the webset (if expand_items is true)",
default=None,
)
metadata: dict = SchemaField(
description="Key-value pairs associated with the webset",
default_factory=dict,
)
created_at: str = SchemaField(
description="The date and time the webset was created"
)
updated_at: str = SchemaField(
description="The date and time the webset was last updated"
)
error: str = SchemaField(
description="Error message if the request failed", default=""
)
def __init__(self):
super().__init__(
id="6ab8e12a-132c-41bf-b5f3-d662620fa832",
description="Retrieve a Webset by ID or external ID",
categories={BlockCategory.SEARCH},
input_schema=ExaGetWebsetBlock.Input,
output_schema=ExaGetWebsetBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}"
headers = {
"x-api-key": credentials.api_key.get_secret_value(),
}
params = {}
if input_data.expand_items:
params["expand[]"] = "items"
try:
response = await Requests().get(url, headers=headers, params=params)
data = response.json()
yield "webset_id", data.get("id", "")
yield "status", data.get("status", "")
yield "external_id", data.get("externalId")
yield "searches", data.get("searches", [])
yield "enrichments", data.get("enrichments", [])
yield "monitors", data.get("monitors", [])
yield "items", data.get("items")
yield "metadata", data.get("metadata", {})
yield "created_at", data.get("createdAt", "")
yield "updated_at", data.get("updatedAt", "")
except Exception as e:
yield "error", str(e)
yield "webset_id", ""
yield "status", ""
yield "searches", []
yield "enrichments", []
yield "monitors", []
yield "metadata", {}
yield "created_at", ""
yield "updated_at", ""
class ExaDeleteWebsetBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset to delete",
placeholder="webset-id-or-external-id",
)
class Output(BlockSchema):
webset_id: str = SchemaField(
description="The unique identifier for the deleted webset"
)
external_id: Optional[str] = SchemaField(
description="The external identifier for the deleted webset", default=None
)
status: str = SchemaField(description="The status of the deleted webset")
success: str = SchemaField(
description="Whether the deletion was successful", default="true"
)
error: str = SchemaField(
description="Error message if the request failed", default=""
)
def __init__(self):
super().__init__(
id="aa6994a2-e986-421f-8d4c-7671d3be7b7e",
description="Delete a Webset and all its items",
categories={BlockCategory.SEARCH},
input_schema=ExaDeleteWebsetBlock.Input,
output_schema=ExaDeleteWebsetBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}"
headers = {
"x-api-key": credentials.api_key.get_secret_value(),
}
try:
response = await Requests().delete(url, headers=headers)
data = response.json()
yield "webset_id", data.get("id", "")
yield "external_id", data.get("externalId")
yield "status", data.get("status", "")
yield "success", "true"
except Exception as e:
yield "error", str(e)
yield "webset_id", ""
yield "status", ""
yield "success", "false"
class ExaCancelWebsetBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
webset_id: str = SchemaField(
description="The ID or external ID of the Webset to cancel",
placeholder="webset-id-or-external-id",
)
class Output(BlockSchema):
webset_id: str = SchemaField(description="The unique identifier for the webset")
status: str = SchemaField(
description="The status of the webset after cancellation"
)
external_id: Optional[str] = SchemaField(
description="The external identifier for the webset", default=None
)
success: str = SchemaField(
description="Whether the cancellation was successful", default="true"
)
error: str = SchemaField(
description="Error message if the request failed", default=""
)
def __init__(self):
super().__init__(
id="e40a6420-1db8-47bb-b00a-0e6aecd74176",
description="Cancel all operations being performed on a Webset",
categories={BlockCategory.SEARCH},
input_schema=ExaCancelWebsetBlock.Input,
output_schema=ExaCancelWebsetBlock.Output,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}/cancel"
headers = {
"x-api-key": credentials.api_key.get_secret_value(),
}
try:
response = await Requests().post(url, headers=headers)
data = response.json()
yield "webset_id", data.get("id", "")
yield "status", data.get("status", "")
yield "external_id", data.get("externalId")
yield "success", "true"
except Exception as e:
yield "error", str(e)
yield "webset_id", ""
yield "status", ""
yield "success", "false"

View File

@@ -0,0 +1,9 @@
# Import the provider builder to ensure it's registered
from backend.sdk.registry import AutoRegistry
from .triggers import GenericWebhookTriggerBlock, generic_webhook
# Ensure the SDK registry is patched to include our webhook manager
AutoRegistry.patch_integrations()
__all__ = ["GenericWebhookTriggerBlock", "generic_webhook"]

View File

@@ -3,10 +3,7 @@ import logging
from fastapi import Request
from strenum import StrEnum
from backend.data import integrations
from backend.integrations.providers import ProviderName
from ._manual_base import ManualWebhookManagerBase
from backend.sdk import ManualWebhookManagerBase, Webhook
logger = logging.getLogger(__name__)
@@ -16,12 +13,11 @@ class GenericWebhookType(StrEnum):
class GenericWebhooksManager(ManualWebhookManagerBase):
PROVIDER_NAME = ProviderName.GENERIC_WEBHOOK
WebhookType = GenericWebhookType
@classmethod
async def validate_payload(
cls, webhook: integrations.Webhook, request: Request
cls, webhook: Webhook, request: Request
) -> tuple[dict, str]:
payload = await request.json()
event_type = GenericWebhookType.PLAIN

View File

@@ -1,13 +1,21 @@
from backend.data.block import (
from backend.sdk import (
Block,
BlockCategory,
BlockManualWebhookConfig,
BlockOutput,
BlockSchema,
ProviderBuilder,
ProviderName,
SchemaField,
)
from ._webhook import GenericWebhooksManager, GenericWebhookType
generic_webhook = (
ProviderBuilder("generic_webhook")
.with_webhook_manager(GenericWebhooksManager)
.build()
)
from backend.data.model import SchemaField
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks.generic import GenericWebhookType
class GenericWebhookTriggerBlock(Block):
@@ -36,7 +44,7 @@ class GenericWebhookTriggerBlock(Block):
input_schema=GenericWebhookTriggerBlock.Input,
output_schema=GenericWebhookTriggerBlock.Output,
webhook_config=BlockManualWebhookConfig(
provider=ProviderName.GENERIC_WEBHOOK,
provider=ProviderName(generic_webhook.name),
webhook_type=GenericWebhookType.PLAIN,
),
test_input={"constants": {"key": "value"}, "payload": self.example_payload},

View File

@@ -498,6 +498,9 @@ class GithubListIssuesBlock(Block):
issue: IssueItem = SchemaField(
title="Issue", description="Issues with their title and URL"
)
issues: list[IssueItem] = SchemaField(
description="List of issues with their title and URL"
)
error: str = SchemaField(description="Error message if listing issues failed")
def __init__(self):
@@ -513,13 +516,22 @@ class GithubListIssuesBlock(Block):
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"issues",
[
{
"title": "Issue 1",
"url": "https://github.com/owner/repo/issues/1",
}
],
),
(
"issue",
{
"title": "Issue 1",
"url": "https://github.com/owner/repo/issues/1",
},
)
),
],
test_mock={
"list_issues": lambda *args, **kwargs: [
@@ -551,10 +563,12 @@ class GithubListIssuesBlock(Block):
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
for issue in await self.list_issues(
issues = await self.list_issues(
credentials,
input_data.repo_url,
):
)
yield "issues", issues
for issue in issues:
yield "issue", issue

View File

@@ -31,7 +31,12 @@ class GithubListPullRequestsBlock(Block):
pull_request: PRItem = SchemaField(
title="Pull Request", description="PRs with their title and URL"
)
error: str = SchemaField(description="Error message if listing issues failed")
pull_requests: list[PRItem] = SchemaField(
description="List of pull requests with their title and URL"
)
error: str = SchemaField(
description="Error message if listing pull requests failed"
)
def __init__(self):
super().__init__(
@@ -46,13 +51,22 @@ class GithubListPullRequestsBlock(Block):
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"pull_requests",
[
{
"title": "Pull request 1",
"url": "https://github.com/owner/repo/pull/1",
}
],
),
(
"pull_request",
{
"title": "Pull request 1",
"url": "https://github.com/owner/repo/pull/1",
},
)
),
],
test_mock={
"list_prs": lambda *args, **kwargs: [
@@ -88,6 +102,7 @@ class GithubListPullRequestsBlock(Block):
credentials,
input_data.repo_url,
)
yield "pull_requests", pull_requests
for pr in pull_requests:
yield "pull_request", pr
@@ -460,6 +475,9 @@ class GithubListPRReviewersBlock(Block):
title="Reviewer",
description="Reviewers with their username and profile URL",
)
reviewers: list[ReviewerItem] = SchemaField(
description="List of reviewers with their username and profile URL"
)
error: str = SchemaField(
description="Error message if listing reviewers failed"
)
@@ -477,13 +495,22 @@ class GithubListPRReviewersBlock(Block):
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"reviewers",
[
{
"username": "reviewer1",
"url": "https://github.com/reviewer1",
}
],
),
(
"reviewer",
{
"username": "reviewer1",
"url": "https://github.com/reviewer1",
},
)
),
],
test_mock={
"list_reviewers": lambda *args, **kwargs: [
@@ -516,10 +543,12 @@ class GithubListPRReviewersBlock(Block):
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
for reviewer in await self.list_reviewers(
reviewers = await self.list_reviewers(
credentials,
input_data.pr_url,
):
)
yield "reviewers", reviewers
for reviewer in reviewers:
yield "reviewer", reviewer

View File

@@ -31,6 +31,9 @@ class GithubListTagsBlock(Block):
tag: TagItem = SchemaField(
title="Tag", description="Tags with their name and file tree browser URL"
)
tags: list[TagItem] = SchemaField(
description="List of tags with their name and file tree browser URL"
)
error: str = SchemaField(description="Error message if listing tags failed")
def __init__(self):
@@ -46,13 +49,22 @@ class GithubListTagsBlock(Block):
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"tags",
[
{
"name": "v1.0.0",
"url": "https://github.com/owner/repo/tree/v1.0.0",
}
],
),
(
"tag",
{
"name": "v1.0.0",
"url": "https://github.com/owner/repo/tree/v1.0.0",
},
)
),
],
test_mock={
"list_tags": lambda *args, **kwargs: [
@@ -93,6 +105,7 @@ class GithubListTagsBlock(Block):
credentials,
input_data.repo_url,
)
yield "tags", tags
for tag in tags:
yield "tag", tag
@@ -114,6 +127,9 @@ class GithubListBranchesBlock(Block):
title="Branch",
description="Branches with their name and file tree browser URL",
)
branches: list[BranchItem] = SchemaField(
description="List of branches with their name and file tree browser URL"
)
error: str = SchemaField(description="Error message if listing branches failed")
def __init__(self):
@@ -129,13 +145,22 @@ class GithubListBranchesBlock(Block):
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"branches",
[
{
"name": "main",
"url": "https://github.com/owner/repo/tree/main",
}
],
),
(
"branch",
{
"name": "main",
"url": "https://github.com/owner/repo/tree/main",
},
)
),
],
test_mock={
"list_branches": lambda *args, **kwargs: [
@@ -176,6 +201,7 @@ class GithubListBranchesBlock(Block):
credentials,
input_data.repo_url,
)
yield "branches", branches
for branch in branches:
yield "branch", branch
@@ -199,6 +225,9 @@ class GithubListDiscussionsBlock(Block):
discussion: DiscussionItem = SchemaField(
title="Discussion", description="Discussions with their title and URL"
)
discussions: list[DiscussionItem] = SchemaField(
description="List of discussions with their title and URL"
)
error: str = SchemaField(
description="Error message if listing discussions failed"
)
@@ -217,13 +246,22 @@ class GithubListDiscussionsBlock(Block):
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"discussions",
[
{
"title": "Discussion 1",
"url": "https://github.com/owner/repo/discussions/1",
}
],
),
(
"discussion",
{
"title": "Discussion 1",
"url": "https://github.com/owner/repo/discussions/1",
},
)
),
],
test_mock={
"list_discussions": lambda *args, **kwargs: [
@@ -279,6 +317,7 @@ class GithubListDiscussionsBlock(Block):
input_data.repo_url,
input_data.num_discussions,
)
yield "discussions", discussions
for discussion in discussions:
yield "discussion", discussion
@@ -300,6 +339,9 @@ class GithubListReleasesBlock(Block):
title="Release",
description="Releases with their name and file tree browser URL",
)
releases: list[ReleaseItem] = SchemaField(
description="List of releases with their name and file tree browser URL"
)
error: str = SchemaField(description="Error message if listing releases failed")
def __init__(self):
@@ -315,13 +357,22 @@ class GithubListReleasesBlock(Block):
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"releases",
[
{
"name": "v1.0.0",
"url": "https://github.com/owner/repo/releases/tag/v1.0.0",
}
],
),
(
"release",
{
"name": "v1.0.0",
"url": "https://github.com/owner/repo/releases/tag/v1.0.0",
},
)
),
],
test_mock={
"list_releases": lambda *args, **kwargs: [
@@ -357,6 +408,7 @@ class GithubListReleasesBlock(Block):
credentials,
input_data.repo_url,
)
yield "releases", releases
for release in releases:
yield "release", release
@@ -1041,6 +1093,9 @@ class GithubListStargazersBlock(Block):
title="Stargazer",
description="Stargazers with their username and profile URL",
)
stargazers: list[StargazerItem] = SchemaField(
description="List of stargazers with their username and profile URL"
)
error: str = SchemaField(
description="Error message if listing stargazers failed"
)
@@ -1058,13 +1113,22 @@ class GithubListStargazersBlock(Block):
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"stargazers",
[
{
"username": "octocat",
"url": "https://github.com/octocat",
}
],
),
(
"stargazer",
{
"username": "octocat",
"url": "https://github.com/octocat",
},
)
),
],
test_mock={
"list_stargazers": lambda *args, **kwargs: [
@@ -1104,5 +1168,6 @@ class GithubListStargazersBlock(Block):
credentials,
input_data.repo_url,
)
yield "stargazers", stargazers
for stargazer in stargazers:
yield "stargazer", stargazer

File diff suppressed because it is too large

View File

@@ -0,0 +1,14 @@
"""
Linear integration blocks for AutoGPT Platform.
"""
from .comment import LinearCreateCommentBlock
from .issues import LinearCreateIssueBlock, LinearSearchIssuesBlock
from .projects import LinearSearchProjectsBlock
__all__ = [
"LinearCreateCommentBlock",
"LinearCreateIssueBlock",
"LinearSearchIssuesBlock",
"LinearSearchProjectsBlock",
]

View File

@@ -1,16 +1,11 @@
from __future__ import annotations
import json
from typing import Any, Dict, Optional
from typing import Any, Dict, Optional, Union
from backend.blocks.linear._auth import LinearCredentials
from backend.blocks.linear.models import (
CreateCommentResponse,
CreateIssueResponse,
Issue,
Project,
)
from backend.util.request import Requests
from backend.sdk import APIKeyCredentials, OAuth2Credentials, Requests
from .models import CreateCommentResponse, CreateIssueResponse, Issue, Project
class LinearAPIException(Exception):
@@ -29,13 +24,12 @@ class LinearClient:
def __init__(
self,
credentials: LinearCredentials | None = None,
credentials: Union[OAuth2Credentials, APIKeyCredentials, None] = None,
custom_requests: Optional[Requests] = None,
):
if custom_requests:
self._requests = custom_requests
else:
headers: Dict[str, str] = {
"Content-Type": "application/json",
}
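With `credentials` widened to `Union[OAuth2Credentials, APIKeyCredentials, None]`, callers no longer depend on the old `LinearCredentials` alias. A small sketch of handing the injected credential object through (the wrapper function is invented; the blocks further down do the equivalent inside `run()`):

```python
# Sketch: LinearClient now accepts either SDK credential type.
from backend.blocks.linear._api import LinearClient
from backend.sdk import APIKeyCredentials, OAuth2Credentials


def make_client(credentials: OAuth2Credentials | APIKeyCredentials) -> LinearClient:
    # Blocks simply forward whatever credential object the platform injected.
    return LinearClient(credentials=credentials)
```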

View File

@@ -1,31 +1,19 @@
"""
Shared configuration for all Linear blocks using the new SDK pattern.
"""
import os
from enum import Enum
from typing import Literal
from pydantic import SecretStr
from backend.data.model import (
from backend.sdk import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
BlockCostType,
OAuth2Credentials,
)
from backend.integrations.providers import ProviderName
from backend.util.settings import Secrets
secrets = Secrets()
LINEAR_OAUTH_IS_CONFIGURED = bool(
secrets.linear_client_id and secrets.linear_client_secret
ProviderBuilder,
SecretStr,
)
LinearCredentials = OAuth2Credentials | APIKeyCredentials
# LinearCredentialsInput = CredentialsMetaInput[
# Literal[ProviderName.LINEAR],
# Literal["oauth2", "api_key"] if LINEAR_OAUTH_IS_CONFIGURED else Literal["oauth2"],
# ]
LinearCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.LINEAR], Literal["oauth2"]
]
from ._oauth import LinearOAuthHandler
# (required) Comma separated list of scopes:
@@ -50,21 +38,35 @@ class LinearScope(str, Enum):
ADMIN = "admin"
def LinearCredentialsField(scopes: list[LinearScope]) -> LinearCredentialsInput:
"""
Creates a Linear credentials input on a block.
# Check if Linear OAuth is configured
client_id = os.getenv("LINEAR_CLIENT_ID")
client_secret = os.getenv("LINEAR_CLIENT_SECRET")
LINEAR_OAUTH_IS_CONFIGURED = bool(client_id and client_secret)
Params:
scope: The authorization scope needed for the block to work. ([list of available scopes](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/scopes-for-oauth-apps#available-scopes))
""" # noqa
return CredentialsField(
required_scopes=set([LinearScope.READ.value]).union(
set([scope.value for scope in scopes])
),
description="The Linear integration can be used with OAuth, "
"or any API key with sufficient permissions for the blocks it is used on.",
# Build the Linear provider
builder = (
ProviderBuilder("linear")
.with_api_key(env_var_name="LINEAR_API_KEY", title="Linear API Key")
.with_base_cost(1, BlockCostType.RUN)
)
# Register OAuth support only when the Linear OAuth app is configured
if LINEAR_OAUTH_IS_CONFIGURED:
builder = builder.with_oauth(
LinearOAuthHandler,
scopes=[
LinearScope.READ,
LinearScope.WRITE,
LinearScope.ISSUES_CREATE,
LinearScope.COMMENTS_CREATE,
],
client_id_env_var="LINEAR_CLIENT_ID",
client_secret_env_var="LINEAR_CLIENT_SECRET",
)
# Build the provider
linear = builder.build()
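For comparison, the same `ProviderBuilder` chain for a hypothetical API-key-only provider, without the conditional OAuth branch; every name other than the builder methods shown above is invented:

```python
# Sketch of the SDK provider pattern for a made-up "exampleapi" provider (API key only).
from backend.sdk import BlockCostType, ProviderBuilder

exampleapi_builder = (
    ProviderBuilder("exampleapi")
    .with_api_key(env_var_name="EXAMPLEAPI_API_KEY", title="ExampleAPI Key")
    .with_base_cost(1, BlockCostType.RUN)
)
exampleapi = exampleapi_builder.build()

# A block then requests credentials the same way the Linear blocks below do:
# credentials: CredentialsMetaInput = exampleapi.credentials_field(
#     description="ExampleAPI credentials",
# )
```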
TEST_CREDENTIALS_OAUTH = OAuth2Credentials(
id="01234567-89ab-cdef-0123-456789abcdef",

View File

@@ -1,15 +1,27 @@
"""
Linear OAuth handler implementation.
"""
import json
from typing import Optional
from urllib.parse import urlencode
from pydantic import SecretStr
from backend.sdk import (
APIKeyCredentials,
BaseOAuthHandler,
OAuth2Credentials,
ProviderName,
Requests,
SecretStr,
)
from backend.blocks.linear._api import LinearAPIException
from backend.data.model import APIKeyCredentials, OAuth2Credentials
from backend.integrations.providers import ProviderName
from backend.util.request import Requests
from .base import BaseOAuthHandler
class LinearAPIException(Exception):
"""Exception for Linear API errors."""
def __init__(self, message: str, status_code: int):
super().__init__(message)
self.status_code = status_code
class LinearOAuthHandler(BaseOAuthHandler):
@@ -17,7 +29,9 @@ class LinearOAuthHandler(BaseOAuthHandler):
OAuth2 handler for Linear.
"""
PROVIDER_NAME = ProviderName.LINEAR
# Provider name will be set dynamically by the SDK when registered
# We use a placeholder that will be replaced by AutoRegistry.register_provider()
PROVIDER_NAME = ProviderName("linear")
def __init__(self, client_id: str, client_secret: str, redirect_uri: str):
self.client_id = client_id
@@ -30,7 +44,6 @@ class LinearOAuthHandler(BaseOAuthHandler):
def get_login_url(
self, scopes: list[str], state: str, code_challenge: Optional[str]
) -> str:
params = {
"client_id": self.client_id,
"redirect_uri": self.redirect_uri,
@@ -139,9 +152,10 @@ class LinearOAuthHandler(BaseOAuthHandler):
async def _request_username(self, access_token: str) -> Optional[str]:
# Use the LinearClient to fetch user details using GraphQL
from backend.blocks.linear._api import LinearClient
from ._api import LinearClient
try:
# Create a temporary OAuth2Credentials object for the LinearClient
linear_client = LinearClient(
APIKeyCredentials(
api_key=SecretStr(access_token),

View File

@@ -1,24 +1,32 @@
from backend.blocks.linear._api import LinearAPIException, LinearClient
from backend.blocks.linear._auth import (
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
OAuth2Credentials,
SchemaField,
)
from ._api import LinearAPIException, LinearClient
from ._config import (
LINEAR_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS_INPUT_OAUTH,
TEST_CREDENTIALS_OAUTH,
LinearCredentials,
LinearCredentialsField,
LinearCredentialsInput,
LinearScope,
linear,
)
from backend.blocks.linear.models import CreateCommentResponse
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from .models import CreateCommentResponse
class LinearCreateCommentBlock(Block):
"""Block for creating comments on Linear issues"""
class Input(BlockSchema):
credentials: LinearCredentialsInput = LinearCredentialsField(
scopes=[LinearScope.COMMENTS_CREATE],
credentials: CredentialsMetaInput = linear.credentials_field(
description="Linear credentials with comment creation permissions",
required_scopes={LinearScope.COMMENTS_CREATE},
)
issue_id: str = SchemaField(description="ID of the issue to comment on")
comment: str = SchemaField(description="Comment text to add to the issue")
@@ -55,7 +63,7 @@ class LinearCreateCommentBlock(Block):
@staticmethod
async def create_comment(
credentials: LinearCredentials, issue_id: str, comment: str
credentials: OAuth2Credentials | APIKeyCredentials, issue_id: str, comment: str
) -> tuple[str, str]:
client = LinearClient(credentials=credentials)
response: CreateCommentResponse = await client.try_create_comment(
@@ -64,7 +72,11 @@ class LinearCreateCommentBlock(Block):
return response.comment.id, response.comment.body
async def run(
self, input_data: Input, *, credentials: LinearCredentials, **kwargs
self,
input_data: Input,
*,
credentials: OAuth2Credentials | APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Execute the comment creation"""
try:

View File

@@ -1,24 +1,32 @@
from backend.blocks.linear._api import LinearAPIException, LinearClient
from backend.blocks.linear._auth import (
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
OAuth2Credentials,
SchemaField,
)
from ._api import LinearAPIException, LinearClient
from ._config import (
LINEAR_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS_INPUT_OAUTH,
TEST_CREDENTIALS_OAUTH,
LinearCredentials,
LinearCredentialsField,
LinearCredentialsInput,
LinearScope,
linear,
)
from backend.blocks.linear.models import CreateIssueResponse, Issue
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from .models import CreateIssueResponse, Issue
class LinearCreateIssueBlock(Block):
"""Block for creating issues on Linear"""
class Input(BlockSchema):
credentials: LinearCredentialsInput = LinearCredentialsField(
scopes=[LinearScope.ISSUES_CREATE],
credentials: CredentialsMetaInput = linear.credentials_field(
description="Linear credentials with issue creation permissions",
required_scopes={LinearScope.ISSUES_CREATE},
)
title: str = SchemaField(description="Title of the issue")
description: str | None = SchemaField(description="Description of the issue")
@@ -68,7 +76,7 @@ class LinearCreateIssueBlock(Block):
@staticmethod
async def create_issue(
credentials: LinearCredentials,
credentials: OAuth2Credentials | APIKeyCredentials,
team_name: str,
title: str,
description: str | None = None,
@@ -94,7 +102,11 @@ class LinearCreateIssueBlock(Block):
return response.issue.identifier, response.issue.title
async def run(
self, input_data: Input, *, credentials: LinearCredentials, **kwargs
self,
input_data: Input,
*,
credentials: OAuth2Credentials,
**kwargs,
) -> BlockOutput:
"""Execute the issue creation"""
try:
@@ -121,8 +133,9 @@ class LinearSearchIssuesBlock(Block):
class Input(BlockSchema):
term: str = SchemaField(description="Term to search for issues")
credentials: LinearCredentialsInput = LinearCredentialsField(
scopes=[LinearScope.READ],
credentials: CredentialsMetaInput = linear.credentials_field(
description="Linear credentials with read permissions",
required_scopes={LinearScope.READ},
)
class Output(BlockSchema):
@@ -169,7 +182,7 @@ class LinearSearchIssuesBlock(Block):
@staticmethod
async def search_issues(
credentials: LinearCredentials,
credentials: OAuth2Credentials | APIKeyCredentials,
term: str,
) -> list[Issue]:
client = LinearClient(credentials=credentials)
@@ -177,7 +190,11 @@ class LinearSearchIssuesBlock(Block):
return response
async def run(
self, input_data: Input, *, credentials: LinearCredentials, **kwargs
self,
input_data: Input,
*,
credentials: OAuth2Credentials | APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Execute the issue search"""
try:

View File

@@ -1,4 +1,4 @@
from pydantic import BaseModel
from backend.sdk import BaseModel
class Comment(BaseModel):

View File

@@ -1,24 +1,32 @@
from backend.blocks.linear._api import LinearAPIException, LinearClient
from backend.blocks.linear._auth import (
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
OAuth2Credentials,
SchemaField,
)
from ._api import LinearAPIException, LinearClient
from ._config import (
LINEAR_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS_INPUT_OAUTH,
TEST_CREDENTIALS_OAUTH,
LinearCredentials,
LinearCredentialsField,
LinearCredentialsInput,
LinearScope,
linear,
)
from backend.blocks.linear.models import Project
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from .models import Project
class LinearSearchProjectsBlock(Block):
"""Block for searching projects on Linear"""
class Input(BlockSchema):
credentials: LinearCredentialsInput = LinearCredentialsField(
scopes=[LinearScope.READ],
credentials: CredentialsMetaInput = linear.credentials_field(
description="Linear credentials with read permissions",
required_scopes={LinearScope.READ},
)
term: str = SchemaField(description="Term to search for projects")
@@ -70,7 +78,7 @@ class LinearSearchProjectsBlock(Block):
@staticmethod
async def search_projects(
credentials: LinearCredentials,
credentials: OAuth2Credentials | APIKeyCredentials,
term: str,
) -> list[Project]:
client = LinearClient(credentials=credentials)
@@ -78,7 +86,11 @@ class LinearSearchProjectsBlock(Block):
return response
async def run(
self, input_data: Input, *, credentials: LinearCredentials, **kwargs
self,
input_data: Input,
*,
credentials: OAuth2Credentials | APIKeyCredentials,
**kwargs,
) -> BlockOutput:
"""Execute the project search"""
try:

View File

@@ -127,6 +127,9 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
PERPLEXITY_LLAMA_3_1_SONAR_LARGE_128K_ONLINE = (
"perplexity/llama-3.1-sonar-large-128k-online"
)
PERPLEXITY_SONAR = "perplexity/sonar"
PERPLEXITY_SONAR_PRO = "perplexity/sonar-pro"
PERPLEXITY_SONAR_DEEP_RESEARCH = "perplexity/sonar-deep-research"
QWEN_QWQ_32B_PREVIEW = "qwen/qwq-32b-preview"
NOUSRESEARCH_HERMES_3_LLAMA_3_1_405B = "nousresearch/hermes-3-llama-3.1-405b"
NOUSRESEARCH_HERMES_3_LLAMA_3_1_70B = "nousresearch/hermes-3-llama-3.1-70b"
@@ -229,6 +232,13 @@ MODEL_METADATA = {
LlmModel.PERPLEXITY_LLAMA_3_1_SONAR_LARGE_128K_ONLINE: ModelMetadata(
"open_router", 127072, 127072
),
LlmModel.PERPLEXITY_SONAR: ModelMetadata("open_router", 127000, 127000),
LlmModel.PERPLEXITY_SONAR_PRO: ModelMetadata("open_router", 200000, 8000),
LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: ModelMetadata(
"open_router",
128000,
128000,
),
LlmModel.QWEN_QWQ_32B_PREVIEW: ModelMetadata("open_router", 32768, 32768),
LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_405B: ModelMetadata(
"open_router", 131000, 4096
@@ -273,6 +283,7 @@ class LLMResponse(BaseModel):
tool_calls: Optional[List[ToolContentBlock]] | None
prompt_tokens: int
completion_tokens: int
reasoning: Optional[str] = None
def convert_openai_tool_fmt_to_anthropic(
@@ -307,6 +318,46 @@ def convert_openai_tool_fmt_to_anthropic(
return anthropic_tools
def extract_openai_reasoning(response) -> str | None:
"""Extract reasoning from OpenAI-compatible response if available."""
"""Note: This will likely not working since the reasoning is not present in another Response API"""
reasoning = None
choice = response.choices[0]
if hasattr(choice, "reasoning") and getattr(choice, "reasoning", None):
reasoning = str(getattr(choice, "reasoning"))
elif hasattr(response, "reasoning") and getattr(response, "reasoning", None):
reasoning = str(getattr(response, "reasoning"))
elif hasattr(choice.message, "reasoning") and getattr(
choice.message, "reasoning", None
):
reasoning = str(getattr(choice.message, "reasoning"))
return reasoning
def extract_openai_tool_calls(response) -> list[ToolContentBlock] | None:
"""Extract tool calls from OpenAI-compatible response."""
if response.choices[0].message.tool_calls:
return [
ToolContentBlock(
id=tool.id,
type=tool.type,
function=ToolCall(
name=tool.function.name,
arguments=tool.function.arguments,
),
)
for tool in response.choices[0].message.tool_calls
]
return None
def get_parallel_tool_calls_param(llm_model: LlmModel, parallel_tool_calls):
"""Get the appropriate parallel_tool_calls parameter for OpenAI-compatible APIs."""
if llm_model.startswith("o") or parallel_tool_calls is None:
return openai.NOT_GIVEN
return parallel_tool_calls
async def llm_call(
credentials: APIKeyCredentials,
llm_model: LlmModel,
@@ -360,8 +411,9 @@ async def llm_call(
oai_client = openai.AsyncOpenAI(api_key=credentials.api_key.get_secret_value())
response_format = None
if llm_model.startswith("o") or parallel_tool_calls is None:
parallel_tool_calls = openai.NOT_GIVEN
parallel_tool_calls = get_parallel_tool_calls_param(
llm_model, parallel_tool_calls
)
if json_format:
response_format = {"type": "json_object"}
@@ -375,20 +427,8 @@ async def llm_call(
parallel_tool_calls=parallel_tool_calls,
)
if response.choices[0].message.tool_calls:
tool_calls = [
ToolContentBlock(
id=tool.id,
type=tool.type,
function=ToolCall(
name=tool.function.name,
arguments=tool.function.arguments,
),
)
for tool in response.choices[0].message.tool_calls
]
else:
tool_calls = None
tool_calls = extract_openai_tool_calls(response)
reasoning = extract_openai_reasoning(response)
return LLMResponse(
raw_response=response.choices[0].message,
@@ -397,6 +437,7 @@ async def llm_call(
tool_calls=tool_calls,
prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
completion_tokens=response.usage.completion_tokens if response.usage else 0,
reasoning=reasoning,
)
elif provider == "anthropic":
@@ -458,6 +499,12 @@ async def llm_call(
f"Tool use stop reason but no tool calls found in content. {resp}"
)
reasoning = None
for content_block in resp.content:
if hasattr(content_block, "type") and content_block.type == "thinking":
reasoning = content_block.thinking
break
return LLMResponse(
raw_response=resp,
prompt=prompt,
@@ -469,6 +516,7 @@ async def llm_call(
tool_calls=tool_calls,
prompt_tokens=resp.usage.input_tokens,
completion_tokens=resp.usage.output_tokens,
reasoning=reasoning,
)
except anthropic.APIError as e:
error_message = f"Anthropic API error: {str(e)}"
@@ -493,6 +541,7 @@ async def llm_call(
tool_calls=None,
prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
completion_tokens=response.usage.completion_tokens if response.usage else 0,
reasoning=None,
)
elif provider == "ollama":
if tools:
@@ -514,6 +563,7 @@ async def llm_call(
tool_calls=None,
prompt_tokens=response.get("prompt_eval_count") or 0,
completion_tokens=response.get("eval_count") or 0,
reasoning=None,
)
elif provider == "open_router":
tools_param = tools if tools else openai.NOT_GIVEN
@@ -522,6 +572,10 @@ async def llm_call(
api_key=credentials.api_key.get_secret_value(),
)
parallel_tool_calls_param = get_parallel_tool_calls_param(
llm_model, parallel_tool_calls
)
response = await client.chat.completions.create(
extra_headers={
"HTTP-Referer": "https://agpt.co",
@@ -531,6 +585,7 @@ async def llm_call(
messages=prompt, # type: ignore
max_tokens=max_tokens,
tools=tools_param, # type: ignore
parallel_tool_calls=parallel_tool_calls_param,
)
# If there's no response, raise an error
@@ -540,19 +595,8 @@ async def llm_call(
else:
raise ValueError("No response from OpenRouter.")
if response.choices[0].message.tool_calls:
tool_calls = [
ToolContentBlock(
id=tool.id,
type=tool.type,
function=ToolCall(
name=tool.function.name, arguments=tool.function.arguments
),
)
for tool in response.choices[0].message.tool_calls
]
else:
tool_calls = None
tool_calls = extract_openai_tool_calls(response)
reasoning = extract_openai_reasoning(response)
return LLMResponse(
raw_response=response.choices[0].message,
@@ -561,6 +605,7 @@ async def llm_call(
tool_calls=tool_calls,
prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
completion_tokens=response.usage.completion_tokens if response.usage else 0,
reasoning=reasoning,
)
elif provider == "llama_api":
tools_param = tools if tools else openai.NOT_GIVEN
@@ -569,6 +614,10 @@ async def llm_call(
api_key=credentials.api_key.get_secret_value(),
)
parallel_tool_calls_param = get_parallel_tool_calls_param(
llm_model, parallel_tool_calls
)
response = await client.chat.completions.create(
extra_headers={
"HTTP-Referer": "https://agpt.co",
@@ -578,9 +627,7 @@ async def llm_call(
messages=prompt, # type: ignore
max_tokens=max_tokens,
tools=tools_param, # type: ignore
parallel_tool_calls=(
openai.NOT_GIVEN if parallel_tool_calls is None else parallel_tool_calls
),
parallel_tool_calls=parallel_tool_calls_param,
)
# If there's no response, raise an error
@@ -590,19 +637,8 @@ async def llm_call(
else:
raise ValueError("No response from Llama API.")
if response.choices[0].message.tool_calls:
tool_calls = [
ToolContentBlock(
id=tool.id,
type=tool.type,
function=ToolCall(
name=tool.function.name, arguments=tool.function.arguments
),
)
for tool in response.choices[0].message.tool_calls
]
else:
tool_calls = None
tool_calls = extract_openai_tool_calls(response)
reasoning = extract_openai_reasoning(response)
return LLMResponse(
raw_response=response.choices[0].message,
@@ -611,6 +647,7 @@ async def llm_call(
tool_calls=tool_calls,
prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
completion_tokens=response.usage.completion_tokens if response.usage else 0,
reasoning=reasoning,
)
elif provider == "aiml_api":
client = openai.OpenAI(
@@ -634,6 +671,7 @@ async def llm_call(
completion_tokens=(
completion.usage.completion_tokens if completion.usage else 0
),
reasoning=None,
)
else:
raise ValueError(f"Unsupported LLM provider: {provider}")
@@ -747,6 +785,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
tool_calls=None,
prompt_tokens=0,
completion_tokens=0,
reasoning=None,
)
},
)
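Both new helpers only read attributes off the response object, so they can be exercised with a plain `SimpleNamespace` shaped like a chat-completion response. A hedged sketch (the sample reasoning text is made up and the import path is assumed):

```python
# Sketch: exercising the helpers with a fake chat-completion-shaped response.
from types import SimpleNamespace

# from backend.blocks.llm import extract_openai_reasoning, extract_openai_tool_calls
# (module path assumed; the helpers are the ones defined above)

fake_response = SimpleNamespace(
    choices=[
        SimpleNamespace(
            message=SimpleNamespace(
                reasoning="Compared both options before answering.",  # made-up value
                tool_calls=None,
            ),
        )
    ],
)

print(extract_openai_reasoning(fake_response))   # -> "Compared both options before answering."
print(extract_openai_tool_calls(fake_response))  # -> None (no tool calls on the message)
```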

View File

@@ -67,17 +67,19 @@ class AddMemoryBlock(Block, Mem0Base):
metadata: dict[str, Any] = SchemaField(
description="Optional metadata for the memory", default_factory=dict
)
limit_memory_to_run: bool = SchemaField(
description="Limit the memory to the run", default=False
)
limit_memory_to_agent: bool = SchemaField(
description="Limit the memory to the agent", default=False
description="Limit the memory to the agent", default=True
)
class Output(BlockSchema):
action: str = SchemaField(description="Action of the operation")
memory: str = SchemaField(description="Memory created")
results: list[dict[str, str]] = SchemaField(
description="List of all results from the operation"
)
error: str = SchemaField(description="Error message if operation fails")
def __init__(self):
@@ -104,7 +106,14 @@ class AddMemoryBlock(Block, Mem0Base):
"credentials": TEST_CREDENTIALS_INPUT,
},
],
test_output=[("action", "NO_CHANGE"), ("action", "NO_CHANGE")],
test_output=[
("results", [{"event": "CREATED", "memory": "test memory"}]),
("action", "CREATED"),
("memory", "test memory"),
("results", [{"event": "CREATED", "memory": "test memory"}]),
("action", "CREATED"),
("memory", "test memory"),
],
test_credentials=TEST_CREDENTIALS,
test_mock={"_get_client": lambda credentials: MockMemoryClient()},
)
@@ -117,7 +126,7 @@ class AddMemoryBlock(Block, Mem0Base):
user_id: str,
graph_id: str,
graph_exec_id: str,
**kwargs
**kwargs,
) -> BlockOutput:
try:
client = self._get_client(credentials)
@@ -146,8 +155,11 @@ class AddMemoryBlock(Block, Mem0Base):
**params,
)
if len(result.get("results", [])) > 0:
for result in result.get("results", []):
results = result.get("results", [])
yield "results", results
if len(results) > 0:
for result in results:
yield "action", result["event"]
yield "memory", result["memory"]
else:
@@ -178,6 +190,10 @@ class SearchMemoryBlock(Block, Mem0Base):
default_factory=list,
advanced=True,
)
metadata_filter: Optional[dict[str, Any]] = SchemaField(
description="Optional metadata filters to apply",
default=None,
)
limit_memory_to_run: bool = SchemaField(
description="Limit the memory to the run", default=False
)
@@ -216,7 +232,7 @@ class SearchMemoryBlock(Block, Mem0Base):
user_id: str,
graph_id: str,
graph_exec_id: str,
**kwargs
**kwargs,
) -> BlockOutput:
try:
client = self._get_client(credentials)
@@ -235,6 +251,8 @@ class SearchMemoryBlock(Block, Mem0Base):
filters["AND"].append({"run_id": graph_exec_id})
if input_data.limit_memory_to_agent:
filters["AND"].append({"agent_id": graph_id})
if input_data.metadata_filter:
filters["AND"].append({"metadata": input_data.metadata_filter})
result: list[dict[str, Any]] = client.search(
input_data.query, version="v2", filters=filters
@@ -260,11 +278,15 @@ class GetAllMemoriesBlock(Block, Mem0Base):
categories: Optional[list[str]] = SchemaField(
description="Filter by categories", default=None
)
metadata_filter: Optional[dict[str, Any]] = SchemaField(
description="Optional metadata filters to apply",
default=None,
)
limit_memory_to_run: bool = SchemaField(
description="Limit the memory to the run", default=False
)
limit_memory_to_agent: bool = SchemaField(
description="Limit the memory to the agent", default=False
description="Limit the memory to the agent", default=True
)
class Output(BlockSchema):
@@ -274,11 +296,11 @@ class GetAllMemoriesBlock(Block, Mem0Base):
def __init__(self):
super().__init__(
id="45aee5bf-4767-45d1-a28b-e01c5aae9fc1",
description="Retrieve all memories from Mem0 with pagination",
description="Retrieve all memories from Mem0 with optional conversation filtering",
input_schema=GetAllMemoriesBlock.Input,
output_schema=GetAllMemoriesBlock.Output,
test_input={
"user_id": "test_user",
"metadata_filter": {"type": "test"},
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
@@ -296,7 +318,7 @@ class GetAllMemoriesBlock(Block, Mem0Base):
user_id: str,
graph_id: str,
graph_exec_id: str,
**kwargs
**kwargs,
) -> BlockOutput:
try:
client = self._get_client(credentials)
@@ -314,6 +336,8 @@ class GetAllMemoriesBlock(Block, Mem0Base):
filters["AND"].append(
{"categories": {"contains": input_data.categories}}
)
if input_data.metadata_filter:
filters["AND"].append({"metadata": input_data.metadata_filter})
memories: list[dict[str, Any]] = client.get_all(
filters=filters,
@@ -326,14 +350,116 @@ class GetAllMemoriesBlock(Block, Mem0Base):
yield "error", str(e)
class GetLatestMemoryBlock(Block, Mem0Base):
"""Block for retrieving the latest memory from Mem0"""
class Input(BlockSchema):
credentials: CredentialsMetaInput[
Literal[ProviderName.MEM0], Literal["api_key"]
] = CredentialsField(description="Mem0 API key credentials")
trigger: bool = SchemaField(
description="An unused field that is used to trigger the block when you have no other inputs",
default=False,
advanced=False,
)
categories: Optional[list[str]] = SchemaField(
description="Filter by categories", default=None
)
conversation_id: Optional[str] = SchemaField(
description="Optional conversation ID to retrieve the latest memory from (uses run_id)",
default=None,
)
metadata_filter: Optional[dict[str, Any]] = SchemaField(
description="Optional metadata filters to apply",
default=None,
)
limit_memory_to_run: bool = SchemaField(
description="Limit the memory to the run", default=False
)
limit_memory_to_agent: bool = SchemaField(
description="Limit the memory to the agent", default=True
)
class Output(BlockSchema):
memory: Optional[dict[str, Any]] = SchemaField(
description="Latest memory if found"
)
found: bool = SchemaField(description="Whether a memory was found")
error: str = SchemaField(description="Error message if operation fails")
def __init__(self):
super().__init__(
id="0f9d81b5-a145-4c23-b87f-01d6bf37b677",
description="Retrieve the latest memory from Mem0 with optional key filtering",
input_schema=GetLatestMemoryBlock.Input,
output_schema=GetLatestMemoryBlock.Output,
test_input={
"metadata_filter": {"type": "test"},
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("memory", {"id": "test-memory", "content": "test content"}),
("found", True),
],
test_credentials=TEST_CREDENTIALS,
test_mock={"_get_client": lambda credentials: MockMemoryClient()},
)
async def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
user_id: str,
graph_id: str,
graph_exec_id: str,
**kwargs,
) -> BlockOutput:
try:
client = self._get_client(credentials)
filters: Filter = {
"AND": [
{"user_id": user_id},
]
}
if input_data.limit_memory_to_run:
filters["AND"].append({"run_id": graph_exec_id})
if input_data.limit_memory_to_agent:
filters["AND"].append({"agent_id": graph_id})
if input_data.categories:
filters["AND"].append(
{"categories": {"contains": input_data.categories}}
)
if input_data.metadata_filter:
filters["AND"].append({"metadata": input_data.metadata_filter})
memories: list[dict[str, Any]] = client.get_all(
filters=filters,
version="v2",
)
if memories:
# Return the latest memory (first in the list as they're sorted by recency)
latest_memory = memories[0]
yield "memory", latest_memory
yield "found", True
else:
yield "memory", None
yield "found", False
except Exception as e:
yield "error", str(e)
# Mock client for testing
class MockMemoryClient:
"""Mock Mem0 client for testing"""
def add(self, *args, **kwargs):
return {"memory_id": "test-memory-id", "status": "success"}
return {"results": [{"event": "CREATED", "memory": "test memory"}]}
def search(self, *args, **kwargs) -> list[dict[str, str]]:
def search(self, *args, **kwargs) -> list[dict[str, Any]]:
return [{"id": "test-memory", "content": "test content"}]
def get_all(self, *args, **kwargs) -> list[dict[str, str]]:
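The new `metadata_filter` input is appended to the same v2 `filters` structure as the run/agent scoping. Spelled out, the filter these blocks assemble looks roughly like this (ids are placeholders, and each clause is only added when its input is set):

```python
# Sketch of the v2 filter dict assembled by the Mem0 blocks above.
filters = {
    "AND": [
        {"user_id": "user-123"},                  # always present
        {"run_id": "graph-exec-456"},             # only if limit_memory_to_run
        {"agent_id": "graph-789"},                # only if limit_memory_to_agent (now default True)
        {"categories": {"contains": ["notes"]}},  # only if categories are given
        {"metadata": {"type": "test"}},           # only if metadata_filter is given (new in this diff)
    ]
}
# client.get_all(filters=filters, version="v2")
# client.search(query, version="v2", filters=filters)
```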

View File

@@ -0,0 +1,155 @@
import logging
from typing import Any, Literal
from autogpt_libs.utils.cache import thread_cached
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
logger = logging.getLogger(__name__)
@thread_cached
def get_database_manager_client():
from backend.executor import DatabaseManagerAsyncClient
from backend.util.service import get_service_client
return get_service_client(DatabaseManagerAsyncClient, health_check=False)
StorageScope = Literal["within_agent", "across_agents"]
def get_storage_key(key: str, scope: StorageScope, graph_id: str) -> str:
"""Generate the storage key based on scope"""
if scope == "across_agents":
return f"global#{key}"
else:
return f"agent#{graph_id}#{key}"
class PersistInformationBlock(Block):
"""Block for persisting key-value data for the current user with configurable scope"""
class Input(BlockSchema):
key: str = SchemaField(description="Key to store the information under")
value: Any = SchemaField(description="Value to store")
scope: StorageScope = SchemaField(
description="Scope of persistence: within_agent (shared across all runs of this agent) or across_agents (shared across all agents for this user)",
default="within_agent",
)
class Output(BlockSchema):
value: Any = SchemaField(description="Value that was stored")
def __init__(self):
super().__init__(
id="1d055e55-a2b9-4547-8311-907d05b0304d",
description="Persist key-value information for the current user",
categories={BlockCategory.DATA},
input_schema=PersistInformationBlock.Input,
output_schema=PersistInformationBlock.Output,
test_input={
"key": "user_preference",
"value": {"theme": "dark", "language": "en"},
"scope": "within_agent",
},
test_output=[
("value", {"theme": "dark", "language": "en"}),
],
test_mock={
"_store_data": lambda *args, **kwargs: {
"theme": "dark",
"language": "en",
}
},
)
async def run(
self,
input_data: Input,
*,
user_id: str,
graph_id: str,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
# Determine the storage key based on scope
storage_key = get_storage_key(input_data.key, input_data.scope, graph_id)
# Store the data
yield "value", await self._store_data(
user_id=user_id,
node_exec_id=node_exec_id,
key=storage_key,
data=input_data.value,
)
async def _store_data(
self, user_id: str, node_exec_id: str, key: str, data: Any
) -> Any | None:
return await get_database_manager_client().set_execution_kv_data(
user_id=user_id,
node_exec_id=node_exec_id,
key=key,
data=data,
)
class RetrieveInformationBlock(Block):
"""Block for retrieving key-value data for the current user with configurable scope"""
class Input(BlockSchema):
key: str = SchemaField(description="Key to retrieve the information for")
scope: StorageScope = SchemaField(
description="Scope of persistence: within_agent (shared across all runs of this agent) or across_agents (shared across all agents for this user)",
default="within_agent",
)
default_value: Any = SchemaField(
description="Default value to return if key is not found", default=None
)
class Output(BlockSchema):
value: Any = SchemaField(description="Retrieved value or default value")
def __init__(self):
super().__init__(
id="d8710fc9-6e29-481e-a7d5-165eb16f8471",
description="Retrieve key-value information for the current user",
categories={BlockCategory.DATA},
input_schema=RetrieveInformationBlock.Input,
output_schema=RetrieveInformationBlock.Output,
test_input={
"key": "user_preference",
"scope": "within_agent",
"default_value": {"theme": "light", "language": "en"},
},
test_output=[
("value", {"theme": "light", "language": "en"}),
],
test_mock={"_retrieve_data": lambda *args, **kwargs: None},
static_output=True,
)
async def run(
self, input_data: Input, *, user_id: str, graph_id: str, **kwargs
) -> BlockOutput:
# Determine the storage key based on scope
storage_key = get_storage_key(input_data.key, input_data.scope, graph_id)
# Retrieve the data
stored_value = await self._retrieve_data(
user_id=user_id,
key=storage_key,
)
if stored_value is not None:
yield "value", stored_value
else:
yield "value", input_data.default_value
async def _retrieve_data(self, user_id: str, key: str) -> Any | None:
return await get_database_manager_client().get_execution_kv_data(
user_id=user_id,
key=key,
)
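The scope-to-key mapping carries all of the isolation logic; a quick illustration of `get_storage_key` with placeholder ids (the import path for this new module is assumed):

```python
# Sketch: how the two scopes map onto storage keys (ids are placeholders).
print(get_storage_key("user_preference", "within_agent", graph_id="graph-123"))
# -> agent#graph-123#user_preference   (visible only to this agent)

print(get_storage_key("user_preference", "across_agents", graph_id="graph-123"))
# -> global#user_preference            (shared by all of the user's agents)
```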

View File

@@ -96,6 +96,7 @@ class GetRedditPostsBlock(Block):
class Output(BlockSchema):
post: RedditPost = SchemaField(description="Reddit post")
posts: list[RedditPost] = SchemaField(description="List of all Reddit posts")
def __init__(self):
super().__init__(
@@ -128,6 +129,23 @@ class GetRedditPostsBlock(Block):
id="id2", subreddit="subreddit", title="title2", body="body2"
),
),
(
"posts",
[
RedditPost(
id="id1",
subreddit="subreddit",
title="title1",
body="body1",
),
RedditPost(
id="id2",
subreddit="subreddit",
title="title2",
body="body2",
),
],
),
],
test_mock={
"get_posts": lambda input_data, credentials: [
@@ -150,6 +168,7 @@ class GetRedditPostsBlock(Block):
self, input_data: Input, *, credentials: RedditCredentials, **kwargs
) -> BlockOutput:
current_time = datetime.now(tz=timezone.utc)
all_posts = []
for post in self.get_posts(input_data=input_data, credentials=credentials):
if input_data.last_minutes:
post_datetime = datetime.fromtimestamp(
@@ -162,12 +181,16 @@ class GetRedditPostsBlock(Block):
if input_data.last_post and post.id == input_data.last_post:
break
yield "post", RedditPost(
reddit_post = RedditPost(
id=post.id,
subreddit=input_data.subreddit,
title=post.title,
body=post.selftext,
)
all_posts.append(reddit_post)
yield "post", reddit_post
yield "posts", all_posts
class PostRedditCommentBlock(Block):

View File

@@ -40,6 +40,7 @@ class ReadRSSFeedBlock(Block):
class Output(BlockSchema):
entry: RSSEntry = SchemaField(description="The RSS item")
entries: list[RSSEntry] = SchemaField(description="List of all RSS entries")
def __init__(self):
super().__init__(
@@ -66,6 +67,21 @@ class ReadRSSFeedBlock(Block):
categories=["Technology", "News"],
),
),
(
"entries",
[
RSSEntry(
title="Example RSS Item",
link="https://example.com/article",
description="This is an example RSS item description.",
pub_date=datetime(
2023, 6, 23, 12, 30, 0, tzinfo=timezone.utc
),
author="John Doe",
categories=["Technology", "News"],
),
],
),
],
test_mock={
"parse_feed": lambda *args, **kwargs: {
@@ -96,21 +112,22 @@ class ReadRSSFeedBlock(Block):
keep_going = input_data.run_continuously
feed = self.parse_feed(input_data.rss_url)
all_entries = []
for entry in feed["entries"]:
pub_date = datetime(*entry["published_parsed"][:6], tzinfo=timezone.utc)
if pub_date > start_time:
yield (
"entry",
RSSEntry(
title=entry["title"],
link=entry["link"],
description=entry.get("summary", ""),
pub_date=pub_date,
author=entry.get("author", ""),
categories=[tag["term"] for tag in entry.get("tags", [])],
),
rss_entry = RSSEntry(
title=entry["title"],
link=entry["link"],
description=entry.get("summary", ""),
pub_date=pub_date,
author=entry.get("author", ""),
categories=[tag["term"] for tag in entry.get("tags", [])],
)
all_entries.append(rss_entry)
yield "entry", rss_entry
yield "entries", all_entries
await asyncio.sleep(input_data.polling_rate)

View File

@@ -26,10 +26,10 @@ logger = logging.getLogger(__name__)
@thread_cached
def get_database_manager_client():
from backend.executor import DatabaseManagerClient
from backend.executor import DatabaseManagerAsyncClient
from backend.util.service import get_service_client
return get_service_client(DatabaseManagerClient)
return get_service_client(DatabaseManagerAsyncClient, health_check=False)
def _get_tool_requests(entry: dict[str, Any]) -> list[str]:
@@ -273,7 +273,7 @@ class SmartDecisionMakerBlock(Block):
return re.sub(r"[^a-zA-Z0-9_-]", "_", s).lower()
@staticmethod
def _create_block_function_signature(
async def _create_block_function_signature(
sink_node: "Node", links: list["Link"]
) -> dict[str, Any]:
"""
@@ -312,7 +312,7 @@ class SmartDecisionMakerBlock(Block):
return {"type": "function", "function": tool_function}
@staticmethod
def _create_agent_function_signature(
async def _create_agent_function_signature(
sink_node: "Node", links: list["Link"]
) -> dict[str, Any]:
"""
@@ -334,7 +334,7 @@ class SmartDecisionMakerBlock(Block):
raise ValueError("Graph ID or Graph Version not found in sink node.")
db_client = get_database_manager_client()
sink_graph_meta = db_client.get_graph_metadata(graph_id, graph_version)
sink_graph_meta = await db_client.get_graph_metadata(graph_id, graph_version)
if not sink_graph_meta:
raise ValueError(
f"Sink graph metadata not found: {graph_id} {graph_version}"
@@ -374,7 +374,7 @@ class SmartDecisionMakerBlock(Block):
return {"type": "function", "function": tool_function}
@staticmethod
def _create_function_signature(node_id: str) -> list[dict[str, Any]]:
async def _create_function_signature(node_id: str) -> list[dict[str, Any]]:
"""
Creates function signatures for tools linked to a specified node within a graph.
@@ -396,13 +396,13 @@ class SmartDecisionMakerBlock(Block):
db_client = get_database_manager_client()
tools = [
(link, node)
for link, node in db_client.get_connected_output_nodes(node_id)
for link, node in await db_client.get_connected_output_nodes(node_id)
if link.source_name.startswith("tools_^_") and link.source_id == node_id
]
if not tools:
raise ValueError("There is no next node to execute.")
return_tool_functions = []
return_tool_functions: list[dict[str, Any]] = []
grouped_tool_links: dict[str, tuple["Node", list["Link"]]] = {}
for link, node in tools:
@@ -417,13 +417,13 @@ class SmartDecisionMakerBlock(Block):
if sink_node.block_id == AgentExecutorBlock().id:
return_tool_functions.append(
SmartDecisionMakerBlock._create_agent_function_signature(
await SmartDecisionMakerBlock._create_agent_function_signature(
sink_node, links
)
)
else:
return_tool_functions.append(
SmartDecisionMakerBlock._create_block_function_signature(
await SmartDecisionMakerBlock._create_block_function_signature(
sink_node, links
)
)
@@ -442,7 +442,7 @@ class SmartDecisionMakerBlock(Block):
user_id: str,
**kwargs,
) -> BlockOutput:
tool_functions = self._create_function_signature(node_id)
tool_functions = await self._create_function_signature(node_id)
yield "tool_functions", json.dumps(tool_functions)
input_data.conversation_history = input_data.conversation_history or []
@@ -452,28 +452,33 @@ class SmartDecisionMakerBlock(Block):
if pending_tool_calls and input_data.last_tool_output is None:
raise ValueError(f"Tool call requires an output for {pending_tool_calls}")
# Prefill all missing tool calls with the last tool output.
# TODO: we need a better way to handle this.
tool_output = [
_create_tool_response(pending_call_id, input_data.last_tool_output)
for pending_call_id, count in pending_tool_calls.items()
for _ in range(count)
]
# If the SDM block only calls 1 tool at a time, this should not happen.
if len(tool_output) > 1:
logger.warning(
f"[SmartDecisionMakerBlock-node_exec_id={node_exec_id}] "
f"Multiple pending tool calls are prefilled using a single output. "
f"Execution may not be accurate."
# Only assign the last tool output to the first pending tool call
tool_output = []
if pending_tool_calls and input_data.last_tool_output is not None:
# Get the first pending tool call ID
first_call_id = next(iter(pending_tool_calls.keys()))
tool_output.append(
_create_tool_response(first_call_id, input_data.last_tool_output)
)
# Add tool output to prompt right away
prompt.extend(tool_output)
# Check if there are still pending tool calls after handling the first one
remaining_pending_calls = get_pending_tool_calls(prompt)
# If there are still pending tool calls, yield the conversation and return early
if remaining_pending_calls:
yield "conversations", prompt
return
# Fallback on adding tool output in the conversation history as user prompt.
if len(tool_output) == 0 and input_data.last_tool_output:
logger.warning(
elif input_data.last_tool_output:
logger.error(
f"[SmartDecisionMakerBlock-node_exec_id={node_exec_id}] "
f"No pending tool calls found. This may indicate an issue with the "
f"conversation history, or an LLM calling two tools at the same time."
f"conversation history, or the tool giving response more than once."
f"This should not happen! Please check the conversation history for any inconsistencies."
)
tool_output.append(
{
@@ -481,8 +486,7 @@ class SmartDecisionMakerBlock(Block):
"content": f"Last tool output: {json.dumps(input_data.last_tool_output)}",
}
)
prompt.extend(tool_output)
prompt.extend(tool_output)
if input_data.multiple_tool_calls:
input_data.sys_prompt += "\nYou can call a tool (different tools) multiple times in a single response."
else:
@@ -550,5 +554,11 @@ class SmartDecisionMakerBlock(Block):
else:
yield f"tools_^_{tool_name}_~_{arg_name}", None
response.prompt.append(response.raw_response)
yield "conversations", response.prompt
# Add reasoning to conversation history if available
if response.reasoning:
prompt.append(
{"role": "assistant", "content": f"[Reasoning]: {response.reasoning}"}
)
prompt.append(response.raw_response)
yield "conversations", prompt

View File

@@ -9,3 +9,117 @@ from backend.util.test import execute_block_test
@pytest.mark.parametrize("block", get_blocks().values(), ids=lambda b: b.name)
async def test_available_blocks(block: Type[Block]):
await execute_block_test(block())
@pytest.mark.parametrize("block", get_blocks().values(), ids=lambda b: b.name)
async def test_block_ids_valid(block: Type[Block]):
# Check that every block ID is a valid UUID4.
import uuid
# Skip list for blocks with known invalid UUIDs
skip_blocks = {
"GetWeatherInformationBlock",
"CodeExecutionBlock",
"CountdownTimerBlock",
"TwitterGetListTweetsBlock",
"TwitterRemoveListMemberBlock",
"TwitterAddListMemberBlock",
"TwitterGetListMembersBlock",
"TwitterGetListMembershipsBlock",
"TwitterUnfollowListBlock",
"TwitterFollowListBlock",
"TwitterUnpinListBlock",
"TwitterPinListBlock",
"TwitterGetPinnedListsBlock",
"TwitterDeleteListBlock",
"TwitterUpdateListBlock",
"TwitterCreateListBlock",
"TwitterGetListBlock",
"TwitterGetOwnedListsBlock",
"TwitterGetSpacesBlock",
"TwitterGetSpaceByIdBlock",
"TwitterGetSpaceBuyersBlock",
"TwitterGetSpaceTweetsBlock",
"TwitterSearchSpacesBlock",
"TwitterGetUserMentionsBlock",
"TwitterGetHomeTimelineBlock",
"TwitterGetUserTweetsBlock",
"TwitterGetTweetBlock",
"TwitterGetTweetsBlock",
"TwitterGetQuoteTweetsBlock",
"TwitterLikeTweetBlock",
"TwitterGetLikingUsersBlock",
"TwitterGetLikedTweetsBlock",
"TwitterUnlikeTweetBlock",
"TwitterBookmarkTweetBlock",
"TwitterGetBookmarkedTweetsBlock",
"TwitterRemoveBookmarkTweetBlock",
"TwitterRetweetBlock",
"TwitterRemoveRetweetBlock",
"TwitterGetRetweetersBlock",
"TwitterHideReplyBlock",
"TwitterUnhideReplyBlock",
"TwitterPostTweetBlock",
"TwitterDeleteTweetBlock",
"TwitterSearchRecentTweetsBlock",
"TwitterUnfollowUserBlock",
"TwitterFollowUserBlock",
"TwitterGetFollowersBlock",
"TwitterGetFollowingBlock",
"TwitterUnmuteUserBlock",
"TwitterGetMutedUsersBlock",
"TwitterMuteUserBlock",
"TwitterGetBlockedUsersBlock",
"TwitterGetUserBlock",
"TwitterGetUsersBlock",
"TodoistCreateLabelBlock",
"TodoistListLabelsBlock",
"TodoistGetLabelBlock",
"TodoistUpdateLabelBlock",
"TodoistDeleteLabelBlock",
"TodoistGetSharedLabelsBlock",
"TodoistRenameSharedLabelsBlock",
"TodoistRemoveSharedLabelsBlock",
"TodoistCreateTaskBlock",
"TodoistGetTasksBlock",
"TodoistGetTaskBlock",
"TodoistUpdateTaskBlock",
"TodoistCloseTaskBlock",
"TodoistReopenTaskBlock",
"TodoistDeleteTaskBlock",
"TodoistListSectionsBlock",
"TodoistGetSectionBlock",
"TodoistDeleteSectionBlock",
"TodoistCreateProjectBlock",
"TodoistGetProjectBlock",
"TodoistUpdateProjectBlock",
"TodoistDeleteProjectBlock",
"TodoistListCollaboratorsBlock",
"TodoistGetCommentsBlock",
"TodoistGetCommentBlock",
"TodoistUpdateCommentBlock",
"TodoistDeleteCommentBlock",
"GithubListStargazersBlock",
"Slant3DSlicerBlock",
}
block_instance = block()
# Skip blocks with known invalid UUIDs
if block_instance.__class__.__name__ in skip_blocks:
pytest.skip(
f"Skipping UUID check for {block_instance.__class__.__name__} - known invalid UUID"
)
# Check that the ID is not empty
assert block_instance.id, f"Block {block.name} has empty ID"
# Check that the ID is a valid UUID4
try:
parsed_uuid = uuid.UUID(block_instance.id)
# Verify it's specifically UUID version 4
assert (
parsed_uuid.version == 4
), f"Block {block.name} ID is UUID version {parsed_uuid.version}, expected version 4"
except ValueError:
pytest.fail(f"Block {block.name} has invalid UUID format: {block_instance.id}")

View File

@@ -161,7 +161,7 @@ async def test_smart_decision_maker_function_signature(server: SpinTestServer):
)
test_graph = await create_graph(server, test_graph, test_user)
tool_functions = SmartDecisionMakerBlock._create_function_signature(
tool_functions = await SmartDecisionMakerBlock._create_function_signature(
test_graph.nodes[0].id
)
assert tool_functions is not None, "Tool functions should not be None"

View File

@@ -0,0 +1,359 @@
import asyncio
import random
from datetime import datetime
from faker import Faker
from prisma import Prisma
faker = Faker()
async def check_cron_job(db):
"""Check if the pg_cron job for refreshing materialized views exists."""
print("\n1. Checking pg_cron job...")
print("-" * 40)
try:
# Check if pg_cron extension exists
extension_check = await db.query_raw("CREATE EXTENSION pg_cron;")
print(extension_check)
extension_check = await db.query_raw(
"SELECT COUNT(*) as count FROM pg_extension WHERE extname = 'pg_cron'"
)
if extension_check[0]["count"] == 0:
print("⚠️ pg_cron extension is not installed")
return False
# Check if the refresh job exists
job_check = await db.query_raw(
"""
SELECT jobname, schedule, command
FROM cron.job
WHERE jobname = 'refresh-store-views'
"""
)
if job_check:
job = job_check[0]
print("✅ pg_cron job found:")
print(f" Name: {job['jobname']}")
print(f" Schedule: {job['schedule']} (every 15 minutes)")
print(f" Command: {job['command']}")
return True
else:
print("⚠️ pg_cron job 'refresh-store-views' not found")
return False
except Exception as e:
print(f"❌ Error checking pg_cron: {e}")
return False
async def get_materialized_view_counts(db):
"""Get current counts from materialized views."""
print("\n2. Getting current materialized view data...")
print("-" * 40)
# Get counts from mv_agent_run_counts
agent_runs = await db.query_raw(
"""
SELECT COUNT(*) as total_agents,
SUM(run_count) as total_runs,
MAX(run_count) as max_runs,
MIN(run_count) as min_runs
FROM mv_agent_run_counts
"""
)
# Get counts from mv_review_stats
review_stats = await db.query_raw(
"""
SELECT COUNT(*) as total_listings,
SUM(review_count) as total_reviews,
AVG(avg_rating) as overall_avg_rating
FROM mv_review_stats
"""
)
# Get sample data from StoreAgent view
store_agents = await db.query_raw(
"""
SELECT COUNT(*) as total_store_agents,
AVG(runs) as avg_runs,
AVG(rating) as avg_rating
FROM "StoreAgent"
"""
)
agent_run_data = agent_runs[0] if agent_runs else {}
review_data = review_stats[0] if review_stats else {}
store_data = store_agents[0] if store_agents else {}
print("📊 mv_agent_run_counts:")
print(f" Total agents: {agent_run_data.get('total_agents', 0)}")
print(f" Total runs: {agent_run_data.get('total_runs', 0)}")
print(f" Max runs per agent: {agent_run_data.get('max_runs', 0)}")
print(f" Min runs per agent: {agent_run_data.get('min_runs', 0)}")
print("\n📊 mv_review_stats:")
print(f" Total listings: {review_data.get('total_listings', 0)}")
print(f" Total reviews: {review_data.get('total_reviews', 0)}")
print(f" Overall avg rating: {review_data.get('overall_avg_rating') or 0:.2f}")
print("\n📊 StoreAgent view:")
print(f" Total store agents: {store_data.get('total_store_agents', 0)}")
print(f" Average runs: {store_data.get('avg_runs') or 0:.2f}")
print(f" Average rating: {store_data.get('avg_rating') or 0:.2f}")
return {
"agent_runs": agent_run_data,
"reviews": review_data,
"store_agents": store_data,
}
async def add_test_data(db):
"""Add some test data to verify materialized view updates."""
print("\n3. Adding test data...")
print("-" * 40)
# Get some existing data
users = await db.user.find_many(take=5)
graphs = await db.agentgraph.find_many(take=5)
if not users or not graphs:
print("❌ No existing users or graphs found. Run test_data_creator.py first.")
return False
# Add new executions
print("Adding new agent graph executions...")
new_executions = 0
for graph in graphs:
for _ in range(random.randint(2, 5)):
await db.agentgraphexecution.create(
data={
"agentGraphId": graph.id,
"agentGraphVersion": graph.version,
"userId": random.choice(users).id,
"executionStatus": "COMPLETED",
"startedAt": datetime.now(),
}
)
new_executions += 1
print(f"✅ Added {new_executions} new executions")
# Check if we need to create store listings first
store_versions = await db.storelistingversion.find_many(
where={"submissionStatus": "APPROVED"}, take=5
)
if not store_versions:
print("\nNo approved store listings found. Creating test store listings...")
# Create store listings for existing agent graphs
for i, graph in enumerate(graphs[:3]): # Create up to 3 store listings
# Create a store listing
listing = await db.storelisting.create(
data={
"slug": f"test-agent-{graph.id[:8]}",
"agentGraphId": graph.id,
"agentGraphVersion": graph.version,
"hasApprovedVersion": True,
"owningUserId": graph.userId,
}
)
# Create an approved version
version = await db.storelistingversion.create(
data={
"storeListingId": listing.id,
"agentGraphId": graph.id,
"agentGraphVersion": graph.version,
"name": f"Test Agent {i+1}",
"subHeading": faker.catch_phrase(),
"description": faker.paragraph(nb_sentences=5),
"imageUrls": [faker.image_url()],
"categories": ["productivity", "automation"],
"submissionStatus": "APPROVED",
"submittedAt": datetime.now(),
}
)
# Update listing with active version
await db.storelisting.update(
where={"id": listing.id}, data={"activeVersionId": version.id}
)
print("✅ Created test store listings")
# Re-fetch approved versions
store_versions = await db.storelistingversion.find_many(
where={"submissionStatus": "APPROVED"}, take=5
)
# Add new reviews
print("\nAdding new store listing reviews...")
new_reviews = 0
for version in store_versions:
# Find users who haven't reviewed this version
existing_reviews = await db.storelistingreview.find_many(
where={"storeListingVersionId": version.id}
)
reviewed_user_ids = {r.reviewByUserId for r in existing_reviews}
available_users = [u for u in users if u.id not in reviewed_user_ids]
if available_users:
user = random.choice(available_users)
await db.storelistingreview.create(
data={
"storeListingVersionId": version.id,
"reviewByUserId": user.id,
"score": random.randint(3, 5),
"comments": faker.text(max_nb_chars=100),
}
)
new_reviews += 1
print(f"✅ Added {new_reviews} new reviews")
return True
async def refresh_materialized_views(db):
"""Manually refresh the materialized views."""
print("\n4. Manually refreshing materialized views...")
print("-" * 40)
try:
await db.execute_raw("SELECT refresh_store_materialized_views();")
print("✅ Materialized views refreshed successfully")
return True
except Exception as e:
print(f"❌ Error refreshing views: {e}")
return False
async def compare_counts(before, after):
"""Compare counts before and after refresh."""
print("\n5. Comparing counts before and after refresh...")
print("-" * 40)
# Compare agent runs
print("🔍 Agent run changes:")
before_runs = before["agent_runs"].get("total_runs") or 0
after_runs = after["agent_runs"].get("total_runs") or 0
print(
f" Total runs: {before_runs}{after_runs} " f"(+{after_runs - before_runs})"
)
# Compare reviews
print("\n🔍 Review changes:")
before_reviews = before["reviews"].get("total_reviews") or 0
after_reviews = after["reviews"].get("total_reviews") or 0
print(
f" Total reviews: {before_reviews}{after_reviews} "
f"(+{after_reviews - before_reviews})"
)
# Compare store agents
print("\n🔍 StoreAgent view changes:")
before_avg_runs = before["store_agents"].get("avg_runs", 0) or 0
after_avg_runs = after["store_agents"].get("avg_runs", 0) or 0
print(
f" Average runs: {before_avg_runs:.2f}{after_avg_runs:.2f} "
f"(+{after_avg_runs - before_avg_runs:.2f})"
)
# Verify changes occurred
runs_changed = (after["agent_runs"].get("total_runs") or 0) > (
before["agent_runs"].get("total_runs") or 0
)
reviews_changed = (after["reviews"].get("total_reviews") or 0) > (
before["reviews"].get("total_reviews") or 0
)
if runs_changed and reviews_changed:
print("\n✅ Materialized views are updating correctly!")
return True
else:
print("\n⚠️ Some materialized views may not have updated:")
if not runs_changed:
print(" - Agent run counts did not increase")
if not reviews_changed:
print(" - Review counts did not increase")
return False
async def main():
db = Prisma()
await db.connect()
print("=" * 60)
print("Materialized Views Test")
print("=" * 60)
try:
# Check if data exists
user_count = await db.user.count()
if user_count == 0:
print("❌ No data in database. Please run test_data_creator.py first.")
await db.disconnect()
return
# 1. Check cron job
cron_exists = await check_cron_job(db)
# 2. Get initial counts
counts_before = await get_materialized_view_counts(db)
# 3. Add test data
data_added = await add_test_data(db)
refresh_success = False
if data_added:
# Wait a moment for data to be committed
print("\nWaiting for data to be committed...")
await asyncio.sleep(2)
# 4. Manually refresh views
refresh_success = await refresh_materialized_views(db)
if refresh_success:
# 5. Get counts after refresh
counts_after = await get_materialized_view_counts(db)
# 6. Compare results
await compare_counts(counts_before, counts_after)
# Summary
print("\n" + "=" * 60)
print("Test Summary")
print("=" * 60)
print(f"✓ pg_cron job exists: {'Yes' if cron_exists else 'No'}")
print(f"✓ Test data added: {'Yes' if data_added else 'No'}")
print(f"✓ Manual refresh worked: {'Yes' if refresh_success else 'No'}")
print(
f"✓ Views updated correctly: {'Yes' if data_added and refresh_success else 'Cannot verify'}"
)
if cron_exists:
print(
"\n💡 The materialized views will also refresh automatically every 15 minutes via pg_cron."
)
else:
print(
"\n⚠️ Automatic refresh is not configured. Views must be refreshed manually."
)
except Exception as e:
print(f"\n❌ Test failed with error: {e}")
import traceback
traceback.print_exc()
await db.disconnect()
if __name__ == "__main__":
asyncio.run(main())
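When the 'refresh-store-views' job is missing, it can be registered through the same Prisma client. A hedged sketch: the refresh function name matches the manual refresh above, while the `cron.schedule` call and the 15-minute cron expression are assumptions based on pg_cron's documented API and this script's expectations:

```python
# Sketch: registering the pg_cron job the script above looks for.
from prisma import Prisma


async def schedule_refresh_job(db: Prisma) -> None:
    # cron.schedule(jobname, schedule, command) is pg_cron's standard API;
    # '*/15 * * * *' runs every 15 minutes, matching the script's expectation.
    await db.execute_raw(
        "SELECT cron.schedule("
        "'refresh-store-views', '*/15 * * * *', "
        "'SELECT refresh_store_materialized_views();');"
    )
```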

View File

@@ -0,0 +1,159 @@
#!/usr/bin/env python3
"""Check store-related data in the database."""
import asyncio
from prisma import Prisma
async def check_store_data(db):
"""Check what store data exists in the database."""
print("============================================================")
print("Store Data Check")
print("============================================================")
# Check store listings
print("\n1. Store Listings:")
print("-" * 40)
listings = await db.storelisting.find_many()
print(f"Total store listings: {len(listings)}")
if listings:
for listing in listings[:5]:
print(f"\nListing ID: {listing.id}")
print(f" Name: {listing.name}")
print(f" Status: {listing.status}")
print(f" Slug: {listing.slug}")
# Check store listing versions
print("\n\n2. Store Listing Versions:")
print("-" * 40)
versions = await db.storelistingversion.find_many(include={"StoreListing": True})
print(f"Total store listing versions: {len(versions)}")
# Group by submission status
status_counts = {}
for version in versions:
status = version.submissionStatus
status_counts[status] = status_counts.get(status, 0) + 1
print("\nVersions by status:")
for status, count in status_counts.items():
print(f" {status}: {count}")
# Show approved versions
approved_versions = [v for v in versions if v.submissionStatus == "APPROVED"]
print(f"\nApproved versions: {len(approved_versions)}")
if approved_versions:
for version in approved_versions[:5]:
print(f"\n Version ID: {version.id}")
print(f" Listing: {version.StoreListing.name}")
print(f" Version: {version.version}")
# Check store listing reviews
print("\n\n3. Store Listing Reviews:")
print("-" * 40)
reviews = await db.storelistingreview.find_many(
include={"StoreListingVersion": {"include": {"StoreListing": True}}}
)
print(f"Total reviews: {len(reviews)}")
if reviews:
# Calculate average rating
total_score = sum(r.score for r in reviews)
avg_score = total_score / len(reviews) if reviews else 0
print(f"Average rating: {avg_score:.2f}")
# Show sample reviews
print("\nSample reviews:")
for review in reviews[:3]:
print(f"\n Review for: {review.StoreListingVersion.StoreListing.name}")
print(f" Score: {review.score}")
print(f" Comments: {review.comments[:100]}...")
# Check StoreAgent view data
print("\n\n4. StoreAgent View Data:")
print("-" * 40)
# Query the StoreAgent view
query = """
SELECT
sa.listing_id,
sa.slug,
sa.agent_name,
sa.description,
sa.featured,
sa.runs,
sa.rating,
sa.creator_username,
sa.categories,
sa.updated_at
FROM "StoreAgent" sa
LIMIT 10;
"""
store_agents = await db.query_raw(query)
print(f"Total store agents in view: {len(store_agents)}")
if store_agents:
for agent in store_agents[:5]:
print(f"\nStore Agent: {agent['agent_name']}")
print(f" Slug: {agent['slug']}")
print(f" Runs: {agent['runs']}")
print(f" Rating: {agent['rating']}")
print(f" Creator: {agent['creator_username']}")
# Check the underlying data that should populate StoreAgent
print("\n\n5. Data that should populate StoreAgent view:")
print("-" * 40)
# Check for any APPROVED store listing versions
query = """
SELECT COUNT(*) as count
FROM "StoreListingVersion"
WHERE "submissionStatus" = 'APPROVED'
"""
result = await db.query_raw(query)
approved_count = result[0]["count"] if result else 0
print(f"Approved store listing versions: {approved_count}")
# Check for store listings with hasApprovedVersion = true
query = """
SELECT COUNT(*) as count
FROM "StoreListing"
WHERE "hasApprovedVersion" = true AND "isDeleted" = false
"""
result = await db.query_raw(query)
has_approved_count = result[0]["count"] if result else 0
print(f"Store listings with approved versions: {has_approved_count}")
# Check agent graph executions
query = """
SELECT COUNT(DISTINCT "agentGraphId") as unique_agents,
COUNT(*) as total_executions
FROM "AgentGraphExecution"
"""
result = await db.query_raw(query)
if result:
print("\nAgent Graph Executions:")
print(f" Unique agents with executions: {result[0]['unique_agents']}")
print(f" Total executions: {result[0]['total_executions']}")
async def main():
"""Main function."""
db = Prisma()
await db.connect()
try:
await check_store_data(db)
finally:
await db.disconnect()
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -425,28 +425,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
raise ValueError(f"{self.name} did not produce any output for {output}")
def merge_stats(self, stats: NodeExecutionStats) -> NodeExecutionStats:
stats_dict = stats.model_dump()
current_stats = self.execution_stats.model_dump()
for key, value in stats_dict.items():
if key not in current_stats:
# Field doesn't exist yet, so just set it. This shouldn't normally
# happen; handling it here means the NodeExecutionStats conversion
# below still raises on invalid data.
current_stats[key] = value
elif isinstance(value, dict) and isinstance(current_stats[key], dict):
current_stats[key].update(value)
elif isinstance(value, (int, float)) and isinstance(
current_stats[key], (int, float)
):
current_stats[key] += value
elif isinstance(value, list) and isinstance(current_stats[key], list):
current_stats[key].extend(value)
else:
current_stats[key] = value
self.execution_stats = NodeExecutionStats(**current_stats)
self.execution_stats += stats
return self.execution_stats
@property
@@ -513,6 +492,12 @@ def get_blocks() -> dict[str, Type[Block]]:
async def initialize_blocks() -> None:
# First, sync all provider costs to blocks
# Imported here to avoid circular import
from backend.sdk.cost_integration import sync_all_provider_costs
sync_all_provider_costs()
for cls in get_blocks().values():
block = cls()
existing_block = await AgentBlock.prisma().find_first(

View File

@@ -85,6 +85,9 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.EVA_QWEN_2_5_32B: 1,
LlmModel.DEEPSEEK_CHAT: 2,
LlmModel.PERPLEXITY_LLAMA_3_1_SONAR_LARGE_128K_ONLINE: 1,
LlmModel.PERPLEXITY_SONAR: 1,
LlmModel.PERPLEXITY_SONAR_PRO: 5,
LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: 10,
LlmModel.QWEN_QWQ_32B_PREVIEW: 2,
LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_405B: 1,
LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_70B: 1,

View File

@@ -93,6 +93,28 @@ async def locked_transaction(key: str):
yield tx
def get_database_schema() -> str:
"""Extract database schema from DATABASE_URL."""
parsed_url = urlparse(DATABASE_URL)
query_params = dict(parse_qsl(parsed_url.query))
return query_params.get("schema", "public")
async def query_raw_with_schema(query_template: str, *args) -> list[dict]:
"""Execute raw SQL query with proper schema handling."""
schema = get_database_schema()
schema_prefix = f"{schema}." if schema != "public" else ""
formatted_query = query_template.format(schema_prefix=schema_prefix)
import prisma as prisma_module
result = await prisma_module.get_client().query_raw(
formatted_query, *args # type: ignore
)
return result
class BaseDbModel(BaseModel):
id: str = Field(default_factory=lambda: str(uuid4()))
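A minimal usage sketch for the new helper: query templates carry a `{schema_prefix}` placeholder so the same SQL works on the default `public` schema and when `DATABASE_URL` specifies `?schema=...`. The query below is illustrative, not repo code:

```python
# Illustrative caller of query_raw_with_schema; the table name appears
# elsewhere in this diff, but this exact query is an example only.
from backend.data.db import query_raw_with_schema

async def count_agent_graphs() -> int:
    rows = await query_raw_with_schema(
        'SELECT COUNT(*) AS count FROM {schema_prefix}"AgentGraph"'
    )
    return int(rows[0]["count"]) if rows else 0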

View File

@@ -22,6 +22,7 @@ from prisma.models import (
AgentGraphExecution,
AgentNodeExecution,
AgentNodeExecutionInputOutput,
AgentNodeExecutionKeyValueData,
)
from prisma.types import (
AgentGraphExecutionCreateInput,
@@ -29,6 +30,7 @@ from prisma.types import (
AgentGraphExecutionWhereInput,
AgentNodeExecutionCreateInput,
AgentNodeExecutionInputOutputCreateInput,
AgentNodeExecutionKeyValueDataCreateInput,
AgentNodeExecutionUpdateInput,
AgentNodeExecutionWhereInput,
)
@@ -47,7 +49,7 @@ from .block import (
get_io_block_ids,
get_webhook_block_ids,
)
from .db import BaseDbModel
from .db import BaseDbModel, query_raw_with_schema
from .event_bus import AsyncRedisEventBus, RedisEventBus
from .includes import (
EXECUTION_RESULT_INCLUDE,
@@ -66,6 +68,21 @@ config = Config()
# -------------------------- Models -------------------------- #
class BlockErrorStats(BaseModel):
"""Typed data structure for block error statistics."""
block_id: str
total_executions: int
failed_executions: int
@property
def error_rate(self) -> float:
"""Calculate error rate as a percentage."""
if self.total_executions == 0:
return 0.0
return (self.failed_executions / self.total_executions) * 100
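Under this definition, `error_rate` is a percentage and stays safe for blocks with zero executions. A quick illustration with made-up numbers:

```python
# Quick illustration of BlockErrorStats.error_rate (values are made up).
from backend.data.execution import BlockErrorStats  # defined just above

stats = BlockErrorStats(block_id="example-block", total_executions=40, failed_executions=10)
assert stats.error_rate == 25.0
assert BlockErrorStats(block_id="x", total_executions=0, failed_executions=0).error_rate == 0.0
```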
ExecutionStatus = AgentExecutionStatus
@@ -347,6 +364,7 @@ class NodeExecutionResult(BaseModel):
async def get_graph_executions(
graph_exec_id: str | None = None,
graph_id: str | None = None,
user_id: str | None = None,
statuses: list[ExecutionStatus] | None = None,
@@ -354,9 +372,12 @@ async def get_graph_executions(
created_time_lte: datetime | None = None,
limit: int | None = None,
) -> list[GraphExecutionMeta]:
"""⚠️ **Optional `user_id` check**: MUST USE check in user-facing endpoints."""
where_filter: AgentGraphExecutionWhereInput = {
"isDeleted": False,
}
if graph_exec_id:
where_filter["id"] = graph_exec_id
if user_id:
where_filter["userId"] = user_id
if graph_id:
@@ -717,6 +738,7 @@ async def delete_graph_execution(
async def get_node_execution(node_exec_id: str) -> NodeExecutionResult | None:
"""⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
execution = await AgentNodeExecution.prisma().find_first(
where={"id": node_exec_id},
include=EXECUTION_RESULT_INCLUDE,
@@ -727,15 +749,19 @@ async def get_node_execution(node_exec_id: str) -> NodeExecutionResult | None:
async def get_node_executions(
graph_exec_id: str,
graph_exec_id: str | None = None,
node_id: str | None = None,
block_ids: list[str] | None = None,
statuses: list[ExecutionStatus] | None = None,
limit: int | None = None,
created_time_gte: datetime | None = None,
created_time_lte: datetime | None = None,
include_exec_data: bool = True,
) -> list[NodeExecutionResult]:
where_clause: AgentNodeExecutionWhereInput = {
"agentGraphExecutionId": graph_exec_id,
}
"""⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
where_clause: AgentNodeExecutionWhereInput = {}
if graph_exec_id:
where_clause["agentGraphExecutionId"] = graph_exec_id
if node_id:
where_clause["agentNodeId"] = node_id
if block_ids:
@@ -743,9 +769,19 @@ async def get_node_executions(
if statuses:
where_clause["OR"] = [{"executionStatus": status} for status in statuses]
if created_time_gte or created_time_lte:
where_clause["addedTime"] = {
"gte": created_time_gte or datetime.min.replace(tzinfo=timezone.utc),
"lte": created_time_lte or datetime.max.replace(tzinfo=timezone.utc),
}
executions = await AgentNodeExecution.prisma().find_many(
where=where_clause,
include=EXECUTION_RESULT_INCLUDE,
include=(
EXECUTION_RESULT_INCLUDE
if include_exec_data
else {"Node": True, "GraphExecution": True}
),
order=EXECUTION_RESULT_ORDER,
take=limit,
)
@@ -756,6 +792,7 @@ async def get_node_executions(
async def get_latest_node_execution(
node_id: str, graph_eid: str
) -> NodeExecutionResult | None:
"""⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
execution = await AgentNodeExecution.prisma().find_first(
where={
"agentGraphExecutionId": graph_eid,
@@ -904,3 +941,87 @@ class AsyncRedisExecutionEventBus(AsyncRedisEventBus[ExecutionEvent]):
) -> AsyncGenerator[ExecutionEvent, None]:
async for event in self.listen_events(f"{user_id}/{graph_id}/{graph_exec_id}"):
yield event
# --------------------- KV Data Functions --------------------- #
async def get_execution_kv_data(user_id: str, key: str) -> Any | None:
"""
Get key-value data for a user and key.
Args:
user_id: The id of the User.
key: The key to retrieve data for.
Returns:
The data associated with the key, or None if not found.
"""
kv_data = await AgentNodeExecutionKeyValueData.prisma().find_unique(
where={"userId_key": {"userId": user_id, "key": key}}
)
return (
type_utils.convert(kv_data.data, type[Any])
if kv_data and kv_data.data
else None
)
async def set_execution_kv_data(
user_id: str, node_exec_id: str, key: str, data: Any
) -> Any | None:
"""
Set key-value data for a user and key.
Args:
user_id: The id of the User.
node_exec_id: The id of the AgentNodeExecution.
key: The key to store data under.
data: The data to store.
"""
resp = await AgentNodeExecutionKeyValueData.prisma().upsert(
where={"userId_key": {"userId": user_id, "key": key}},
data={
"create": AgentNodeExecutionKeyValueDataCreateInput(
userId=user_id,
agentNodeExecutionId=node_exec_id,
key=key,
data=Json(data) if data is not None else None,
),
"update": {
"agentNodeExecutionId": node_exec_id,
"data": Json(data) if data is not None else None,
},
},
)
return type_utils.convert(resp.data, type[Any]) if resp and resp.data else None
async def get_block_error_stats(
start_time: datetime, end_time: datetime
) -> list[BlockErrorStats]:
"""Get block execution stats using efficient SQL aggregation."""
query_template = """
SELECT
n."agentBlockId" as block_id,
COUNT(*) as total_executions,
SUM(CASE WHEN ne."executionStatus" = 'FAILED' THEN 1 ELSE 0 END) as failed_executions
FROM {schema_prefix}"AgentNodeExecution" ne
JOIN {schema_prefix}"AgentNode" n ON ne."agentNodeId" = n.id
WHERE ne."addedTime" >= $1::timestamp AND ne."addedTime" <= $2::timestamp
GROUP BY n."agentBlockId"
HAVING COUNT(*) >= 10
"""
result = await query_raw_with_schema(query_template, start_time, end_time)
# Convert to typed data structures
return [
BlockErrorStats(
block_id=row["block_id"],
total_executions=int(row["total_executions"]),
failed_executions=int(row["failed_executions"]),
)
for row in result
]
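The KV helpers above give executions a simple per-user persistence primitive. A small sketch of how a caller might use them; the key name and counter logic are illustrative, not repo code:

```python
# Illustrative use of the new key-value helpers; import path follows the
# module used elsewhere in this diff.
from backend.data.execution import get_execution_kv_data, set_execution_kv_data

async def bump_run_counter(user_id: str, node_exec_id: str) -> int:
    current = await get_execution_kv_data(user_id, "run_counter") or 0
    new_value = int(current) + 1
    await set_execution_kv_data(user_id, node_exec_id, "run_counter", new_value)
    return new_value
```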

View File

@@ -3,7 +3,6 @@ import uuid
from collections import defaultdict
from typing import TYPE_CHECKING, Any, Literal, Optional, cast
import prisma
from prisma import Json
from prisma.enums import SubmissionStatus
from prisma.models import AgentGraph, AgentNode, AgentNodeLink, StoreListingVersion
@@ -14,7 +13,7 @@ from prisma.types import (
AgentNodeLinkCreateInput,
StoreListingVersionWhereInput,
)
from pydantic import JsonValue, create_model
from pydantic import Field, JsonValue, create_model
from pydantic.fields import computed_field
from backend.blocks.agent import AgentExecutorBlock
@@ -31,7 +30,7 @@ from backend.integrations.providers import ProviderName
from backend.util import type as type_utils
from .block import Block, BlockInput, BlockSchema, BlockType, get_block, get_blocks
from .db import BaseDbModel, transaction
from .db import BaseDbModel, query_raw_with_schema, transaction
from .includes import AGENT_GRAPH_INCLUDE, AGENT_NODE_INCLUDE
if TYPE_CHECKING:
@@ -189,6 +188,23 @@ class BaseGraph(BaseDbModel):
)
)
@computed_field
@property
def has_external_trigger(self) -> bool:
return self.webhook_input_node is not None
@property
def webhook_input_node(self) -> Node | None:
return next(
(
node
for node in self.nodes
if node.block.block_type
in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL)
),
None,
)
@staticmethod
def _generate_schema(
*props: tuple[type[AgentInputBlock.Input] | type[AgentOutputBlock.Input], dict],
@@ -326,11 +342,6 @@ class GraphModel(Graph):
user_id: str
nodes: list[NodeModel] = [] # type: ignore
@computed_field
@property
def has_webhook_trigger(self) -> bool:
return self.webhook_input_node is not None
@property
def starting_nodes(self) -> list[NodeModel]:
outbound_nodes = {link.sink_id for link in self.links}
@@ -343,17 +354,12 @@ class GraphModel(Graph):
if node.id not in outbound_nodes or node.id in input_nodes
]
@property
def webhook_input_node(self) -> NodeModel | None:
return next(
(
node
for node in self.nodes
if node.block.block_type
in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL)
),
None,
)
def meta(self) -> "GraphMeta":
"""
Returns a GraphMeta object with metadata about the graph.
This is used to return metadata about the graph without exposing nodes and links.
"""
return GraphMeta.from_graph(self)
def reassign_ids(self, user_id: str, reassign_graph_id: bool = False):
"""
@@ -389,8 +395,10 @@ class GraphModel(Graph):
# Reassign Link IDs
for link in graph.links:
link.source_id = id_map[link.source_id]
link.sink_id = id_map[link.sink_id]
if link.source_id in id_map:
link.source_id = id_map[link.source_id]
if link.sink_id in id_map:
link.sink_id = id_map[link.sink_id]
# Reassign User IDs for agent blocks
for node in graph.nodes:
@@ -610,6 +618,18 @@ class GraphModel(Graph):
)
class GraphMeta(Graph):
user_id: str
# Easy work-around to prevent exposing nodes and links in the API response
nodes: list[NodeModel] = Field(default=[], exclude=True) # type: ignore
links: list[Link] = Field(default=[], exclude=True)
@staticmethod
def from_graph(graph: GraphModel) -> "GraphMeta":
return GraphMeta(**graph.model_dump())
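Because `nodes` and `links` are declared with `exclude=True`, serializing a `GraphMeta` drops them, which is what keeps listing endpoints light. A quick sketch, assuming `graph_model` is an existing `GraphModel`:

```python
# Sketch: GraphMeta serializes without nodes/links.
meta = graph_model.meta()
payload = meta.model_dump()
assert "nodes" not in payload and "links" not in payload
```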
# --------------------- CRUD functions --------------------- #
@@ -638,10 +658,10 @@ async def set_node_webhook(node_id: str, webhook_id: str | None) -> NodeModel:
return NodeModel.from_db(node)
async def get_graphs(
async def list_graphs(
user_id: str,
filter_by: Literal["active"] | None = "active",
) -> list[GraphModel]:
) -> list[GraphMeta]:
"""
Retrieves graph metadata objects.
Default behaviour is to get all currently active graphs.
@@ -651,7 +671,7 @@ async def get_graphs(
user_id: The ID of the user that owns the graph.
Returns:
list[GraphModel]: A list of objects representing the retrieved graphs.
list[GraphMeta]: A list of objects representing the retrieved graphs.
"""
where_clause: AgentGraphWhereInput = {"userId": user_id}
@@ -665,13 +685,13 @@ async def get_graphs(
include=AGENT_GRAPH_INCLUDE,
)
graph_models = []
graph_models: list[GraphMeta] = []
for graph in graphs:
try:
graph_model = GraphModel.from_db(graph)
# Trigger serialization to validate that the graph is well formed.
graph_model.model_dump()
graph_models.append(graph_model)
graph_meta = GraphModel.from_db(graph).meta()
# Trigger serialization to validate that the graph is well formed
graph_meta.model_dump()
graph_models.append(graph_meta)
except Exception as e:
logger.error(f"Error processing graph {graph.id}: {e}")
continue
@@ -1038,13 +1058,13 @@ async def fix_llm_provider_credentials():
broken_nodes = []
try:
broken_nodes = await prisma.get_client().query_raw(
broken_nodes = await query_raw_with_schema(
"""
SELECT graph."userId" user_id,
node.id node_id,
node."constantInput" node_preset_input
FROM platform."AgentNode" node
LEFT JOIN platform."AgentGraph" graph
FROM {schema_prefix}"AgentNode" node
LEFT JOIN {schema_prefix}"AgentGraph" graph
ON node."agentGraphId" = graph.id
WHERE node."constantInput"::jsonb->'credentials'->>'provider' = 'llm'
ORDER BY graph."userId";

View File

@@ -42,6 +42,9 @@ from pydantic_core import (
from backend.integrations.providers import ProviderName
from backend.util.settings import Secrets
# Type alias for any provider name (including custom ones)
AnyProviderName = str # Will be validated as ProviderName at runtime
if TYPE_CHECKING:
from backend.data.block import BlockSchema
@@ -341,7 +344,7 @@ class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
type: CT
@classmethod
def allowed_providers(cls) -> tuple[ProviderName, ...]:
def allowed_providers(cls) -> tuple[ProviderName, ...] | None:
return get_args(cls.model_fields["provider"].annotation)
@classmethod
@@ -366,7 +369,12 @@ class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
f"{field_schema}"
) from e
if len(cls.allowed_providers()) > 1 and not schema_extra.discriminator:
providers = cls.allowed_providers()
if (
providers is not None
and len(providers) > 1
and not schema_extra.discriminator
):
raise TypeError(
f"Multi-provider CredentialsField '{field_name}' "
"requires discriminator!"
@@ -378,7 +386,12 @@ class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
if hasattr(model_class, "allowed_providers") and hasattr(
model_class, "allowed_cred_types"
):
schema["credentials_provider"] = model_class.allowed_providers()
allowed_providers = model_class.allowed_providers()
# If no specific providers (None), allow any string
if allowed_providers is None:
schema["credentials_provider"] = ["string"] # Allow any string provider
else:
schema["credentials_provider"] = allowed_providers
schema["credentials_types"] = model_class.allowed_cred_types()
# Do not return anything, just mutate schema in place
@@ -540,6 +553,11 @@ def CredentialsField(
if v is not None
}
# Merge any json_schema_extra passed in kwargs
if "json_schema_extra" in kwargs:
extra_schema = kwargs.pop("json_schema_extra")
field_schema_extra.update(extra_schema)
return Field(
title=title,
description=description,
@@ -618,6 +636,35 @@ class NodeExecutionStats(BaseModel):
llm_retry_count: int = 0
input_token_count: int = 0
output_token_count: int = 0
extra_cost: int = 0
extra_steps: int = 0
def __iadd__(self, other: "NodeExecutionStats") -> "NodeExecutionStats":
"""Mutate this instance by adding another NodeExecutionStats."""
if not isinstance(other, NodeExecutionStats):
return NotImplemented
stats_dict = other.model_dump()
current_stats = self.model_dump()
for key, value in stats_dict.items():
if key not in current_stats:
# Field doesn't exist yet, just set it
setattr(self, key, value)
elif isinstance(value, dict) and isinstance(current_stats[key], dict):
current_stats[key].update(value)
setattr(self, key, current_stats[key])
elif isinstance(value, (int, float)) and isinstance(
current_stats[key], (int, float)
):
setattr(self, key, current_stats[key] + value)
elif isinstance(value, list) and isinstance(current_stats[key], list):
current_stats[key].extend(value)
setattr(self, key, current_stats[key])
else:
setattr(self, key, value)
return self
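With `__iadd__` in place, stat accumulation reduces to `+=`, which is what `Block.merge_stats` and the executor now rely on. A small sketch, assuming (as with the fields shown here) that the remaining stats fields default to zero or empty values:

```python
# Sketch: accumulating node stats via the new in-place addition.
from backend.data.model import NodeExecutionStats  # module path assumed

totals = NodeExecutionStats()
totals += NodeExecutionStats(input_token_count=120, output_token_count=40)
totals += NodeExecutionStats(input_token_count=80, extra_cost=1)
assert totals.input_token_count == 200
assert totals.extra_cost == 1
```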
class GraphExecutionStats(BaseModel):

View File

@@ -5,12 +5,15 @@ from backend.data import db
from backend.data.credit import UsageTransactionMetadata, get_user_credit_model
from backend.data.execution import (
create_graph_execution,
get_block_error_stats,
get_execution_kv_data,
get_graph_execution,
get_graph_execution_meta,
get_graph_executions,
get_latest_node_execution,
get_node_execution,
get_node_executions,
set_execution_kv_data,
update_graph_execution_start_time,
update_graph_execution_stats,
update_node_execution_stats,
@@ -101,6 +104,9 @@ class DatabaseManager(AppService):
update_node_execution_stats = _(update_node_execution_stats)
upsert_execution_input = _(upsert_execution_input)
upsert_execution_output = _(upsert_execution_output)
get_execution_kv_data = _(get_execution_kv_data)
set_execution_kv_data = _(set_execution_kv_data)
get_block_error_stats = _(get_block_error_stats)
# Graphs
get_node = _(get_node)
@@ -159,6 +165,8 @@ class DatabaseManagerClient(AppServiceClient):
update_node_execution_stats = _(d.update_node_execution_stats)
upsert_execution_input = _(d.upsert_execution_input)
upsert_execution_output = _(d.upsert_execution_output)
get_execution_kv_data = _(d.get_execution_kv_data)
set_execution_kv_data = _(d.set_execution_kv_data)
# Graphs
get_node = _(d.get_node)
@@ -193,6 +201,9 @@ class DatabaseManagerClient(AppServiceClient):
d.get_user_notification_oldest_message_in_batch
)
# Block error monitoring
get_block_error_stats = _(d.get_block_error_stats)
class DatabaseManagerAsyncClient(AppServiceClient):
d = DatabaseManager
@@ -202,8 +213,11 @@ class DatabaseManagerAsyncClient(AppServiceClient):
return DatabaseManager
create_graph_execution = d.create_graph_execution
get_connected_output_nodes = d.get_connected_output_nodes
get_latest_node_execution = d.get_latest_node_execution
get_graph = d.get_graph
get_graph_metadata = d.get_graph_metadata
get_graph_execution_meta = d.get_graph_execution_meta
get_node = d.get_node
get_node_execution = d.get_node_execution
get_node_executions = d.get_node_executions
@@ -215,3 +229,6 @@ class DatabaseManagerAsyncClient(AppServiceClient):
update_node_execution_status = d.update_node_execution_status
update_node_execution_status_batch = d.update_node_execution_status_batch
update_user_integrations = d.update_user_integrations
get_execution_kv_data = d.get_execution_kv_data
set_execution_kv_data = d.set_execution_kv_data
get_block_error_stats = d.get_block_error_stats

View File

@@ -24,7 +24,7 @@ from backend.data.notifications import (
NotificationType,
)
from backend.data.rabbitmq import SyncRabbitMQ
from backend.executor.utils import create_execution_queue_config
from backend.executor.utils import LogMetadata, create_execution_queue_config
from backend.notifications.notifications import queue_notification
from backend.util.exceptions import InsufficientBalanceError
@@ -98,35 +98,6 @@ utilization_gauge = Gauge(
)
class LogMetadata(TruncatedLogger):
def __init__(
self,
user_id: str,
graph_eid: str,
graph_id: str,
node_eid: str,
node_id: str,
block_name: str,
max_length: int = 1000,
):
metadata = {
"component": "ExecutionManager",
"user_id": user_id,
"graph_eid": graph_eid,
"graph_id": graph_id,
"node_eid": node_eid,
"node_id": node_id,
"block_name": block_name,
}
prefix = f"[ExecutionManager|uid:{user_id}|gid:{graph_id}|nid:{node_id}]|geid:{graph_eid}|neid:{node_eid}|{block_name}]"
super().__init__(
_logger,
max_length=max_length,
prefix=prefix,
metadata=metadata,
)
T = TypeVar("T")
@@ -158,6 +129,7 @@ async def execute_node(
node_block = node.block
log_metadata = LogMetadata(
logger=_logger,
user_id=user_id,
graph_eid=graph_exec_id,
graph_id=graph_id,
@@ -235,9 +207,7 @@ async def execute_node(
# Update execution stats
if execution_stats is not None:
execution_stats = execution_stats.model_copy(
update=node_block.execution_stats.model_dump()
)
execution_stats += node_block.execution_stats
execution_stats.input_size = input_size
execution_stats.output_size = output_size
@@ -429,6 +399,7 @@ class Executor:
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = None,
) -> NodeExecutionStats:
log_metadata = LogMetadata(
logger=_logger,
user_id=node_exec.user_id,
graph_eid=node_exec.graph_exec_id,
graph_id=node_exec.graph_id,
@@ -534,6 +505,7 @@ class Executor:
cls, graph_exec: GraphExecutionEntry, cancel: threading.Event
):
log_metadata = LogMetadata(
logger=_logger,
user_id=graph_exec.user_id,
graph_eid=graph_exec.graph_exec_id,
graph_id=graph_exec.graph_id,
@@ -674,9 +646,10 @@ class Executor:
return
nonlocal execution_stats
execution_stats.node_count += 1
execution_stats.node_count += 1 + result.extra_steps
execution_stats.nodes_cputime += result.cputime
execution_stats.nodes_walltime += result.walltime
execution_stats.cost += result.extra_cost
if (err := result.error) and isinstance(err, Exception):
execution_stats.node_error_count += 1
update_node_execution_status(
@@ -861,6 +834,7 @@ class Executor:
f"Failed graph execution {graph_exec.graph_exec_id}: {error}"
)
finally:
# Cancel and wait for all node executions to complete
for node_id, inflight_exec in running_node_execution.items():
if inflight_exec.is_done():
continue
@@ -873,6 +847,28 @@ class Executor:
log_metadata.info(f"Stopping node evaluation {node_id}")
inflight_eval.cancel()
for node_id, inflight_exec in running_node_execution.items():
if inflight_exec.is_done():
continue
try:
inflight_exec.wait_for_cancellation(timeout=60.0)
except TimeoutError:
log_metadata.exception(
f"Node execution #{node_id} did not stop in time, "
"it may be stuck or taking too long."
)
for node_id, inflight_eval in running_node_evaluation.items():
if inflight_eval.done():
continue
try:
inflight_eval.result(timeout=60.0)
except TimeoutError:
log_metadata.exception(
f"Node evaluation #{node_id} did not stop in time, "
"it may be stuck or taking too long."
)
if execution_status in [ExecutionStatus.TERMINATED, ExecutionStatus.FAILED]:
inflight_executions = db_client.get_node_executions(
graph_exec.graph_exec_id,
@@ -880,6 +876,7 @@ class Executor:
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
],
include_exec_data=False,
)
db_client.update_node_execution_status_batch(
[node_exec.node_exec_id for node_exec in inflight_executions],

View File

@@ -7,7 +7,8 @@ from prisma.models import User
import backend.server.v2.library.model
import backend.server.v2.store.model
from backend.blocks.basic import FindInDictionaryBlock, StoreValueBlock
from backend.blocks.basic import StoreValueBlock
from backend.blocks.data_manipulation import FindInDictionaryBlock
from backend.blocks.io import AgentInputBlock
from backend.blocks.maths import CalculatorBlock, Operation
from backend.data import execution, graph

View File

@@ -1,7 +1,6 @@
import asyncio
import logging
import os
from datetime import datetime, timedelta, timezone
from enum import Enum
from typing import Optional
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse
@@ -14,25 +13,23 @@ from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.triggers.cron import CronTrigger
from autogpt_libs.utils.cache import thread_cached
from dotenv import load_dotenv
from prisma.enums import NotificationType
from pydantic import BaseModel, Field, ValidationError
from sqlalchemy import MetaData, create_engine
from backend.data.block import BlockInput
from backend.data.execution import ExecutionStatus
from backend.data.execution import GraphExecutionWithNodes
from backend.data.model import CredentialsMetaInput
from backend.executor import utils as execution_utils
from backend.notifications.notifications import NotificationManagerClient
from backend.monitoring import (
NotificationJobArgs,
process_existing_batches,
process_weekly_summary,
report_block_error_rates,
report_late_executions,
)
from backend.util.exceptions import NotAuthorizedError, NotFoundError
from backend.util.logging import PrefixFilter
from backend.util.metrics import sentry_capture_error
from backend.util.service import (
AppService,
AppServiceClient,
endpoint_to_async,
expose,
get_service_client,
)
from backend.util.service import AppService, AppServiceClient, endpoint_to_async, expose
from backend.util.settings import Config
@@ -71,11 +68,6 @@ def job_listener(event):
logger.info(f"Job {event.job_id} completed successfully.")
@thread_cached
def get_notification_client():
return get_service_client(NotificationManagerClient)
@thread_cached
def get_event_loop():
return asyncio.new_event_loop()
@@ -89,7 +81,7 @@ async def _execute_graph(**kwargs):
args = GraphExecutionJobArgs(**kwargs)
try:
logger.info(f"Executing recurring job for graph #{args.graph_id}")
await execution_utils.add_graph_execution(
graph_exec: GraphExecutionWithNodes = await execution_utils.add_graph_execution(
user_id=args.user_id,
graph_id=args.graph_id,
graph_version=args.graph_version,
@@ -97,65 +89,14 @@ async def _execute_graph(**kwargs):
graph_credentials_inputs=args.input_credentials,
use_db_query=False,
)
logger.info(
f"Graph execution started with ID {graph_exec.id} for graph {args.graph_id}"
)
except Exception as e:
logger.error(f"Error executing graph {args.graph_id}: {e}")
class LateExecutionException(Exception):
pass
def report_late_executions() -> str:
late_executions = execution_utils.get_db_client().get_graph_executions(
statuses=[ExecutionStatus.QUEUED],
created_time_gte=datetime.now(timezone.utc)
- timedelta(seconds=config.execution_late_notification_checkrange_secs),
created_time_lte=datetime.now(timezone.utc)
- timedelta(seconds=config.execution_late_notification_threshold_secs),
limit=1000,
)
if not late_executions:
return "No late executions detected."
num_late_executions = len(late_executions)
num_users = len(set([r.user_id for r in late_executions]))
late_execution_details = [
f"* `Execution ID: {exec.id}, Graph ID: {exec.graph_id}v{exec.graph_version}, User ID: {exec.user_id}, Created At: {exec.started_at.isoformat()}`"
for exec in late_executions
]
error = LateExecutionException(
f"Late executions detected: {num_late_executions} late executions from {num_users} users "
f"in the last {config.execution_late_notification_checkrange_secs} seconds. "
f"Graph has been queued for more than {config.execution_late_notification_threshold_secs} seconds. "
"Please check the executor status. Details:\n"
+ "\n".join(late_execution_details)
)
msg = str(error)
sentry_capture_error(error)
get_notification_client().discord_system_alert(msg)
return msg
def process_existing_batches(**kwargs):
args = NotificationJobArgs(**kwargs)
try:
logger.info(
f"Processing existing batches for notification type {args.notification_types}"
)
get_notification_client().process_existing_batches(args.notification_types)
except Exception as e:
logger.error(f"Error processing existing batches: {e}")
def process_weekly_summary(**kwargs):
try:
logger.info("Processing weekly summary")
get_notification_client().queue_weekly_summary()
except Exception as e:
logger.error(f"Error processing weekly summary: {e}")
# Monitoring functions are now imported from monitoring module
class Jobstores(Enum):
@@ -190,11 +131,6 @@ class GraphExecutionJobInfo(GraphExecutionJobArgs):
)
class NotificationJobArgs(BaseModel):
notification_types: list[NotificationType]
cron: str
class NotificationJobInfo(NotificationJobArgs):
id: str
name: str
@@ -287,6 +223,16 @@ class Scheduler(AppService):
jobstore=Jobstores.EXECUTION.value,
)
# Block Error Rate Monitoring
self.scheduler.add_job(
report_block_error_rates,
id="report_block_error_rates",
trigger="interval",
replace_existing=True,
seconds=config.block_error_rate_check_interval_secs,
jobstore=Jobstores.EXECUTION.value,
)
self.scheduler.add_listener(job_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
self.scheduler.start()
@@ -379,6 +325,10 @@ class Scheduler(AppService):
def execute_report_late_executions(self):
return report_late_executions()
@expose
def execute_report_block_error_rates(self):
return report_block_error_rates()
class SchedulerClient(AppServiceClient):
@classmethod

View File

@@ -1,5 +1,6 @@
import asyncio
import logging
import time
from collections import defaultdict
from concurrent.futures import Future
from typing import TYPE_CHECKING, Any, Callable, Optional, cast
@@ -7,6 +8,8 @@ from typing import TYPE_CHECKING, Any, Callable, Optional, cast
from autogpt_libs.utils.cache import thread_cached
from pydantic import BaseModel, JsonValue
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.block import (
Block,
BlockData,
@@ -23,12 +26,8 @@ from backend.data.execution import (
GraphExecutionStats,
GraphExecutionWithNodes,
RedisExecutionEventBus,
create_graph_execution,
get_node_executions,
update_graph_execution_stats,
update_node_execution_status_batch,
)
from backend.data.graph import GraphModel, Node, get_graph
from backend.data.graph import GraphModel, Node
from backend.data.model import CredentialsMetaInput
from backend.data.rabbitmq import (
AsyncRabbitMQ,
@@ -55,6 +54,36 @@ logger = TruncatedLogger(logging.getLogger(__name__), prefix="[GraphExecutorUtil
# ============ Resource Helpers ============ #
class LogMetadata(TruncatedLogger):
def __init__(
self,
logger: logging.Logger,
user_id: str,
graph_eid: str,
graph_id: str,
node_eid: str,
node_id: str,
block_name: str,
max_length: int = 1000,
):
metadata = {
"component": "ExecutionManager",
"user_id": user_id,
"graph_eid": graph_eid,
"graph_id": graph_id,
"node_eid": node_eid,
"node_id": node_id,
"block_name": block_name,
}
prefix = f"[ExecutionManager|uid:{user_id}|gid:{graph_id}|nid:{node_id}]|geid:{graph_eid}|neid:{node_eid}|{block_name}]"
super().__init__(
logger,
max_length=max_length,
prefix=prefix,
metadata=metadata,
)
@thread_cached
def get_execution_event_bus() -> RedisExecutionEventBus:
return RedisExecutionEventBus()
@@ -653,8 +682,10 @@ def create_execution_queue_config() -> RabbitMQConfig:
async def stop_graph_execution(
user_id: str,
graph_exec_id: str,
use_db_query: bool = True,
wait_timeout: float = 60.0,
):
"""
Mechanism:
@@ -664,66 +695,57 @@ async def stop_graph_execution(
3. Update execution statuses in DB and set `error` outputs to `"TERMINATED"`.
"""
queue_client = await get_async_execution_queue()
db = execution_db if use_db_query else get_db_async_client()
await queue_client.publish_message(
routing_key="",
message=CancelExecutionEvent(graph_exec_id=graph_exec_id).model_dump_json(),
exchange=GRAPH_EXECUTION_CANCEL_EXCHANGE,
)
# Update the status of the graph execution
if use_db_query:
graph_execution = await update_graph_execution_stats(
graph_exec_id,
ExecutionStatus.TERMINATED,
)
else:
graph_execution = await get_db_async_client().update_graph_execution_stats(
graph_exec_id,
ExecutionStatus.TERMINATED,
if not wait_timeout:
return
start_time = time.time()
while time.time() - start_time < wait_timeout:
graph_exec = await db.get_graph_execution_meta(
execution_id=graph_exec_id, user_id=user_id
)
if graph_execution:
await get_async_execution_event_bus().publish(graph_execution)
else:
raise NotFoundError(
f"Graph execution #{graph_exec_id} not found for termination."
)
if not graph_exec:
raise NotFoundError(f"Graph execution #{graph_exec_id} not found.")
# Update the status of the node executions
if use_db_query:
node_executions = await get_node_executions(
graph_exec_id=graph_exec_id,
statuses=[
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
ExecutionStatus.INCOMPLETE,
],
)
await update_node_execution_status_batch(
[v.node_exec_id for v in node_executions],
if graph_exec.status in [
ExecutionStatus.TERMINATED,
)
else:
node_executions = await get_db_async_client().get_node_executions(
graph_exec_id=graph_exec_id,
statuses=[
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
ExecutionStatus.INCOMPLETE,
],
)
await get_db_async_client().update_node_execution_status_batch(
[v.node_exec_id for v in node_executions],
ExecutionStatus.TERMINATED,
)
ExecutionStatus.COMPLETED,
ExecutionStatus.FAILED,
]:
# If graph execution is terminated/completed/failed, cancellation is complete
return
await asyncio.gather(
*[
get_async_execution_event_bus().publish(
v.model_copy(update={"status": ExecutionStatus.TERMINATED})
elif graph_exec.status in [
ExecutionStatus.QUEUED,
ExecutionStatus.INCOMPLETE,
]:
# If the graph execution is still queued, its node executions can be
# prevented from running by setting their status to TERMINATED.
node_execs = await db.get_node_executions(
graph_exec_id=graph_exec_id,
statuses=[ExecutionStatus.QUEUED, ExecutionStatus.INCOMPLETE],
include_exec_data=False,
)
for v in node_executions
]
await db.update_node_execution_status_batch(
[node_exec.node_exec_id for node_exec in node_execs],
ExecutionStatus.TERMINATED,
)
await db.update_graph_execution_stats(
graph_exec_id=graph_exec_id,
status=ExecutionStatus.TERMINATED,
)
await asyncio.sleep(1.0)
raise TimeoutError(
f"Timed out waiting for graph execution #{graph_exec_id} to terminate."
)
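In short, the reworked `stop_graph_execution` publishes the cancel event and then polls the execution's status until it reaches a terminal state (or the wait times out), terminating any still-queued node executions along the way. A compact sketch of that flow with illustrative helper names (not verbatim repo code):

```python
# Simplified sketch of the cancellation flow; publish_cancel_event and
# terminate_queued_node_executions are hypothetical stand-ins.
async def stop_graph_execution_sketch(user_id: str, graph_exec_id: str,
                                      wait_timeout: float = 60.0) -> None:
    await publish_cancel_event(graph_exec_id)  # stand-in for the RabbitMQ publish
    deadline = time.time() + wait_timeout
    while time.time() < deadline:
        meta = await db.get_graph_execution_meta(execution_id=graph_exec_id, user_id=user_id)
        if not meta:
            raise NotFoundError(f"Graph execution #{graph_exec_id} not found.")
        if meta.status in (ExecutionStatus.TERMINATED, ExecutionStatus.COMPLETED, ExecutionStatus.FAILED):
            return  # cancellation finished
        if meta.status in (ExecutionStatus.QUEUED, ExecutionStatus.INCOMPLETE):
            await terminate_queued_node_executions(graph_exec_id)  # stand-in helper
        await asyncio.sleep(1.0)
    raise TimeoutError(f"Timed out waiting for graph execution #{graph_exec_id} to terminate.")
```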
@@ -753,22 +775,16 @@ async def add_graph_execution(
GraphExecutionEntry: The entry for the graph execution.
Raises:
ValueError: If the graph is not found or if there are validation errors.
""" # noqa
if use_db_query:
graph: GraphModel | None = await get_graph(
graph_id=graph_id,
user_id=user_id,
version=graph_version,
include_subgraphs=True,
)
else:
graph: GraphModel | None = await get_db_async_client().get_graph(
graph_id=graph_id,
user_id=user_id,
version=graph_version,
include_subgraphs=True,
)
"""
gdb = graph_db if use_db_query else get_db_async_client()
edb = execution_db if use_db_query else get_db_async_client()
graph: GraphModel | None = await gdb.get_graph(
graph_id=graph_id,
user_id=user_id,
version=graph_version,
include_subgraphs=True,
)
if not graph:
raise NotFoundError(f"Graph #{graph_id} not found.")
@@ -787,22 +803,13 @@ async def add_graph_execution(
nodes_input_masks=nodes_input_masks,
)
if use_db_query:
graph_exec = await create_graph_execution(
user_id=user_id,
graph_id=graph_id,
graph_version=graph.version,
starting_nodes_input=starting_nodes_input,
preset_id=preset_id,
)
else:
graph_exec = await get_db_async_client().create_graph_execution(
user_id=user_id,
graph_id=graph_id,
graph_version=graph.version,
starting_nodes_input=starting_nodes_input,
preset_id=preset_id,
)
graph_exec = await edb.create_graph_execution(
user_id=user_id,
graph_id=graph_id,
graph_version=graph.version,
starting_nodes_input=starting_nodes_input,
preset_id=preset_id,
)
try:
queue = await get_async_execution_queue()
@@ -821,28 +828,15 @@ async def add_graph_execution(
return graph_exec
except Exception as e:
logger.error(f"Unable to publish graph #{graph_id} exec #{graph_exec.id}: {e}")
if use_db_query:
await update_node_execution_status_batch(
[node_exec.node_exec_id for node_exec in graph_exec.node_executions],
ExecutionStatus.FAILED,
)
await update_graph_execution_stats(
graph_exec_id=graph_exec.id,
status=ExecutionStatus.FAILED,
stats=GraphExecutionStats(error=str(e)),
)
else:
await get_db_async_client().update_node_execution_status_batch(
[node_exec.node_exec_id for node_exec in graph_exec.node_executions],
ExecutionStatus.FAILED,
)
await get_db_async_client().update_graph_execution_stats(
graph_exec_id=graph_exec.id,
status=ExecutionStatus.FAILED,
stats=GraphExecutionStats(error=str(e)),
)
await edb.update_node_execution_status_batch(
[node_exec.node_exec_id for node_exec in graph_exec.node_executions],
ExecutionStatus.FAILED,
)
await edb.update_graph_execution_stats(
graph_exec_id=graph_exec.id,
status=ExecutionStatus.FAILED,
stats=GraphExecutionStats(error=str(e)),
)
raise
@@ -897,14 +891,10 @@ class NodeExecutionProgress:
try:
self.tasks[exec_id].result(wait_time)
except TimeoutError:
print(
">>>>>>> -- Timeout, after waiting for",
wait_time,
"seconds for node_id",
exec_id,
)
pass
except Exception as e:
logger.error(f"Task for exec ID {exec_id} failed with error: {str(e)}")
pass
return self.is_done(0)
def stop(self) -> list[str]:
@@ -921,6 +911,25 @@ class NodeExecutionProgress:
cancelled_ids.append(task_id)
return cancelled_ids
def wait_for_cancellation(self, timeout: float = 5.0):
"""
Wait for all cancelled tasks to complete cancellation.
Args:
timeout: Maximum time to wait for cancellation in seconds
"""
start_time = time.time()
while time.time() - start_time < timeout:
# Check if all tasks are done (either completed or cancelled)
if all(task.done() for task in self.tasks.values()):
return True
time.sleep(0.1) # Small delay to avoid busy waiting
raise TimeoutError(
f"Timeout waiting for cancellation of tasks: {list(self.tasks.keys())}"
)
def _pop_done_task(self, exec_id: str) -> bool:
task = self.tasks.get(exec_id)
if not task:
@@ -933,8 +942,10 @@ class NodeExecutionProgress:
return False
if task := self.tasks.pop(exec_id):
self.on_done_task(exec_id, task.result())
try:
self.on_done_task(exec_id, task.result())
except Exception as e:
logger.error(f"Task for exec ID {exec_id} failed with error: {str(e)}")
return True
def _next_exec(self) -> str | None:

View File

@@ -1,29 +1,226 @@
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Optional
from pydantic import BaseModel
from backend.integrations.oauth.todoist import TodoistOAuthHandler
from .github import GitHubOAuthHandler
from .google import GoogleOAuthHandler
from .linear import LinearOAuthHandler
from .notion import NotionOAuthHandler
from .twitter import TwitterOAuthHandler
if TYPE_CHECKING:
from ..providers import ProviderName
from .base import BaseOAuthHandler
# --8<-- [start:HANDLERS_BY_NAMEExample]
HANDLERS_BY_NAME: dict["ProviderName", type["BaseOAuthHandler"]] = {
handler.PROVIDER_NAME: handler
for handler in [
GitHubOAuthHandler,
GoogleOAuthHandler,
NotionOAuthHandler,
TwitterOAuthHandler,
LinearOAuthHandler,
TodoistOAuthHandler,
]
# Build handlers dict with string keys for compatibility with SDK auto-registration
_ORIGINAL_HANDLERS = [
GitHubOAuthHandler,
GoogleOAuthHandler,
NotionOAuthHandler,
TwitterOAuthHandler,
TodoistOAuthHandler,
]
# Start with original handlers
_handlers_dict = {
(
handler.PROVIDER_NAME.value
if hasattr(handler.PROVIDER_NAME, "value")
else str(handler.PROVIDER_NAME)
): handler
for handler in _ORIGINAL_HANDLERS
}
class SDKAwareCredentials(BaseModel):
"""OAuth credentials configuration."""
use_secrets: bool = True
client_id_env_var: Optional[str] = None
client_secret_env_var: Optional[str] = None
_credentials_by_provider = {}
# Add default credentials for original handlers
for handler in _ORIGINAL_HANDLERS:
provider_name = (
handler.PROVIDER_NAME.value
if hasattr(handler.PROVIDER_NAME, "value")
else str(handler.PROVIDER_NAME)
)
_credentials_by_provider[provider_name] = SDKAwareCredentials(
use_secrets=True, client_id_env_var=None, client_secret_env_var=None
)
# Create a custom dict class that includes SDK handlers
class SDKAwareHandlersDict(dict):
"""Dictionary that automatically includes SDK-registered OAuth handlers."""
def __getitem__(self, key):
# First try the original handlers
if key in _handlers_dict:
return _handlers_dict[key]
# Then try SDK handlers
try:
from backend.sdk import AutoRegistry
sdk_handlers = AutoRegistry.get_oauth_handlers()
if key in sdk_handlers:
return sdk_handlers[key]
except ImportError:
pass
# If not found, raise KeyError
raise KeyError(key)
def get(self, key, default=None):
try:
return self[key]
except KeyError:
return default
def __contains__(self, key):
if key in _handlers_dict:
return True
try:
from backend.sdk import AutoRegistry
sdk_handlers = AutoRegistry.get_oauth_handlers()
return key in sdk_handlers
except ImportError:
return False
def keys(self):
# Combine all keys into a single dict and return its keys view
combined = dict(_handlers_dict)
try:
from backend.sdk import AutoRegistry
sdk_handlers = AutoRegistry.get_oauth_handlers()
combined.update(sdk_handlers)
except ImportError:
pass
return combined.keys()
def values(self):
combined = dict(_handlers_dict)
try:
from backend.sdk import AutoRegistry
sdk_handlers = AutoRegistry.get_oauth_handlers()
combined.update(sdk_handlers)
except ImportError:
pass
return combined.values()
def items(self):
combined = dict(_handlers_dict)
try:
from backend.sdk import AutoRegistry
sdk_handlers = AutoRegistry.get_oauth_handlers()
combined.update(sdk_handlers)
except ImportError:
pass
return combined.items()
class SDKAwareCredentialsDict(dict):
"""Dictionary that automatically includes SDK-registered OAuth credentials."""
def __getitem__(self, key):
# First try the original handlers
if key in _credentials_by_provider:
return _credentials_by_provider[key]
# Then try SDK credentials
try:
from backend.sdk import AutoRegistry
sdk_credentials = AutoRegistry.get_oauth_credentials()
if key in sdk_credentials:
# Convert from SDKOAuthCredentials to SDKAwareCredentials
sdk_cred = sdk_credentials[key]
return SDKAwareCredentials(
use_secrets=sdk_cred.use_secrets,
client_id_env_var=sdk_cred.client_id_env_var,
client_secret_env_var=sdk_cred.client_secret_env_var,
)
except ImportError:
pass
# If not found, raise KeyError
raise KeyError(key)
def get(self, key, default=None):
try:
return self[key]
except KeyError:
return default
def __contains__(self, key):
if key in _credentials_by_provider:
return True
try:
from backend.sdk import AutoRegistry
sdk_credentials = AutoRegistry.get_oauth_credentials()
return key in sdk_credentials
except ImportError:
return False
def keys(self):
# Combine all keys into a single dict and return its keys view
combined = dict(_credentials_by_provider)
try:
from backend.sdk import AutoRegistry
sdk_credentials = AutoRegistry.get_oauth_credentials()
combined.update(sdk_credentials)
except ImportError:
pass
return combined.keys()
def values(self):
combined = dict(_credentials_by_provider)
try:
from backend.sdk import AutoRegistry
sdk_credentials = AutoRegistry.get_oauth_credentials()
# Convert SDK credentials to SDKAwareCredentials
for key, sdk_cred in sdk_credentials.items():
combined[key] = SDKAwareCredentials(
use_secrets=sdk_cred.use_secrets,
client_id_env_var=sdk_cred.client_id_env_var,
client_secret_env_var=sdk_cred.client_secret_env_var,
)
except ImportError:
pass
return combined.values()
def items(self):
combined = dict(_credentials_by_provider)
try:
from backend.sdk import AutoRegistry
sdk_credentials = AutoRegistry.get_oauth_credentials()
# Convert SDK credentials to SDKAwareCredentials
for key, sdk_cred in sdk_credentials.items():
combined[key] = SDKAwareCredentials(
use_secrets=sdk_cred.use_secrets,
client_id_env_var=sdk_cred.client_id_env_var,
client_secret_env_var=sdk_cred.client_secret_env_var,
)
except ImportError:
pass
return combined.items()
HANDLERS_BY_NAME: dict[str, type["BaseOAuthHandler"]] = SDKAwareHandlersDict()
CREDENTIALS_BY_PROVIDER: dict[str, SDKAwareCredentials] = SDKAwareCredentialsDict()
# --8<-- [end:HANDLERS_BY_NAMEExample]
__all__ = ["HANDLERS_BY_NAME"]
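Lookups keep working with plain string keys and fall through to SDK-registered providers when a key is not among the built-in handlers. A short sketch; the import path is an assumption based on how this module is laid out:

```python
# Sketch: string-keyed handler lookup with SDK fallback.
from backend.integrations.oauth import HANDLERS_BY_NAME  # path assumed

handler_cls = HANDLERS_BY_NAME.get("github")          # built-in handler
custom_cls = HANDLERS_BY_NAME.get("my_sdk_provider")  # None unless the SDK registered it
print(handler_cls, custom_cls)
```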

View File

@@ -11,7 +11,7 @@ logger = logging.getLogger(__name__)
class BaseOAuthHandler(ABC):
# --8<-- [start:BaseOAuthHandler1]
PROVIDER_NAME: ClassVar[ProviderName]
PROVIDER_NAME: ClassVar[ProviderName | str]
DEFAULT_SCOPES: ClassVar[list[str]] = []
# --8<-- [end:BaseOAuthHandler1]
@@ -81,8 +81,6 @@ class BaseOAuthHandler(ABC):
"""Handles the default scopes for the provider"""
# If scopes are empty, use the default scopes for the provider
if not scopes:
logger.debug(
f"Using default scopes for provider {self.PROVIDER_NAME.value}"
)
logger.debug(f"Using default scopes for provider {str(self.PROVIDER_NAME)}")
scopes = self.DEFAULT_SCOPES
return scopes
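Since `PROVIDER_NAME` may now be a plain string, an SDK provider can ship its own handler without extending the `ProviderName` enum. A trimmed sketch; the abstract OAuth methods of `BaseOAuthHandler` still have to be implemented and are omitted here:

```python
# Sketch of a custom handler declaring a string provider name.
from backend.integrations.oauth.base import BaseOAuthHandler  # path assumed

class MyServiceOAuthHandler(BaseOAuthHandler):
    PROVIDER_NAME = "my_service"   # plain string instead of a ProviderName member
    DEFAULT_SCOPES = ["read"]
    # ... abstract OAuth methods omitted for brevity
```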

View File

@@ -1,8 +1,16 @@
from enum import Enum
from typing import Any
# --8<-- [start:ProviderName]
class ProviderName(str, Enum):
"""
Provider names for integrations.
This enum extends str to accept any string value while maintaining
backward compatibility with existing provider constants.
"""
AIML_API = "aiml_api"
ANTHROPIC = "anthropic"
APOLLO = "apollo"
@@ -10,9 +18,7 @@ class ProviderName(str, Enum):
DISCORD = "discord"
D_ID = "d_id"
E2B = "e2b"
EXA = "exa"
FAL = "fal"
GENERIC_WEBHOOK = "generic_webhook"
GITHUB = "github"
GOOGLE = "google"
GOOGLE_MAPS = "google_maps"
@@ -21,7 +27,6 @@ class ProviderName(str, Enum):
HUBSPOT = "hubspot"
IDEOGRAM = "ideogram"
JINA = "jina"
LINEAR = "linear"
LLAMA_API = "llama_api"
MEDIUM = "medium"
MEM0 = "mem0"
@@ -43,4 +48,57 @@ class ProviderName(str, Enum):
TODOIST = "todoist"
UNREAL_SPEECH = "unreal_speech"
ZEROBOUNCE = "zerobounce"
@classmethod
def _missing_(cls, value: Any) -> "ProviderName":
"""
Allow any string value to be used as a ProviderName.
This enables SDK users to define custom providers without
modifying the enum.
"""
if isinstance(value, str):
# Create a pseudo-member that behaves like an enum member
pseudo_member = str.__new__(cls, value)
pseudo_member._name_ = value.upper()
pseudo_member._value_ = value
return pseudo_member
return None # type: ignore
@classmethod
def __get_pydantic_json_schema__(cls, schema, handler):
"""
Custom JSON schema generation that allows any string value,
not just the predefined enum values.
"""
# Get the default schema
json_schema = handler(schema)
# Remove the enum constraint to allow any string
if "enum" in json_schema:
del json_schema["enum"]
# Keep the type as string
json_schema["type"] = "string"
# Update description to indicate custom providers are allowed
json_schema["description"] = (
"Provider name for integrations. "
"Can be any string value, including custom provider names."
)
return json_schema
@classmethod
def __get_pydantic_core_schema__(cls, source_type, handler):
"""
Pydantic v2 core schema that allows any string value.
"""
from pydantic_core import core_schema
# Create a string schema that validates any string
return core_schema.no_info_after_validator_function(
cls,
core_schema.str_schema(),
)
# --8<-- [end:ProviderName]
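The effect of `_missing_` is that constructing `ProviderName` from an unknown string yields a usable pseudo-member instead of raising, while existing members are unchanged:

```python
# Quick illustration of the relaxed ProviderName behaviour.
from backend.integrations.providers import ProviderName

custom = ProviderName("my_custom_provider")
assert isinstance(custom, ProviderName)
assert custom.value == "my_custom_provider"
assert ProviderName.GITHUB == "github"  # built-in members behave as before
```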

View File

@@ -12,7 +12,6 @@ def load_webhook_managers() -> dict["ProviderName", type["BaseWebhooksManager"]]
webhook_managers = {}
from .compass import CompassWebhookManager
from .generic import GenericWebhooksManager
from .github import GithubWebhooksManager
from .slant3d import Slant3DWebhooksManager
@@ -23,7 +22,6 @@ def load_webhook_managers() -> dict["ProviderName", type["BaseWebhooksManager"]]
CompassWebhookManager,
GithubWebhooksManager,
Slant3DWebhooksManager,
GenericWebhooksManager,
]
}
)

View File

@@ -0,0 +1,24 @@
"""Monitoring module for platform health and alerting."""
from .block_error_monitor import BlockErrorMonitor, report_block_error_rates
from .late_execution_monitor import (
LateExecutionException,
LateExecutionMonitor,
report_late_executions,
)
from .notification_monitor import (
NotificationJobArgs,
process_existing_batches,
process_weekly_summary,
)
__all__ = [
"BlockErrorMonitor",
"LateExecutionMonitor",
"LateExecutionException",
"NotificationJobArgs",
"report_block_error_rates",
"report_late_executions",
"process_existing_batches",
"process_weekly_summary",
]

View File

@@ -0,0 +1,291 @@
"""Block error rate monitoring module."""
import logging
import re
from datetime import datetime, timedelta, timezone
from pydantic import BaseModel
from backend.data.block import get_block
from backend.data.execution import ExecutionStatus, NodeExecutionResult
from backend.executor import utils as execution_utils
from backend.notifications.notifications import NotificationManagerClient
from backend.util.metrics import sentry_capture_error
from backend.util.service import get_service_client
from backend.util.settings import Config
logger = logging.getLogger(__name__)
config = Config()
class BlockStatsWithSamples(BaseModel):
"""Enhanced block stats with error samples."""
block_id: str
block_name: str
total_executions: int
failed_executions: int
error_samples: list[str] = []
@property
def error_rate(self) -> float:
"""Calculate error rate as a percentage."""
if self.total_executions == 0:
return 0.0
return (self.failed_executions / self.total_executions) * 100
class BlockErrorMonitor:
"""Monitor block error rates and send alerts when thresholds are exceeded."""
def __init__(self, include_top_blocks: int | None = None):
self.config = config
self.notification_client = get_service_client(NotificationManagerClient)
self.include_top_blocks = (
include_top_blocks
if include_top_blocks is not None
else config.block_error_include_top_blocks
)
def check_block_error_rates(self) -> str:
"""Check block error rates and send Discord alerts if thresholds are exceeded."""
try:
logger.info("Checking block error rates")
# Get executions from the last 24 hours
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(hours=24)
# Use SQL aggregation to efficiently count totals and failures by block
block_stats = self._get_block_stats_from_db(start_time, end_time)
# For blocks with high error rates, fetch error samples
threshold = self.config.block_error_rate_threshold
for block_name, stats in block_stats.items():
if stats.total_executions >= 10 and stats.error_rate >= threshold * 100:
# Only fetch error samples for blocks that exceed threshold
error_samples = self._get_error_samples_for_block(
stats.block_id, start_time, end_time, limit=3
)
stats.error_samples = error_samples
# Check thresholds and send alerts
critical_alerts = self._generate_critical_alerts(block_stats, threshold)
if critical_alerts:
msg = "Block Error Rate Alert:\n\n" + "\n\n".join(critical_alerts)
self.notification_client.discord_system_alert(msg)
logger.info(
f"Sent block error rate alert for {len(critical_alerts)} blocks"
)
return f"Alert sent for {len(critical_alerts)} blocks with high error rates"
# If no critical alerts, check if we should show top blocks
if self.include_top_blocks > 0:
top_blocks_msg = self._generate_top_blocks_alert(
block_stats, start_time, end_time
)
if top_blocks_msg:
self.notification_client.discord_system_alert(top_blocks_msg)
logger.info("Sent top blocks summary")
return "Sent top blocks summary"
logger.info("No blocks exceeded error rate threshold")
return "No errors reported for today"
except Exception as e:
logger.exception(f"Error checking block error rates: {e}")
error = Exception(f"Error checking block error rates: {e}")
msg = str(error)
sentry_capture_error(error)
self.notification_client.discord_system_alert(msg)
return msg
def _get_block_stats_from_db(
self, start_time: datetime, end_time: datetime
) -> dict[str, BlockStatsWithSamples]:
"""Get block execution stats using efficient SQL aggregation."""
result = execution_utils.get_db_client().get_block_error_stats(
start_time, end_time
)
block_stats = {}
for stats in result:
block_name = b.name if (b := get_block(stats.block_id)) else "Unknown"
block_stats[block_name] = BlockStatsWithSamples(
block_id=stats.block_id,
block_name=block_name,
total_executions=stats.total_executions,
failed_executions=stats.failed_executions,
error_samples=[],
)
return block_stats
def _generate_critical_alerts(
self, block_stats: dict[str, BlockStatsWithSamples], threshold: float
) -> list[str]:
"""Generate alerts for blocks that exceed the error rate threshold."""
alerts = []
for block_name, stats in block_stats.items():
if stats.total_executions >= 10 and stats.error_rate >= threshold * 100:
error_groups = self._group_similar_errors(stats.error_samples)
alert_msg = (
f"🚨 Block '{block_name}' has {stats.error_rate:.1f}% error rate "
f"({stats.failed_executions}/{stats.total_executions}) in the last 24 hours"
)
if error_groups:
alert_msg += "\n\n📊 Error Types:"
for error_pattern, count in error_groups.items():
alert_msg += f"\n{error_pattern} ({count}x)"
alerts.append(alert_msg)
return alerts
def _generate_top_blocks_alert(
self,
block_stats: dict[str, BlockStatsWithSamples],
start_time: datetime,
end_time: datetime,
) -> str | None:
"""Generate top blocks summary when no critical alerts exist."""
top_error_blocks = sorted(
[
(name, stats)
for name, stats in block_stats.items()
if stats.total_executions >= 10 and stats.failed_executions > 0
],
key=lambda x: x[1].failed_executions,
reverse=True,
)[: self.include_top_blocks]
if not top_error_blocks:
return "✅ No errors reported for today - all blocks are running smoothly!"
# Get error samples for top blocks
for block_name, stats in top_error_blocks:
if not stats.error_samples:
stats.error_samples = self._get_error_samples_for_block(
stats.block_id, start_time, end_time, limit=2
)
count_text = (
f"top {self.include_top_blocks}" if self.include_top_blocks > 1 else "top"
)
alert_msg = f"📊 Daily Error Summary - {count_text} blocks with most errors:"
for block_name, stats in top_error_blocks:
alert_msg += f"\n{block_name}: {stats.failed_executions} errors ({stats.error_rate:.1f}% of {stats.total_executions})"
if stats.error_samples:
error_groups = self._group_similar_errors(stats.error_samples)
if error_groups:
# Show most common error
most_common_error = next(iter(error_groups.items()))
alert_msg += f"\n └ Most common: {most_common_error[0]}"
return alert_msg
def _get_error_samples_for_block(
self, block_id: str, start_time: datetime, end_time: datetime, limit: int = 3
) -> list[str]:
"""Get error samples for a specific block - just a few recent ones."""
# Only fetch a small number of recent failed executions for this specific block
executions = execution_utils.get_db_client().get_node_executions(
block_ids=[block_id],
statuses=[ExecutionStatus.FAILED],
created_time_gte=start_time,
created_time_lte=end_time,
limit=limit, # Just get the limit we need
)
error_samples = []
for execution in executions:
if error_message := self._extract_error_message(execution):
masked_error = self._mask_sensitive_data(error_message)
error_samples.append(masked_error)
if len(error_samples) >= limit: # Stop once we have enough samples
break
return error_samples
def _extract_error_message(self, execution: NodeExecutionResult) -> str | None:
"""Extract error message from execution output."""
try:
if execution.output_data and (
error_msg := execution.output_data.get("error")
):
return str(error_msg[0])
return None
except Exception:
return None
def _mask_sensitive_data(self, error_message):
"""Mask sensitive data in error messages to enable grouping."""
if not error_message:
return ""
# Convert to string if not already
error_str = str(error_message)
# Mask numbers (replace with X)
error_str = re.sub(r"\d+", "X", error_str)
# Mask all caps words (likely constants/IDs)
error_str = re.sub(r"\b[A-Z_]{3,}\b", "MASKED", error_str)
# Mask words with underscores (likely internal variables)
error_str = re.sub(r"\b\w*_\w*\b", "MASKED", error_str)
# Mask UUIDs and long alphanumeric strings
error_str = re.sub(
r"\b[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}\b",
"UUID",
error_str,
)
error_str = re.sub(r"\b[a-f0-9]{20,}\b", "HASH", error_str)
# Mask file paths
error_str = re.sub(r"(/[^/\s]+)+", "/MASKED/path", error_str)
# Mask URLs
error_str = re.sub(r"https?://[^\s]+", "URL", error_str)
# Mask email addresses
error_str = re.sub(
r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b", "EMAIL", error_str
)
# Truncate if too long
if len(error_str) > 100:
error_str = error_str[:97] + "..."
return error_str.strip()
def _group_similar_errors(self, error_samples):
"""Group similar error messages and return counts."""
if not error_samples:
return {}
error_groups = {}
for error in error_samples:
if error in error_groups:
error_groups[error] += 1
else:
error_groups[error] = 1
# Sort by frequency, most common first
return dict(sorted(error_groups.items(), key=lambda x: x[1], reverse=True))
def report_block_error_rates(include_top_blocks: int | None = None):
"""Check block error rates and send Discord alerts if thresholds are exceeded."""
monitor = BlockErrorMonitor(include_top_blocks=include_top_blocks)
return monitor.check_block_error_rates()

View File
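The daily summary above leans on two helpers: `_mask_sensitive_data` normalizes numbers, identifiers, paths, URLs and emails, and `_group_similar_errors` counts the normalized strings. A minimal standalone sketch of that grouping idea (regex patterns abridged from the code above, sample messages invented):

```
import re
from collections import Counter

def mask(error: str) -> str:
    # Abridged version of the normalization above: digits first, then URLs.
    error = re.sub(r"\d+", "X", error)
    error = re.sub(r"https?://\S+", "URL", error)
    return error

samples = [
    "Timeout after 30s calling https://api.example.com/v1",
    "Timeout after 45s calling https://api.example.com/v2",
]
groups = Counter(mask(s) for s in samples)
# Both messages normalize to "Timeout after Xs calling URL", so they group together:
# Counter({'Timeout after Xs calling URL': 2})
```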

@@ -0,0 +1,71 @@
"""Late execution monitoring module."""
import logging
from datetime import datetime, timedelta, timezone
from backend.data.execution import ExecutionStatus
from backend.executor import utils as execution_utils
from backend.notifications.notifications import NotificationManagerClient
from backend.util.metrics import sentry_capture_error
from backend.util.service import get_service_client
from backend.util.settings import Config
logger = logging.getLogger(__name__)
config = Config()
class LateExecutionException(Exception):
"""Exception raised when late executions are detected."""
pass
class LateExecutionMonitor:
"""Monitor late executions and send alerts when thresholds are exceeded."""
def __init__(self):
self.config = config
self.notification_client = get_service_client(NotificationManagerClient)
def check_late_executions(self) -> str:
"""Check for late executions and send alerts if found."""
late_executions = execution_utils.get_db_client().get_graph_executions(
statuses=[ExecutionStatus.QUEUED],
created_time_gte=datetime.now(timezone.utc)
- timedelta(
seconds=self.config.execution_late_notification_checkrange_secs
),
created_time_lte=datetime.now(timezone.utc)
- timedelta(seconds=self.config.execution_late_notification_threshold_secs),
limit=1000,
)
if not late_executions:
return "No late executions detected."
num_late_executions = len(late_executions)
num_users = len(set([r.user_id for r in late_executions]))
late_execution_details = [
f"* `Execution ID: {exec.id}, Graph ID: {exec.graph_id}v{exec.graph_version}, User ID: {exec.user_id}, Created At: {exec.started_at.isoformat()}`"
for exec in late_executions
]
error = LateExecutionException(
f"Late executions detected: {num_late_executions} late executions from {num_users} users "
f"in the last {self.config.execution_late_notification_checkrange_secs} seconds. "
f"Graph has been queued for more than {self.config.execution_late_notification_threshold_secs} seconds. "
"Please check the executor status. Details:\n"
+ "\n".join(late_execution_details)
)
msg = str(error)
sentry_capture_error(error)
self.notification_client.discord_system_alert(msg)
return msg
def report_late_executions() -> str:
"""Check for late executions and send Discord alerts if found."""
monitor = LateExecutionMonitor()
return monitor.check_late_executions()

View File
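The query above picks up executions that are still `QUEUED` and were created inside a sliding window: newer than the check range, but older than the lateness threshold. A small sketch of that window arithmetic (the two durations are illustrative, not the configured defaults):

```
from datetime import datetime, timedelta, timezone

checkrange_secs = 4 * 60 * 60  # how far back to look for stuck executions
threshold_secs = 5 * 60        # how long QUEUED must last to count as "late"

now = datetime.now(timezone.utc)
created_after = now - timedelta(seconds=checkrange_secs)
created_before = now - timedelta(seconds=threshold_secs)
# Anything still QUEUED with created_after <= created_at <= created_before has been
# waiting for at least threshold_secs and ends up in the Discord alert above.
```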

@@ -0,0 +1,39 @@
"""Notification processing monitoring module."""
import logging
from prisma.enums import NotificationType
from pydantic import BaseModel
from backend.notifications.notifications import NotificationManagerClient
from backend.util.service import get_service_client
logger = logging.getLogger(__name__)
class NotificationJobArgs(BaseModel):
notification_types: list[NotificationType]
cron: str
def process_existing_batches(**kwargs):
"""Process existing notification batches."""
args = NotificationJobArgs(**kwargs)
try:
logger.info(
f"Processing existing batches for notification type {args.notification_types}"
)
get_service_client(NotificationManagerClient).process_existing_batches(
args.notification_types
)
except Exception as e:
logger.exception(f"Error processing existing batches: {e}")
def process_weekly_summary(**kwargs):
"""Process weekly summary notifications."""
try:
logging.info("Processing weekly summary")
get_service_client(NotificationManagerClient).queue_weekly_summary()
except Exception as e:
logger.exception(f"Error processing weekly summary: {e}")

View File

@@ -0,0 +1,169 @@
"""
AutoGPT Platform Block Development SDK
Complete re-export of all dependencies needed for block development.
Usage: from backend.sdk import *
This module provides:
- All block base classes and types
- All credential and authentication components
- All cost tracking components
- All webhook components
- All utility functions
- Auto-registration decorators
"""
# Third-party imports
from pydantic import BaseModel, Field, SecretStr
# === CORE BLOCK SYSTEM ===
from backend.data.block import (
Block,
BlockCategory,
BlockManualWebhookConfig,
BlockOutput,
BlockSchema,
BlockType,
BlockWebhookConfig,
)
from backend.data.integrations import Webhook
from backend.data.model import APIKeyCredentials, Credentials, CredentialsField
from backend.data.model import CredentialsMetaInput as _CredentialsMetaInput
from backend.data.model import (
NodeExecutionStats,
OAuth2Credentials,
SchemaField,
UserPasswordCredentials,
)
# === INTEGRATIONS ===
from backend.integrations.providers import ProviderName
from backend.sdk.builder import ProviderBuilder
from backend.sdk.cost_integration import cost
from backend.sdk.provider import Provider
# === NEW SDK COMPONENTS (imported early for patches) ===
from backend.sdk.registry import AutoRegistry, BlockConfiguration
# === UTILITIES ===
from backend.util import json
from backend.util.request import Requests
# === OPTIONAL IMPORTS WITH TRY/EXCEPT ===
# Webhooks
try:
from backend.integrations.webhooks._base import BaseWebhooksManager
except ImportError:
BaseWebhooksManager = None
try:
from backend.integrations.webhooks._manual_base import ManualWebhookManagerBase
except ImportError:
ManualWebhookManagerBase = None
# Cost System
try:
from backend.data.cost import BlockCost, BlockCostType
except ImportError:
from backend.data.block_cost_config import BlockCost, BlockCostType
try:
from backend.data.credit import UsageTransactionMetadata
except ImportError:
UsageTransactionMetadata = None
try:
from backend.executor.utils import block_usage_cost
except ImportError:
block_usage_cost = None
# Utilities
try:
from backend.util.file import store_media_file
except ImportError:
store_media_file = None
try:
from backend.util.type import MediaFileType, convert
except ImportError:
MediaFileType = None
convert = None
try:
from backend.util.text import TextFormatter
except ImportError:
TextFormatter = None
try:
from backend.util.logging import TruncatedLogger
except ImportError:
TruncatedLogger = None
# OAuth handlers
try:
from backend.integrations.oauth.base import BaseOAuthHandler
except ImportError:
BaseOAuthHandler = None
# Credential type with proper provider name
from typing import Literal as _Literal
CredentialsMetaInput = _CredentialsMetaInput[
ProviderName, _Literal["api_key", "oauth2", "user_password"]
]
# === COMPREHENSIVE __all__ EXPORT ===
__all__ = [
# Core Block System
"Block",
"BlockCategory",
"BlockOutput",
"BlockSchema",
"BlockType",
"BlockWebhookConfig",
"BlockManualWebhookConfig",
# Schema and Model Components
"SchemaField",
"Credentials",
"CredentialsField",
"CredentialsMetaInput",
"APIKeyCredentials",
"OAuth2Credentials",
"UserPasswordCredentials",
"NodeExecutionStats",
# Cost System
"BlockCost",
"BlockCostType",
"UsageTransactionMetadata",
"block_usage_cost",
# Integrations
"ProviderName",
"BaseWebhooksManager",
"ManualWebhookManagerBase",
"Webhook",
# Provider-Specific (when available)
"BaseOAuthHandler",
# Utilities
"json",
"store_media_file",
"MediaFileType",
"convert",
"TextFormatter",
"TruncatedLogger",
"BaseModel",
"Field",
"SecretStr",
"Requests",
# SDK Components
"AutoRegistry",
"BlockConfiguration",
"Provider",
"ProviderBuilder",
"cost",
]
# Remove None values from __all__
__all__ = [name for name in __all__ if globals().get(name) is not None]

View File
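Because the optional imports fall back to `None` and the final line filters `__all__` accordingly, a wildcard import only exposes the pieces that actually resolved in the current environment. A rough sketch of probing that behavior (inferred from the module above):

```
from backend import sdk

# Core block system re-exports are always present.
assert "Block" in sdk.__all__

# Optional utilities are exported only when their import succeeded; otherwise the
# attribute is None and the name is dropped from __all__.
if "TextFormatter" in sdk.__all__:
    formatter = sdk.TextFormatter
else:
    formatter = None  # backend.util.text was unavailable in this environment
```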

@@ -0,0 +1,161 @@
"""
Builder class for creating provider configurations with a fluent API.
"""
import os
from typing import Callable, List, Optional, Type
from pydantic import SecretStr
from backend.data.cost import BlockCost, BlockCostType
from backend.data.model import APIKeyCredentials, Credentials, UserPasswordCredentials
from backend.integrations.oauth.base import BaseOAuthHandler
from backend.integrations.webhooks._base import BaseWebhooksManager
from backend.sdk.provider import OAuthConfig, Provider
from backend.sdk.registry import AutoRegistry
from backend.util.settings import Settings
class ProviderBuilder:
"""Builder for creating provider configurations."""
def __init__(self, name: str):
self.name = name
self._oauth_config: Optional[OAuthConfig] = None
self._webhook_manager: Optional[Type[BaseWebhooksManager]] = None
self._default_credentials: List[Credentials] = []
self._base_costs: List[BlockCost] = []
self._supported_auth_types: set = set()
self._api_client_factory: Optional[Callable] = None
self._error_handler: Optional[Callable[[Exception], str]] = None
self._default_scopes: Optional[List[str]] = None
self._client_id_env_var: Optional[str] = None
self._client_secret_env_var: Optional[str] = None
self._extra_config: dict = {}
def with_oauth(
self,
handler_class: Type[BaseOAuthHandler],
scopes: Optional[List[str]] = None,
client_id_env_var: Optional[str] = None,
client_secret_env_var: Optional[str] = None,
) -> "ProviderBuilder":
"""Add OAuth support."""
self._oauth_config = OAuthConfig(
oauth_handler=handler_class,
scopes=scopes,
client_id_env_var=client_id_env_var,
client_secret_env_var=client_secret_env_var,
)
self._supported_auth_types.add("oauth2")
return self
def with_api_key(self, env_var_name: str, title: str) -> "ProviderBuilder":
"""Add API key support with environment variable name."""
self._supported_auth_types.add("api_key")
# Register the API key mapping
AutoRegistry.register_api_key(self.name, env_var_name)
# Check if API key exists in environment
api_key = os.getenv(env_var_name)
if api_key:
self._default_credentials.append(
APIKeyCredentials(
id=f"{self.name}-default",
provider=self.name,
api_key=SecretStr(api_key),
title=title,
)
)
return self
def with_api_key_from_settings(
self, settings_attr: str, title: str
) -> "ProviderBuilder":
"""Use existing API key from settings."""
self._supported_auth_types.add("api_key")
# Try to get the API key from settings
settings = Settings()
api_key = getattr(settings.secrets, settings_attr, None)
if api_key:
self._default_credentials.append(
APIKeyCredentials(
id=f"{self.name}-default",
provider=self.name,
api_key=api_key,
title=title,
)
)
return self
def with_user_password(
self, username_env_var: str, password_env_var: str, title: str
) -> "ProviderBuilder":
"""Add username/password support with environment variable names."""
self._supported_auth_types.add("user_password")
# Check if credentials exist in environment
username = os.getenv(username_env_var)
password = os.getenv(password_env_var)
if username and password:
self._default_credentials.append(
UserPasswordCredentials(
id=f"{self.name}-default",
provider=self.name,
username=SecretStr(username),
password=SecretStr(password),
title=title,
)
)
return self
def with_webhook_manager(
self, manager_class: Type[BaseWebhooksManager]
) -> "ProviderBuilder":
"""Register webhook manager for this provider."""
self._webhook_manager = manager_class
return self
def with_base_cost(
self, amount: int, cost_type: BlockCostType
) -> "ProviderBuilder":
"""Set base cost for all blocks using this provider."""
self._base_costs.append(BlockCost(cost_amount=amount, cost_type=cost_type))
return self
def with_api_client(self, factory: Callable) -> "ProviderBuilder":
"""Register API client factory."""
self._api_client_factory = factory
return self
def with_error_handler(
self, handler: Callable[[Exception], str]
) -> "ProviderBuilder":
"""Register error handler for provider-specific errors."""
self._error_handler = handler
return self
def with_config(self, **kwargs) -> "ProviderBuilder":
"""Add additional configuration options."""
self._extra_config.update(kwargs)
return self
def build(self) -> Provider:
"""Build and register the provider configuration."""
provider = Provider(
name=self.name,
oauth_config=self._oauth_config,
webhook_manager=self._webhook_manager,
default_credentials=self._default_credentials,
base_costs=self._base_costs,
supported_auth_types=self._supported_auth_types,
api_client_factory=self._api_client_factory,
error_handler=self._error_handler,
**self._extra_config,
)
# Auto-registration happens here
AutoRegistry.register_provider(provider)
return provider

View File
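A usage sketch of the fluent API above; the provider name, env var, and cost amount are placeholders, not values from this diff:

```
from backend.data.cost import BlockCostType
from backend.sdk import ProviderBuilder

example_service = (
    ProviderBuilder("example-service")
    .with_api_key("EXAMPLE_SERVICE_API_KEY", "Example Service API Key")
    .with_base_cost(amount=1, cost_type=BlockCostType.RUN)
    .build()  # registers the provider with AutoRegistry and returns it
)
```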

@@ -0,0 +1,163 @@
"""
Integration between SDK provider costs and the execution cost system.
This module provides the glue between provider-defined base costs and the
BLOCK_COSTS configuration used by the execution system.
"""
import logging
from typing import List, Type
from backend.data.block import Block
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.cost import BlockCost
from backend.sdk.registry import AutoRegistry
logger = logging.getLogger(__name__)
def register_provider_costs_for_block(block_class: Type[Block]) -> None:
"""
Register provider base costs for a specific block in BLOCK_COSTS.
This function checks if the block uses credentials from a provider that has
base costs defined, and automatically registers those costs for the block.
Args:
block_class: The block class to register costs for
"""
# Skip if block already has custom costs defined
if block_class in BLOCK_COSTS:
logger.debug(
f"Block {block_class.__name__} already has costs defined, skipping provider costs"
)
return
# Get the block's input schema
# We need to instantiate the block to get its input schema
try:
block_instance = block_class()
input_schema = block_instance.input_schema
except Exception as e:
logger.debug(f"Block {block_class.__name__} cannot be instantiated: {e}")
return
# Look for credentials fields
# The cost system works by filtering on credentials fields;
# without credentials fields, we cannot apply costs.
# TODO: Improve cost system to allow for costs without a provider
credentials_fields = input_schema.get_credentials_fields()
if not credentials_fields:
logger.debug(f"Block {block_class.__name__} has no credentials fields")
return
# Get provider information from credentials fields
for field_name, field_info in credentials_fields.items():
# Get the field schema to extract provider information
field_schema = input_schema.get_field_schema(field_name)
# Extract provider names from json_schema_extra
providers = field_schema.get("credentials_provider", [])
if not providers:
continue
# For each provider, check if it has base costs
block_costs: List[BlockCost] = []
for provider_name in providers:
provider = AutoRegistry.get_provider(provider_name)
if not provider:
logger.debug(f"Provider {provider_name} not found in registry")
continue
# Add provider's base costs to the block
if provider.base_costs:
logger.info(
f"Registering {len(provider.base_costs)} base costs from provider {provider_name} for block {block_class.__name__}"
)
block_costs.extend(provider.base_costs)
# Register costs if any were found
if block_costs:
BLOCK_COSTS[block_class] = block_costs
logger.info(
f"Registered {len(block_costs)} total costs for block {block_class.__name__}"
)
def sync_all_provider_costs() -> None:
"""
Sync all provider base costs to blocks that use them.
This should be called after all providers and blocks are registered,
typically during application startup.
"""
from backend.blocks import load_all_blocks
logger.info("Syncing provider costs to blocks...")
blocks_with_costs = 0
total_costs = 0
for block_id, block_class in load_all_blocks().items():
initial_count = len(BLOCK_COSTS.get(block_class, []))
register_provider_costs_for_block(block_class)
final_count = len(BLOCK_COSTS.get(block_class, []))
if final_count > initial_count:
blocks_with_costs += 1
total_costs += final_count - initial_count
logger.info(f"Synced {total_costs} costs to {blocks_with_costs} blocks")
def get_block_costs(block_class: Type[Block]) -> List[BlockCost]:
"""
Get all costs for a block, including both explicit and provider costs.
Args:
block_class: The block class to get costs for
Returns:
List of BlockCost objects for the block
"""
# First ensure provider costs are registered
register_provider_costs_for_block(block_class)
# Return all costs for the block
return BLOCK_COSTS.get(block_class, [])
def cost(*costs: BlockCost):
"""
Decorator to set custom costs for a block.
This decorator allows blocks to define their own costs, which will override
any provider base costs. Multiple costs can be specified with different
filters for different pricing tiers (e.g., different models).
Example:
@cost(
BlockCost(cost_type=BlockCostType.RUN, cost_amount=10),
BlockCost(
cost_type=BlockCostType.RUN,
cost_amount=20,
cost_filter={"model": "premium"}
)
)
class MyBlock(Block):
...
Args:
*costs: Variable number of BlockCost objects
"""
def decorator(block_class: Type[Block]) -> Type[Block]:
# Register the costs for this block
if costs:
BLOCK_COSTS[block_class] = list(costs)
logger.info(
f"Registered {len(costs)} custom costs for block {block_class.__name__}"
)
return block_class
return decorator

View File
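Two consequences of the module above: a block that declares costs via `@cost(...)` keeps them, and blocks without an entry in `BLOCK_COSTS` inherit their provider's base costs once `sync_all_provider_costs()` runs. A sketch of how startup might wire that together (the invocation point is assumed, not shown in this diff):

```
from backend.blocks import load_all_blocks
from backend.sdk.cost_integration import get_block_costs, sync_all_provider_costs

# Typically called once, after all providers and blocks are registered.
sync_all_provider_costs()

# Afterwards, blocks without explicit @cost entries report their provider's base costs.
for block_class in load_all_blocks().values():
    costs = get_block_costs(block_class)
```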

@@ -0,0 +1,114 @@
"""
Provider configuration class that holds all provider-related settings.
"""
from typing import Any, Callable, List, Optional, Set, Type
from pydantic import BaseModel
from backend.data.cost import BlockCost
from backend.data.model import Credentials, CredentialsField, CredentialsMetaInput
from backend.integrations.oauth.base import BaseOAuthHandler
from backend.integrations.webhooks._base import BaseWebhooksManager
class OAuthConfig(BaseModel):
"""Configuration for OAuth authentication."""
oauth_handler: Type[BaseOAuthHandler]
scopes: Optional[List[str]] = None
client_id_env_var: Optional[str] = None
client_secret_env_var: Optional[str] = None
class Provider:
"""A configured provider that blocks can use.
A Provider represents a service or platform that blocks can integrate with, like Linear, OpenAI, etc.
It contains configuration for:
- Authentication (OAuth, API keys)
- Default credentials
- Base costs for using the provider
- Webhook handling
- Error handling
- API client factory
Blocks use Provider instances to handle authentication, make API calls, and manage service-specific logic.
"""
def __init__(
self,
name: str,
oauth_config: Optional[OAuthConfig] = None,
webhook_manager: Optional[Type[BaseWebhooksManager]] = None,
default_credentials: Optional[List[Credentials]] = None,
base_costs: Optional[List[BlockCost]] = None,
supported_auth_types: Optional[Set[str]] = None,
api_client_factory: Optional[Callable] = None,
error_handler: Optional[Callable[[Exception], str]] = None,
**kwargs,
):
self.name = name
self.oauth_config = oauth_config
self.webhook_manager = webhook_manager
self.default_credentials = default_credentials or []
self.base_costs = base_costs or []
self.supported_auth_types = supported_auth_types or set()
self._api_client_factory = api_client_factory
self._error_handler = error_handler
# Store any additional configuration
self._extra_config = kwargs
def credentials_field(self, **kwargs) -> CredentialsMetaInput:
"""Return a CredentialsField configured for this provider."""
# Extract known CredentialsField parameters
title = kwargs.pop("title", None)
description = kwargs.pop("description", f"{self.name.title()} credentials")
required_scopes = kwargs.pop("required_scopes", set())
discriminator = kwargs.pop("discriminator", None)
discriminator_mapping = kwargs.pop("discriminator_mapping", None)
discriminator_values = kwargs.pop("discriminator_values", None)
# Create json_schema_extra with provider information
json_schema_extra = {
"credentials_provider": [self.name],
"credentials_types": (
list(self.supported_auth_types)
if self.supported_auth_types
else ["api_key"]
),
}
# Merge any existing json_schema_extra
if "json_schema_extra" in kwargs:
json_schema_extra.update(kwargs.pop("json_schema_extra"))
# Add json_schema_extra to kwargs
kwargs["json_schema_extra"] = json_schema_extra
return CredentialsField(
required_scopes=required_scopes,
discriminator=discriminator,
discriminator_mapping=discriminator_mapping,
discriminator_values=discriminator_values,
title=title,
description=description,
**kwargs,
)
def get_api(self, credentials: Credentials) -> Any:
"""Get API client instance for the given credentials."""
if self._api_client_factory:
return self._api_client_factory(credentials)
raise NotImplementedError(f"No API client factory registered for {self.name}")
def handle_error(self, error: Exception) -> str:
"""Handle provider-specific errors."""
if self._error_handler:
return self._error_handler(error)
return str(error)
def get_config(self, key: str, default: Any = None) -> Any:
"""Get additional configuration value."""
return self._extra_config.get(key, default)

View File
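`credentials_field` packs the provider name and supported auth types into `json_schema_extra`, which is what the cost-integration module later reads back as `credentials_provider`. A hypothetical sketch of a block input schema consuming it (the schema, field name, and provider are illustrative, not taken from this diff):

```
from backend.sdk import BlockSchema, CredentialsMetaInput, ProviderBuilder

example_service = (
    ProviderBuilder("example-service")
    .with_api_key("EXAMPLE_SERVICE_API_KEY", "Example Service API Key")
    .build()
)

class ExampleInput(BlockSchema):  # hypothetical input schema, not from this diff
    credentials: CredentialsMetaInput = example_service.credentials_field(
        description="Credentials for Example Service",
    )

# The field's json_schema_extra now carries:
#   credentials_provider: ["example-service"]
#   credentials_types:    ["api_key"]  (from the builder's supported auth types)
```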

@@ -0,0 +1,220 @@
"""
Auto-registration system for blocks, providers, and their configurations.
"""
import logging
import threading
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Type
from pydantic import BaseModel, SecretStr
from backend.blocks.basic import Block
from backend.data.model import APIKeyCredentials, Credentials
from backend.integrations.oauth.base import BaseOAuthHandler
from backend.integrations.webhooks._base import BaseWebhooksManager
if TYPE_CHECKING:
from backend.sdk.provider import Provider
class SDKOAuthCredentials(BaseModel):
"""OAuth credentials configuration for SDK providers."""
use_secrets: bool = False
client_id_env_var: Optional[str] = None
client_secret_env_var: Optional[str] = None
class BlockConfiguration:
"""Configuration associated with a block."""
def __init__(
self,
provider: str,
costs: List[Any],
default_credentials: List[Credentials],
webhook_manager: Optional[Type[BaseWebhooksManager]] = None,
oauth_handler: Optional[Type[BaseOAuthHandler]] = None,
):
self.provider = provider
self.costs = costs
self.default_credentials = default_credentials
self.webhook_manager = webhook_manager
self.oauth_handler = oauth_handler
class AutoRegistry:
"""Central registry for all block-related configurations."""
_lock = threading.Lock()
_providers: Dict[str, "Provider"] = {}
_default_credentials: List[Credentials] = []
_oauth_handlers: Dict[str, Type[BaseOAuthHandler]] = {}
_oauth_credentials: Dict[str, SDKOAuthCredentials] = {}
_webhook_managers: Dict[str, Type[BaseWebhooksManager]] = {}
_block_configurations: Dict[Type[Block], BlockConfiguration] = {}
_api_key_mappings: Dict[str, str] = {} # provider -> env_var_name
@classmethod
def register_provider(cls, provider: "Provider") -> None:
"""Auto-register provider and all its configurations."""
with cls._lock:
cls._providers[provider.name] = provider
# Register OAuth handler if provided
if provider.oauth_config:
# Dynamically set PROVIDER_NAME if not already set
if (
not hasattr(provider.oauth_config.oauth_handler, "PROVIDER_NAME")
or provider.oauth_config.oauth_handler.PROVIDER_NAME is None
):
# Import ProviderName to create dynamic enum value
from backend.integrations.providers import ProviderName
# This works because ProviderName has a _missing_ method
provider.oauth_config.oauth_handler.PROVIDER_NAME = ProviderName(
provider.name
)
cls._oauth_handlers[provider.name] = provider.oauth_config.oauth_handler
# Register OAuth credentials configuration
oauth_creds = SDKOAuthCredentials(
use_secrets=False, # SDK providers use custom env vars
client_id_env_var=provider.oauth_config.client_id_env_var,
client_secret_env_var=provider.oauth_config.client_secret_env_var,
)
cls._oauth_credentials[provider.name] = oauth_creds
# Register webhook manager if provided
if provider.webhook_manager:
# Dynamically set PROVIDER_NAME if not already set
if (
not hasattr(provider.webhook_manager, "PROVIDER_NAME")
or provider.webhook_manager.PROVIDER_NAME is None
):
# Import ProviderName to create dynamic enum value
from backend.integrations.providers import ProviderName
# This works because ProviderName has a _missing_ method
provider.webhook_manager.PROVIDER_NAME = ProviderName(provider.name)
cls._webhook_managers[provider.name] = provider.webhook_manager
# Register default credentials
cls._default_credentials.extend(provider.default_credentials)
@classmethod
def register_api_key(cls, provider: str, env_var_name: str) -> None:
"""Register an environment variable as an API key for a provider."""
with cls._lock:
cls._api_key_mappings[provider] = env_var_name
# Dynamically check if the env var exists and create credential
import os
api_key = os.getenv(env_var_name)
if api_key:
credential = APIKeyCredentials(
id=f"{provider}-default",
provider=provider,
api_key=SecretStr(api_key),
title=f"Default {provider} credentials",
)
# Check if credential already exists to avoid duplicates
if not any(c.id == credential.id for c in cls._default_credentials):
cls._default_credentials.append(credential)
@classmethod
def get_all_credentials(cls) -> List[Credentials]:
"""Replace hardcoded get_all_creds() in credentials_store.py."""
with cls._lock:
return cls._default_credentials.copy()
@classmethod
def get_oauth_handlers(cls) -> Dict[str, Type[BaseOAuthHandler]]:
"""Replace HANDLERS_BY_NAME in oauth/__init__.py."""
with cls._lock:
return cls._oauth_handlers.copy()
@classmethod
def get_oauth_credentials(cls) -> Dict[str, SDKOAuthCredentials]:
"""Get OAuth credentials configuration for SDK providers."""
with cls._lock:
return cls._oauth_credentials.copy()
@classmethod
def get_webhook_managers(cls) -> Dict[str, Type[BaseWebhooksManager]]:
"""Replace load_webhook_managers() in webhooks/__init__.py."""
with cls._lock:
return cls._webhook_managers.copy()
@classmethod
def register_block_configuration(
cls, block_class: Type[Block], config: BlockConfiguration
) -> None:
"""Register configuration for a specific block class."""
with cls._lock:
cls._block_configurations[block_class] = config
@classmethod
def get_provider(cls, name: str) -> Optional["Provider"]:
"""Get a registered provider by name."""
with cls._lock:
return cls._providers.get(name)
@classmethod
def get_all_provider_names(cls) -> List[str]:
"""Get all registered provider names."""
with cls._lock:
return list(cls._providers.keys())
@classmethod
def clear(cls) -> None:
"""Clear all registrations (useful for testing)."""
with cls._lock:
cls._providers.clear()
cls._default_credentials.clear()
cls._oauth_handlers.clear()
cls._webhook_managers.clear()
cls._block_configurations.clear()
cls._api_key_mappings.clear()
@classmethod
def patch_integrations(cls) -> None:
"""Patch existing integration points to use AutoRegistry."""
# OAuth handlers are handled by SDKAwareHandlersDict in oauth/__init__.py
# No patching needed for OAuth handlers
# Patch webhook managers
try:
import sys
from typing import Any
# Get the module from sys.modules to respect mocking
if "backend.integrations.webhooks" in sys.modules:
webhooks: Any = sys.modules["backend.integrations.webhooks"]
else:
import backend.integrations.webhooks
webhooks: Any = backend.integrations.webhooks
if hasattr(webhooks, "load_webhook_managers"):
original_load = webhooks.load_webhook_managers
def patched_load():
# Get original managers
managers = original_load()
# Add SDK-registered managers
sdk_managers = cls.get_webhook_managers()
if isinstance(sdk_managers, dict):
# Import ProviderName for conversion
from backend.integrations.providers import ProviderName
# Convert string keys to ProviderName for consistency
for provider_str, manager in sdk_managers.items():
provider_name = ProviderName(provider_str)
managers[provider_name] = manager
return managers
webhooks.load_webhook_managers = patched_load
except Exception as e:
logging.warning(f"Failed to patch webhook managers: {e}")

View File
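A quick sketch of the registry surface the rest of this diff relies on (provider name and env var are placeholders):

```
from backend.sdk import AutoRegistry

AutoRegistry.register_api_key("example-service", "EXAMPLE_SERVICE_API_KEY")

AutoRegistry.get_all_provider_names()  # providers registered via ProviderBuilder.build()
AutoRegistry.get_all_credentials()     # default credentials picked up from env vars
AutoRegistry.get_webhook_managers()    # merged into load_webhook_managers() by patch_integrations()
AutoRegistry.clear()                   # reset everything, useful between tests
```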

@@ -1,6 +1,6 @@
import logging
from collections import defaultdict
from typing import Annotated, Any, Dict, List, Optional, Sequence
from typing import Annotated, Any, Optional, Sequence
from fastapi import APIRouter, Body, Depends, HTTPException
from prisma.enums import AgentExecutionStatus, APIKeyPermission
@@ -11,7 +11,6 @@ from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.api_key import APIKey
from backend.data.block import BlockInput, CompletedBlockOutput
from backend.data.execution import NodeExecutionResult
from backend.executor.utils import add_graph_execution
from backend.server.external.middleware import require_permission
from backend.util.settings import Settings
@@ -30,30 +29,19 @@ class NodeOutput(TypedDict):
class ExecutionNode(TypedDict):
node_id: str
input: Any
output: Dict[str, Any]
output: dict[str, Any]
class ExecutionNodeOutput(TypedDict):
node_id: str
outputs: List[NodeOutput]
outputs: list[NodeOutput]
class GraphExecutionResult(TypedDict):
execution_id: str
status: str
nodes: List[ExecutionNode]
output: Optional[List[Dict[str, str]]]
def get_outputs_with_names(results: list[NodeExecutionResult]) -> list[dict[str, str]]:
outputs = []
for result in results:
if "output" in result.output_data:
output_value = result.output_data["output"][0]
name = result.output_data.get("name", [None])[0]
if output_value and name:
outputs.append({name: output_value})
return outputs
nodes: list[ExecutionNode]
output: Optional[list[dict[str, str]]]
@v1_router.get(
@@ -122,23 +110,34 @@ async def get_graph_execution_results(
if not graph:
raise HTTPException(status_code=404, detail=f"Graph #{graph_id} not found.")
results = await execution_db.get_node_executions(graph_exec_id)
last_result = results[-1] if results else None
execution_status = (
last_result.status if last_result else AgentExecutionStatus.INCOMPLETE
graph_exec = await execution_db.get_graph_execution(
user_id=api_key.user_id,
execution_id=graph_exec_id,
include_node_executions=True,
)
outputs = get_outputs_with_names(results)
if not graph_exec:
raise HTTPException(
status_code=404, detail=f"Graph execution #{graph_exec_id} not found."
)
return GraphExecutionResult(
execution_id=graph_exec_id,
status=execution_status,
status=graph_exec.status.value,
nodes=[
ExecutionNode(
node_id=result.node_id,
input=result.input_data.get("value", result.input_data),
output={k: v for k, v in result.output_data.items()},
node_id=node_exec.node_id,
input=node_exec.input_data.get("value", node_exec.input_data),
output={k: v for k, v in node_exec.output_data.items()},
)
for result in results
for node_exec in graph_exec.node_executions
],
output=outputs if execution_status == AgentExecutionStatus.COMPLETED else None,
output=(
[
{name: value}
for name, values in graph_exec.outputs.items()
for value in values
]
if graph_exec.status == AgentExecutionStatus.COMPLETED
else None
),
)

View File

@@ -0,0 +1,74 @@
"""
Models for integration-related data structures that need to be exposed in the OpenAPI schema.
This module provides models that will be included in the OpenAPI schema generation,
allowing frontend code generators like Orval to create corresponding TypeScript types.
"""
from pydantic import BaseModel, Field
from backend.integrations.providers import ProviderName
from backend.sdk.registry import AutoRegistry
def get_all_provider_names() -> list[str]:
"""
Collect all provider names from both ProviderName enum and AutoRegistry.
This function should be called at runtime to ensure we get all
dynamically registered providers.
Returns:
A sorted list of unique provider names.
"""
# Get static providers from enum
static_providers = [member.value for member in ProviderName]
# Get dynamic providers from registry
dynamic_providers = AutoRegistry.get_all_provider_names()
# Combine and deduplicate
all_providers = list(set(static_providers + dynamic_providers))
all_providers.sort()
return all_providers
# Note: We don't create a static enum here because providers are registered dynamically.
# Instead, we expose provider names through API endpoints that can be fetched at runtime.
class ProviderNamesResponse(BaseModel):
"""Response containing list of all provider names."""
providers: list[str] = Field(
description="List of all available provider names",
default_factory=get_all_provider_names,
)
class ProviderConstants(BaseModel):
"""
Model that exposes all provider names as a constant in the OpenAPI schema.
This is designed to be converted by Orval into a TypeScript constant.
"""
PROVIDER_NAMES: dict[str, str] = Field(
description="All available provider names as a constant mapping",
default_factory=lambda: {
name.upper().replace("-", "_"): name for name in get_all_provider_names()
},
)
class Config:
schema_extra = {
"example": {
"PROVIDER_NAMES": {
"OPENAI": "openai",
"ANTHROPIC": "anthropic",
"EXA": "exa",
"GEM": "gem",
"EXAMPLE_SERVICE": "example-service",
}
}
}

View File
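Since both response models populate themselves via default factories, instantiating them at request time is enough to surface the combined static-plus-dynamic provider list. A sketch of what they serialize to (provider names illustrative):

```
from backend.server.integrations.models import ProviderConstants, ProviderNamesResponse

ProviderNamesResponse().providers
# ["anthropic", "example-service", "openai", ...]  (sorted, deduplicated)

ProviderConstants().PROVIDER_NAMES
# {"ANTHROPIC": "anthropic", "EXAMPLE_SERVICE": "example-service", "OPENAI": "openai", ...}
```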

@@ -1,6 +1,6 @@
import asyncio
import logging
from typing import TYPE_CHECKING, Annotated, Awaitable, Literal
from typing import TYPE_CHECKING, Annotated, Awaitable, List, Literal
from fastapi import (
APIRouter,
@@ -30,9 +30,14 @@ from backend.data.model import (
)
from backend.executor.utils import add_graph_execution
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.oauth import HANDLERS_BY_NAME
from backend.integrations.oauth import CREDENTIALS_BY_PROVIDER, HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks import get_webhook_manager
from backend.server.integrations.models import (
ProviderConstants,
ProviderNamesResponse,
get_all_provider_names,
)
from backend.server.v2.library.db import set_preset_webhook, update_preset
from backend.util.exceptions import NeedConfirmation, NotFoundError
from backend.util.settings import Settings
@@ -472,14 +477,49 @@ async def remove_all_webhooks_for_credentials(
def _get_provider_oauth_handler(
req: Request, provider_name: ProviderName
) -> "BaseOAuthHandler":
if provider_name not in HANDLERS_BY_NAME:
# Ensure blocks are loaded so SDK providers are available
try:
from backend.blocks import load_all_blocks
load_all_blocks() # This is cached, so it only runs once
except Exception as e:
logger.warning(f"Failed to load blocks: {e}")
# Convert provider_name to string for lookup
provider_key = (
provider_name.value if hasattr(provider_name, "value") else str(provider_name)
)
if provider_key not in HANDLERS_BY_NAME:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Provider '{provider_name.value}' does not support OAuth",
detail=f"Provider '{provider_key}' does not support OAuth",
)
# Check if this provider has custom OAuth credentials
oauth_credentials = CREDENTIALS_BY_PROVIDER.get(provider_key)
if oauth_credentials and not oauth_credentials.use_secrets:
# SDK provider with custom env vars
import os
client_id = (
os.getenv(oauth_credentials.client_id_env_var)
if oauth_credentials.client_id_env_var
else None
)
client_secret = (
os.getenv(oauth_credentials.client_secret_env_var)
if oauth_credentials.client_secret_env_var
else None
)
else:
# Original provider using settings.secrets
client_id = getattr(settings.secrets, f"{provider_name.value}_client_id", None)
client_secret = getattr(
settings.secrets, f"{provider_name.value}_client_secret", None
)
client_id = getattr(settings.secrets, f"{provider_name.value}_client_id")
client_secret = getattr(settings.secrets, f"{provider_name.value}_client_secret")
if not (client_id and client_secret):
logger.error(
f"Attempt to use unconfigured {provider_name.value} OAuth integration"
@@ -492,14 +532,84 @@ def _get_provider_oauth_handler(
},
)
handler_class = HANDLERS_BY_NAME[provider_name]
frontend_base_url = (
settings.config.frontend_base_url
or settings.config.platform_base_url
or str(req.base_url)
)
handler_class = HANDLERS_BY_NAME[provider_key]
frontend_base_url = settings.config.frontend_base_url
if not frontend_base_url:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="Frontend base URL is not configured",
)
return handler_class(
client_id=client_id,
client_secret=client_secret,
redirect_uri=f"{frontend_base_url}/auth/integrations/oauth_callback",
)
# === PROVIDER DISCOVERY ENDPOINTS ===
@router.get("/providers", response_model=List[str])
async def list_providers() -> List[str]:
"""
Get a list of all available provider names.
Returns both statically defined providers (from ProviderName enum)
and dynamically registered providers (from SDK decorators).
Note: The complete list of provider names is also available as a constant
in the generated TypeScript client via PROVIDER_NAMES.
"""
# Get all providers at runtime
all_providers = get_all_provider_names()
return all_providers
@router.get("/providers/names", response_model=ProviderNamesResponse)
async def get_provider_names() -> ProviderNamesResponse:
"""
Get all provider names in a structured format.
This endpoint is specifically designed to expose the provider names
in the OpenAPI schema so that code generators like Orval can create
appropriate TypeScript constants.
"""
return ProviderNamesResponse()
@router.get("/providers/constants", response_model=ProviderConstants)
async def get_provider_constants() -> ProviderConstants:
"""
Get provider names as constants.
This endpoint returns a model with provider names as constants,
specifically designed for OpenAPI code generation tools to create
TypeScript constants.
"""
return ProviderConstants()
class ProviderEnumResponse(BaseModel):
"""Response containing a provider from the enum."""
provider: str = Field(
description="A provider name from the complete list of providers"
)
@router.get("/providers/enum-example", response_model=ProviderEnumResponse)
async def get_provider_enum_example() -> ProviderEnumResponse:
"""
Example endpoint that uses the CompleteProviderNames enum.
This endpoint exists to ensure that the CompleteProviderNames enum is included
in the OpenAPI schema, which will cause Orval to generate it as a
TypeScript enum/constant.
"""
# Return the first provider as an example
all_providers = get_all_provider_names()
return ProviderEnumResponse(
provider=all_providers[0] if all_providers else "openai"
)

View File

@@ -62,6 +62,10 @@ def launch_darkly_context():
async def lifespan_context(app: fastapi.FastAPI):
await backend.data.db.connect()
await backend.data.block.initialize_blocks()
# SDK auto-registration is now handled by AutoRegistry.patch_integrations()
# which is called when the SDK module is imported
await backend.data.user.migrate_and_encrypt_user_integrations()
await backend.data.graph.fix_llm_provider_credentials()
await backend.data.graph.migrate_llm_models(LlmModel.GPT4O)

View File

@@ -448,10 +448,10 @@ class DeleteGraphResponse(TypedDict):
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
)
async def get_graphs(
async def list_graphs(
user_id: Annotated[str, Depends(get_user_id)],
) -> Sequence[graph_db.GraphModel]:
return await graph_db.get_graphs(filter_by="active", user_id=user_id)
) -> Sequence[graph_db.GraphMeta]:
return await graph_db.list_graphs(filter_by="active", user_id=user_id)
@v1_router.get(
@@ -669,24 +669,38 @@ async def execute_graph(
)
async def stop_graph_run(
graph_id: str, graph_exec_id: str, user_id: Annotated[str, Depends(get_user_id)]
) -> execution_db.GraphExecution:
if not await execution_db.get_graph_execution_meta(
user_id=user_id, execution_id=graph_exec_id
):
raise HTTPException(404, detail=f"Agent execution #{graph_exec_id} not found")
await execution_utils.stop_graph_execution(graph_exec_id)
# Retrieve & return canceled graph execution in its final state
result = await execution_db.get_graph_execution(
execution_id=graph_exec_id, user_id=user_id
) -> execution_db.GraphExecutionMeta | None:
res = await _stop_graph_run(
user_id=user_id,
graph_id=graph_id,
graph_exec_id=graph_exec_id,
)
if not result:
raise HTTPException(
500,
detail=f"Could not fetch graph execution #{graph_exec_id} after stopping",
)
return result
if not res:
return None
return res[0]
async def _stop_graph_run(
user_id: str,
graph_id: Optional[str] = None,
graph_exec_id: Optional[str] = None,
) -> list[execution_db.GraphExecutionMeta]:
graph_execs = await execution_db.get_graph_executions(
user_id=user_id,
graph_id=graph_id,
graph_exec_id=graph_exec_id,
statuses=[
execution_db.ExecutionStatus.INCOMPLETE,
execution_db.ExecutionStatus.QUEUED,
execution_db.ExecutionStatus.RUNNING,
],
)
stopped_execs = [
execution_utils.stop_graph_execution(graph_exec_id=exec.id, user_id=user_id)
for exec in graph_execs
]
await asyncio.gather(*stopped_execs)
return graph_execs
@v1_router.get(

View File

@@ -270,7 +270,7 @@ def test_get_graphs(
)
mocker.patch(
"backend.server.routers.v1.graph_db.get_graphs",
"backend.server.routers.v1.graph_db.list_graphs",
return_value=[mock_graph],
)

View File

@@ -170,7 +170,14 @@ async def get_library_agent(id: str, user_id: str) -> library_model.LibraryAgent
if not library_agent:
raise NotFoundError(f"Library agent #{id} not found")
return library_model.LibraryAgent.from_db(library_agent)
return library_model.LibraryAgent.from_db(
library_agent,
sub_graphs=(
await graph_db.get_sub_graphs(library_agent.AgentGraph)
if library_agent.AgentGraph
else None
),
)
except prisma.errors.PrismaError as e:
logger.error(f"Database error fetching library agent: {e}")
@@ -180,7 +187,7 @@ async def get_library_agent(id: str, user_id: str) -> library_model.LibraryAgent
async def get_library_agent_by_store_version_id(
store_listing_version_id: str,
user_id: str,
):
) -> library_model.LibraryAgent | None:
"""
Get the library agent metadata for a given store listing version ID and user ID.
"""
@@ -195,7 +202,7 @@ async def get_library_agent_by_store_version_id(
)
if not store_listing_version:
logger.warning(f"Store listing version not found: {store_listing_version_id}")
raise store_exceptions.AgentNotFoundError(
raise NotFoundError(
f"Store listing version {store_listing_version_id} not found or invalid"
)
@@ -207,12 +214,9 @@ async def get_library_agent_by_store_version_id(
"agentGraphVersion": store_listing_version.agentGraphVersion,
"isDeleted": False,
},
include={"AgentGraph": True},
include=library_agent_include(user_id),
)
if agent:
return library_model.LibraryAgent.from_db(agent)
else:
return None
return library_model.LibraryAgent.from_db(agent) if agent else None
async def get_library_agent_by_graph_id(

View File

@@ -51,7 +51,7 @@ class LibraryAgent(pydantic.BaseModel):
description: str
input_schema: dict[str, Any] # Should be BlockIOObjectSubSchema in frontend
credentials_input_schema: dict[str, Any] = pydantic.Field(
credentials_input_schema: dict[str, Any] | None = pydantic.Field(
description="Input schema for credentials required by the agent",
)
@@ -70,7 +70,10 @@ class LibraryAgent(pydantic.BaseModel):
is_latest_version: bool
@staticmethod
def from_db(agent: prisma.models.LibraryAgent) -> "LibraryAgent":
def from_db(
agent: prisma.models.LibraryAgent,
sub_graphs: Optional[list[prisma.models.AgentGraph]] = None,
) -> "LibraryAgent":
"""
Factory method that constructs a LibraryAgent from a Prisma LibraryAgent
model instance.
@@ -78,7 +81,7 @@ class LibraryAgent(pydantic.BaseModel):
if not agent.AgentGraph:
raise ValueError("Associated Agent record is required.")
graph = graph_model.GraphModel.from_db(agent.AgentGraph)
graph = graph_model.GraphModel.from_db(agent.AgentGraph, sub_graphs=sub_graphs)
agent_updated_at = agent.AgentGraph.updatedAt
lib_agent_updated_at = agent.updatedAt
@@ -123,8 +126,10 @@ class LibraryAgent(pydantic.BaseModel):
name=graph.name,
description=graph.description,
input_schema=graph.input_schema,
credentials_input_schema=graph.credentials_input_schema,
has_external_trigger=graph.has_webhook_trigger,
credentials_input_schema=(
graph.credentials_input_schema if sub_graphs is not None else None
),
has_external_trigger=graph.has_external_trigger,
trigger_setup_info=(
LibraryAgentTriggerInfo(
provider=trigger_block.webhook_config.provider,
@@ -257,6 +262,19 @@ class LibraryAgentPresetUpdatable(pydantic.BaseModel):
is_active: Optional[bool] = None
class TriggeredPresetSetupRequest(pydantic.BaseModel):
name: str
description: str = ""
graph_id: str
graph_version: int
trigger_config: dict[str, Any]
agent_credentials: dict[str, CredentialsMetaInput] = pydantic.Field(
default_factory=dict
)
class LibraryAgentPreset(LibraryAgentPresetCreatable):
"""Represents a preset configuration for a library agent."""

View File

@@ -1,18 +1,13 @@
import logging
from typing import Any, Optional
from typing import Optional
import autogpt_libs.auth as autogpt_auth_lib
from fastapi import APIRouter, Body, Depends, HTTPException, Path, Query, status
from fastapi import APIRouter, Body, Depends, HTTPException, Query, status
from fastapi.responses import Response
from pydantic import BaseModel, Field
import backend.server.v2.library.db as library_db
import backend.server.v2.library.model as library_model
import backend.server.v2.store.exceptions as store_exceptions
from backend.data.graph import get_graph
from backend.data.model import CredentialsMetaInput
from backend.executor.utils import make_node_credentials_input_map
from backend.integrations.webhooks.utils import setup_webhook_for_block
from backend.util.exceptions import NotFoundError
logger = logging.getLogger(__name__)
@@ -113,12 +108,11 @@ async def get_library_agent_by_graph_id(
"/marketplace/{store_listing_version_id}",
summary="Get Agent By Store ID",
tags=["store, library"],
response_model=library_model.LibraryAgent | None,
)
async def get_library_agent_by_store_listing_version_id(
store_listing_version_id: str,
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
):
) -> library_model.LibraryAgent | None:
"""
Get Library Agent from Store Listing Version ID.
"""
@@ -295,81 +289,3 @@ async def fork_library_agent(
library_agent_id=library_agent_id,
user_id=user_id,
)
class TriggeredPresetSetupParams(BaseModel):
name: str
description: str = ""
trigger_config: dict[str, Any]
agent_credentials: dict[str, CredentialsMetaInput] = Field(default_factory=dict)
@router.post("/{library_agent_id}/setup-trigger")
async def setup_trigger(
library_agent_id: str = Path(..., description="ID of the library agent"),
params: TriggeredPresetSetupParams = Body(),
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
) -> library_model.LibraryAgentPreset:
"""
Sets up a webhook-triggered `LibraryAgentPreset` for a `LibraryAgent`.
Returns the correspondingly created `LibraryAgentPreset` with `webhook_id` set.
"""
library_agent = await library_db.get_library_agent(
id=library_agent_id, user_id=user_id
)
if not library_agent:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Library agent #{library_agent_id} not found",
)
graph = await get_graph(
library_agent.graph_id, version=library_agent.graph_version, user_id=user_id
)
if not graph:
raise HTTPException(
status.HTTP_410_GONE,
f"Graph #{library_agent.graph_id} not accessible (anymore)",
)
if not (trigger_node := graph.webhook_input_node):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Graph #{library_agent.graph_id} does not have a webhook node",
)
trigger_config_with_credentials = {
**params.trigger_config,
**(
make_node_credentials_input_map(graph, params.agent_credentials).get(
trigger_node.id
)
or {}
),
}
new_webhook, feedback = await setup_webhook_for_block(
user_id=user_id,
trigger_block=trigger_node.block,
trigger_config=trigger_config_with_credentials,
)
if not new_webhook:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Could not set up webhook: {feedback}",
)
new_preset = await library_db.create_preset(
user_id=user_id,
preset=library_model.LibraryAgentPresetCreatable(
graph_id=library_agent.graph_id,
graph_version=library_agent.graph_version,
name=params.name,
description=params.description,
inputs=trigger_config_with_credentials,
credentials=params.agent_credentials,
webhook_id=new_webhook.id,
is_active=True,
),
)
return new_preset

View File

@@ -138,6 +138,66 @@ async def create_preset(
)
@router.post("/presets/setup-trigger")
async def setup_trigger(
params: models.TriggeredPresetSetupRequest = Body(),
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
) -> models.LibraryAgentPreset:
"""
Sets up a webhook-triggered `LibraryAgentPreset` for a `LibraryAgent`.
Returns the correspondingly created `LibraryAgentPreset` with `webhook_id` set.
"""
graph = await get_graph(
params.graph_id, version=params.graph_version, user_id=user_id
)
if not graph:
raise HTTPException(
status.HTTP_410_GONE,
f"Graph #{params.graph_id} not accessible (anymore)",
)
if not (trigger_node := graph.webhook_input_node):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Graph #{params.graph_id} does not have a webhook node",
)
trigger_config_with_credentials = {
**params.trigger_config,
**(
make_node_credentials_input_map(graph, params.agent_credentials).get(
trigger_node.id
)
or {}
),
}
new_webhook, feedback = await setup_webhook_for_block(
user_id=user_id,
trigger_block=trigger_node.block,
trigger_config=trigger_config_with_credentials,
)
if not new_webhook:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Could not set up webhook: {feedback}",
)
new_preset = await db.create_preset(
user_id=user_id,
preset=models.LibraryAgentPresetCreatable(
graph_id=params.graph_id,
graph_version=params.graph_version,
name=params.name,
description=params.description,
inputs=trigger_config_with_credentials,
credentials=params.agent_credentials,
webhook_id=new_webhook.id,
is_active=True,
),
)
return new_preset
@router.patch(
"/presets/{preset_id}",
summary="Update an existing preset",

View File

@@ -7,10 +7,15 @@ import prisma.errors
import prisma.models
import prisma.types
import backend.data.graph
import backend.server.v2.store.exceptions
import backend.server.v2.store.model
from backend.data.graph import GraphModel, get_sub_graphs
from backend.data.graph import (
GraphMeta,
GraphModel,
get_graph,
get_graph_as_admin,
get_sub_graphs,
)
from backend.data.includes import AGENT_GRAPH_INCLUDE
logger = logging.getLogger(__name__)
@@ -193,9 +198,7 @@ async def get_store_agent_details(
) from e
async def get_available_graph(
store_listing_version_id: str,
):
async def get_available_graph(store_listing_version_id: str) -> GraphMeta:
try:
# Get available, non-deleted store listing version
store_listing_version = (
@@ -215,18 +218,7 @@ async def get_available_graph(
detail=f"Store listing version {store_listing_version_id} not found",
)
graph = GraphModel.from_db(store_listing_version.AgentGraph)
# We return graph meta, without nodes, they cannot be just removed
# because then input_schema would be empty
return {
"id": graph.id,
"version": graph.version,
"is_active": graph.is_active,
"name": graph.name,
"description": graph.description,
"input_schema": graph.input_schema,
"output_schema": graph.output_schema,
}
return GraphModel.from_db(store_listing_version.AgentGraph).meta()
except Exception as e:
logger.error(f"Error getting agent: {e}")
@@ -1024,7 +1016,7 @@ async def get_agent(
if not store_listing_version:
raise ValueError(f"Store listing version {store_listing_version_id} not found")
graph = await backend.data.graph.get_graph(
graph = await get_graph(
user_id=user_id,
graph_id=store_listing_version.agentGraphId,
version=store_listing_version.agentGraphVersion,
@@ -1383,7 +1375,7 @@ async def get_agent_as_admin(
if not store_listing_version:
raise ValueError(f"Store listing version {store_listing_version_id} not found")
graph = await backend.data.graph.get_graph_as_admin(
graph = await get_graph_as_admin(
user_id=user_id,
graph_id=store_listing_version.agentGraphId,
version=store_listing_version.agentGraphVersion,

View File

@@ -12,7 +12,7 @@ from backend.util import json
def _tok_len(text: str, enc) -> int:
"""True token length of *text* in tokenizer *enc* (no wrapper cost)."""
return len(enc.encode(text))
return len(enc.encode(str(text)))
def _msg_tokens(msg: dict, enc) -> int:
@@ -29,7 +29,7 @@ def _truncate_middle_tokens(text: str, enc, max_tok: int) -> str:
Return *text* shortened to ≈max_tok tokens by keeping the head & tail
and inserting an ellipsis token in the middle.
"""
ids = enc.encode(text)
ids = enc.encode(str(text))
if len(ids) <= max_tok:
return text # nothing to do

View File

@@ -353,6 +353,10 @@ class Requests:
max_redirects: int = 10,
**kwargs,
) -> Response:
# Convert auth tuple to aiohttp.BasicAuth if necessary
if "auth" in kwargs and isinstance(kwargs["auth"], tuple):
kwargs["auth"] = aiohttp.BasicAuth(*kwargs["auth"])
if files is not None:
if json is not None:
raise ValueError(

View File
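With the change above, a plain `(username, password)` tuple is accepted and converted to `aiohttp.BasicAuth` before the request is sent. A hedged sketch; the `.get(...)` helper and the URL are assumptions for illustration, not confirmed by this hunk:

```
import asyncio

from backend.util.request import Requests

async def fetch():
    # The tuple below is converted internally via aiohttp.BasicAuth(*auth).
    return await Requests().get(  # .get(...) is assumed for illustration
        "https://api.example.com/items",
        auth=("service-user", "s3cret"),
    )

asyncio.run(fetch())
```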

@@ -124,6 +124,19 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
description="Time in seconds for how far back to check for the late executions.",
)
block_error_rate_threshold: float = Field(
default=0.5,
description="Error rate threshold (0.0-1.0) for triggering block error alerts.",
)
block_error_rate_check_interval_secs: int = Field(
default=24 * 60 * 60, # 24 hours
description="Interval in seconds between block error rate checks.",
)
block_error_include_top_blocks: int = Field(
default=3,
description="Number of top blocks with most errors to show when no blocks exceed threshold (0 to disable).",
)
model_config = SettingsConfigDict(
env_file=".env",
extra="allow",
@@ -263,6 +276,11 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
description="Whether to mark failed scans as clean or not",
)
enable_example_blocks: bool = Field(
default=False,
description="Whether to enable example blocks in production",
)
@field_validator("platform_base_url", "frontend_base_url")
@classmethod
def validate_platform_base_url(cls, v: str, info: ValidationInfo) -> str:

View File
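The new fields follow the existing pydantic-settings pattern, so they read straight off a `Config` instance (and, presumably, can be overridden through `.env` like the other settings; the override mechanism is assumed from `model_config`, not spelled out in this hunk):

```
from backend.util.settings import Config

cfg = Config()
cfg.block_error_rate_threshold            # 0.5   -> alert when over half of a block's runs fail
cfg.block_error_rate_check_interval_secs  # 86400 -> run the block error check once a day
cfg.block_error_include_top_blocks        # 3     -> fall back to a "top 3 noisiest blocks" summary
```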

@@ -1,5 +1,6 @@
import json
from typing import Any, Type, TypeVar, cast, get_args, get_origin
import types
from typing import Any, Type, TypeVar, Union, cast, get_args, get_origin, overload
from prisma import Json as PrismaJson
@@ -104,9 +105,37 @@ def __convert_bool(value: Any) -> bool:
return bool(value)
def _try_convert(value: Any, target_type: Type, raise_on_mismatch: bool) -> Any:
def _try_convert(value: Any, target_type: Any, raise_on_mismatch: bool) -> Any:
origin = get_origin(target_type)
args = get_args(target_type)
# Handle Union types (including Optional which is Union[T, None])
if origin is Union or origin is types.UnionType:
# Handle None values for Optional types
if value is None:
if type(None) in args:
return None
elif raise_on_mismatch:
raise TypeError(f"Value {value} is not of expected type {target_type}")
else:
return value
# Try to convert to each type in the union, excluding None
non_none_types = [arg for arg in args if arg is not type(None)]
# Try each type in the union, using the original raise_on_mismatch behavior
for arg_type in non_none_types:
try:
return _try_convert(value, arg_type, raise_on_mismatch)
except (TypeError, ValueError, ConversionError):
continue
# If no conversion succeeded
if raise_on_mismatch:
raise TypeError(f"Value {value} is not of expected type {target_type}")
else:
return value
if origin is None:
origin = target_type
if origin not in [list, dict, tuple, str, set, int, float, bool]:
@@ -189,11 +218,19 @@ def type_match(value: Any, target_type: Type[T]) -> T:
return cast(T, _try_convert(value, target_type, raise_on_mismatch=True))
def convert(value: Any, target_type: Type[T]) -> T:
@overload
def convert(value: Any, target_type: Type[T]) -> T: ...
@overload
def convert(value: Any, target_type: Any) -> Any: ...
def convert(value: Any, target_type: Any) -> Any:
try:
if isinstance(value, PrismaJson):
value = value.data
return cast(T, _try_convert(value, target_type, raise_on_mismatch=False))
return _try_convert(value, target_type, raise_on_mismatch=False)
except Exception as e:
raise ConversionError(f"Failed to convert {value} to {target_type}") from e
@@ -203,6 +240,7 @@ class FormattedStringType(str):
@classmethod
def __get_pydantic_core_schema__(cls, source_type, handler):
_ = source_type # unused parameter required by pydantic
return handler(str)
@classmethod

View File

@@ -1,3 +1,5 @@
from typing import List, Optional
from backend.util.type import convert
@@ -5,6 +7,8 @@ def test_type_conversion():
assert convert(5.5, int) == 5
assert convert("5.5", int) == 5
assert convert([1, 2, 3], int) == 3
assert convert("7", Optional[int]) == 7
assert convert("7", int | None) == 7
assert convert("5.5", float) == 5.5
assert convert(5, float) == 5.0
@@ -25,8 +29,6 @@ def test_type_conversion():
assert convert([1, 2, 3], dict) == {0: 1, 1: 2, 2: 3}
assert convert((1, 2, 3), dict) == {0: 1, 1: 2, 2: 3}
from typing import List
assert convert("5", List[int]) == [5]
assert convert("[5,4,2]", List[int]) == [5, 4, 2]
assert convert([5, 4, 2], List[str]) == ["5", "4", "2"]

View File

@@ -0,0 +1,101 @@
#!/usr/bin/env python3
"""
Clean the test database by removing all data while preserving the schema.

Usage:
    poetry run python clean_test_db.py [--yes]

Options:
    --yes    Skip confirmation prompt
"""
import asyncio
import sys

from prisma import Prisma


async def main():
    db = Prisma()
    await db.connect()

    print("=" * 60)
    print("Cleaning Test Database")
    print("=" * 60)
    print()

    # Get initial counts
    user_count = await db.user.count()
    agent_count = await db.agentgraph.count()
    print(f"Current data: {user_count} users, {agent_count} agent graphs")

    if user_count == 0 and agent_count == 0:
        print("Database is already clean!")
        await db.disconnect()
        return

    # Check for --yes flag
    skip_confirm = "--yes" in sys.argv
    if not skip_confirm:
        response = input("\nDo you want to clean all data? (yes/no): ")
        if response.lower() != "yes":
            print("Aborted.")
            await db.disconnect()
            return

    print("\nCleaning database...")

    # Delete in reverse order of dependencies
    tables = [
        ("UserNotificationBatch", db.usernotificationbatch),
        ("NotificationEvent", db.notificationevent),
        ("CreditRefundRequest", db.creditrefundrequest),
        ("StoreListingReview", db.storelistingreview),
        ("StoreListingVersion", db.storelistingversion),
        ("StoreListing", db.storelisting),
        ("AgentNodeExecutionInputOutput", db.agentnodeexecutioninputoutput),
        ("AgentNodeExecution", db.agentnodeexecution),
        ("AgentGraphExecution", db.agentgraphexecution),
        ("AgentNodeLink", db.agentnodelink),
        ("LibraryAgent", db.libraryagent),
        ("AgentPreset", db.agentpreset),
        ("IntegrationWebhook", db.integrationwebhook),
        ("AgentNode", db.agentnode),
        ("AgentGraph", db.agentgraph),
        ("AgentBlock", db.agentblock),
        ("APIKey", db.apikey),
        ("CreditTransaction", db.credittransaction),
        ("AnalyticsMetrics", db.analyticsmetrics),
        ("AnalyticsDetails", db.analyticsdetails),
        ("Profile", db.profile),
        ("UserOnboarding", db.useronboarding),
        ("User", db.user),
    ]

    for table_name, table in tables:
        try:
            count = await table.count()
            if count > 0:
                await table.delete_many()
                print(f"✓ Deleted {count} records from {table_name}")
        except Exception as e:
            print(f"⚠ Error cleaning {table_name}: {e}")

    # Refresh materialized views (they should be empty now)
    try:
        await db.execute_raw("SELECT refresh_store_materialized_views();")
        print("\n✓ Refreshed materialized views")
    except Exception as e:
        print(f"\n⚠ Could not refresh materialized views: {e}")

    await db.disconnect()

    print("\n" + "=" * 60)
    print("Database cleaned successfully!")
    print("=" * 60)


if __name__ == "__main__":
    asyncio.run(main())

View File

@@ -1,35 +1,60 @@
networks:
  app-network:
    name: app-network
  shared-network:
    name: shared-network

volumes:
  supabase-config:

x-agpt-services: &agpt-services
  networks:
    - app-network
    - shared-network

x-supabase-services: &supabase-services
  networks:
    - app-network
    - shared-network

volumes:
  clamav-data:

services:
  postgres-test:
    image: ankane/pgvector:latest
    environment:
      - POSTGRES_USER=${DB_USER:-postgres}
      - POSTGRES_PASSWORD=${DB_PASS:-postgres}
      - POSTGRES_DB=${DB_NAME:-postgres}
      - POSTGRES_PORT=${DB_PORT:-5432}
    healthcheck:
      test: pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB
      interval: 10s
      timeout: 5s
      retries: 5
  db:
    <<: *supabase-services
    extends:
      file: ../db/docker/docker-compose.yml
      service: db
    ports:
      - "${DB_PORT:-5432}:5432"
    networks:
      - app-network-test
  redis-test:
      - ${POSTGRES_PORT}:5432 # We don't use Supavisor locally, so we expose the db directly.
  vector:
    <<: *supabase-services
    extends:
      file: ../db/docker/docker-compose.yml
      service: vector
  redis:
    <<: *agpt-services
    image: redis:latest
    command: redis-server --requirepass password
    ports:
      - "6379:6379"
    networks:
      - app-network-test
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
  rabbitmq-test:
  rabbitmq:
    <<: *agpt-services
    image: rabbitmq:management
    container_name: rabbitmq-test
    container_name: rabbitmq
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 30s
@@ -38,11 +63,28 @@ services:
      start_period: 10s
    environment:
      - RABBITMQ_DEFAULT_USER=rabbitmq_user_default
      - RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7 # CHANGE THIS TO A RANDOM PASSWORD IN PRODUCTION -- everywhere lol
      - RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
    ports:
      - "5672:5672"
      - "15672:15672"
  clamav:
    image: clamav/clamav-debian:latest
    ports:
      - "3310:3310"
    volumes:
      - clamav-data:/var/lib/clamav
    environment:
      - CLAMAV_NO_FRESHCLAMD=false
      - CLAMD_CONF_StreamMaxLength=50M
      - CLAMD_CONF_MaxFileSize=100M
      - CLAMD_CONF_MaxScanSize=100M
      - CLAMD_CONF_MaxThreads=12
      - CLAMD_CONF_ReadTimeout=300
    healthcheck:
      test: ["CMD-SHELL", "clamdscan --version || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3

networks:
  app-network-test:
    driver: bridge

View File

@@ -0,0 +1,254 @@
-- This migration creates materialized views for performance optimization
--
-- IMPORTANT: For production environments, pg_cron is REQUIRED for automatic refresh
-- Prerequisites for production:
-- 1. pg_cron extension must be installed: CREATE EXTENSION pg_cron;
-- 2. pg_cron must be configured in postgresql.conf:
-- shared_preload_libraries = 'pg_cron'
-- cron.database_name = 'your_database_name'
--
-- For development environments without pg_cron:
-- The migration will succeed but you must manually refresh views with:
-- SELECT refresh_store_materialized_views();
-- Check if pg_cron extension is installed and set a flag
DO $$
DECLARE
has_pg_cron BOOLEAN;
BEGIN
SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'pg_cron') INTO has_pg_cron;
IF NOT has_pg_cron THEN
RAISE WARNING 'pg_cron extension is not installed!';
RAISE WARNING 'Materialized views will be created but WILL NOT refresh automatically.';
RAISE WARNING 'For production use, install pg_cron with: CREATE EXTENSION pg_cron;';
RAISE WARNING 'For development, manually refresh with: SELECT refresh_store_materialized_views();';
-- For production deployments, uncomment the following line to make pg_cron mandatory:
-- RAISE EXCEPTION 'pg_cron is required for production deployments';
END IF;
-- Store the flag for later use in the migration
PERFORM set_config('migration.has_pg_cron', has_pg_cron::text, false);
END
$$;
-- CreateIndex
-- Optimized: Only include owningUserId in index columns since isDeleted and hasApprovedVersion are in WHERE clause
CREATE INDEX IF NOT EXISTS "idx_store_listing_approved" ON "StoreListing"("owningUserId") WHERE "isDeleted" = false AND "hasApprovedVersion" = true;
-- CreateIndex
-- Optimized: Only include storeListingId since submissionStatus is in WHERE clause
CREATE INDEX IF NOT EXISTS "idx_store_listing_version_status" ON "StoreListingVersion"("storeListingId") WHERE "submissionStatus" = 'APPROVED';
-- CreateIndex
CREATE INDEX IF NOT EXISTS "idx_slv_categories_gin" ON "StoreListingVersion" USING GIN ("categories") WHERE "submissionStatus" = 'APPROVED';
-- CreateIndex
CREATE INDEX IF NOT EXISTS "idx_slv_agent" ON "StoreListingVersion"("agentGraphId", "agentGraphVersion") WHERE "submissionStatus" = 'APPROVED';
-- CreateIndex
CREATE INDEX IF NOT EXISTS "idx_store_listing_review_version" ON "StoreListingReview"("storeListingVersionId");
-- CreateIndex
CREATE INDEX IF NOT EXISTS "idx_agent_graph_execution_agent" ON "AgentGraphExecution"("agentGraphId");
-- CreateIndex
CREATE INDEX IF NOT EXISTS "idx_profile_user" ON "Profile"("userId");
-- Additional performance indexes
CREATE INDEX IF NOT EXISTS "idx_store_listing_version_approved_listing" ON "StoreListingVersion"("storeListingId", "version") WHERE "submissionStatus" = 'APPROVED';
-- Create materialized view for agent run counts
CREATE MATERIALIZED VIEW IF NOT EXISTS "mv_agent_run_counts" AS
SELECT
"agentGraphId",
COUNT(*) AS run_count
FROM "AgentGraphExecution"
GROUP BY "agentGraphId";
-- CreateIndex
CREATE UNIQUE INDEX IF NOT EXISTS "idx_mv_agent_run_counts" ON "mv_agent_run_counts"("agentGraphId");
-- Create materialized view for review statistics
CREATE MATERIALIZED VIEW IF NOT EXISTS "mv_review_stats" AS
SELECT
sl.id AS "storeListingId",
COUNT(sr.id) AS review_count,
AVG(sr.score::numeric) AS avg_rating
FROM "StoreListing" sl
JOIN "StoreListingVersion" slv ON slv."storeListingId" = sl.id
LEFT JOIN "StoreListingReview" sr ON sr."storeListingVersionId" = slv.id
WHERE sl."isDeleted" = false
AND slv."submissionStatus" = 'APPROVED'
GROUP BY sl.id;
-- CreateIndex
CREATE UNIQUE INDEX IF NOT EXISTS "idx_mv_review_stats" ON "mv_review_stats"("storeListingId");
-- DropForeignKey (if any exist on the views)
-- None needed as views don't have foreign keys
-- DropView
DROP VIEW IF EXISTS "Creator";
-- DropView
DROP VIEW IF EXISTS "StoreAgent";
-- CreateView
CREATE OR REPLACE VIEW "StoreAgent" AS
WITH agent_versions AS (
SELECT
"storeListingId",
array_agg(DISTINCT version::text ORDER BY version::text) AS versions
FROM "StoreListingVersion"
WHERE "submissionStatus" = 'APPROVED'
GROUP BY "storeListingId"
)
SELECT
sl.id AS listing_id,
slv.id AS "storeListingVersionId",
slv."createdAt" AS updated_at,
sl.slug,
COALESCE(slv.name, '') AS agent_name,
slv."videoUrl" AS agent_video,
COALESCE(slv."imageUrls", ARRAY[]::text[]) AS agent_image,
slv."isFeatured" AS featured,
p.username AS creator_username,
p."avatarUrl" AS creator_avatar,
slv."subHeading" AS sub_heading,
slv.description,
slv.categories,
COALESCE(ar.run_count, 0::bigint) AS runs,
COALESCE(rs.avg_rating, 0.0)::double precision AS rating,
COALESCE(av.versions, ARRAY[slv.version::text]) AS versions
FROM "StoreListing" sl
INNER JOIN "StoreListingVersion" slv
ON slv."storeListingId" = sl.id
AND slv."submissionStatus" = 'APPROVED'
JOIN "AgentGraph" a
ON slv."agentGraphId" = a.id
AND slv."agentGraphVersion" = a.version
LEFT JOIN "Profile" p
ON sl."owningUserId" = p."userId"
LEFT JOIN "mv_review_stats" rs
ON sl.id = rs."storeListingId"
LEFT JOIN "mv_agent_run_counts" ar
ON a.id = ar."agentGraphId"
LEFT JOIN agent_versions av
ON sl.id = av."storeListingId"
WHERE sl."isDeleted" = false
AND sl."hasApprovedVersion" = true;
-- CreateView
CREATE OR REPLACE VIEW "Creator" AS
WITH creator_listings AS (
SELECT
sl."owningUserId",
sl.id AS listing_id,
slv."agentGraphId",
slv.categories,
sr.score,
ar.run_count
FROM "StoreListing" sl
INNER JOIN "StoreListingVersion" slv
ON slv."storeListingId" = sl.id
AND slv."submissionStatus" = 'APPROVED'
LEFT JOIN "StoreListingReview" sr
ON sr."storeListingVersionId" = slv.id
LEFT JOIN "mv_agent_run_counts" ar
ON ar."agentGraphId" = slv."agentGraphId"
WHERE sl."isDeleted" = false
AND sl."hasApprovedVersion" = true
),
creator_stats AS (
SELECT
cl."owningUserId",
COUNT(DISTINCT cl.listing_id) AS num_agents,
AVG(COALESCE(cl.score, 0)::numeric) AS agent_rating,
SUM(DISTINCT COALESCE(cl.run_count, 0)) AS agent_runs,
array_agg(DISTINCT cat ORDER BY cat) FILTER (WHERE cat IS NOT NULL) AS all_categories
FROM creator_listings cl
LEFT JOIN LATERAL unnest(COALESCE(cl.categories, ARRAY[]::text[])) AS cat ON true
GROUP BY cl."owningUserId"
)
SELECT
p.username,
p.name,
p."avatarUrl" AS avatar_url,
p.description,
cs.all_categories AS top_categories,
p.links,
p."isFeatured" AS is_featured,
COALESCE(cs.num_agents, 0::bigint) AS num_agents,
COALESCE(cs.agent_rating, 0.0) AS agent_rating,
COALESCE(cs.agent_runs, 0::numeric) AS agent_runs
FROM "Profile" p
LEFT JOIN creator_stats cs ON cs."owningUserId" = p."userId";
-- Create refresh function that works with the current schema
CREATE OR REPLACE FUNCTION refresh_store_materialized_views()
RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
current_schema_name text;
BEGIN
-- Get the current schema
current_schema_name := current_schema();
-- Use CONCURRENTLY for better performance during refresh
EXECUTE format('REFRESH MATERIALIZED VIEW CONCURRENTLY %I."mv_agent_run_counts"', current_schema_name);
EXECUTE format('REFRESH MATERIALIZED VIEW CONCURRENTLY %I."mv_review_stats"', current_schema_name);
RAISE NOTICE 'Materialized views refreshed in schema % at %', current_schema_name, NOW();
EXCEPTION
WHEN OTHERS THEN
-- Fallback to non-concurrent refresh if concurrent fails
EXECUTE format('REFRESH MATERIALIZED VIEW %I."mv_agent_run_counts"', current_schema_name);
EXECUTE format('REFRESH MATERIALIZED VIEW %I."mv_review_stats"', current_schema_name);
RAISE NOTICE 'Materialized views refreshed (non-concurrent) in schema % at % due to: %', current_schema_name, NOW(), SQLERRM;
END;
$$;
-- Initial refresh of materialized views
SELECT refresh_store_materialized_views();
-- Schedule automatic refresh every 15 minutes (only if pg_cron is available)
DO $$
DECLARE
has_pg_cron BOOLEAN;
current_schema_name text;
job_name text;
BEGIN
-- Get the flag we set earlier
has_pg_cron := current_setting('migration.has_pg_cron', true)::boolean;
-- Get current schema name
current_schema_name := current_schema();
-- Create a unique job name for this schema
job_name := format('refresh-store-views-%s', current_schema_name);
IF has_pg_cron THEN
-- Try to unschedule existing job (ignore errors if it doesn't exist)
BEGIN
PERFORM cron.unschedule(job_name);
EXCEPTION WHEN OTHERS THEN
-- Job doesn't exist, that's fine
NULL;
END;
-- Schedule the refresh job with schema-specific command
PERFORM cron.schedule(
job_name,
'*/15 * * * *',
format('SELECT %I.refresh_store_materialized_views();', current_schema_name)
);
RAISE NOTICE 'Scheduled automatic refresh of materialized views every 15 minutes for schema %', current_schema_name;
ELSE
RAISE WARNING '⚠️ Automatic refresh NOT configured - pg_cron is not available';
RAISE WARNING '⚠️ You must manually refresh views with: SELECT refresh_store_materialized_views();';
RAISE WARNING '⚠️ Or install pg_cron for automatic refresh in production';
END IF;
END;
$$;
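
Without pg_cron the views never refresh on their own, so a development setup has to call the SQL function itself. A minimal sketch of doing that from Python with the Prisma client, mirroring the `execute_raw` call in `clean_test_db.py` above; the `refresh_store_views` helper name is ours, not part of the codebase.

```python
import asyncio

from prisma import Prisma


async def refresh_store_views() -> None:
    """Manually refresh the store materialized views (dev fallback when pg_cron is absent)."""
    db = Prisma()
    await db.connect()
    try:
        # Calls the SQL function created by this migration.
        await db.execute_raw("SELECT refresh_store_materialized_views();")
    finally:
        await db.disconnect()


if __name__ == "__main__":
    asyncio.run(refresh_store_views())
```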

View File

@@ -0,0 +1,155 @@
-- Unschedule cron job (if it exists)
DO $$
BEGIN
IF EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'pg_cron') THEN
PERFORM cron.unschedule('refresh-store-views');
RAISE NOTICE 'Unscheduled automatic refresh of materialized views';
END IF;
EXCEPTION
WHEN OTHERS THEN
RAISE NOTICE 'Could not unschedule cron job (may not exist): %', SQLERRM;
END;
$$;
-- DropView
DROP VIEW IF EXISTS "Creator";
-- DropView
DROP VIEW IF EXISTS "StoreAgent";
-- CreateView (restore original StoreAgent)
CREATE VIEW "StoreAgent" AS
WITH reviewstats AS (
SELECT sl_1.id AS "storeListingId",
count(sr.id) AS review_count,
avg(sr.score::numeric) AS avg_rating
FROM "StoreListing" sl_1
JOIN "StoreListingVersion" slv_1
ON slv_1."storeListingId" = sl_1.id
JOIN "StoreListingReview" sr
ON sr."storeListingVersionId" = slv_1.id
WHERE sl_1."isDeleted" = false
GROUP BY sl_1.id
), agentruns AS (
SELECT "AgentGraphExecution"."agentGraphId",
count(*) AS run_count
FROM "AgentGraphExecution"
GROUP BY "AgentGraphExecution"."agentGraphId"
)
SELECT sl.id AS listing_id,
slv.id AS "storeListingVersionId",
slv."createdAt" AS updated_at,
sl.slug,
COALESCE(slv.name, '') AS agent_name,
slv."videoUrl" AS agent_video,
COALESCE(slv."imageUrls", ARRAY[]::text[]) AS agent_image,
slv."isFeatured" AS featured,
p.username AS creator_username,
p."avatarUrl" AS creator_avatar,
slv."subHeading" AS sub_heading,
slv.description,
slv.categories,
COALESCE(ar.run_count, 0::bigint) AS runs,
COALESCE(rs.avg_rating, 0.0)::double precision AS rating,
array_agg(DISTINCT slv.version::text) AS versions
FROM "StoreListing" sl
JOIN "StoreListingVersion" slv
ON slv."storeListingId" = sl.id
JOIN "AgentGraph" a
ON slv."agentGraphId" = a.id
AND slv."agentGraphVersion" = a.version
LEFT JOIN "Profile" p
ON sl."owningUserId" = p."userId"
LEFT JOIN reviewstats rs
ON sl.id = rs."storeListingId"
LEFT JOIN agentruns ar
ON a.id = ar."agentGraphId"
WHERE sl."isDeleted" = false
AND sl."hasApprovedVersion" = true
AND slv."submissionStatus" = 'APPROVED'
GROUP BY sl.id, slv.id, sl.slug, slv."createdAt", slv.name, slv."videoUrl",
slv."imageUrls", slv."isFeatured", p.username, p."avatarUrl",
slv."subHeading", slv.description, slv.categories, ar.run_count,
rs.avg_rating;
-- CreateView (restore original Creator)
CREATE VIEW "Creator" AS
WITH agentstats AS (
SELECT p_1.username,
count(DISTINCT sl.id) AS num_agents,
avg(COALESCE(sr.score, 0)::numeric) AS agent_rating,
sum(COALESCE(age.run_count, 0::bigint)) AS agent_runs
FROM "Profile" p_1
LEFT JOIN "StoreListing" sl
ON sl."owningUserId" = p_1."userId"
LEFT JOIN "StoreListingVersion" slv
ON slv."storeListingId" = sl.id
LEFT JOIN "StoreListingReview" sr
ON sr."storeListingVersionId" = slv.id
LEFT JOIN (
SELECT "AgentGraphExecution"."agentGraphId",
count(*) AS run_count
FROM "AgentGraphExecution"
GROUP BY "AgentGraphExecution"."agentGraphId"
) age ON age."agentGraphId" = slv."agentGraphId"
WHERE sl."isDeleted" = false
AND sl."hasApprovedVersion" = true
AND slv."submissionStatus" = 'APPROVED'
GROUP BY p_1.username
)
SELECT p.username,
p.name,
p."avatarUrl" AS avatar_url,
p.description,
array_agg(DISTINCT cats.c) FILTER (WHERE cats.c IS NOT NULL) AS top_categories,
p.links,
p."isFeatured" AS is_featured,
COALESCE(ast.num_agents, 0::bigint) AS num_agents,
COALESCE(ast.agent_rating, 0.0) AS agent_rating,
COALESCE(ast.agent_runs, 0::numeric) AS agent_runs
FROM "Profile" p
LEFT JOIN agentstats ast
ON ast.username = p.username
LEFT JOIN LATERAL (
SELECT unnest(slv.categories) AS c
FROM "StoreListing" sl
JOIN "StoreListingVersion" slv
ON slv."storeListingId" = sl.id
WHERE sl."owningUserId" = p."userId"
AND sl."isDeleted" = false
AND sl."hasApprovedVersion" = true
AND slv."submissionStatus" = 'APPROVED'
) cats ON true
GROUP BY p.username, p.name, p."avatarUrl", p.description, p.links,
p."isFeatured", ast.num_agents, ast.agent_rating, ast.agent_runs;
-- Drop function
DROP FUNCTION IF EXISTS platform.refresh_store_materialized_views();
-- Drop materialized views
DROP MATERIALIZED VIEW IF EXISTS "mv_review_stats";
DROP MATERIALIZED VIEW IF EXISTS "mv_agent_run_counts";
-- DropIndex
DROP INDEX IF EXISTS "idx_profile_user";
-- DropIndex
DROP INDEX IF EXISTS "idx_agent_graph_execution_agent";
-- DropIndex
DROP INDEX IF EXISTS "idx_store_listing_review_version";
-- DropIndex
DROP INDEX IF EXISTS "idx_slv_agent";
-- DropIndex
DROP INDEX IF EXISTS "idx_slv_categories_gin";
-- DropIndex
DROP INDEX IF EXISTS "idx_store_listing_version_status";
-- DropIndex
DROP INDEX IF EXISTS "idx_store_listing_approved";
-- DropIndex
DROP INDEX IF EXISTS "idx_store_listing_version_approved_listing";

View File

@@ -0,0 +1,11 @@
-- CreateTable
CREATE TABLE "AgentNodeExecutionKeyValueData" (
"userId" TEXT NOT NULL,
"key" TEXT NOT NULL,
"agentNodeExecutionId" TEXT NOT NULL,
"data" JSONB,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3),
CONSTRAINT "AgentNodeExecutionKeyValueData_pkey" PRIMARY KEY ("userId","key")
);
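
A rough sketch of how this table might be written to from the Python Prisma client, assuming the generated model accessor `agentnodeexecutionkeyvaluedata` and Prisma's default compound-key selector `userId_key` for the `("userId", "key")` primary key; the names `upsert_kv`, `USER_ID`, `KEY`, and `NODE_EXEC_ID` are illustrative only and not taken from the codebase.

```python
from prisma import Json, Prisma

# Hypothetical values for illustration only.
USER_ID = "user-123"
KEY = "last_seen_cursor"
NODE_EXEC_ID = "node-exec-456"


async def upsert_kv(db: Prisma) -> None:
    # The primary key is ("userId", "key"); Prisma is assumed to expose it
    # through its default compound-key selector, here named "userId_key".
    await db.agentnodeexecutionkeyvaluedata.upsert(
        where={"userId_key": {"userId": USER_ID, "key": KEY}},
        data={
            "create": {
                "userId": USER_ID,
                "key": KEY,
                "agentNodeExecutionId": NODE_EXEC_ID,
                "data": Json({"cursor": 42}),
            },
            "update": {"data": Json({"cursor": 42})},
        },
    )
```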

View File

@@ -31,18 +31,18 @@ files = [
[[package]]
name = "aiodns"
version = "3.4.0"
version = "3.5.0"
description = "Simple DNS resolver for asyncio"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "aiodns-3.4.0-py3-none-any.whl", hash = "sha256:4da2b25f7475343f3afbb363a2bfe46afa544f2b318acb9a945065e622f4ed24"},
{file = "aiodns-3.4.0.tar.gz", hash = "sha256:24b0ae58410530367f21234d0c848e4de52c1f16fbddc111726a4ab536ec1b2f"},
{file = "aiodns-3.5.0-py3-none-any.whl", hash = "sha256:6d0404f7d5215849233f6ee44854f2bb2481adf71b336b2279016ea5990ca5c5"},
{file = "aiodns-3.5.0.tar.gz", hash = "sha256:11264edbab51896ecf546c18eb0dd56dff0428c6aa6d2cd87e643e07300eb310"},
]
[package.dependencies]
pycares = ">=4.0.0"
pycares = ">=4.9.0"
[[package]]
name = "aiofiles"
@@ -222,14 +222,14 @@ files = [
[[package]]
name = "anthropic"
version = "0.51.0"
version = "0.57.1"
description = "The official Python library for the anthropic API"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "anthropic-0.51.0-py3-none-any.whl", hash = "sha256:b8b47d482c9aa1f81b923555cebb687c2730309a20d01be554730c8302e0f62a"},
{file = "anthropic-0.51.0.tar.gz", hash = "sha256:6f824451277992af079554430d5b2c8ff5bc059cc2c968cdc3f06824437da201"},
{file = "anthropic-0.57.1-py3-none-any.whl", hash = "sha256:33afc1f395af207d07ff1bffc0a3d1caac53c371793792569c5d2f09283ea306"},
{file = "anthropic-0.57.1.tar.gz", hash = "sha256:7815dd92245a70d21f65f356f33fc80c5072eada87fb49437767ea2918b2c4b0"},
]
[package.dependencies]
@@ -242,6 +242,7 @@ sniffio = "*"
typing-extensions = ">=4.10,<5"
[package.extras]
aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.6)"]
bedrock = ["boto3 (>=1.28.57)", "botocore (>=1.31.57)"]
vertex = ["google-auth[requests] (>=2,<3)"]
@@ -1005,14 +1006,14 @@ pgp = ["gpg"]
[[package]]
name = "e2b"
version = "1.5.0"
version = "1.5.4"
description = "E2B SDK that give agents cloud environments"
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "e2b-1.5.0-py3-none-any.whl", hash = "sha256:875a843d1d314a9945e24bfb78c9b1b5cac7e2ecb1e799664d827a26a0b2276a"},
{file = "e2b-1.5.0.tar.gz", hash = "sha256:905730eea5c07f271d073d4b5d2a9ef44c8ac04b9b146a99fa0235db77bf6854"},
{file = "e2b-1.5.4-py3-none-any.whl", hash = "sha256:9c8d22f9203311dff890e037823596daaba3d793300238117f2efc5426888f2c"},
{file = "e2b-1.5.4.tar.gz", hash = "sha256:49f1c115d0198244beef5854d19cc857fda9382e205f137b98d3dae0e7e0b2d2"},
]
[package.dependencies]
@@ -1026,19 +1027,19 @@ typing-extensions = ">=4.1.0"
[[package]]
name = "e2b-code-interpreter"
version = "1.5.0"
version = "1.5.2"
description = "E2B Code Interpreter - Stateful code execution"
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "e2b_code_interpreter-1.5.0-py3-none-any.whl", hash = "sha256:299f5641a3754264a07f8edc3cccb744d6b009f10dc9285789a9352e24989a9b"},
{file = "e2b_code_interpreter-1.5.0.tar.gz", hash = "sha256:cd6028b6f20c4231e88a002de86484b9d4a99ea588b5be183b9ec7189a0f3cf6"},
{file = "e2b_code_interpreter-1.5.2-py3-none-any.whl", hash = "sha256:5c3188d8f25226b28fef4b255447cc6a4c36afb748bdd5180b45be486d5169f3"},
{file = "e2b_code_interpreter-1.5.2.tar.gz", hash = "sha256:3bd6ea70596290e85aaf0a2f19f28bf37a5e73d13086f5e6a0080bb591c5a547"},
]
[package.dependencies]
attrs = ">=21.3.0"
e2b = ">=1.4.0,<2.0.0"
e2b = ">=1.5.4,<2.0.0"
httpx = ">=0.20.0,<1.0.0"
[[package]]
@@ -1109,14 +1110,14 @@ typing-extensions = "*"
[[package]]
name = "fastapi"
version = "0.115.12"
version = "0.115.14"
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "fastapi-0.115.12-py3-none-any.whl", hash = "sha256:e94613d6c05e27be7ffebdd6ea5f388112e5e430c8f7d6494a9d1d88d43e814d"},
{file = "fastapi-0.115.12.tar.gz", hash = "sha256:1e2c2a2646905f9e83d32f04a3f86aff4a286669c6c950ca95b5fd68c2602681"},
{file = "fastapi-0.115.14-py3-none-any.whl", hash = "sha256:6c0c8bf9420bd58f565e585036d971872472b4f7d3f6c73b698e10cffdefb3ca"},
{file = "fastapi-0.115.14.tar.gz", hash = "sha256:b1de15cdc1c499a4da47914db35d0e4ef8f1ce62b624e94e0e5824421df99739"},
]
[package.dependencies]
@@ -1192,20 +1193,20 @@ packaging = ">=20"
[[package]]
name = "flake8"
version = "7.2.0"
version = "7.3.0"
description = "the modular source code checker: pep8 pyflakes and co"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "flake8-7.2.0-py2.py3-none-any.whl", hash = "sha256:93b92ba5bdb60754a6da14fa3b93a9361fd00a59632ada61fd7b130436c40343"},
{file = "flake8-7.2.0.tar.gz", hash = "sha256:fa558ae3f6f7dbf2b4f22663e5343b6b6023620461f8d4ff2019ef4b5ee70426"},
{file = "flake8-7.3.0-py2.py3-none-any.whl", hash = "sha256:b9696257b9ce8beb888cdbe31cf885c90d31928fe202be0889a7cdafad32f01e"},
{file = "flake8-7.3.0.tar.gz", hash = "sha256:fe044858146b9fc69b551a4b490d69cf960fcb78ad1edcb84e7fbb1b4a8e3872"},
]
[package.dependencies]
mccabe = ">=0.7.0,<0.8.0"
pycodestyle = ">=2.13.0,<2.14.0"
pyflakes = ">=3.3.0,<3.4.0"
pycodestyle = ">=2.14.0,<2.15.0"
pyflakes = ">=3.4.0,<3.5.0"
[[package]]
name = "frozenlist"
@@ -1356,14 +1357,14 @@ grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0.0)"]
[[package]]
name = "google-api-python-client"
version = "2.170.0"
version = "2.176.0"
description = "Google API Client Library for Python"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "google_api_python_client-2.170.0-py3-none-any.whl", hash = "sha256:7bf518a0527ad23322f070fa69f4f24053170d5c766821dc970ff0571ec22748"},
{file = "google_api_python_client-2.170.0.tar.gz", hash = "sha256:75f3a1856f11418ea3723214e0abc59d9b217fd7ed43dcf743aab7f06ab9e2b1"},
{file = "google_api_python_client-2.176.0-py3-none-any.whl", hash = "sha256:e22239797f1d085341e12cd924591fc65c56d08e0af02549d7606092e6296510"},
{file = "google_api_python_client-2.176.0.tar.gz", hash = "sha256:2b451cdd7fd10faeb5dd20f7d992f185e1e8f4124c35f2cdcc77c843139a4cf1"},
]
[package.dependencies]
@@ -1516,27 +1517,27 @@ protobuf = ">=3.20.2,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4
[[package]]
name = "google-cloud-storage"
version = "3.1.0"
version = "3.2.0"
description = "Google Cloud Storage API client library"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "google_cloud_storage-3.1.0-py2.py3-none-any.whl", hash = "sha256:eaf36966b68660a9633f03b067e4a10ce09f1377cae3ff9f2c699f69a81c66c6"},
{file = "google_cloud_storage-3.1.0.tar.gz", hash = "sha256:944273179897c7c8a07ee15f2e6466a02da0c7c4b9ecceac2a26017cb2972049"},
{file = "google_cloud_storage-3.2.0-py3-none-any.whl", hash = "sha256:ff7a9a49666954a7c3d1598291220c72d3b9e49d9dfcf9dfaecb301fc4fb0b24"},
{file = "google_cloud_storage-3.2.0.tar.gz", hash = "sha256:decca843076036f45633198c125d1861ffbf47ebf5c0e3b98dcb9b2db155896c"},
]
[package.dependencies]
google-api-core = ">=2.15.0,<3.0.0dev"
google-auth = ">=2.26.1,<3.0dev"
google-cloud-core = ">=2.4.2,<3.0dev"
google-crc32c = ">=1.0,<2.0dev"
google-resumable-media = ">=2.7.2"
requests = ">=2.18.0,<3.0.0dev"
google-api-core = ">=2.15.0,<3.0.0"
google-auth = ">=2.26.1,<3.0.0"
google-cloud-core = ">=2.4.2,<3.0.0"
google-crc32c = ">=1.1.3,<2.0.0"
google-resumable-media = ">=2.7.2,<3.0.0"
requests = ">=2.22.0,<3.0.0"
[package.extras]
protobuf = ["protobuf (<6.0.0dev)"]
tracing = ["opentelemetry-api (>=1.1.0)"]
protobuf = ["protobuf (>=3.20.2,<7.0.0)"]
tracing = ["opentelemetry-api (>=1.1.0,<2.0.0)"]
[[package]]
name = "google-crc32c"
@@ -1744,14 +1745,14 @@ test = ["objgraph", "psutil"]
[[package]]
name = "groq"
version = "0.24.0"
version = "0.29.0"
description = "The official Python library for the groq API"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "groq-0.24.0-py3-none-any.whl", hash = "sha256:0020e6b0b2b267263c9eb7c318deef13c12f399c6525734200b11d777b00088e"},
{file = "groq-0.24.0.tar.gz", hash = "sha256:e821559de8a77fb81d2585b3faec80ff923d6d64fd52339b33f6c94997d6f7f5"},
{file = "groq-0.29.0-py3-none-any.whl", hash = "sha256:03515ec46be1ef1feef0cd9d876b6f30a39ee2742e76516153d84acd7c97f23a"},
{file = "groq-0.29.0.tar.gz", hash = "sha256:109dc4d696c05d44e4c2cd157652c4c6600c3e96f093f6e158facb5691e37847"},
]
[package.dependencies]
@@ -1762,6 +1763,9 @@ pydantic = ">=1.9.0,<3"
sniffio = "*"
typing-extensions = ">=4.10,<5"
[package.extras]
aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.6)"]
[[package]]
name = "grpc-google-iam-v1"
version = "0.14.2"
@@ -2548,14 +2552,14 @@ files = [
[[package]]
name = "mem0ai"
version = "0.1.102"
version = "0.1.114"
description = "Long-term memory for AI Agents"
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "mem0ai-0.1.102-py3-none-any.whl", hash = "sha256:1401ccfd2369e2182ce78abb61b817e739fe49508b5a8ad98abcd4f8ad4db0b4"},
{file = "mem0ai-0.1.102.tar.gz", hash = "sha256:7358dba4fbe954b9c3f33204c14df7babaf9067e2eb48241d89a32e6bc774988"},
{file = "mem0ai-0.1.114-py3-none-any.whl", hash = "sha256:dfb7f0079ee282f5d9782e220f6f09707bcf5e107925d1901dbca30d8dd83f9b"},
{file = "mem0ai-0.1.114.tar.gz", hash = "sha256:b27886132eaec78544e8b8b54f0b14a36728f3c99da54cb7cb417150e2fad7e1"},
]
[package.dependencies]
@@ -2568,8 +2572,11 @@ sqlalchemy = ">=2.0.31"
[package.extras]
dev = ["isort (>=5.13.2)", "pytest (>=8.2.2)", "ruff (>=0.6.5)"]
graph = ["langchain-neo4j (>=0.4.0)", "neo4j (>=5.23.1)", "rank-bm25 (>=0.2.2)"]
extras = ["boto3 (>=1.34.0)", "elasticsearch (>=8.0.0)", "langchain-community (>=0.0.0)", "langchain-memgraph (>=0.1.0)", "opensearch-py (>=2.0.0)", "sentence-transformers (>=5.0.0)"]
graph = ["langchain-aws (>=0.2.23)", "langchain-neo4j (>=0.4.0)", "neo4j (>=5.23.1)", "rank-bm25 (>=0.2.2)"]
llms = ["google-genai (>=1.0.0)", "google-generativeai (>=0.3.0)", "groq (>=0.3.0)", "litellm (>=0.1.0)", "ollama (>=0.1.0)", "together (>=0.2.10)", "vertexai (>=0.1.0)"]
test = ["pytest (>=8.2.2)", "pytest-asyncio (>=0.23.7)", "pytest-mock (>=3.14.0)"]
vector-stores = ["azure-search-documents (>=11.4.0b8)", "chromadb (>=0.4.24)", "faiss-cpu (>=1.7.4)", "pinecone (<=7.3.0)", "pinecone-text (>=0.10.0)", "pymochow (>=2.2.9)", "pymongo (>=4.13.2)", "upstash-vector (>=0.1.0)", "vecs (>=0.4.0)", "weaviate-client (>=4.4.0)"]
[[package]]
name = "more-itertools"
@@ -2908,14 +2915,14 @@ signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "ollama"
version = "0.4.9"
version = "0.5.1"
description = "The official Python client for Ollama."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "ollama-0.4.9-py3-none-any.whl", hash = "sha256:18c8c85358c54d7f73d6a66cda495b0e3ba99fdb88f824ae470d740fbb211a50"},
{file = "ollama-0.4.9.tar.gz", hash = "sha256:5266d4d29b5089a01489872b8e8f980f018bccbdd1082b3903448af1d5615ce7"},
{file = "ollama-0.5.1-py3-none-any.whl", hash = "sha256:4c8839f35bc173c7057b1eb2cbe7f498c1a7e134eafc9192824c8aecb3617506"},
{file = "ollama-0.5.1.tar.gz", hash = "sha256:5a799e4dc4e7af638b11e3ae588ab17623ee019e496caaf4323efbaa8feeff93"},
]
[package.dependencies]
@@ -2924,14 +2931,14 @@ pydantic = ">=2.9"
[[package]]
name = "openai"
version = "1.82.1"
version = "1.93.2"
description = "The official Python library for the openai API"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "openai-1.82.1-py3-none-any.whl", hash = "sha256:334eb5006edf59aa464c9e932b9d137468d810b2659e5daea9b3a8c39d052395"},
{file = "openai-1.82.1.tar.gz", hash = "sha256:ffc529680018e0417acac85f926f92aa0bbcbc26e82e2621087303c66bc7f95d"},
{file = "openai-1.93.2-py3-none-any.whl", hash = "sha256:5adbbebd48eae160e6d68efc4c0a4f7cb1318a44c62d9fc626cec229f418eab4"},
{file = "openai-1.93.2.tar.gz", hash = "sha256:4a7312b426b5e4c98b78dfa1148b5683371882de3ad3d5f7c8e0c74f3cc90778"},
]
[package.dependencies]
@@ -2945,6 +2952,7 @@ tqdm = ">4"
typing-extensions = ">=4.11,<5"
[package.extras]
aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.6)"]
datalib = ["numpy (>=1)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)"]
realtime = ["websockets (>=13,<16)"]
voice-helpers = ["numpy (>=2.0.2)", "sounddevice (>=0.5.1)"]
@@ -3259,14 +3267,14 @@ testing = ["coverage", "pytest", "pytest-benchmark"]
[[package]]
name = "poethepoet"
version = "0.34.0"
description = "A task runner that works well with poetry."
version = "0.36.0"
description = "A task runner that works well with poetry and uv."
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "poethepoet-0.34.0-py3-none-any.whl", hash = "sha256:c472d6f0fdb341b48d346f4ccd49779840c15b30dfd6bc6347a80d6274b5e34e"},
{file = "poethepoet-0.34.0.tar.gz", hash = "sha256:86203acce555bbfe45cb6ccac61ba8b16a5784264484195874da457ddabf5850"},
{file = "poethepoet-0.36.0-py3-none-any.whl", hash = "sha256:693e3c1eae9f6731d3613c3c0c40f747d3c5c68a375beda42e590a63c5623308"},
{file = "poethepoet-0.36.0.tar.gz", hash = "sha256:2217b49cb4e4c64af0b42ff8c4814b17f02e107d38bc461542517348ede25663"},
]
[package.dependencies]
@@ -3492,14 +3500,14 @@ tqdm = "*"
[[package]]
name = "prometheus-client"
version = "0.21.1"
version = "0.22.1"
description = "Python client for the Prometheus monitoring system."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "prometheus_client-0.21.1-py3-none-any.whl", hash = "sha256:594b45c410d6f4f8888940fe80b5cc2521b305a1fafe1c58609ef715a001f301"},
{file = "prometheus_client-0.21.1.tar.gz", hash = "sha256:252505a722ac04b0456be05c05f75f45d760c2911ffc45f2a06bcaed9f3ae3fb"},
{file = "prometheus_client-0.22.1-py3-none-any.whl", hash = "sha256:cca895342e308174341b2cbf99a56bef291fbc0ef7b9e5412a0f26d653ba7094"},
{file = "prometheus_client-0.22.1.tar.gz", hash = "sha256:190f1331e783cf21eb60bca559354e0a4d4378facecf78f5428c39b675d20d28"},
]
[package.extras]
@@ -3783,83 +3791,88 @@ pyasn1 = ">=0.6.1,<0.7.0"
[[package]]
name = "pycares"
version = "4.8.0"
version = "4.9.0"
description = "Python interface for c-ares"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pycares-4.8.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f40d9f4a8de398b110fdf226cdfadd86e8c7eb71d5298120ec41cf8d94b0012f"},
{file = "pycares-4.8.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:339de06fc849a51015968038d2bbed68fc24047522404af9533f32395ca80d25"},
{file = "pycares-4.8.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:372a236c1502b9056b0bea195c64c329603b4efa70b593a33b7ae37fbb7fad00"},
{file = "pycares-4.8.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03f66a5e143d102ccc204bd4e29edd70bed28420f707efd2116748241e30cb73"},
{file = "pycares-4.8.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ef50504296cd5fc58cfd6318f82e20af24fbe2c83004f6ff16259adb13afdf14"},
{file = "pycares-4.8.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d1bc541b627c7951dd36136b18bd185c5244a0fb2af5b1492ffb8acaceec1c5b"},
{file = "pycares-4.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:938d188ed6bed696099be67ebdcdf121827b9432b17a9ea9e40dc35fd9d85363"},
{file = "pycares-4.8.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:327837ffdc0c7adda09c98e1263c64b2aff814eea51a423f66733c75ccd9a642"},
{file = "pycares-4.8.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:a6b9b8d08c4508c45bd39e0c74e9e7052736f18ca1d25a289365bb9ac36e5849"},
{file = "pycares-4.8.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:feac07d5e6d2d8f031c71237c21c21b8c995b41a1eba64560e8cf1e42ac11bc6"},
{file = "pycares-4.8.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:5bcdbf37012fd2323ca9f2a1074421a9ccf277d772632f8f0ce8c46ec7564250"},
{file = "pycares-4.8.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:e3ebb692cb43fcf34fe0d26f2cf9a0ea53fdfb136463845b81fad651277922db"},
{file = "pycares-4.8.0-cp310-cp310-win32.whl", hash = "sha256:d98447ec0efff3fa868ccc54dcc56e71faff498f8848ecec2004c3108efb4da2"},
{file = "pycares-4.8.0-cp310-cp310-win_amd64.whl", hash = "sha256:1abb8f40917960ead3c2771277f0bdee1967393b0fdf68743c225b606787da68"},
{file = "pycares-4.8.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5e25db89005ddd8d9c5720293afe6d6dd92e682fc6bc7a632535b84511e2060d"},
{file = "pycares-4.8.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6f9665ef116e6ee216c396f5f927756c2164f9f3316aec7ff1a9a1e1e7ec9b2a"},
{file = "pycares-4.8.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:54a96893133471f6889b577147adcc21a480dbe316f56730871028379c8313f3"},
{file = "pycares-4.8.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:51024b3a69762bd3100d94986a29922be15e13f56f991aaefb41f5bcd3d7f0bb"},
{file = "pycares-4.8.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:47ff9db50c599e4d965ae3bec99cc30941c1d2b0f078ec816680b70d052dd54a"},
{file = "pycares-4.8.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:27ef8ff4e0f60ea6769a60d1c3d1d2aefed1d832e7bb83fc3934884e2dba5cdd"},
{file = "pycares-4.8.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63511af7a3f9663f562fbb6bfa3591a259505d976e2aba1fa2da13dde43c6ca7"},
{file = "pycares-4.8.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:73c3219b47616e6a5ad1810de96ed59721c7751f19b70ae7bf24997a8365408f"},
{file = "pycares-4.8.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:da42a45207c18f37be5e491c14b6d1063cfe1e46620eb661735d0cedc2b59099"},
{file = "pycares-4.8.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:8a068e898bb5dd09cd654e19cd2abf20f93d0cc59d5d955135ed48ea0f806aa1"},
{file = "pycares-4.8.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:962aed95675bb66c0b785a2fbbd1bb58ce7f009e283e4ef5aaa4a1f2dc00d217"},
{file = "pycares-4.8.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ce8b1a16c1e4517a82a0ebd7664783a327166a3764d844cf96b1fb7b9dd1e493"},
{file = "pycares-4.8.0-cp311-cp311-win32.whl", hash = "sha256:b3749ddbcbd216376c3b53d42d8b640b457133f1a12b0e003f3838f953037ae7"},
{file = "pycares-4.8.0-cp311-cp311-win_amd64.whl", hash = "sha256:5ce8a4e1b485b2360ab666c4ea1db97f57ede345a3b566d80bfa52b17e616610"},
{file = "pycares-4.8.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:3273e01a75308ed06d2492d83c7ba476e579a60a24d9f20fe178ce5e9d8d028b"},
{file = "pycares-4.8.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fcedaadea1f452911fd29935749f98d144dae758d6003b7e9b6c5d5bd47d1dff"},
{file = "pycares-4.8.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aae6cb33e287e06a4aabcbc57626df682c9a4fa8026207f5b498697f1c2fb562"},
{file = "pycares-4.8.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25038b930e5be82839503fb171385b2aefd6d541bc5b7da0938bdb67780467d2"},
{file = "pycares-4.8.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cc8499b6e7dfbe4af65f6938db710ce9acd1debf34af2cbb93b898b1e5da6a5a"},
{file = "pycares-4.8.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c4e1c6a68ef56a7622f6176d9946d4e51f3c853327a0123ef35a5380230c84cd"},
{file = "pycares-4.8.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7cc8c3c9114b9c84e4062d25ca9b4bddc80a65d0b074c7cb059275273382f89"},
{file = "pycares-4.8.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:4404014069d3e362abf404c9932d4335bb9c07ba834cfe7d683c725b92e0f9da"},
{file = "pycares-4.8.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:ee0a58c32ec2a352cef0e1d20335a7caf9871cd79b73be2ca2896fe70f09c9d7"},
{file = "pycares-4.8.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:35f32f52b486b8fede3cbebf088f30b01242d0321b5216887c28e80490595302"},
{file = "pycares-4.8.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:ecbb506e27a3b3a2abc001c77beeccf265475c84b98629a6b3e61bd9f2987eaa"},
{file = "pycares-4.8.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:9392b2a34adbf60cb9e38f4a0d363413ecea8d835b5a475122f50f76676d59dd"},
{file = "pycares-4.8.0-cp312-cp312-win32.whl", hash = "sha256:f0fbefe68403ffcff19c869b8d621c88a6d2cef18d53cf0dab0fa9458a6ca712"},
{file = "pycares-4.8.0-cp312-cp312-win_amd64.whl", hash = "sha256:fa8aab6085a2ddfb1b43a06ddf1b498347117bb47cd620d9b12c43383c9c2737"},
{file = "pycares-4.8.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:358a9a2c6fed59f62788e63d88669224955443048a1602016d4358e92aedb365"},
{file = "pycares-4.8.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0e3e1278967fa8d4a0056be3fcc8fc551b8bad1fc7d0e5172196dccb8ddb036a"},
{file = "pycares-4.8.0-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:79befb773e370a8f97de9f16f5ea2c7e7fa0e3c6c74fbea6d332bf58164d7d06"},
{file = "pycares-4.8.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2b00d3695db64ce98a34e632e1d53f5a1cdb25451489f227bec2a6c03ff87ee8"},
{file = "pycares-4.8.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:37bdc4f2ff0612d60fc4f7547e12ff02cdcaa9a9e42e827bb64d4748994719f1"},
{file = "pycares-4.8.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd92c44498ec7a6139888b464b28c49f7ba975933689bd67ea8d572b94188404"},
{file = "pycares-4.8.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2665a0d810e2bbc41e97f3c3e5ea7950f666b3aa19c5f6c99d6b018ccd2e0052"},
{file = "pycares-4.8.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:45a629a6470a33478514c566bce50c63f1b17d1c5f2f964c9a6790330dc105fb"},
{file = "pycares-4.8.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:47bb378f1773f41cca8e31dcdf009ce4a9b8aff8a30c7267aaff9a099c407ba5"},
{file = "pycares-4.8.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:fb3feae38458005cc101956e38f16eb3145fff8cd793e35cd4bdef6bf1aa2623"},
{file = "pycares-4.8.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:14bc28aeaa66b0f4331ac94455e8043c8a06b3faafd78cc49d4b677bae0d0b08"},
{file = "pycares-4.8.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:62c82b871470f2864a1febf7b96bb1d108ce9063e6d3d43727e8a46f0028a456"},
{file = "pycares-4.8.0-cp313-cp313-win32.whl", hash = "sha256:01afa8964c698c8f548b46d726f766aa7817b2d4386735af1f7996903d724920"},
{file = "pycares-4.8.0-cp313-cp313-win_amd64.whl", hash = "sha256:22f86f81b12ab17b0a7bd0da1e27938caaed11715225c1168763af97f8bb51a7"},
{file = "pycares-4.8.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:61325d13a95255e858f42a7a1a9e482ff47ef2233f95ad9a4f308a3bd8ecf903"},
{file = "pycares-4.8.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:dfec3a7d42336fa46a1e7e07f67000fd4b97860598c59a894c08f81378629e4e"},
{file = "pycares-4.8.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b65067e4b4f5345688817fff6be06b9b1f4ec3619b0b9ecc639bc681b73f646b"},
{file = "pycares-4.8.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0322ad94bbaa7016139b5bbdcd0de6f6feb9d146d69e03a82aaca342e06830a6"},
{file = "pycares-4.8.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:456c60f170c997f9a43c7afa1085fced8efb7e13ae49dd5656f998ae13c4bdb4"},
{file = "pycares-4.8.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:57a2c4c9ce423a85b0e0227409dbaf0d478f5e0c31d9e626768e77e1e887d32f"},
{file = "pycares-4.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:478d9c479108b7527266864c0affe3d6e863492c9bc269217e36100c8fd89b91"},
{file = "pycares-4.8.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:aed56bca096990ca0aa9bbf95761fc87e02880e04b0845922b5c12ea9abe523f"},
{file = "pycares-4.8.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:ef265a390928ee2f77f8901c2273c53293157860451ad453ce7f45dd268b72f9"},
{file = "pycares-4.8.0-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:a5f17d7a76d8335f1c90a8530c8f1e8bb22e9a1d70a96f686efaed946de1c908"},
{file = "pycares-4.8.0-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:891f981feb2ef34367378f813fc17b3d706ce95b6548eeea0c9fe7705d7e54b1"},
{file = "pycares-4.8.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:4102f6d9117466cc0a1f527907a1454d109cc9e8551b8074888071ef16050fe3"},
{file = "pycares-4.8.0-cp39-cp39-win32.whl", hash = "sha256:d6775308659652adc88c82c53eda59b5e86a154aaba5ad1e287bbb3e0be77076"},
{file = "pycares-4.8.0-cp39-cp39-win_amd64.whl", hash = "sha256:8bc05462aa44788d48544cca3d2532466fed2cdc5a2f24a43a92b620a61c9d19"},
{file = "pycares-4.8.0.tar.gz", hash = "sha256:2fc2ebfab960f654b3e3cf08a732486950da99393a657f8b44618ad3ed2d39c1"},
{file = "pycares-4.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0b8bd9a3ee6e9bc990e1933dc7e7e2f44d4184f49a90fa444297ac12ab6c0c84"},
{file = "pycares-4.9.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:417a5c20861f35977240ad4961479a6778125bcac21eb2ad1c3aad47e2ff7fab"},
{file = "pycares-4.9.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ab290faa4ea53ce53e3ceea1b3a42822daffce2d260005533293a52525076750"},
{file = "pycares-4.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7b1df81193084c9717734e4615e8c5074b9852478c9007d1a8bb242f7f580e67"},
{file = "pycares-4.9.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:20c7a6af0c2ccd17cc5a70d76e299a90e7ebd6c4d8a3d7fff5ae533339f61431"},
{file = "pycares-4.9.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:370f41442a5b034aebdb2719b04ee04d3e805454a20d3f64f688c1c49f9137c3"},
{file = "pycares-4.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:340e4a3bbfd14d73c01ec0793a321b8a4a93f64c508225883291078b7ee17ac8"},
{file = "pycares-4.9.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:f0ec94785856ea4f5556aa18f4c027361ba4b26cb36c4ad97d2105ef4eec68ba"},
{file = "pycares-4.9.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:dd6b7e23a4a9e2039b5d67dfa0499d2d5f114667dc13fb5d7d03eed230c7ac4f"},
{file = "pycares-4.9.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:490c978b0be9d35a253a5e31dd598f6d66b453625f0eb7dc2d81b22b8c3bb3f4"},
{file = "pycares-4.9.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:e433faaf07f44e44f1a1b839fee847480fe3db9431509dafc9f16d618d491d0f"},
{file = "pycares-4.9.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:cf6d8851a06b79d10089962c9dadcb34dad00bf027af000f7102297a54aaff2e"},
{file = "pycares-4.9.0-cp310-cp310-win32.whl", hash = "sha256:4f803e7d66ac7d8342998b8b07393788991353a46b05bbaad0b253d6f3484ea8"},
{file = "pycares-4.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:8e17bd32267e3870855de3baed7d0efa6337344d68f44853fd9195c919f39400"},
{file = "pycares-4.9.0-cp310-cp310-win_arm64.whl", hash = "sha256:6b74f75d8e430f9bb11a1cc99b2e328eed74b17d8d4b476de09126f38d419eb9"},
{file = "pycares-4.9.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:16a97ee83ec60d35c7f716f117719932c27d428b1bb56b242ba1c4aa55521747"},
{file = "pycares-4.9.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:78748521423a211ce699a50c27cc5c19e98b7db610ccea98daad652ace373990"},
{file = "pycares-4.9.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8818b2c7a57d9d6d41e8b64d9ff87992b8ea2522fc0799686725228bc3cff6c5"},
{file = "pycares-4.9.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:96df8990f16013ca5194d6ece19dddb4ef9cd7c3efaab9f196ec3ccd44b40f8d"},
{file = "pycares-4.9.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:61af86fd58b8326e723b0d20fb96b56acaec2261c3a7c9a1c29d0a79659d613a"},
{file = "pycares-4.9.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ec72edb276bda559813cc807bc47b423d409ffab2402417a5381077e9c2c6be1"},
{file = "pycares-4.9.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:832fb122c7376c76cab62f8862fa5e398b9575fb7c9ff6bc9811086441ee64ca"},
{file = "pycares-4.9.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:cdcfaef24f771a471671470ccfd676c0366ab6b0616fd8217b8f356c40a02b83"},
{file = "pycares-4.9.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:52cb056d06ff55d78a8665b97ae948abaaba2ca200ca59b10346d4526bce1e7d"},
{file = "pycares-4.9.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:54985ed3f2e8a87315269f24cb73441622857a7830adfc3a27c675a94c3261c1"},
{file = "pycares-4.9.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:08048e223615d4aef3dac81fe0ea18fb18d6fc97881f1eb5be95bb1379969b8d"},
{file = "pycares-4.9.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:cc60037421ce05a409484287b2cd428e1363cca73c999b5f119936bb8f255208"},
{file = "pycares-4.9.0-cp311-cp311-win32.whl", hash = "sha256:62b86895b60cfb91befb3086caa0792b53f949231c6c0c3053c7dfee3f1386ab"},
{file = "pycares-4.9.0-cp311-cp311-win_amd64.whl", hash = "sha256:7046b3c80954beaabf2db52b09c3d6fe85f6c4646af973e61be79d1c51589932"},
{file = "pycares-4.9.0-cp311-cp311-win_arm64.whl", hash = "sha256:fcbda3fdf44e94d3962ca74e6ba3dc18c0d7029106f030d61c04c0876f319403"},
{file = "pycares-4.9.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:d68ca2da1001aeccdc81c4a2fb1f1f6cfdafd3d00e44e7c1ed93e3e05437f666"},
{file = "pycares-4.9.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4f0c8fa5a384d79551a27eafa39eed29529e66ba8fa795ee432ab88d050432a3"},
{file = "pycares-4.9.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0eb8c428cf3b9c6ff9c641ba50ab6357b4480cd737498733e6169b0ac8a1a89b"},
{file = "pycares-4.9.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6845bd4a43abf6dab7fedbf024ef458ac3750a25b25076ea9913e5ac5fec4548"},
{file = "pycares-4.9.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5e28f4acc3b97e46610cf164665ebf914f709daea6ced0ca4358ce55bc1c3d6b"},
{file = "pycares-4.9.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9464a39861840ce35a79352c34d653a9db44f9333af7c9feddb97998d3e00c07"},
{file = "pycares-4.9.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e0611c1bd46d1fc6bdd9305b8850eb84c77df485769f72c574ed7b8389dfbee2"},
{file = "pycares-4.9.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d4fb5a38a51d03b75ac4320357e632c2e72e03fdeb13263ee333a40621415fdc"},
{file = "pycares-4.9.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:df5edae05fb3e1370ab7639e67e8891fdaa9026cb10f05dbd57893713f7a9cfe"},
{file = "pycares-4.9.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:397123ea53d261007bb0aa7e767ef238778f45026db40bed8196436da2cc73de"},
{file = "pycares-4.9.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:bb0d874d0b131b29894fd8a0f842be91ac21d50f90ec04cff4bb3f598464b523"},
{file = "pycares-4.9.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:497cc03a61ec1585eb17d2cb086a29a6a67d24babf1e9be519b47222916a3b06"},
{file = "pycares-4.9.0-cp312-cp312-win32.whl", hash = "sha256:b46e46313fdb5e82da15478652aac0fd15e1c9f33e08153bad845aa4007d6f84"},
{file = "pycares-4.9.0-cp312-cp312-win_amd64.whl", hash = "sha256:12547a06445777091605a7581da15a0da158058beb8a05a3ebbf7301fd1f58d4"},
{file = "pycares-4.9.0-cp312-cp312-win_arm64.whl", hash = "sha256:f1e10bf1e8eb80b08e5c828627dba1ebc4acd54803bd0a27d92b9063b6aa99d8"},
{file = "pycares-4.9.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:574d815112a95ab09d75d0a9dc7dea737c06985e3125cf31c32ba6a3ed6ca006"},
{file = "pycares-4.9.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:50e5ab06361d59625a27a7ad93d27e067dc7c9f6aa529a07d691eb17f3b43605"},
{file = "pycares-4.9.0-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:785f5fd11ff40237d9bc8afa441551bb449e2812c74334d1d10859569e07515c"},
{file = "pycares-4.9.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e194a500e403eba89b91fb863c917495c5b3dfcd1ce0ee8dc3a6f99a1360e2fc"},
{file = "pycares-4.9.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:112dd49cdec4e6150a8d95b197e8b6b7b4468a3170b30738ed9b248cb2240c04"},
{file = "pycares-4.9.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:94aa3c2f3eb0aa69160137134775501f06c901188e722aac63d2a210d4084f99"},
{file = "pycares-4.9.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b510d71255cf5a92ccc2643a553548fcb0623d6ed11c8c633b421d99d7fa4167"},
{file = "pycares-4.9.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:5c6aa30b1492b8130f7832bf95178642c710ce6b7ba610c2b17377f77177e3cd"},
{file = "pycares-4.9.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:e5767988e044faffe2aff6a76aa08df99a8b6ef2641be8b00ea16334ce5dea93"},
{file = "pycares-4.9.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:b9928a942820a82daa3207509eaba9e0fa9660756ac56667ec2e062815331fcb"},
{file = "pycares-4.9.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:556c854174da76d544714cdfab10745ed5d4b99eec5899f7b13988cd26ff4763"},
{file = "pycares-4.9.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:d42e2202ca9aa9a0a9a6e43a4a4408bbe0311aaa44800fa27b8fd7f82b20152a"},
{file = "pycares-4.9.0-cp313-cp313-win32.whl", hash = "sha256:cce8ef72c9ed4982c84114e6148a4e42e989d745de7862a0ad8b3f1cdc05def2"},
{file = "pycares-4.9.0-cp313-cp313-win_amd64.whl", hash = "sha256:318cdf24f826f1d2f0c5a988730bd597e1683296628c8f1be1a5b96643c284fe"},
{file = "pycares-4.9.0-cp313-cp313-win_arm64.whl", hash = "sha256:faa9de8e647ed06757a2c117b70a7645a755561def814da6aca0d766cf71a402"},
{file = "pycares-4.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8310d27d68fa25be9781ce04d330f4860634a2ac34dd9265774b5f404679b41f"},
{file = "pycares-4.9.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:99cf98452d3285307eec123049f2c9c50b109e06751b0727c6acefb6da30c6a0"},
{file = "pycares-4.9.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ffd6e8c8250655504602b076f106653e085e6b1e15318013442558101aa4777"},
{file = "pycares-4.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a4065858d8c812159c9a55601fda73760d9e5e3300f7868d9e546eab1084f36c"},
{file = "pycares-4.9.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91ee6818113faf9013945c2b54bcd6b123d0ac192ae3099cf4288cedaf2dbb25"},
{file = "pycares-4.9.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:21f0602059ec11857ab7ad608c7ec8bc6f7a302c04559ec06d33e82f040585f8"},
{file = "pycares-4.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e22e5b46ed9b12183091da56e4a5a20813b5436c4f13135d7a1c20a84027ca8a"},
{file = "pycares-4.9.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:9eded8649867bfd7aea7589c5755eae4d37686272f6ed7a995da40890d02de71"},
{file = "pycares-4.9.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:f71d31cbbe066657a2536c98aad850724a9ab7b1cd2624f491832ae9667ea8e7"},
{file = "pycares-4.9.0-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:2b30945982ab4741f097efc5b0853051afc3c11df26996ed53a700c7575175af"},
{file = "pycares-4.9.0-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:54a8f1f067d64810426491d33033f5353b54f35e5339126440ad4e6afbf3f149"},
{file = "pycares-4.9.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:41556a269a192349e92eee953f62eddd867e9eddb27f444b261e2c1c4a4a9eff"},
{file = "pycares-4.9.0-cp39-cp39-win32.whl", hash = "sha256:524d6c14eaa167ed098a4fe54856d1248fa20c296cdd6976f9c1b838ba32d014"},
{file = "pycares-4.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:15f930c733d36aa487b4ad60413013bd811281b5ea4ca620070fa38505d84df4"},
{file = "pycares-4.9.0-cp39-cp39-win_arm64.whl", hash = "sha256:79b7addb2a41267d46650ac0d9c4f3b3233b036f186b85606f7586881dfb4b69"},
{file = "pycares-4.9.0.tar.gz", hash = "sha256:8ee484ddb23dbec4d88d14ed5b6d592c1960d2e93c385d5e52b6fad564d82395"},
]
[package.dependencies]
@@ -3870,14 +3883,14 @@ idna = ["idna (>=2.1)"]
[[package]]
name = "pycodestyle"
version = "2.13.0"
version = "2.14.0"
description = "Python style guide checker"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pycodestyle-2.13.0-py2.py3-none-any.whl", hash = "sha256:35863c5974a271c7a726ed228a14a4f6daf49df369d8c50cd9a6f58a5e143ba9"},
{file = "pycodestyle-2.13.0.tar.gz", hash = "sha256:c8415bf09abe81d9c7f872502a6eee881fbe85d8763dd5b9924bb0a01d67efae"},
{file = "pycodestyle-2.14.0-py2.py3-none-any.whl", hash = "sha256:dd6bf7cb4ee77f8e016f9c8e74a35ddd9f67e1d5fd4184d86c3b98e07099f42d"},
{file = "pycodestyle-2.14.0.tar.gz", hash = "sha256:c4b5b517d278089ff9d0abdec919cd97262a3367449ea1c8b49b91529167b783"},
]
[[package]]
@@ -3894,14 +3907,14 @@ files = [
[[package]]
name = "pydantic"
version = "2.11.5"
version = "2.11.7"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pydantic-2.11.5-py3-none-any.whl", hash = "sha256:f9c26ba06f9747749ca1e5c94d6a85cb84254577553c8785576fd38fa64dc0f7"},
{file = "pydantic-2.11.5.tar.gz", hash = "sha256:7f853db3d0ce78ce8bbb148c401c2cdd6431b3473c0cdff2755c7690952a7b7a"},
{file = "pydantic-2.11.7-py3-none-any.whl", hash = "sha256:dde5df002701f6de26248661f6835bbe296a47bf73990135c7d07ce741b9623b"},
{file = "pydantic-2.11.7.tar.gz", hash = "sha256:d989c3c6cb79469287b1569f7447a17848c998458d49ebe294e975b9baf0f0db"},
]
[package.dependencies]
@@ -4029,14 +4042,14 @@ typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0"
[[package]]
name = "pydantic-settings"
version = "2.9.1"
version = "2.10.1"
description = "Settings management using Pydantic"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pydantic_settings-2.9.1-py3-none-any.whl", hash = "sha256:59b4f431b1defb26fe620c71a7d3968a710d719f5f4cdbbdb7926edeb770f6ef"},
{file = "pydantic_settings-2.9.1.tar.gz", hash = "sha256:c509bf79d27563add44e8446233359004ed85066cd096d8b510f715e6ef5d268"},
{file = "pydantic_settings-2.10.1-py3-none-any.whl", hash = "sha256:a60952460b99cf661dc25c29c0ef171721f98bfcb52ef8d9ea4c943d7c8cc796"},
{file = "pydantic_settings-2.10.1.tar.gz", hash = "sha256:06f0062169818d0f5524420a360d632d5857b83cffd4d42fe29597807a1614ee"},
]
[package.dependencies]
@@ -4053,16 +4066,31 @@ yaml = ["pyyaml (>=6.0.1)"]
[[package]]
name = "pyflakes"
version = "3.3.2"
version = "3.4.0"
description = "passive checker of Python programs"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pyflakes-3.3.2-py2.py3-none-any.whl", hash = "sha256:5039c8339cbb1944045f4ee5466908906180f13cc99cc9949348d10f82a5c32a"},
{file = "pyflakes-3.3.2.tar.gz", hash = "sha256:6dfd61d87b97fba5dcfaaf781171ac16be16453be6d816147989e7f6e6a9576b"},
{file = "pyflakes-3.4.0-py2.py3-none-any.whl", hash = "sha256:f742a7dbd0d9cb9ea41e9a24a918996e8170c799fa528688d40dd582c8265f4f"},
{file = "pyflakes-3.4.0.tar.gz", hash = "sha256:b24f96fafb7d2ab0ec5075b7350b3d2d2218eab42003821c06344973d3ea2f58"},
]
[[package]]
name = "pygments"
version = "2.19.2"
description = "Pygments is a syntax highlighting package written in Python."
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
{file = "pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"},
{file = "pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887"},
]
[package.extras]
windows-terminal = ["colorama (>=0.4.6)"]
[[package]]
name = "pyjwt"
version = "2.10.1"
@@ -4122,14 +4150,14 @@ files = [
[[package]]
name = "pyright"
version = "1.1.401"
version = "1.1.402"
description = "Command line wrapper for pyright"
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "pyright-1.1.401-py3-none-any.whl", hash = "sha256:6fde30492ba5b0d7667c16ecaf6c699fab8d7a1263f6a18549e0b00bf7724c06"},
{file = "pyright-1.1.401.tar.gz", hash = "sha256:788a82b6611fa5e34a326a921d86d898768cddf59edde8e93e56087d277cc6f1"},
{file = "pyright-1.1.402-py3-none-any.whl", hash = "sha256:2c721f11869baac1884e846232800fe021c33f1b4acb3929cff321f7ea4e2982"},
{file = "pyright-1.1.402.tar.gz", hash = "sha256:85a33c2d40cd4439c66aa946fd4ce71ab2f3f5b8c22ce36a623f59ac22937683"},
]
[package.dependencies]
@@ -4143,26 +4171,27 @@ nodejs = ["nodejs-wheel-binaries"]
[[package]]
name = "pytest"
version = "8.3.5"
version = "8.4.1"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "pytest-8.3.5-py3-none-any.whl", hash = "sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820"},
{file = "pytest-8.3.5.tar.gz", hash = "sha256:f4efe70cc14e511565ac476b57c279e12a855b11f48f212af1080ef2263d3845"},
{file = "pytest-8.4.1-py3-none-any.whl", hash = "sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7"},
{file = "pytest-8.4.1.tar.gz", hash = "sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c"},
]
[package.dependencies]
colorama = {version = "*", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
iniconfig = "*"
packaging = "*"
colorama = {version = ">=0.4", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1", markers = "python_version < \"3.11\""}
iniconfig = ">=1"
packaging = ">=20"
pluggy = ">=1.5,<2"
pygments = ">=2.7.2"
tomli = {version = ">=1", markers = "python_version < \"3.11\""}
[package.extras]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "requests", "setuptools", "xmlschema"]
[[package]]
name = "pytest-asyncio"
@@ -4249,14 +4278,14 @@ six = ">=1.5"
[[package]]
name = "python-dotenv"
version = "1.1.0"
version = "1.1.1"
description = "Read key-value pairs from a .env file and set them as environment variables"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "python_dotenv-1.1.0-py3-none-any.whl", hash = "sha256:d7c01d9e2293916c18baf562d95698754b0dbbb5e74d457c45d4f6561fb9d55d"},
{file = "python_dotenv-1.1.0.tar.gz", hash = "sha256:41f90bc6f5f177fb41f53e87666db362025010eb28f60a01c9143bfa33a2b2d5"},
{file = "python_dotenv-1.1.1-py3-none-any.whl", hash = "sha256:31f23644fe2602f88ff55e1f5c79ba497e01224ee7737937930c448e4d0e24dc"},
{file = "python_dotenv-1.1.1.tar.gz", hash = "sha256:a8a6399716257f45be6a007360200409fce5cda2661e3dec71d23dc15f6189ab"},
]
[package.extras]
@@ -4702,19 +4731,19 @@ typing_extensions = ">=4.5.0"
[[package]]
name = "requests"
version = "2.32.3"
version = "2.32.4"
description = "Python HTTP for Humans."
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
{file = "requests-2.32.3-py3-none-any.whl", hash = "sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6"},
{file = "requests-2.32.3.tar.gz", hash = "sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760"},
{file = "requests-2.32.4-py3-none-any.whl", hash = "sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c"},
{file = "requests-2.32.4.tar.gz", hash = "sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422"},
]
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<4"
charset_normalizer = ">=2,<4"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<3"
@@ -4900,30 +4929,30 @@ pyasn1 = ">=0.1.3"
[[package]]
name = "ruff"
version = "0.11.12"
version = "0.12.2"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.11.12-py3-none-linux_armv6l.whl", hash = "sha256:c7680aa2f0d4c4f43353d1e72123955c7a2159b8646cd43402de6d4a3a25d7cc"},
{file = "ruff-0.11.12-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:2cad64843da9f134565c20bcc430642de897b8ea02e2e79e6e02a76b8dcad7c3"},
{file = "ruff-0.11.12-py3-none-macosx_11_0_arm64.whl", hash = "sha256:9b6886b524a1c659cee1758140138455d3c029783d1b9e643f3624a5ee0cb0aa"},
{file = "ruff-0.11.12-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3cc3a3690aad6e86c1958d3ec3c38c4594b6ecec75c1f531e84160bd827b2012"},
{file = "ruff-0.11.12-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f97fdbc2549f456c65b3b0048560d44ddd540db1f27c778a938371424b49fe4a"},
{file = "ruff-0.11.12-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:74adf84960236961090e2d1348c1a67d940fd12e811a33fb3d107df61eef8fc7"},
{file = "ruff-0.11.12-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:b56697e5b8bcf1d61293ccfe63873aba08fdbcbbba839fc046ec5926bdb25a3a"},
{file = "ruff-0.11.12-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4d47afa45e7b0eaf5e5969c6b39cbd108be83910b5c74626247e366fd7a36a13"},
{file = "ruff-0.11.12-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:692bf9603fe1bf949de8b09a2da896f05c01ed7a187f4a386cdba6760e7f61be"},
{file = "ruff-0.11.12-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:08033320e979df3b20dba567c62f69c45e01df708b0f9c83912d7abd3e0801cd"},
{file = "ruff-0.11.12-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:929b7706584f5bfd61d67d5070f399057d07c70585fa8c4491d78ada452d3bef"},
{file = "ruff-0.11.12-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:7de4a73205dc5756b8e09ee3ed67c38312dce1aa28972b93150f5751199981b5"},
{file = "ruff-0.11.12-py3-none-musllinux_1_2_i686.whl", hash = "sha256:2635c2a90ac1b8ca9e93b70af59dfd1dd2026a40e2d6eebaa3efb0465dd9cf02"},
{file = "ruff-0.11.12-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:d05d6a78a89166f03f03a198ecc9d18779076ad0eec476819467acb401028c0c"},
{file = "ruff-0.11.12-py3-none-win32.whl", hash = "sha256:f5a07f49767c4be4772d161bfc049c1f242db0cfe1bd976e0f0886732a4765d6"},
{file = "ruff-0.11.12-py3-none-win_amd64.whl", hash = "sha256:5a4d9f8030d8c3a45df201d7fb3ed38d0219bccd7955268e863ee4a115fa0832"},
{file = "ruff-0.11.12-py3-none-win_arm64.whl", hash = "sha256:65194e37853158d368e333ba282217941029a28ea90913c67e558c611d04daa5"},
{file = "ruff-0.11.12.tar.gz", hash = "sha256:43cf7f69c7d7c7d7513b9d59c5d8cafd704e05944f978614aa9faff6ac202603"},
{file = "ruff-0.12.2-py3-none-linux_armv6l.whl", hash = "sha256:093ea2b221df1d2b8e7ad92fc6ffdca40a2cb10d8564477a987b44fd4008a7be"},
{file = "ruff-0.12.2-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:09e4cf27cc10f96b1708100fa851e0daf21767e9709e1649175355280e0d950e"},
{file = "ruff-0.12.2-py3-none-macosx_11_0_arm64.whl", hash = "sha256:8ae64755b22f4ff85e9c52d1f82644abd0b6b6b6deedceb74bd71f35c24044cc"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3eb3a6b2db4d6e2c77e682f0b988d4d61aff06860158fdb413118ca133d57922"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:73448de992d05517170fc37169cbca857dfeaeaa8c2b9be494d7bcb0d36c8f4b"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3b8b94317cbc2ae4a2771af641739f933934b03555e51515e6e021c64441532d"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:45fc42c3bf1d30d2008023a0a9a0cfb06bf9835b147f11fe0679f21ae86d34b1"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ce48f675c394c37e958bf229fb5c1e843e20945a6d962cf3ea20b7a107dcd9f4"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:793d8859445ea47591272021a81391350205a4af65a9392401f418a95dfb75c9"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6932323db80484dda89153da3d8e58164d01d6da86857c79f1961934354992da"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:6aa7e623a3a11538108f61e859ebf016c4f14a7e6e4eba1980190cacb57714ce"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:2a4a20aeed74671b2def096bdf2eac610c7d8ffcbf4fb0e627c06947a1d7078d"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_i686.whl", hash = "sha256:71a4c550195612f486c9d1f2b045a600aeba851b298c667807ae933478fcef04"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:4987b8f4ceadf597c927beee65a5eaf994c6e2b631df963f86d8ad1bdea99342"},
{file = "ruff-0.12.2-py3-none-win32.whl", hash = "sha256:369ffb69b70cd55b6c3fc453b9492d98aed98062db9fec828cdfd069555f5f1a"},
{file = "ruff-0.12.2-py3-none-win_amd64.whl", hash = "sha256:dca8a3b6d6dc9810ed8f328d406516bf4d660c00caeaef36eb831cf4871b0639"},
{file = "ruff-0.12.2-py3-none-win_arm64.whl", hash = "sha256:48d6c6bfb4761df68bc05ae630e24f506755e702d4fb08f08460be778c7ccb12"},
{file = "ruff-0.12.2.tar.gz", hash = "sha256:d7b4f55cd6f325cb7621244f19c873c565a08aff5a4ba9c69aa7355f3f7afd3e"},
]
[[package]]
@@ -4957,14 +4986,14 @@ files = [
[[package]]
name = "sentry-sdk"
version = "2.29.1"
version = "2.32.0"
description = "Python client for Sentry (https://sentry.io)"
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
{file = "sentry_sdk-2.29.1-py2.py3-none-any.whl", hash = "sha256:90862fe0616ded4572da6c9dadb363121a1ae49a49e21c418f0634e9d10b4c19"},
{file = "sentry_sdk-2.29.1.tar.gz", hash = "sha256:8d4a0206b95fa5fe85e5e7517ed662e3888374bdc342c00e435e10e6d831aa6d"},
{file = "sentry_sdk-2.32.0-py2.py3-none-any.whl", hash = "sha256:6cf51521b099562d7ce3606da928c473643abe99b00ce4cb5626ea735f4ec345"},
{file = "sentry_sdk-2.32.0.tar.gz", hash = "sha256:9016c75d9316b0f6921ac14c8cd4fb938f26002430ac5be9945ab280f78bec6b"},
]
[package.dependencies]
@@ -5251,23 +5280,23 @@ typing-extensions = {version = ">=4.5.0", markers = "python_version >= \"3.7\""}
[[package]]
name = "supabase"
version = "2.15.1"
version = "2.16.0"
description = "Supabase client for Python."
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "supabase-2.15.1-py3-none-any.whl", hash = "sha256:749299cdd74ecf528f52045c1e60d9dba81cc2054656f754c0ca7fba0dd34827"},
{file = "supabase-2.15.1.tar.gz", hash = "sha256:66e847dab9346062aa6a25b4e81ac786b972c5d4299827c57d1d5bd6a0346070"},
{file = "supabase-2.16.0-py3-none-any.whl", hash = "sha256:99065caab3d90a56650bf39fbd0e49740995da3738ab28706c61bd7f2401db55"},
{file = "supabase-2.16.0.tar.gz", hash = "sha256:98f3810158012d4ec0e3083f2e5515f5e10b32bd71e7d458662140e963c1d164"},
]
[package.dependencies]
gotrue = ">=2.11.0,<3.0.0"
httpx = ">=0.26,<0.29"
postgrest = ">0.19,<1.1"
realtime = ">=2.4.0,<2.5.0"
storage3 = ">=0.10,<0.12"
supafunc = ">=0.9,<0.10"
postgrest = ">0.19,<1.2"
realtime = ">=2.4.0,<2.6.0"
storage3 = ">=0.10,<0.13"
supafunc = ">=0.9,<0.11"
[[package]]
name = "supafunc"
@@ -5474,14 +5503,14 @@ files = [
[[package]]
name = "tweepy"
version = "4.15.0"
description = "Twitter library for Python"
version = "4.16.0"
description = "Library for accessing the X API (Twitter)"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "tweepy-4.15.0-py3-none-any.whl", hash = "sha256:64adcea317158937059e4e2897b3ceb750b0c2dd5df58938c2da8f7eb3b88e6a"},
{file = "tweepy-4.15.0.tar.gz", hash = "sha256:1345cbcdf0a75e2d89f424c559fd49fda4d8cd7be25cd5131e3b57bad8a21d76"},
{file = "tweepy-4.16.0-py3-none-any.whl", hash = "sha256:48d1a1eb311d2c4b8990abcfa6f9fa2b2ad61be05c723b1a9b4f242656badae2"},
{file = "tweepy-4.16.0.tar.gz", hash = "sha256:1d95cbdc50bf6353a387f881f2584eaf60d14e00dbbdd8872a73de79c66878e3"},
]
[package.dependencies]
@@ -5492,8 +5521,6 @@ requests-oauthlib = ">=1.2.0,<3"
[package.extras]
async = ["aiohttp (>=3.7.3,<4)", "async-lru (>=1.0.3,<3)"]
dev = ["coverage (>=4.4.2)", "coveralls (>=2.1.0)", "tox (>=3.21.0)"]
docs = ["myst-parser (==0.15.2)", "readthedocs-sphinx-search (==0.1.1)", "sphinx (==4.2.0)", "sphinx-hoverxref (==0.7b1)", "sphinx-tabs (==3.2.0)", "sphinx_rtd_theme (==1.0.0)"]
socks = ["requests[socks] (>=2.27.0,<3)"]
test = ["urllib3 (<2)", "vcrpy (>=1.10.3)"]
[[package]]
@@ -6253,14 +6280,14 @@ requests = "*"
[[package]]
name = "zerobouncesdk"
version = "1.1.1"
version = "1.1.2"
description = "ZeroBounce Python API - https://www.zerobounce.net."
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "zerobouncesdk-1.1.1-py3-none-any.whl", hash = "sha256:9fb9dfa44fe4ce35d6f2e43d5144c31ca03544a3317d75643cb9f86b0c028675"},
{file = "zerobouncesdk-1.1.1.tar.gz", hash = "sha256:00aa537263d5bc21534c0007dd9f94ce8e0986caa530c5a0bbe0bd917451f236"},
{file = "zerobouncesdk-1.1.2-py3-none-any.whl", hash = "sha256:a89febfb3adade01c314e6bad2113ad093f1e1cca6ddf9fcf445a8b2a9a458b4"},
{file = "zerobouncesdk-1.1.2.tar.gz", hash = "sha256:24810a2e39c963bc75b4732356b0fc8b10091f2c892f0c8b08fbb32640fdccaf"},
]
[package.dependencies]
@@ -6402,4 +6429,4 @@ cffi = ["cffi (>=1.11)"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<3.13"
content-hash = "b5c1201f27ee8d05d5d8c89702123df4293f124301d1aef7451591a351872260"
content-hash = "476228d2bf59b90edc5425c462c1263cbc1f2d346f79a826ac5e7efe7823aaa6"

View File

@@ -10,61 +10,61 @@ packages = [{ include = "backend", format = "sdist" }]
[tool.poetry.dependencies]
python = ">=3.10,<3.13"
aio-pika = "^9.5.5"
aiodns = "^3.1.1"
anthropic = "^0.51.0"
aiodns = "^3.5.0"
anthropic = "^0.57.1"
apscheduler = "^3.11.0"
autogpt-libs = { path = "../autogpt_libs", develop = true }
bleach = { extras = ["css"], version = "^6.2.0" }
click = "^8.2.0"
cryptography = "^43.0"
discord-py = "^2.5.2"
e2b-code-interpreter = "^1.5.0"
fastapi = "^0.115.12"
e2b-code-interpreter = "^1.5.2"
fastapi = "^0.115.14"
feedparser = "^6.0.11"
flake8 = "^7.2.0"
google-api-python-client = "^2.169.0"
flake8 = "^7.3.0"
google-api-python-client = "^2.176.0"
google-auth-oauthlib = "^1.2.2"
google-cloud-storage = "^3.1.0"
google-cloud-storage = "^3.2.0"
googlemaps = "^4.10.0"
gravitasml = "^0.1.3"
groq = "^0.24.0"
groq = "^0.29.0"
jinja2 = "^3.1.6"
jsonref = "^1.1.0"
jsonschema = "^4.22.0"
launchdarkly-server-sdk = "^9.11.0"
mem0ai = "^0.1.98"
mem0ai = "^0.1.114"
moviepy = "^2.1.2"
ollama = "^0.4.8"
openai = "^1.78.1"
ollama = "^0.5.1"
openai = "^1.93.2"
pika = "^1.3.2"
pinecone = "^5.3.1"
poetry = "2.1.1" # CHECK DEPENDABOT SUPPORT BEFORE UPGRADING
postmarker = "^1.0"
praw = "~7.8.1"
prisma = "^0.15.0"
prometheus-client = "^0.21.1"
prometheus-client = "^0.22.1"
psutil = "^7.0.0"
psycopg2-binary = "^2.9.10"
pydantic = { extras = ["email"], version = "^2.11.4" }
pydantic-settings = "^2.9.1"
pytest = "^8.3.5"
pydantic = { extras = ["email"], version = "^2.11.7" }
pydantic-settings = "^2.10.1"
pytest = "^8.4.1"
pytest-asyncio = "^0.26.0"
python-dotenv = "^1.1.0"
python-dotenv = "^1.1.1"
python-multipart = "^0.0.20"
redis = "^5.2.0"
replicate = "^1.0.6"
sentry-sdk = {extras = ["anthropic", "fastapi", "launchdarkly", "openai", "sqlalchemy"], version = "^2.28.0"}
sentry-sdk = {extras = ["anthropic", "fastapi", "launchdarkly", "openai", "sqlalchemy"], version = "^2.32.0"}
sqlalchemy = "^2.0.40"
strenum = "^0.4.9"
stripe = "^11.5.0"
supabase = "2.15.1"
supabase = "2.16.0"
tenacity = "^9.1.2"
todoist-api-python = "^2.1.7"
tweepy = "^4.14.0"
tweepy = "^4.16.0"
uvicorn = { extras = ["standard"], version = "^0.34.2" }
websockets = "^14.2"
youtube-transcript-api = "^0.6.2"
zerobouncesdk = "^1.1.1"
zerobouncesdk = "^1.1.2"
# NOTE: please insert new dependencies in their alphabetical location
pytest-snapshot = "^0.9.0"
aiofiles = "^24.1.0"
@@ -78,12 +78,12 @@ black = "^24.10.0"
faker = "^33.3.1"
httpx = "^0.28.1"
isort = "^5.13.2"
poethepoet = "^0.34.0"
pyright = "^1.1.400"
poethepoet = "^0.36.0"
pyright = "^1.1.402"
pytest-mock = "^3.14.0"
pytest-watcher = "^0.4.2"
requests = "^2.32.3"
ruff = "^0.11.10"
requests = "^2.32.4"
ruff = "^0.12.2"
# NOTE: please insert new dependencies in their alphabetical location
[build-system]
@@ -123,3 +123,4 @@ filterwarnings = [
[tool.ruff]
target-version = "py310"

View File

@@ -0,0 +1,110 @@
#!/usr/bin/env python3
"""
Run test data creation and update scripts in sequence.
Usage:
poetry run python run_test_data.py
"""
import asyncio
import subprocess
import sys
from pathlib import Path
def run_command(cmd: list[str], cwd: Path | None = None) -> bool:
"""Run a command and return True if successful."""
try:
result = subprocess.run(
cmd, check=True, capture_output=True, text=True, cwd=cwd
)
if result.stdout:
print(result.stdout)
return True
except subprocess.CalledProcessError as e:
print(f"Error running command: {' '.join(cmd)}")
print(f"Error: {e.stderr}")
return False
async def main():
"""Main function to run test data scripts."""
print("=" * 60)
print("Running Test Data Scripts for AutoGPT Platform")
print("=" * 60)
print()
# Get the backend directory
backend_dir = Path(__file__).parent
test_dir = backend_dir / "test"
# Check if we're in the right directory
if not (backend_dir / "pyproject.toml").exists():
print("ERROR: This script must be run from the backend directory")
sys.exit(1)
print("1. Checking database connection...")
print("-" * 40)
# Import here to ensure proper environment setup
try:
from prisma import Prisma
db = Prisma()
await db.connect()
print("✓ Database connection successful")
await db.disconnect()
except Exception as e:
print(f"✗ Database connection failed: {e}")
print("\nPlease ensure:")
print("1. The database services are running (docker compose up -d)")
print("2. The DATABASE_URL in .env is correct")
print("3. Migrations have been run (poetry run prisma migrate deploy)")
sys.exit(1)
print()
print("2. Running test data creator...")
print("-" * 40)
# Run test_data_creator.py
if run_command(["poetry", "run", "python", "test_data_creator.py"], cwd=test_dir):
print()
print("✅ Test data created successfully!")
print()
print("3. Running test data updater...")
print("-" * 40)
# Run test_data_updater.py
if run_command(
["poetry", "run", "python", "test_data_updater.py"], cwd=test_dir
):
print()
print("✅ Test data updated successfully!")
else:
print()
print("❌ Test data updater failed!")
sys.exit(1)
else:
print()
print("❌ Test data creator failed!")
sys.exit(1)
print()
print("=" * 60)
print("Test data setup completed successfully!")
print("=" * 60)
print()
print("The materialized views have been populated with test data:")
print("- mv_agent_run_counts: Agent execution statistics")
print("- mv_review_stats: Store listing review statistics")
print()
print("You can now:")
print("1. Run tests: poetry run test")
print("2. Start the backend: poetry run serve")
print("3. View data in the database")
print()
if __name__ == "__main__":
asyncio.run(main())
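
The script's closing message says the two materialized views now hold test data; a quick sanity check from the same Prisma client could look like this (a minimal sketch, assuming the generated backend client and the view names printed above; the helper name `check_views` is illustrative):

```
# Sketch: confirm the materialized views contain rows after run_test_data.py.
# Uses the same Prisma client the script already imports; "n" is just the
# alias given to COUNT(*) in the query below.
import asyncio

from prisma import Prisma


async def check_views() -> None:
    db = Prisma()
    await db.connect()
    try:
        for view in ("mv_agent_run_counts", "mv_review_stats"):
            rows = await db.query_raw(f'SELECT COUNT(*) AS n FROM "{view}"')
            print(f"{view}: {rows[0]['n']} rows")
    finally:
        await db.disconnect()


if __name__ == "__main__":
    asyncio.run(check_views())
```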

View File

@@ -13,8 +13,10 @@ def wait_for_postgres(max_retries=5, delay=5):
"compose",
"-f",
"docker-compose.test.yaml",
"--env-file",
"../.env",
"exec",
"postgres-test",
"db",
"pg_isready",
"-U",
"postgres",
@@ -51,6 +53,8 @@ def test():
"compose",
"-f",
"docker-compose.test.yaml",
"--env-file",
"../.env",
"up",
"-d",
]
@@ -74,11 +78,20 @@ def test():
# to their development database, running tests would wipe their local data!
test_env = os.environ.copy()
# Use environment variables if set, otherwise use defaults that match docker-compose.test.yaml
db_user = os.getenv("DB_USER", "postgres")
db_pass = os.getenv("DB_PASS", "postgres")
db_name = os.getenv("DB_NAME", "postgres")
db_port = os.getenv("DB_PORT", "5432")
# Load database configuration from .env file
dotenv_path = os.path.join(os.path.dirname(__file__), "../.env")
if os.path.exists(dotenv_path):
with open(dotenv_path) as f:
for line in f:
if line.strip() and not line.startswith("#"):
key, value = line.strip().split("=", 1)
os.environ[key] = value
# Get database config from environment (now populated from .env)
db_user = os.getenv("POSTGRES_USER", "postgres")
db_pass = os.getenv("POSTGRES_PASSWORD", "postgres")
db_name = os.getenv("POSTGRES_DB", "postgres")
db_port = os.getenv("POSTGRES_PORT", "5432")
# Construct the test database URL - this ensures we're always pointing to the test container
test_env["DATABASE_URL"] = (

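For reference, the same values can be read with python-dotenv (already a main dependency in pyproject.toml) instead of hand-parsing the file; a minimal sketch, with the path and variable names assumed to match the runner above:

```
# Sketch: load the test DB settings with python-dotenv instead of manual parsing.
# dotenv_values tolerates comments, blank lines and quoted values, and we fall
# back to the same defaults used above when a key is missing.
import os
from pathlib import Path

from dotenv import dotenv_values

env_path = Path(__file__).resolve().parent.parent / ".env"
env = dotenv_values(env_path) if env_path.exists() else {}

db_user = env.get("POSTGRES_USER") or os.getenv("POSTGRES_USER", "postgres")
db_pass = env.get("POSTGRES_PASSWORD") or os.getenv("POSTGRES_PASSWORD", "postgres")
db_name = env.get("POSTGRES_DB") or os.getenv("POSTGRES_DB", "postgres")
db_port = env.get("POSTGRES_PORT") or os.getenv("POSTGRES_PORT", "5432")
```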
View File

@@ -413,6 +413,16 @@ model AgentNodeExecutionInputOutput {
@@index([name, time])
}
model AgentNodeExecutionKeyValueData {
userId String
key String
agentNodeExecutionId String
data Json?
createdAt DateTime @default(now())
updatedAt DateTime? @updatedAt
@@id([userId, key])
}
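
The composite `@@id([userId, key])` turns this table into a per-user key/value store; a hedged sketch of a write through the Python Prisma client follows (the lower-cased model accessor and the `userId_key` compound-key filter name follow Prisma's usual generation rules and are assumptions here):

```
# Sketch: upsert a single key/value entry for a user. Prisma names compound
# keys by joining the fields with "_", so the unique filter is assumed to be
# "userId_key"; Json wraps the JSON payload for the `data` column.
from prisma import Json, Prisma


async def set_kv(db: Prisma, user_id: str, key: str, node_exec_id: str, value: dict) -> None:
    await db.agentnodeexecutionkeyvaluedata.upsert(
        where={"userId_key": {"userId": user_id, "key": key}},
        data={
            "create": {
                "userId": user_id,
                "key": key,
                "agentNodeExecutionId": node_exec_id,
                "data": Json(value),
            },
            "update": {"data": Json(value)},
        },
    )
```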
// Webhook that is registered with a provider and propagates to one or more nodes
model IntegrationWebhook {
id String @id @default(uuid())
@@ -432,8 +442,8 @@ model IntegrationWebhook {
providerWebhookId String // Webhook ID assigned by the provider
AgentNodes AgentNode[]
AgentPresets AgentPreset[]
AgentNodes AgentNode[]
AgentPresets AgentPreset[]
@@index([userId])
}
@@ -589,7 +599,23 @@ view Creator {
agent_runs Int
is_featured Boolean
// Index or unique are not applied to views
// Note: Prisma doesn't support indexes on views, but the following indexes exist in the database:
//
// Optimized indexes (partial indexes to reduce size and improve performance):
// - idx_profile_user on Profile(userId)
// - idx_store_listing_approved on StoreListing(owningUserId) WHERE isDeleted = false AND hasApprovedVersion = true
// - idx_store_listing_version_status on StoreListingVersion(storeListingId) WHERE submissionStatus = 'APPROVED'
// - idx_slv_categories_gin - GIN index on StoreListingVersion(categories) WHERE submissionStatus = 'APPROVED'
// - idx_slv_agent on StoreListingVersion(agentGraphId, agentGraphVersion) WHERE submissionStatus = 'APPROVED'
// - idx_store_listing_review_version on StoreListingReview(storeListingVersionId)
// - idx_store_listing_version_approved_listing on StoreListingVersion(storeListingId, version) WHERE submissionStatus = 'APPROVED'
// - idx_agent_graph_execution_agent on AgentGraphExecution(agentGraphId)
//
// Materialized views used (refreshed every 15 minutes via pg_cron):
// - mv_agent_run_counts - Pre-aggregated agent execution counts by agentGraphId
// - mv_review_stats - Pre-aggregated review statistics (count, avg rating) by storeListingId
//
// Query strategy: Uses CTEs to efficiently aggregate creator statistics leveraging materialized views
}
view StoreAgent {
@@ -612,7 +638,30 @@ view StoreAgent {
rating Float
versions String[]
// Index or unique are not applied to views
// Note: Prisma doesn't support indexes on views, but the following indexes exist in the database:
//
// Optimized indexes (partial indexes to reduce size and improve performance):
// - idx_store_listing_approved on StoreListing(owningUserId) WHERE isDeleted = false AND hasApprovedVersion = true
// - idx_store_listing_version_status on StoreListingVersion(storeListingId) WHERE submissionStatus = 'APPROVED'
// - idx_slv_categories_gin - GIN index on StoreListingVersion(categories) WHERE submissionStatus = 'APPROVED' for array searches
// - idx_slv_agent on StoreListingVersion(agentGraphId, agentGraphVersion) WHERE submissionStatus = 'APPROVED'
// - idx_store_listing_review_version on StoreListingReview(storeListingVersionId)
// - idx_store_listing_version_approved_listing on StoreListingVersion(storeListingId, version) WHERE submissionStatus = 'APPROVED'
// - idx_agent_graph_execution_agent on AgentGraphExecution(agentGraphId)
// - idx_profile_user on Profile(userId)
//
// Additional indexes from earlier migrations:
// - StoreListing_agentId_owningUserId_idx
// - StoreListing_isDeleted_isApproved_idx (replaced by idx_store_listing_approved)
// - StoreListing_isDeleted_idx
// - StoreListing_agentId_key (unique on agentGraphId)
// - StoreListingVersion_agentId_agentVersion_isDeleted_idx
//
// Materialized views used (refreshed every 15 minutes via pg_cron):
// - mv_agent_run_counts - Pre-aggregated agent execution counts by agentGraphId
// - mv_review_stats - Pre-aggregated review statistics (count, avg rating) by storeListingId
//
// Query strategy: Uses CTE for version aggregation and joins with materialized views for performance
}
view StoreSubmission {
@@ -639,6 +688,33 @@ view StoreSubmission {
// Index or unique are not applied to views
}
// Note: This is actually a MATERIALIZED VIEW in the database
// Refreshed automatically every 15 minutes via pg_cron (with fallback to manual refresh)
view mv_agent_run_counts {
agentGraphId String @unique
run_count Int
// Pre-aggregated count of AgentGraphExecution records by agentGraphId
// Used by StoreAgent and Creator views for performance optimization
// Unique index created automatically on agentGraphId for fast lookups
// Refresh uses CONCURRENTLY to avoid blocking reads
}
// Note: This is actually a MATERIALIZED VIEW in the database
// Refreshed automatically every 15 minutes via pg_cron (with fallback to manual refresh)
view mv_review_stats {
storeListingId String @unique
review_count Int
avg_rating Float
// Pre-aggregated review statistics from StoreListingReview
// Includes count of reviews and average rating per StoreListing
// Only includes approved versions (submissionStatus = 'APPROVED') and non-deleted listings
// Used by StoreAgent view for performance optimization
// Unique index created automatically on storeListingId for fast lookups
// Refresh uses CONCURRENTLY to avoid blocking reads
}
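
Per the comments above, pg_cron refreshes these views every 15 minutes with a manual-refresh fallback; what that fallback could look like from the Python backend is sketched below (raw SQL through the Prisma client; the function name is illustrative and the actual fallback code is not shown in this diff):

```
# Sketch: manually refresh both materialized views, mirroring the pg_cron job.
# CONCURRENTLY avoids blocking readers but requires the unique indexes noted
# above and cannot run inside an explicit transaction.
from prisma import Prisma


async def refresh_store_views(db: Prisma) -> None:
    for view in ("mv_agent_run_counts", "mv_review_stats"):
        await db.execute_raw(f'REFRESH MATERIALIZED VIEW CONCURRENTLY "{view}"')
```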
model StoreListing {
id String @id @default(uuid())
createdAt DateTime @default(now())

View File

@@ -7,7 +7,7 @@
"description": "A test graph",
"forked_from_id": null,
"forked_from_version": null,
"has_webhook_trigger": false,
"has_external_trigger": false,
"id": "graph-123",
"input_schema": {
"properties": {},

View File

@@ -8,7 +8,7 @@
"description": "A test graph",
"forked_from_id": null,
"forked_from_version": null,
"has_webhook_trigger": false,
"has_external_trigger": false,
"id": "graph-123",
"input_schema": {
"properties": {},
@@ -16,9 +16,7 @@
"type": "object"
},
"is_active": true,
"links": [],
"name": "Test Graph",
"nodes": [],
"output_schema": {
"properties": {},
"required": [],

View File

@@ -0,0 +1 @@
"""SDK test module."""

Some files were not shown because too many files have changed in this diff.