Mirror of https://github.com/Significant-Gravitas/AutoGPT.git (synced 2026-04-08 03:00:28 -04:00)
Compare commits (14 commits): feat/agent... ... chore/stor
| Author | SHA1 | Date |
|---|---|---|
|  | 20c9277f63 |  |
|  | 0fd3c2daae |  |
|  | 8b0888b5aa |  |
|  | 0961aed731 |  |
|  | b272b79652 |  |
|  | cb9fda0f1d |  |
|  | 1ed1af8ca0 |  |
|  | 0882a277b1 |  |
|  | d29f086dec |  |
|  | 2c6b9c7c27 |  |
|  | 1029ee5c45 |  |
|  | 4e817c8d8a |  |
|  | 7c3e8ec221 |  |
|  | 996103d1e1 |  |
32  .github/workflows/platform-frontend-ci.yml  (vendored)
@@ -22,7 +22,7 @@ jobs:
runs-on: ubuntu-latest
outputs:
cache-key: ${{ steps.cache-key.outputs.key }}

steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -108,14 +108,14 @@ jobs:
run: pnpm install --frozen-lockfile

- name: Run tsc check
run: pnpm type-check
run: pnpm types

chromatic:
runs-on: ubuntu-latest
needs: setup
# Only run on dev branch pushes or PRs targeting dev
if: github.ref == 'refs/heads/dev' || github.base_ref == 'dev'

steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -148,7 +148,27 @@ jobs:
onlyChanged: true
workingDir: autogpt_platform/frontend
token: ${{ secrets.GITHUB_TOKEN }}
exitOnceUploaded: true
buildScriptName: storybook:build

test-unit:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4

- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"

- name: Enable corepack
run: corepack enable

- name: Install dependencies
run: pnpm install --frozen-lockfile

- name: Run unit tests
run: pnpm test:unit

test:
runs-on: ubuntu-latest
@@ -212,9 +232,7 @@ jobs:
run: pnpm playwright install --with-deps ${{ matrix.browser }}

- name: Run Playwright tests
run: pnpm test:no-build --project=${{ matrix.browser }}
env:
BROWSER_TYPE: ${{ matrix.browser }}
run: pnpm playwright test --project=${{ matrix.browser }}

- name: Print Final Docker Compose logs
if: always()
@@ -235,7 +235,7 @@ repos:
hooks:
- id: tsc
name: Typecheck - AutoGPT Platform - Frontend
entry: bash -c 'cd autogpt_platform/frontend && pnpm type-check'
entry: bash -c 'cd autogpt_platform/frontend && pnpm types'
files: ^autogpt_platform/frontend/
types: [file]
language: system
@@ -19,7 +19,7 @@ See `docs/content/platform/getting-started.md` for setup instructions.
## Testing

- Backend: `poetry run test` (runs pytest with a docker based postgres + prisma).
- Frontend: `pnpm test` or `pnpm test-ui` for Playwright tests. See `docs/content/platform/contributing/tests.md` for tips.
- Frontend: `pnpm test` or `pnpm test:ui` for Playwright tests. See `docs/content/platform/contributing/tests.md` for tips.

Always run the relevant linters and tests before committing.
Use conventional commit messages for all commits (e.g. `feat(backend): add API`).
@@ -1,7 +1,8 @@
# AutoGPT: Build, Deploy, and Run AI Agents

[](https://discord.gg/autogpt)
[](https://discord.gg/autogpt)
[](https://twitter.com/Auto_GPT)
[](https://opensource.org/licenses/MIT)

**AutoGPT** is a powerful platform that allows you to create, deploy, and manage continuous AI agents that automate complex workflows.
@@ -1,9 +1,11 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Repository Overview

AutoGPT Platform is a monorepo containing:

- **Backend** (`/backend`): Python FastAPI server with async support
- **Frontend** (`/frontend`): Next.js React application
- **Shared Libraries** (`/autogpt_libs`): Common Python utilities
@@ -11,6 +13,7 @@ AutoGPT Platform is a monorepo containing:
## Essential Commands

### Backend Development

```bash
# Install dependencies
cd backend && poetry install
@@ -35,6 +38,7 @@ poetry run pytest path/to/test_file.py::test_function_name
poetry run format # Black + isort
poetry run lint # ruff
```

More details can be found in TESTING.md

#### Creating/Updating Snapshots
@@ -47,8 +51,8 @@ poetry run pytest path/to/test.py --snapshot-update

⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
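To illustrate the workflow: a snapshot test is usually just an assertion against a snapshot fixture, so a minimal sketch might look like the following (the `snapshot` fixture style is an assumption; the repo's actual fixture and helpers may differ):

```python
# Hypothetical snapshot test, for illustration only.
# `snapshot` is assumed to be a pytest snapshot fixture (e.g. syrupy-style);
# running `pytest --snapshot-update` would regenerate the stored value.
def test_agent_list_shape(snapshot):
    result = {"agents": [], "total": 0}  # stands in for a real API response
    assert result == snapshot
```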
### Frontend Development

```bash
# Install dependencies
cd frontend && npm install
@@ -66,12 +70,13 @@ npm run storybook
npm run build

# Type checking
npm run type-check
npm run types
```

## Architecture Overview

### Backend Architecture

- **API Layer**: FastAPI with REST and WebSocket endpoints
- **Database**: PostgreSQL with Prisma ORM, includes pgvector for embeddings
- **Queue System**: RabbitMQ for async task processing
@@ -80,6 +85,7 @@ npm run type-check
- **Security**: Cache protection middleware prevents sensitive data caching in browsers/proxies

### Frontend Architecture

- **Framework**: Next.js App Router with React Server Components
- **State Management**: React hooks + Supabase client for real-time updates
- **Workflow Builder**: Visual graph editor using @xyflow/react
@@ -87,6 +93,7 @@ npm run type-check
- **Feature Flags**: LaunchDarkly integration

### Key Concepts

1. **Agent Graphs**: Workflow definitions stored as JSON, executed by the backend
2. **Blocks**: Reusable components in `/backend/blocks/` that perform specific tasks
3. **Integrations**: OAuth and API connections stored per user
@@ -94,13 +101,16 @@ npm run type-check
5. **Virus Scanning**: ClamAV integration for file upload security

### Testing Approach

- Backend uses pytest with snapshot testing for API responses
- Test files are colocated with source files (`*_test.py`)
- Frontend uses Playwright for E2E tests
- Component testing via Storybook

### Database Schema

Key models (defined in `/backend/schema.prisma`):

- `User`: Authentication and profile data
- `AgentGraph`: Workflow definitions with version control
- `AgentGraphExecution`: Execution history and results
@@ -108,6 +118,7 @@ Key models (defined in `/backend/schema.prisma`):
- `StoreListing`: Marketplace listings for sharing agents

### Environment Configuration

- Backend: `.env` file in `/backend`
- Frontend: `.env.local` file in `/frontend`
- Both require Supabase credentials and API keys for various services
@@ -115,6 +126,7 @@ Key models (defined in `/backend/schema.prisma`):

### Common Development Tasks

**Adding a new block:**

1. Create new file in `/backend/backend/blocks/`
2. Inherit from `Block` base class
3. Define input/output schemas
@@ -123,12 +135,14 @@ Key models (defined in `/backend/schema.prisma`):
6. Generate the block uuid using `uuid.uuid4()`
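To make these steps concrete, here is a minimal sketch of a new block, modeled on the Exa blocks visible later in this diff; the exact base-class API may vary, the category choice is arbitrary, and the id is a placeholder to be generated with `uuid.uuid4()`:

```python
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField


class EchoBlock(Block):  # hypothetical block, for illustration only
    class Input(BlockSchema):
        text: str = SchemaField(description="Text to echo back")

    class Output(BlockSchema):
        text: str = SchemaField(description="The same text, unchanged")

    def __init__(self):
        super().__init__(
            id="00000000-0000-4000-8000-000000000000",  # replace with uuid.uuid4()
            description="Echoes its input; shows the minimal block structure",
            categories={BlockCategory.BASIC},  # category choice is illustrative
            input_schema=EchoBlock.Input,
            output_schema=EchoBlock.Output,
        )

    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        # Blocks are async generators: yield (output_name, value) pairs.
        yield "text", input_data.text
```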
**Modifying the API:**

1. Update route in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside the route file
4. Run `poetry run test` to verify
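As an illustration of steps 1 and 2, a route and its Pydantic model might look like this minimal sketch (the router module, path, and model are hypothetical, not actual repo code):

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()  # would live under /backend/backend/server/routers/


class AgentSummary(BaseModel):
    """Hypothetical response model, defined next to the route."""

    id: str
    name: str


@router.get("/agents/{agent_id}/summary", response_model=AgentSummary)
async def get_agent_summary(agent_id: str) -> AgentSummary:
    # A real handler would query the database (Prisma) here.
    return AgentSummary(id=agent_id, name="example")
```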
**Frontend feature development:**

1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for new components
@@ -137,6 +151,7 @@ Key models (defined in `/backend/schema.prisma`):

### Security Implementation

**Cache Protection Middleware:**

- Located in `/backend/backend/server/middleware/security.py`
- Default behavior: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses an allow list approach - only explicitly permitted paths can be cached
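The allow-list behavior described above corresponds to a middleware pattern roughly like this Starlette-style sketch (paths and class name are assumptions; the actual `security.py` implementation may differ):

```python
from starlette.middleware.base import BaseHTTPMiddleware

# Assumed allow list; the real permitted paths live in security.py.
CACHEABLE_PATH_PREFIXES = ("/static",)


class CacheProtectionMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        response = await call_next(request)
        # Deny caching by default; only allow-listed prefixes are exempt.
        if not request.url.path.startswith(CACHEABLE_PATH_PREFIXES):
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response
```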
48  autogpt_platform/autogpt_libs/poetry.lock  (generated)
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.

[[package]]
name = "aiohappyeyeballs"
@@ -177,7 +177,7 @@ files = [
{file = "async-timeout-4.0.3.tar.gz", hash = "sha256:4640d96be84d82d02ed59ea2b7105a0f7b33abe8703703cd0ab0bf87c427522f"},
{file = "async_timeout-4.0.3-py3-none-any.whl", hash = "sha256:7405140ff1230c310e51dc27b3145b9092d659ce68ff733fb0cefe3ee42be028"},
]
markers = {main = "python_version < \"3.11\"", dev = "python_full_version < \"3.11.3\""}
markers = {main = "python_version == \"3.10\"", dev = "python_full_version < \"3.11.3\""}

[[package]]
name = "attrs"
@@ -390,7 +390,7 @@ description = "Backport of PEP 654 (exception groups)"
optional = false
python-versions = ">=3.7"
groups = ["main"]
markers = "python_version < \"3.11\""
markers = "python_version == \"3.10\""
files = [
{file = "exceptiongroup-1.2.2-py3-none-any.whl", hash = "sha256:3111b9d131c238bec2f8f516e123e14ba243563fb135d3fe885990585aa7795b"},
{file = "exceptiongroup-1.2.2.tar.gz", hash = "sha256:47c2edf7c6738fafb49fd34290706d1a1a2f4d1c6df275526b62cbb4aa5393cc"},
@@ -1667,30 +1667,30 @@ pyasn1 = ">=0.1.3"

[[package]]
name = "ruff"
version = "0.12.2"
version = "0.11.10"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.12.2-py3-none-linux_armv6l.whl", hash = "sha256:093ea2b221df1d2b8e7ad92fc6ffdca40a2cb10d8564477a987b44fd4008a7be"},
{file = "ruff-0.12.2-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:09e4cf27cc10f96b1708100fa851e0daf21767e9709e1649175355280e0d950e"},
{file = "ruff-0.12.2-py3-none-macosx_11_0_arm64.whl", hash = "sha256:8ae64755b22f4ff85e9c52d1f82644abd0b6b6b6deedceb74bd71f35c24044cc"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3eb3a6b2db4d6e2c77e682f0b988d4d61aff06860158fdb413118ca133d57922"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:73448de992d05517170fc37169cbca857dfeaeaa8c2b9be494d7bcb0d36c8f4b"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3b8b94317cbc2ae4a2771af641739f933934b03555e51515e6e021c64441532d"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:45fc42c3bf1d30d2008023a0a9a0cfb06bf9835b147f11fe0679f21ae86d34b1"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ce48f675c394c37e958bf229fb5c1e843e20945a6d962cf3ea20b7a107dcd9f4"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:793d8859445ea47591272021a81391350205a4af65a9392401f418a95dfb75c9"},
{file = "ruff-0.12.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6932323db80484dda89153da3d8e58164d01d6da86857c79f1961934354992da"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:6aa7e623a3a11538108f61e859ebf016c4f14a7e6e4eba1980190cacb57714ce"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:2a4a20aeed74671b2def096bdf2eac610c7d8ffcbf4fb0e627c06947a1d7078d"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_i686.whl", hash = "sha256:71a4c550195612f486c9d1f2b045a600aeba851b298c667807ae933478fcef04"},
{file = "ruff-0.12.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:4987b8f4ceadf597c927beee65a5eaf994c6e2b631df963f86d8ad1bdea99342"},
{file = "ruff-0.12.2-py3-none-win32.whl", hash = "sha256:369ffb69b70cd55b6c3fc453b9492d98aed98062db9fec828cdfd069555f5f1a"},
{file = "ruff-0.12.2-py3-none-win_amd64.whl", hash = "sha256:dca8a3b6d6dc9810ed8f328d406516bf4d660c00caeaef36eb831cf4871b0639"},
{file = "ruff-0.12.2-py3-none-win_arm64.whl", hash = "sha256:48d6c6bfb4761df68bc05ae630e24f506755e702d4fb08f08460be778c7ccb12"},
{file = "ruff-0.12.2.tar.gz", hash = "sha256:d7b4f55cd6f325cb7621244f19c873c565a08aff5a4ba9c69aa7355f3f7afd3e"},
{file = "ruff-0.11.10-py3-none-linux_armv6l.whl", hash = "sha256:859a7bfa7bc8888abbea31ef8a2b411714e6a80f0d173c2a82f9041ed6b50f58"},
{file = "ruff-0.11.10-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:968220a57e09ea5e4fd48ed1c646419961a0570727c7e069842edd018ee8afed"},
{file = "ruff-0.11.10-py3-none-macosx_11_0_arm64.whl", hash = "sha256:1067245bad978e7aa7b22f67113ecc6eb241dca0d9b696144256c3a879663bca"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f4854fd09c7aed5b1590e996a81aeff0c9ff51378b084eb5a0b9cd9518e6cff2"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8b4564e9f99168c0f9195a0fd5fa5928004b33b377137f978055e40008a082c5"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b6a9cc5b62c03cc1fea0044ed8576379dbaf751d5503d718c973d5418483641"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:607ecbb6f03e44c9e0a93aedacb17b4eb4f3563d00e8b474298a201622677947"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7b3a522fa389402cd2137df9ddefe848f727250535c70dafa840badffb56b7a4"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2f071b0deed7e9245d5820dac235cbdd4ef99d7b12ff04c330a241ad3534319f"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4a60e3a0a617eafba1f2e4186d827759d65348fa53708ca547e384db28406a0b"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:da8ec977eaa4b7bf75470fb575bea2cb41a0e07c7ea9d5a0a97d13dbca697bf2"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:ddf8967e08227d1bd95cc0851ef80d2ad9c7c0c5aab1eba31db49cf0a7b99523"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_i686.whl", hash = "sha256:5a94acf798a82db188f6f36575d80609072b032105d114b0f98661e1679c9125"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:3afead355f1d16d95630df28d4ba17fb2cb9c8dfac8d21ced14984121f639bad"},
{file = "ruff-0.11.10-py3-none-win32.whl", hash = "sha256:dc061a98d32a97211af7e7f3fa1d4ca2fcf919fb96c28f39551f35fc55bdbc19"},
{file = "ruff-0.11.10-py3-none-win_amd64.whl", hash = "sha256:5cc725fbb4d25b0f185cb42df07ab6b76c4489b4bfb740a175f3a59c70e8a224"},
{file = "ruff-0.11.10-py3-none-win_arm64.whl", hash = "sha256:ef69637b35fb8b210743926778d0e45e1bffa850a7c61e428c6b971549b5f5d1"},
{file = "ruff-0.11.10.tar.gz", hash = "sha256:d522fb204b4959909ecac47da02830daec102eeb100fb50ea9554818d47a5fa6"},
]

[[package]]
@@ -1823,7 +1823,7 @@ description = "A lil' TOML parser"
optional = false
python-versions = ">=3.8"
groups = ["main"]
markers = "python_version < \"3.11\""
markers = "python_version == \"3.10\""
files = [
{file = "tomli-2.1.0-py3-none-any.whl", hash = "sha256:a5c57c3d1c56f5ccdf89f6523458f60ef716e210fc47c4cfb188c5ba473e0391"},
{file = "tomli-2.1.0.tar.gz", hash = "sha256:3f646cae2aec94e17d04973e4249548320197cfabdf130015d023de4b74d8ab8"},
@@ -2176,4 +2176,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<4.0"
content-hash = "574057127b05f28c2ae39f7b11aa0d7c52f857655e9223e23a27c9989b2ac10f"
content-hash = "d92143928a88ca3a56ac200c335910eafac938940022fed8bd0d17c95040b54f"
@@ -23,7 +23,7 @@ uvicorn = "^0.34.3"

[tool.poetry.group.dev.dependencies]
redis = "^5.2.1"
ruff = "^0.12.2"
ruff = "^0.11.10"

[build-system]
requires = ["poetry-core"]
@@ -199,18 +199,9 @@ ZEROBOUNCE_API_KEY=

## ===== OPTIONAL API KEYS END ===== ##

# Block Error Rate Monitoring
BLOCK_ERROR_RATE_THRESHOLD=0.5
BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS=86400

# Logging Configuration
LOG_LEVEL=INFO
ENABLE_CLOUD_LOGGING=false
ENABLE_FILE_LOGGING=false
# Use to manually set the log directory
# LOG_DIR=./logs

# Example Blocks Configuration
# Set to true to enable example blocks in development
# These blocks are disabled by default in production
ENABLE_EXAMPLE_BLOCKS=false
@@ -1,150 +0,0 @@
# Test Data Scripts

This directory contains scripts for creating and updating test data in the AutoGPT Platform database, specifically designed to test the materialized views for the store functionality.

## Scripts

### test_data_creator.py
Creates a comprehensive set of test data including:
- Users with profiles
- Agent graphs, nodes, and executions
- Store listings with multiple versions
- Reviews and ratings
- Library agents
- Integration webhooks
- Onboarding data
- Credit transactions

**Image/Video Domains Used:**
- Images: `picsum.photos` (for all image URLs)
- Videos: `youtube.com` (for store listing videos)

### test_data_updater.py
Updates existing test data to simulate real-world changes:
- Adds new agent graph executions
- Creates new store listing reviews
- Updates store listing versions
- Adds credit transactions
- Refreshes materialized views

### check_db.py
Tests and verifies materialized views functionality:
- Checks pg_cron job status (for automatic refresh)
- Displays current materialized view counts
- Adds test data (executions and reviews)
- Creates store listings if none exist
- Manually refreshes materialized views
- Compares before/after counts to verify updates
- Provides a summary of test results

## Materialized Views

The scripts test three key database views:

1. **mv_agent_run_counts**: Tracks execution counts by agent
2. **mv_review_stats**: Tracks review statistics (count, average rating) by store listing
3. **StoreAgent**: A view that combines store listing data with execution counts and ratings for display

The materialized views (mv_agent_run_counts and mv_review_stats) are automatically refreshed every 15 minutes via pg_cron, or can be manually refreshed using the `refresh_store_materialized_views()` function.

## Usage

### Prerequisites

1. Ensure the database is running:
```bash
docker compose up -d
# or for test database:
docker compose -f docker-compose.test.yaml --env-file ../.env up -d
```

2. Run database migrations:
```bash
poetry run prisma migrate deploy
```

### Running the Scripts

#### Option 1: Use the helper script (from backend directory)
```bash
poetry run python run_test_data.py
```

#### Option 2: Run individually
```bash
# From backend/test directory:
# Create initial test data
poetry run python test_data_creator.py

# Update data to test materialized view changes
poetry run python test_data_updater.py

# From backend directory:
# Test materialized views functionality
poetry run python check_db.py

# Check store data status
poetry run python check_store_data.py
```

#### Option 3: Use the shell script (from backend directory)
```bash
./run_test_data_scripts.sh
```

### Manual Materialized View Refresh

To manually refresh the materialized views:
```sql
SELECT refresh_store_materialized_views();
```
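The scripts trigger the same refresh programmatically; with Prisma Client Python that could look roughly like this sketch (not the scripts' actual code):

```python
import asyncio

from prisma import Prisma  # Prisma Client Python


async def refresh_views() -> None:
    db = Prisma()
    await db.connect()
    # Equivalent to running the SQL statement above from a script.
    await db.execute_raw("SELECT refresh_store_materialized_views();")
    await db.disconnect()


if __name__ == "__main__":
    asyncio.run(refresh_views())
```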
## Configuration

The scripts use the database configuration from your `.env` file:
- `DATABASE_URL`: PostgreSQL connection string
- Database should have the platform schema

## Data Generation Limits

Configured in `test_data_creator.py`:
- 100 users
- 100 agent blocks
- 1-5 graphs per user
- 2-5 nodes per graph
- 1-5 presets per user
- 1-10 library agents per user
- 1-20 executions per graph
- 1-5 reviews per store listing version

## Notes

- All image URLs use `picsum.photos` for consistency with Next.js image configuration
- The scripts create realistic relationships between entities
- Materialized views are refreshed at the end of each script
- Data is designed to test both happy paths and edge cases

## Troubleshooting

### Reviews and StoreAgent view showing 0

If `check_db.py` shows that reviews remain at 0 and StoreAgent view shows 0 store agents:

1. **No store listings exist**: The script will automatically create test store listings if none exist
2. **No approved versions**: Store listings need approved versions to appear in the StoreAgent view
3. **Check with `check_store_data.py`**: This script provides detailed information about:
   - Total store listings
   - Store listing versions by status
   - Existing reviews
   - StoreAgent view contents
   - Agent graph executions

### pg_cron not installed

The warning "pg_cron extension is not installed" is normal in local development environments. The materialized views can still be refreshed manually using the `refresh_store_materialized_views()` function, which all scripts do automatically.

### Common Issues

- **Type errors with None values**: Fixed in the latest version of check_db.py by using `or 0` for nullable numeric fields
- **Missing relations**: Ensure you're using the correct field names (e.g., `StoreListing` not `storeListing` in includes)
- **Column name mismatches**: The database uses camelCase for column names (e.g., `agentGraphId` not `agent_graph_id`)
@@ -14,27 +14,14 @@ T = TypeVar("T")
@functools.cache
def load_all_blocks() -> dict[str, type["Block"]]:
    from backend.data.block import Block
    from backend.util.settings import Config

    # Check if example blocks should be loaded from settings
    config = Config()
    load_examples = config.enable_example_blocks

    # Dynamically load all modules under backend.blocks
    current_dir = Path(__file__).parent
    modules = []
    for f in current_dir.rglob("*.py"):
        if not f.is_file() or f.name == "__init__.py" or f.name.startswith("test_"):
            continue

        # Skip examples directory if not enabled
        relative_path = f.relative_to(current_dir)
        if not load_examples and relative_path.parts[0] == "examples":
            continue

        module_path = str(relative_path)[:-3].replace(os.path.sep, ".")
        modules.append(module_path)

    modules = [
        str(f.relative_to(current_dir))[:-3].replace(os.path.sep, ".")
        for f in current_dir.rglob("*.py")
        if f.is_file() and f.name != "__init__.py" and not f.name.startswith("test_")
    ]
    for module in modules:
        if not re.match("^[a-z0-9_.]+$", module):
            raise ValueError(
@@ -14,7 +14,7 @@ from backend.data.block import (
    get_block,
)
from backend.data.execution import ExecutionStatus
from backend.data.model import NodeExecutionStats, SchemaField
from backend.data.model import SchemaField
from backend.util import json, retry

_logger = logging.getLogger(__name__)
@@ -151,12 +151,6 @@ class AgentExecutorBlock(Block):
            if event.event_type == ExecutionEventType.GRAPH_EXEC_UPDATE:
                # If the graph execution is COMPLETED, TERMINATED, or FAILED,
                # we can stop listening for further events.
                self.merge_stats(
                    NodeExecutionStats(
                        extra_cost=event.stats.cost if event.stats else 0,
                        extra_steps=event.stats.node_exec_count if event.stats else 0,
                    )
                )
                break

            logger.debug(
32  autogpt_platform/backend/backend/blocks/exa/_auth.py  (Normal file)
@@ -0,0 +1,32 @@
from typing import Literal

from pydantic import SecretStr

from backend.data.model import APIKeyCredentials, CredentialsField, CredentialsMetaInput
from backend.integrations.providers import ProviderName

ExaCredentials = APIKeyCredentials
ExaCredentialsInput = CredentialsMetaInput[
    Literal[ProviderName.EXA],
    Literal["api_key"],
]

TEST_CREDENTIALS = APIKeyCredentials(
    id="01234567-89ab-cdef-0123-456789abcdef",
    provider="exa",
    api_key=SecretStr("mock-exa-api-key"),
    title="Mock Exa API key",
    expires_at=None,
)

TEST_CREDENTIALS_INPUT = {
    "provider": TEST_CREDENTIALS.provider,
    "id": TEST_CREDENTIALS.id,
    "type": TEST_CREDENTIALS.type,
    "title": TEST_CREDENTIALS.title,
}


def ExaCredentialsField() -> ExaCredentialsInput:
    """Creates an Exa credentials input on a block."""
    return CredentialsField(description="The Exa integration requires an API Key.")
@@ -1,16 +0,0 @@
"""
Shared configuration for all Exa blocks using the new SDK pattern.
"""

from backend.sdk import BlockCostType, ProviderBuilder

from ._webhook import ExaWebhookManager

# Configure the Exa provider once for all blocks
exa = (
    ProviderBuilder("exa")
    .with_api_key("EXA_API_KEY", "Exa API Key")
    .with_webhook_manager(ExaWebhookManager)
    .with_base_cost(1, BlockCostType.RUN)
    .build()
)
@@ -1,134 +0,0 @@
"""
Exa Webhook Manager implementation.
"""

import hashlib
import hmac
from enum import Enum

from backend.data.model import Credentials
from backend.sdk import (
    APIKeyCredentials,
    BaseWebhooksManager,
    ProviderName,
    Requests,
    Webhook,
)


class ExaWebhookType(str, Enum):
    """Available webhook types for Exa."""

    WEBSET = "webset"


class ExaEventType(str, Enum):
    """Available event types for Exa webhooks."""

    WEBSET_CREATED = "webset.created"
    WEBSET_DELETED = "webset.deleted"
    WEBSET_PAUSED = "webset.paused"
    WEBSET_IDLE = "webset.idle"
    WEBSET_SEARCH_CREATED = "webset.search.created"
    WEBSET_SEARCH_CANCELED = "webset.search.canceled"
    WEBSET_SEARCH_COMPLETED = "webset.search.completed"
    WEBSET_SEARCH_UPDATED = "webset.search.updated"
    IMPORT_CREATED = "import.created"
    IMPORT_COMPLETED = "import.completed"
    IMPORT_PROCESSING = "import.processing"
    WEBSET_ITEM_CREATED = "webset.item.created"
    WEBSET_ITEM_ENRICHED = "webset.item.enriched"
    WEBSET_EXPORT_CREATED = "webset.export.created"
    WEBSET_EXPORT_COMPLETED = "webset.export.completed"


class ExaWebhookManager(BaseWebhooksManager):
    """Webhook manager for Exa API."""

    PROVIDER_NAME = ProviderName("exa")

    class WebhookType(str, Enum):
        WEBSET = "webset"

    @classmethod
    async def validate_payload(cls, webhook: Webhook, request) -> tuple[dict, str]:
        """Validate incoming webhook payload and signature."""
        payload = await request.json()

        # Get event type from payload
        event_type = payload.get("eventType", "unknown")

        # Verify webhook signature if secret is available
        if webhook.secret:
            signature = request.headers.get("X-Exa-Signature")
            if signature:
                # Compute expected signature
                body = await request.body()
                expected_signature = hmac.new(
                    webhook.secret.encode(), body, hashlib.sha256
                ).hexdigest()

                # Compare signatures
                if not hmac.compare_digest(signature, expected_signature):
                    raise ValueError("Invalid webhook signature")

        return payload, event_type

    async def _register_webhook(
        self,
        credentials: Credentials,
        webhook_type: str,
        resource: str,
        events: list[str],
        ingress_url: str,
        secret: str,
    ) -> tuple[str, dict]:
        """Register webhook with Exa API."""
        if not isinstance(credentials, APIKeyCredentials):
            raise ValueError("Exa webhooks require API key credentials")
        api_key = credentials.api_key.get_secret_value()

        # Create webhook via Exa API
        response = await Requests().post(
            "https://api.exa.ai/v0/webhooks",
            headers={"x-api-key": api_key},
            json={
                "url": ingress_url,
                "events": events,
                "metadata": {
                    "resource": resource,
                    "webhook_type": webhook_type,
                },
            },
        )

        if not response.ok:
            error_data = response.json()
            raise Exception(f"Failed to create Exa webhook: {error_data}")

        webhook_data = response.json()

        # Store the secret returned by Exa
        return webhook_data["id"], {
            "events": events,
            "resource": resource,
            "exa_secret": webhook_data.get("secret"),
        }

    async def _deregister_webhook(
        self, webhook: Webhook, credentials: Credentials
    ) -> None:
        """Deregister webhook from Exa API."""
        if not isinstance(credentials, APIKeyCredentials):
            raise ValueError("Exa webhooks require API key credentials")
        api_key = credentials.api_key.get_secret_value()

        # Delete webhook via Exa API
        response = await Requests().delete(
            f"https://api.exa.ai/v0/webhooks/{webhook.provider_webhook_id}",
            headers={"x-api-key": api_key},
        )

        if not response.ok and response.status != 404:
            error_data = response.json()
            raise Exception(f"Failed to delete Exa webhook: {error_data}")
@@ -1,124 +0,0 @@
from backend.sdk import (
    APIKeyCredentials,
    BaseModel,
    Block,
    BlockCategory,
    BlockOutput,
    BlockSchema,
    CredentialsMetaInput,
    Requests,
    SchemaField,
)

from ._config import exa


class CostBreakdown(BaseModel):
    keywordSearch: float
    neuralSearch: float
    contentText: float
    contentHighlight: float
    contentSummary: float


class SearchBreakdown(BaseModel):
    search: float
    contents: float
    breakdown: CostBreakdown


class PerRequestPrices(BaseModel):
    neuralSearch_1_25_results: float
    neuralSearch_26_100_results: float
    neuralSearch_100_plus_results: float
    keywordSearch_1_100_results: float
    keywordSearch_100_plus_results: float


class PerPagePrices(BaseModel):
    contentText: float
    contentHighlight: float
    contentSummary: float


class CostDollars(BaseModel):
    total: float
    breakDown: list[SearchBreakdown]
    perRequestPrices: PerRequestPrices
    perPagePrices: PerPagePrices


class ExaAnswerBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="The Exa integration requires an API Key."
        )
        query: str = SchemaField(
            description="The question or query to answer",
            placeholder="What is the latest valuation of SpaceX?",
        )
        text: bool = SchemaField(
            default=False,
            description="If true, the response includes full text content in the search results",
            advanced=True,
        )
        model: str = SchemaField(
            default="exa",
            description="The search model to use (exa or exa-pro)",
            placeholder="exa",
            advanced=True,
        )

    class Output(BlockSchema):
        answer: str = SchemaField(
            description="The generated answer based on search results"
        )
        citations: list[dict] = SchemaField(
            description="Search results used to generate the answer",
            default_factory=list,
        )
        cost_dollars: CostDollars = SchemaField(
            description="Cost breakdown of the request"
        )
        error: str = SchemaField(
            description="Error message if the request failed", default=""
        )

    def __init__(self):
        super().__init__(
            id="b79ca4cc-9d5e-47d1-9d4f-e3a2d7f28df5",
            description="Get an LLM answer to a question informed by Exa search results",
            categories={BlockCategory.SEARCH, BlockCategory.AI},
            input_schema=ExaAnswerBlock.Input,
            output_schema=ExaAnswerBlock.Output,
        )

    async def run(
        self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
    ) -> BlockOutput:
        url = "https://api.exa.ai/answer"
        headers = {
            "Content-Type": "application/json",
            "x-api-key": credentials.api_key.get_secret_value(),
        }

        # Build the payload
        payload = {
            "query": input_data.query,
            "text": input_data.text,
            "model": input_data.model,
        }

        try:
            response = await Requests().post(url, headers=headers, json=payload)
            data = response.json()

            yield "answer", data.get("answer", "")
            yield "citations", data.get("citations", [])
            yield "cost_dollars", data.get("costDollars", {})

        except Exception as e:
            yield "error", str(e)
            yield "answer", ""
            yield "citations", []
            yield "cost_dollars", {}
@@ -1,39 +1,57 @@
from backend.sdk import (
    APIKeyCredentials,
    Block,
    BlockCategory,
    BlockOutput,
    BlockSchema,
    CredentialsMetaInput,
    Requests,
    SchemaField,
)
from typing import List

from ._config import exa
from .helpers import ContentSettings
from pydantic import BaseModel

from backend.blocks.exa._auth import (
    ExaCredentials,
    ExaCredentialsField,
    ExaCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import Requests


class ContentRetrievalSettings(BaseModel):
    text: dict = SchemaField(
        description="Text content settings",
        default={"maxCharacters": 1000, "includeHtmlTags": False},
        advanced=True,
    )
    highlights: dict = SchemaField(
        description="Highlight settings",
        default={
            "numSentences": 3,
            "highlightsPerUrl": 3,
            "query": "",
        },
        advanced=True,
    )
    summary: dict = SchemaField(
        description="Summary settings",
        default={"query": ""},
        advanced=True,
    )


class ExaContentsBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="The Exa integration requires an API Key."
        credentials: ExaCredentialsInput = ExaCredentialsField()
        ids: List[str] = SchemaField(
            description="Array of document IDs obtained from searches",
        )
        ids: list[str] = SchemaField(
            description="Array of document IDs obtained from searches"
        )
        contents: ContentSettings = SchemaField(
        contents: ContentRetrievalSettings = SchemaField(
            description="Content retrieval settings",
            default=ContentSettings(),
            default=ContentRetrievalSettings(),
            advanced=True,
        )

    class Output(BlockSchema):
        results: list = SchemaField(
            description="List of document contents", default_factory=list
        )
        error: str = SchemaField(
            description="Error message if the request failed", default=""
            description="List of document contents",
            default_factory=list,
        )
        error: str = SchemaField(description="Error message if the request failed")

    def __init__(self):
        super().__init__(
@@ -45,7 +63,7 @@ class ExaContentsBlock(Block):
        )

    async def run(
        self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
        self, input_data: Input, *, credentials: ExaCredentials, **kwargs
    ) -> BlockOutput:
        url = "https://api.exa.ai/contents"
        headers = {
@@ -53,7 +71,6 @@ class ExaContentsBlock(Block):
            "x-api-key": credentials.api_key.get_secret_value(),
        }

        # Convert ContentSettings to API format
        payload = {
            "ids": input_data.ids,
            "text": input_data.contents.text,
@@ -1,6 +1,8 @@
from typing import Optional

from backend.sdk import BaseModel, SchemaField
from pydantic import BaseModel

from backend.data.model import SchemaField


class TextSettings(BaseModel):
@@ -40,90 +42,13 @@ class SummarySettings(BaseModel):
class ContentSettings(BaseModel):
    text: TextSettings = SchemaField(
        default=TextSettings(),
        description="Text content settings",
    )
    highlights: HighlightSettings = SchemaField(
        default=HighlightSettings(),
        description="Highlight settings",
    )
    summary: SummarySettings = SchemaField(
        default=SummarySettings(),
    )


# Websets Models
class WebsetEntitySettings(BaseModel):
    type: Optional[str] = SchemaField(
        default=None,
        description="Entity type (e.g., 'company', 'person')",
        placeholder="company",
    )


class WebsetCriterion(BaseModel):
    description: str = SchemaField(
        description="Description of the criterion",
        placeholder="Must be based in the US",
    )
    success_rate: Optional[int] = SchemaField(
        default=None,
        description="Success rate percentage",
        ge=0,
        le=100,
    )


class WebsetSearchConfig(BaseModel):
    query: str = SchemaField(
        description="Search query",
        placeholder="Marketing agencies based in the US",
    )
    count: int = SchemaField(
        default=10,
        description="Number of results to return",
        ge=1,
        le=100,
    )
    entity: Optional[WebsetEntitySettings] = SchemaField(
        default=None,
        description="Entity settings for the search",
    )
    criteria: Optional[list[WebsetCriterion]] = SchemaField(
        default=None,
        description="Search criteria",
    )
    behavior: Optional[str] = SchemaField(
        default="override",
        description="Behavior when updating results ('override' or 'append')",
        placeholder="override",
    )


class EnrichmentOption(BaseModel):
    label: str = SchemaField(
        description="Label for the enrichment option",
        placeholder="Option 1",
    )


class WebsetEnrichmentConfig(BaseModel):
    title: str = SchemaField(
        description="Title of the enrichment",
        placeholder="Company Details",
    )
    description: str = SchemaField(
        description="Description of what this enrichment does",
        placeholder="Extract company information",
    )
    format: str = SchemaField(
        default="text",
        description="Format of the enrichment result",
        placeholder="text",
    )
    instructions: Optional[str] = SchemaField(
        default=None,
        description="Instructions for the enrichment",
        placeholder="Extract key company metrics",
    )
    options: Optional[list[EnrichmentOption]] = SchemaField(
        default=None,
        description="Options for the enrichment",
        description="Summary settings",
    )
@@ -1,61 +1,71 @@
from datetime import datetime
from typing import List

from backend.sdk import (
    APIKeyCredentials,
    Block,
    BlockCategory,
    BlockOutput,
    BlockSchema,
    CredentialsMetaInput,
    Requests,
    SchemaField,
from backend.blocks.exa._auth import (
    ExaCredentials,
    ExaCredentialsField,
    ExaCredentialsInput,
)

from ._config import exa
from .helpers import ContentSettings
from backend.blocks.exa.helpers import ContentSettings
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import Requests


class ExaSearchBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="The Exa integration requires an API Key."
        )
        credentials: ExaCredentialsInput = ExaCredentialsField()
        query: str = SchemaField(description="The search query")
        use_auto_prompt: bool = SchemaField(
            description="Whether to use autoprompt", default=True, advanced=True
            description="Whether to use autoprompt",
            default=True,
            advanced=True,
        )
        type: str = SchemaField(
            description="Type of search",
            default="",
            advanced=True,
        )
        type: str = SchemaField(description="Type of search", default="", advanced=True)
        category: str = SchemaField(
            description="Category to search within", default="", advanced=True
            description="Category to search within",
            default="",
            advanced=True,
        )
        number_of_results: int = SchemaField(
            description="Number of results to return", default=10, advanced=True
            description="Number of results to return",
            default=10,
            advanced=True,
        )
        include_domains: list[str] = SchemaField(
            description="Domains to include in search", default_factory=list
        include_domains: List[str] = SchemaField(
            description="Domains to include in search",
            default_factory=list,
        )
        exclude_domains: list[str] = SchemaField(
        exclude_domains: List[str] = SchemaField(
            description="Domains to exclude from search",
            default_factory=list,
            advanced=True,
        )
        start_crawl_date: datetime = SchemaField(
            description="Start date for crawled content"
            description="Start date for crawled content",
        )
        end_crawl_date: datetime = SchemaField(
            description="End date for crawled content"
            description="End date for crawled content",
        )
        start_published_date: datetime = SchemaField(
            description="Start date for published content"
            description="Start date for published content",
        )
        end_published_date: datetime = SchemaField(
            description="End date for published content"
            description="End date for published content",
        )
        include_text: list[str] = SchemaField(
            description="Text patterns to include", default_factory=list, advanced=True
        include_text: List[str] = SchemaField(
            description="Text patterns to include",
            default_factory=list,
            advanced=True,
        )
        exclude_text: list[str] = SchemaField(
            description="Text patterns to exclude", default_factory=list, advanced=True
        exclude_text: List[str] = SchemaField(
            description="Text patterns to exclude",
            default_factory=list,
            advanced=True,
        )
        contents: ContentSettings = SchemaField(
            description="Content retrieval settings",
@@ -65,7 +75,8 @@ class ExaSearchBlock(Block):

    class Output(BlockSchema):
        results: list = SchemaField(
            description="List of search results", default_factory=list
            description="List of search results",
            default_factory=list,
        )
        error: str = SchemaField(
            description="Error message if the request failed",
@@ -81,7 +92,7 @@ class ExaSearchBlock(Block):
        )

    async def run(
        self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
        self, input_data: Input, *, credentials: ExaCredentials, **kwargs
    ) -> BlockOutput:
        url = "https://api.exa.ai/search"
        headers = {
@@ -93,7 +104,7 @@ class ExaSearchBlock(Block):
            "query": input_data.query,
            "useAutoprompt": input_data.use_auto_prompt,
            "numResults": input_data.number_of_results,
            "contents": input_data.contents.model_dump(),
            "contents": input_data.contents.dict(),
        }

        date_field_mapping = {
@@ -1,60 +1,57 @@
from datetime import datetime
from typing import Any
from typing import Any, List

from backend.sdk import (
    APIKeyCredentials,
    Block,
    BlockCategory,
    BlockOutput,
    BlockSchema,
    CredentialsMetaInput,
    Requests,
    SchemaField,
from backend.blocks.exa._auth import (
    ExaCredentials,
    ExaCredentialsField,
    ExaCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import Requests

from ._config import exa
from .helpers import ContentSettings


class ExaFindSimilarBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="The Exa integration requires an API Key."
        )
        credentials: ExaCredentialsInput = ExaCredentialsField()
        url: str = SchemaField(
            description="The url for which you would like to find similar links"
        )
        number_of_results: int = SchemaField(
            description="Number of results to return", default=10, advanced=True
            description="Number of results to return",
            default=10,
            advanced=True,
        )
        include_domains: list[str] = SchemaField(
        include_domains: List[str] = SchemaField(
            description="Domains to include in search",
            default_factory=list,
            advanced=True,
        )
        exclude_domains: list[str] = SchemaField(
        exclude_domains: List[str] = SchemaField(
            description="Domains to exclude from search",
            default_factory=list,
            advanced=True,
        )
        start_crawl_date: datetime = SchemaField(
            description="Start date for crawled content"
            description="Start date for crawled content",
        )
        end_crawl_date: datetime = SchemaField(
            description="End date for crawled content"
            description="End date for crawled content",
        )
        start_published_date: datetime = SchemaField(
            description="Start date for published content"
            description="Start date for published content",
        )
        end_published_date: datetime = SchemaField(
            description="End date for published content"
            description="End date for published content",
        )
        include_text: list[str] = SchemaField(
        include_text: List[str] = SchemaField(
            description="Text patterns to include (max 1 string, up to 5 words)",
            default_factory=list,
            advanced=True,
        )
        exclude_text: list[str] = SchemaField(
        exclude_text: List[str] = SchemaField(
            description="Text patterns to exclude (max 1 string, up to 5 words)",
            default_factory=list,
            advanced=True,
@@ -66,13 +63,11 @@ class ExaFindSimilarBlock(Block):
        )

    class Output(BlockSchema):
        results: list[Any] = SchemaField(
        results: List[Any] = SchemaField(
            description="List of similar documents with title, URL, published date, author, and score",
            default_factory=list,
        )
        error: str = SchemaField(
            description="Error message if the request failed", default=""
        )
        error: str = SchemaField(description="Error message if the request failed")

    def __init__(self):
        super().__init__(
@@ -84,7 +79,7 @@ class ExaFindSimilarBlock(Block):
        )

    async def run(
        self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
        self, input_data: Input, *, credentials: ExaCredentials, **kwargs
    ) -> BlockOutput:
        url = "https://api.exa.ai/findSimilar"
        headers = {
@@ -95,7 +90,7 @@ class ExaFindSimilarBlock(Block):
        payload = {
            "url": input_data.url,
            "numResults": input_data.number_of_results,
            "contents": input_data.contents.model_dump(),
            "contents": input_data.contents.dict(),
        }

        optional_field_mapping = {
@@ -1,201 +0,0 @@
|
||||
"""
|
||||
Exa Webhook Blocks
|
||||
|
||||
These blocks handle webhook events from Exa's API for websets and other events.
|
||||
"""
|
||||
|
||||
from backend.sdk import (
|
||||
BaseModel,
|
||||
Block,
|
||||
BlockCategory,
|
||||
BlockOutput,
|
||||
BlockSchema,
|
||||
BlockType,
|
||||
BlockWebhookConfig,
|
||||
CredentialsMetaInput,
|
||||
Field,
|
||||
ProviderName,
|
||||
SchemaField,
|
||||
)
|
||||
|
||||
from ._config import exa
|
||||
from ._webhook import ExaEventType
|
||||
|
||||
|
||||
class WebsetEventFilter(BaseModel):
|
||||
"""Filter configuration for Exa webset events."""
|
||||
|
||||
webset_created: bool = Field(
|
||||
default=True, description="Receive notifications when websets are created"
|
||||
)
|
||||
webset_deleted: bool = Field(
|
||||
default=False, description="Receive notifications when websets are deleted"
|
||||
)
|
||||
webset_paused: bool = Field(
|
||||
default=False, description="Receive notifications when websets are paused"
|
||||
)
|
||||
webset_idle: bool = Field(
|
||||
default=False, description="Receive notifications when websets become idle"
|
||||
)
|
||||
search_created: bool = Field(
|
||||
default=True,
|
||||
description="Receive notifications when webset searches are created",
|
||||
)
|
||||
search_completed: bool = Field(
|
||||
default=True, description="Receive notifications when webset searches complete"
|
||||
)
|
||||
search_canceled: bool = Field(
|
||||
default=False,
|
||||
description="Receive notifications when webset searches are canceled",
|
||||
)
|
||||
search_updated: bool = Field(
|
||||
default=False,
|
||||
description="Receive notifications when webset searches are updated",
|
||||
)
|
||||
item_created: bool = Field(
|
||||
default=True, description="Receive notifications when webset items are created"
|
||||
)
|
||||
item_enriched: bool = Field(
|
||||
default=True, description="Receive notifications when webset items are enriched"
|
||||
)
|
||||
export_created: bool = Field(
|
||||
default=False,
|
||||
description="Receive notifications when webset exports are created",
|
||||
)
|
||||
export_completed: bool = Field(
|
||||
default=True, description="Receive notifications when webset exports complete"
|
||||
)
|
||||
import_created: bool = Field(
|
||||
default=False, description="Receive notifications when imports are created"
|
||||
)
|
||||
import_completed: bool = Field(
|
||||
default=True, description="Receive notifications when imports complete"
|
||||
)
|
||||
import_processing: bool = Field(
|
        default=False, description="Receive notifications when imports are processing"
    )


class ExaWebsetWebhookBlock(Block):
    """
    Receives webhook notifications for Exa webset events.

    This block allows you to monitor various events related to Exa websets,
    including creation, updates, searches, and exports.
    """

    class Input(BlockSchema):
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="Exa API credentials for webhook management"
        )
        webhook_url: str = SchemaField(
            description="URL to receive webhooks (auto-generated)",
            default="",
            hidden=True,
        )
        webset_id: str = SchemaField(
            description="The webset ID to monitor (optional, monitors all if empty)",
            default="",
        )
        event_filter: WebsetEventFilter = SchemaField(
            description="Configure which events to receive", default=WebsetEventFilter()
        )
        payload: dict = SchemaField(
            description="Webhook payload data", default={}, hidden=True
        )

    class Output(BlockSchema):
        event_type: str = SchemaField(description="Type of event that occurred")
        event_id: str = SchemaField(description="Unique identifier for this event")
        webset_id: str = SchemaField(description="ID of the affected webset")
        data: dict = SchemaField(description="Event-specific data")
        timestamp: str = SchemaField(description="When the event occurred")
        metadata: dict = SchemaField(description="Additional event metadata")

    def __init__(self):
        super().__init__(
            id="d0204ed8-8b81-408d-8b8d-ed087a546228",
            description="Receive webhook notifications for Exa webset events",
            categories={BlockCategory.INPUT},
            input_schema=ExaWebsetWebhookBlock.Input,
            output_schema=ExaWebsetWebhookBlock.Output,
            block_type=BlockType.WEBHOOK,
            webhook_config=BlockWebhookConfig(
                provider=ProviderName("exa"),
                webhook_type="webset",
                event_filter_input="event_filter",
                resource_format="{webset_id}",
            ),
        )

    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        """Process incoming Exa webhook payload."""
        try:
            payload = input_data.payload

            # Extract event details
            event_type = payload.get("eventType", "unknown")
            event_id = payload.get("eventId", "")

            # Get webset ID from payload or input
            webset_id = payload.get("websetId", input_data.webset_id)

            # Check if we should process this event based on filter
            should_process = self._should_process_event(
                event_type, input_data.event_filter
            )

            if not should_process:
                # Skip events that don't match our filter
                return

            # Extract event data
            event_data = payload.get("data", {})
            timestamp = payload.get("occurredAt", payload.get("createdAt", ""))
            metadata = payload.get("metadata", {})

            yield "event_type", event_type
            yield "event_id", event_id
            yield "webset_id", webset_id
            yield "data", event_data
            yield "timestamp", timestamp
            yield "metadata", metadata

        except Exception as e:
            # Handle errors gracefully
            yield "event_type", "error"
            yield "event_id", ""
            yield "webset_id", input_data.webset_id
            yield "data", {"error": str(e)}
            yield "timestamp", ""
            yield "metadata", {}

    def _should_process_event(
        self, event_type: str, event_filter: WebsetEventFilter
    ) -> bool:
        """Check if an event should be processed based on the filter."""
        filter_mapping = {
            ExaEventType.WEBSET_CREATED: event_filter.webset_created,
            ExaEventType.WEBSET_DELETED: event_filter.webset_deleted,
            ExaEventType.WEBSET_PAUSED: event_filter.webset_paused,
            ExaEventType.WEBSET_IDLE: event_filter.webset_idle,
            ExaEventType.WEBSET_SEARCH_CREATED: event_filter.search_created,
            ExaEventType.WEBSET_SEARCH_COMPLETED: event_filter.search_completed,
            ExaEventType.WEBSET_SEARCH_CANCELED: event_filter.search_canceled,
            ExaEventType.WEBSET_SEARCH_UPDATED: event_filter.search_updated,
            ExaEventType.WEBSET_ITEM_CREATED: event_filter.item_created,
            ExaEventType.WEBSET_ITEM_ENRICHED: event_filter.item_enriched,
            ExaEventType.WEBSET_EXPORT_CREATED: event_filter.export_created,
            ExaEventType.WEBSET_EXPORT_COMPLETED: event_filter.export_completed,
            ExaEventType.IMPORT_CREATED: event_filter.import_created,
            ExaEventType.IMPORT_COMPLETED: event_filter.import_completed,
            ExaEventType.IMPORT_PROCESSING: event_filter.import_processing,
        }

        # Try to convert string to ExaEventType enum
        try:
            event_type_enum = ExaEventType(event_type)
            return filter_mapping.get(event_type_enum, True)
        except ValueError:
            # If event_type is not a valid enum value, process it by default
            return True
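A quick sketch of how the filter dispatch above behaves (illustrative only; the event strings are assumed values of ExaEventType, not confirmed by this diff):

# Hypothetical check: flags that are False drop the event; unknown strings pass through.
f = WebsetEventFilter(webset_created=False)
block = ExaWebsetWebhookBlock()
block._should_process_event("webset.created", f)  # False - flag disabled (assumed enum value)
block._should_process_event("unknown.event", f)   # True - the ValueError fallback processes it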
@@ -1,456 +0,0 @@
from typing import Any, Optional

from backend.sdk import (
    APIKeyCredentials,
    Block,
    BlockCategory,
    BlockOutput,
    BlockSchema,
    CredentialsMetaInput,
    Requests,
    SchemaField,
)

from ._config import exa
from .helpers import WebsetEnrichmentConfig, WebsetSearchConfig


class ExaCreateWebsetBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="The Exa integration requires an API Key."
        )
        search: WebsetSearchConfig = SchemaField(
            description="Initial search configuration for the Webset"
        )
        enrichments: Optional[list[WebsetEnrichmentConfig]] = SchemaField(
            default=None,
            description="Enrichments to apply to Webset items",
            advanced=True,
        )
        external_id: Optional[str] = SchemaField(
            default=None,
            description="External identifier for the webset",
            placeholder="my-webset-123",
            advanced=True,
        )
        metadata: Optional[dict] = SchemaField(
            default=None,
            description="Key-value pairs to associate with this webset",
            advanced=True,
        )

    class Output(BlockSchema):
        webset_id: str = SchemaField(
            description="The unique identifier for the created webset"
        )
        status: str = SchemaField(description="The status of the webset")
        external_id: Optional[str] = SchemaField(
            description="The external identifier for the webset", default=None
        )
        created_at: str = SchemaField(
            description="The date and time the webset was created"
        )
        error: str = SchemaField(
            description="Error message if the request failed", default=""
        )

    def __init__(self):
        super().__init__(
            id="0cda29ff-c549-4a19-8805-c982b7d4ec34",
            description="Create a new Exa Webset for persistent web search collections",
            categories={BlockCategory.SEARCH},
            input_schema=ExaCreateWebsetBlock.Input,
            output_schema=ExaCreateWebsetBlock.Output,
        )

    async def run(
        self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
    ) -> BlockOutput:
        url = "https://api.exa.ai/websets/v0/websets"
        headers = {
            "Content-Type": "application/json",
            "x-api-key": credentials.api_key.get_secret_value(),
        }

        # Build the payload
        payload: dict[str, Any] = {
            "search": input_data.search.model_dump(exclude_none=True),
        }

        # Convert enrichments to API format
        if input_data.enrichments:
            enrichments_data = []
            for enrichment in input_data.enrichments:
                enrichments_data.append(enrichment.model_dump(exclude_none=True))
            payload["enrichments"] = enrichments_data

        if input_data.external_id:
            payload["externalId"] = input_data.external_id

        if input_data.metadata:
            payload["metadata"] = input_data.metadata

        try:
            response = await Requests().post(url, headers=headers, json=payload)
            data = response.json()

            yield "webset_id", data.get("id", "")
            yield "status", data.get("status", "")
            yield "external_id", data.get("externalId")
            yield "created_at", data.get("createdAt", "")

        except Exception as e:
            yield "error", str(e)
            yield "webset_id", ""
            yield "status", ""
            yield "created_at", ""
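For reference, a request body like the one ExaCreateWebsetBlock.run assembles above might look as follows (all field values illustrative; the WebsetSearchConfig/WebsetEnrichmentConfig fields are assumptions, not taken from this diff):

payload = {
    "search": {"query": "AI startups in Europe", "count": 10},
    "enrichments": [{"description": "Find the founding year"}],
    "externalId": "my-webset-123",
    "metadata": {"team": "research"},
}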
class ExaUpdateWebsetBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="The Exa integration requires an API Key."
        )
        webset_id: str = SchemaField(
            description="The ID or external ID of the Webset to update",
            placeholder="webset-id-or-external-id",
        )
        metadata: Optional[dict] = SchemaField(
            default=None,
            description="Key-value pairs to associate with this webset (set to null to clear)",
        )

    class Output(BlockSchema):
        webset_id: str = SchemaField(description="The unique identifier for the webset")
        status: str = SchemaField(description="The status of the webset")
        external_id: Optional[str] = SchemaField(
            description="The external identifier for the webset", default=None
        )
        metadata: dict = SchemaField(
            description="Updated metadata for the webset", default_factory=dict
        )
        updated_at: str = SchemaField(
            description="The date and time the webset was updated"
        )
        error: str = SchemaField(
            description="Error message if the request failed", default=""
        )

    def __init__(self):
        super().__init__(
            id="89ccd99a-3c2b-4fbf-9e25-0ffa398d0314",
            description="Update metadata for an existing Webset",
            categories={BlockCategory.SEARCH},
            input_schema=ExaUpdateWebsetBlock.Input,
            output_schema=ExaUpdateWebsetBlock.Output,
        )

    async def run(
        self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
    ) -> BlockOutput:
        url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}"
        headers = {
            "Content-Type": "application/json",
            "x-api-key": credentials.api_key.get_secret_value(),
        }

        # Build the payload
        payload = {}
        if input_data.metadata is not None:
            payload["metadata"] = input_data.metadata

        try:
            response = await Requests().post(url, headers=headers, json=payload)
            data = response.json()

            yield "webset_id", data.get("id", "")
            yield "status", data.get("status", "")
            yield "external_id", data.get("externalId")
            yield "metadata", data.get("metadata", {})
            yield "updated_at", data.get("updatedAt", "")

        except Exception as e:
            yield "error", str(e)
            yield "webset_id", ""
            yield "status", ""
            yield "metadata", {}
            yield "updated_at", ""


class ExaListWebsetsBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="The Exa integration requires an API Key."
        )
        cursor: Optional[str] = SchemaField(
            default=None,
            description="Cursor for pagination through results",
            advanced=True,
        )
        limit: int = SchemaField(
            default=25,
            description="Number of websets to return (1-100)",
            ge=1,
            le=100,
            advanced=True,
        )

    class Output(BlockSchema):
        websets: list = SchemaField(description="List of websets", default_factory=list)
        has_more: bool = SchemaField(
            description="Whether there are more results to paginate through",
            default=False,
        )
        next_cursor: Optional[str] = SchemaField(
            description="Cursor for the next page of results", default=None
        )
        error: str = SchemaField(
            description="Error message if the request failed", default=""
        )

    def __init__(self):
        super().__init__(
            id="1dcd8fd6-c13f-4e6f-bd4c-654428fa4757",
            description="List all Websets with pagination support",
            categories={BlockCategory.SEARCH},
            input_schema=ExaListWebsetsBlock.Input,
            output_schema=ExaListWebsetsBlock.Output,
        )

    async def run(
        self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
    ) -> BlockOutput:
        url = "https://api.exa.ai/websets/v0/websets"
        headers = {
            "x-api-key": credentials.api_key.get_secret_value(),
        }

        params: dict[str, Any] = {
            "limit": input_data.limit,
        }
        if input_data.cursor:
            params["cursor"] = input_data.cursor

        try:
            response = await Requests().get(url, headers=headers, params=params)
            data = response.json()

            yield "websets", data.get("data", [])
            yield "has_more", data.get("hasMore", False)
            yield "next_cursor", data.get("nextCursor")

        except Exception as e:
            yield "error", str(e)
            yield "websets", []
            yield "has_more", False
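The hasMore/nextCursor contract used above lends itself to a simple drain loop. A minimal sketch outside the block machinery (assumes an `api_key` string is available):

cursor = None
websets = []
while True:
    params = {"limit": 100}
    if cursor:
        params["cursor"] = cursor
    response = await Requests().get(
        "https://api.exa.ai/websets/v0/websets",
        headers={"x-api-key": api_key},
        params=params,
    )
    data = response.json()
    websets.extend(data.get("data", []))
    if not data.get("hasMore", False):
        break
    cursor = data.get("nextCursor")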
class ExaGetWebsetBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="The Exa integration requires an API Key."
        )
        webset_id: str = SchemaField(
            description="The ID or external ID of the Webset to retrieve",
            placeholder="webset-id-or-external-id",
        )
        expand_items: bool = SchemaField(
            default=False, description="Include items in the response", advanced=True
        )

    class Output(BlockSchema):
        webset_id: str = SchemaField(description="The unique identifier for the webset")
        status: str = SchemaField(description="The status of the webset")
        external_id: Optional[str] = SchemaField(
            description="The external identifier for the webset", default=None
        )
        searches: list[dict] = SchemaField(
            description="The searches performed on the webset", default_factory=list
        )
        enrichments: list[dict] = SchemaField(
            description="The enrichments applied to the webset", default_factory=list
        )
        monitors: list[dict] = SchemaField(
            description="The monitors for the webset", default_factory=list
        )
        items: Optional[list[dict]] = SchemaField(
            description="The items in the webset (if expand_items is true)",
            default=None,
        )
        metadata: dict = SchemaField(
            description="Key-value pairs associated with the webset",
            default_factory=dict,
        )
        created_at: str = SchemaField(
            description="The date and time the webset was created"
        )
        updated_at: str = SchemaField(
            description="The date and time the webset was last updated"
        )
        error: str = SchemaField(
            description="Error message if the request failed", default=""
        )

    def __init__(self):
        super().__init__(
            id="6ab8e12a-132c-41bf-b5f3-d662620fa832",
            description="Retrieve a Webset by ID or external ID",
            categories={BlockCategory.SEARCH},
            input_schema=ExaGetWebsetBlock.Input,
            output_schema=ExaGetWebsetBlock.Output,
        )

    async def run(
        self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
    ) -> BlockOutput:
        url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}"
        headers = {
            "x-api-key": credentials.api_key.get_secret_value(),
        }

        params = {}
        if input_data.expand_items:
            params["expand[]"] = "items"

        try:
            response = await Requests().get(url, headers=headers, params=params)
            data = response.json()

            yield "webset_id", data.get("id", "")
            yield "status", data.get("status", "")
            yield "external_id", data.get("externalId")
            yield "searches", data.get("searches", [])
            yield "enrichments", data.get("enrichments", [])
            yield "monitors", data.get("monitors", [])
            yield "items", data.get("items")
            yield "metadata", data.get("metadata", {})
            yield "created_at", data.get("createdAt", "")
            yield "updated_at", data.get("updatedAt", "")

        except Exception as e:
            yield "error", str(e)
            yield "webset_id", ""
            yield "status", ""
            yield "searches", []
            yield "enrichments", []
            yield "monitors", []
            yield "metadata", {}
            yield "created_at", ""
            yield "updated_at", ""
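Note the `expand[]` query parameter above: items are only inlined in the GET response when explicitly requested. In sketch form (same url/headers as in the block):

params = {}                      # default: no items in the response
params = {"expand[]": "items"}   # expand_items=True: items list included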
class ExaDeleteWebsetBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="The Exa integration requires an API Key."
        )
        webset_id: str = SchemaField(
            description="The ID or external ID of the Webset to delete",
            placeholder="webset-id-or-external-id",
        )

    class Output(BlockSchema):
        webset_id: str = SchemaField(
            description="The unique identifier for the deleted webset"
        )
        external_id: Optional[str] = SchemaField(
            description="The external identifier for the deleted webset", default=None
        )
        status: str = SchemaField(description="The status of the deleted webset")
        success: str = SchemaField(
            description="Whether the deletion was successful", default="true"
        )
        error: str = SchemaField(
            description="Error message if the request failed", default=""
        )

    def __init__(self):
        super().__init__(
            id="aa6994a2-e986-421f-8d4c-7671d3be7b7e",
            description="Delete a Webset and all its items",
            categories={BlockCategory.SEARCH},
            input_schema=ExaDeleteWebsetBlock.Input,
            output_schema=ExaDeleteWebsetBlock.Output,
        )

    async def run(
        self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
    ) -> BlockOutput:
        url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}"
        headers = {
            "x-api-key": credentials.api_key.get_secret_value(),
        }

        try:
            response = await Requests().delete(url, headers=headers)
            data = response.json()

            yield "webset_id", data.get("id", "")
            yield "external_id", data.get("externalId")
            yield "status", data.get("status", "")
            yield "success", "true"

        except Exception as e:
            yield "error", str(e)
            yield "webset_id", ""
            yield "status", ""
            yield "success", "false"


class ExaCancelWebsetBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = exa.credentials_field(
            description="The Exa integration requires an API Key."
        )
        webset_id: str = SchemaField(
            description="The ID or external ID of the Webset to cancel",
            placeholder="webset-id-or-external-id",
        )

    class Output(BlockSchema):
        webset_id: str = SchemaField(description="The unique identifier for the webset")
        status: str = SchemaField(
            description="The status of the webset after cancellation"
        )
        external_id: Optional[str] = SchemaField(
            description="The external identifier for the webset", default=None
        )
        success: str = SchemaField(
            description="Whether the cancellation was successful", default="true"
        )
        error: str = SchemaField(
            description="Error message if the request failed", default=""
        )

    def __init__(self):
        super().__init__(
            id="e40a6420-1db8-47bb-b00a-0e6aecd74176",
            description="Cancel all operations being performed on a Webset",
            categories={BlockCategory.SEARCH},
            input_schema=ExaCancelWebsetBlock.Input,
            output_schema=ExaCancelWebsetBlock.Output,
        )

    async def run(
        self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
    ) -> BlockOutput:
        url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}/cancel"
        headers = {
            "x-api-key": credentials.api_key.get_secret_value(),
        }

        try:
            response = await Requests().post(url, headers=headers)
            data = response.json()

            yield "webset_id", data.get("id", "")
            yield "status", data.get("status", "")
            yield "external_id", data.get("externalId")
            yield "success", "true"

        except Exception as e:
            yield "error", str(e)
            yield "webset_id", ""
            yield "status", ""
            yield "success", "false"
@@ -1,9 +0,0 @@
# Import the provider builder to ensure it's registered
from backend.sdk.registry import AutoRegistry

from .triggers import GenericWebhookTriggerBlock, generic_webhook

# Ensure the SDK registry is patched to include our webhook manager
AutoRegistry.patch_integrations()

__all__ = ["GenericWebhookTriggerBlock", "generic_webhook"]
@@ -1,21 +1,13 @@
from backend.sdk import (
from backend.data.block import (
    Block,
    BlockCategory,
    BlockManualWebhookConfig,
    BlockOutput,
    BlockSchema,
    ProviderBuilder,
    ProviderName,
    SchemaField,
)

from ._webhook import GenericWebhooksManager, GenericWebhookType

generic_webhook = (
    ProviderBuilder("generic_webhook")
    .with_webhook_manager(GenericWebhooksManager)
    .build()
)
from backend.data.model import SchemaField
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks.generic import GenericWebhookType


class GenericWebhookTriggerBlock(Block):

@@ -44,7 +36,7 @@ class GenericWebhookTriggerBlock:
            input_schema=GenericWebhookTriggerBlock.Input,
            output_schema=GenericWebhookTriggerBlock.Output,
            webhook_config=BlockManualWebhookConfig(
                provider=ProviderName(generic_webhook.name),
                provider=ProviderName.GENERIC_WEBHOOK,
                webhook_type=GenericWebhookType.PLAIN,
            ),
            test_input={"constants": {"key": "value"}, "payload": self.example_payload},
@@ -989,274 +989,6 @@ class GoogleSheetsFindReplaceBlock(Block):
        return result


class GoogleSheetsFindBlock(Block):
    class Input(BlockSchema):
        credentials: GoogleCredentialsInput = GoogleCredentialsField(
            ["https://www.googleapis.com/auth/spreadsheets.readonly"]
        )
        spreadsheet_id: str = SchemaField(
            description="The ID or URL of the spreadsheet to search in",
            title="Spreadsheet ID or URL",
        )
        find_text: str = SchemaField(
            description="The text to find",
        )
        sheet_id: int = SchemaField(
            description="The ID of the specific sheet to search (optional, searches all sheets if not provided)",
            default=-1,
        )
        match_case: bool = SchemaField(
            description="Whether to match case",
            default=False,
        )
        match_entire_cell: bool = SchemaField(
            description="Whether to match entire cell",
            default=False,
        )
        find_all: bool = SchemaField(
            description="Whether to find all occurrences (true) or just the first one (false)",
            default=True,
        )
        range: str = SchemaField(
            description="The A1 notation range to search in (optional, searches entire sheet if not provided)",
            default="",
            advanced=True,
        )

    class Output(BlockSchema):
        result: dict = SchemaField(
            description="The result of the find operation including locations and count",
        )
        locations: list[dict] = SchemaField(
            description="List of cell locations where the text was found",
        )
        count: int = SchemaField(
            description="Number of occurrences found",
        )
        error: str = SchemaField(
            description="Error message if any",
        )

    def __init__(self):
        super().__init__(
            id="0f4ecc72-b958-47b2-b65e-76d6d26b9b27",
            description="Find text in a Google Sheets spreadsheet. Returns locations and count of occurrences. Can find all occurrences or just the first one.",
            categories={BlockCategory.DATA},
            input_schema=GoogleSheetsFindBlock.Input,
            output_schema=GoogleSheetsFindBlock.Output,
            disabled=GOOGLE_SHEETS_DISABLED,
            test_input={
                "spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
                "find_text": "search_value",
                "match_case": False,
                "match_entire_cell": False,
                "find_all": True,
                "range": "Sheet1!A1:C10",
                "credentials": TEST_CREDENTIALS_INPUT,
            },
            test_credentials=TEST_CREDENTIALS,
            test_output=[
                ("count", 3),
                (
                    "locations",
                    [
                        {"sheet": "Sheet1", "row": 2, "column": 1, "address": "A2"},
                        {"sheet": "Sheet1", "row": 5, "column": 3, "address": "C5"},
                        {"sheet": "Sheet2", "row": 1, "column": 2, "address": "B1"},
                    ],
                ),
                ("result", {"success": True}),
            ],
            test_mock={
                "_find_text": lambda *args, **kwargs: {
                    "locations": [
                        {"sheet": "Sheet1", "row": 2, "column": 1, "address": "A2"},
                        {"sheet": "Sheet1", "row": 5, "column": 3, "address": "C5"},
                        {"sheet": "Sheet2", "row": 1, "column": 2, "address": "B1"},
                    ],
                    "count": 3,
                },
            },
        )

    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        service = _build_sheets_service(credentials)
        spreadsheet_id = extract_spreadsheet_id(input_data.spreadsheet_id)
        result = await asyncio.to_thread(
            self._find_text,
            service,
            spreadsheet_id,
            input_data.find_text,
            input_data.sheet_id,
            input_data.match_case,
            input_data.match_entire_cell,
            input_data.find_all,
            input_data.range,
        )
        yield "count", result["count"]
        yield "locations", result["locations"]
        yield "result", {"success": True}

    def _find_text(
        self,
        service,
        spreadsheet_id: str,
        find_text: str,
        sheet_id: int,
        match_case: bool,
        match_entire_cell: bool,
        find_all: bool,
        range: str,
    ) -> dict:
        # Unfortunately, Google Sheets API doesn't have a dedicated "find-only" operation
        # that returns cell locations. The findReplace operation only returns a count.
        # So we need to search through the values manually to get location details.

        locations = []
        search_range = range if range else None

        if not search_range:
            # If no range specified, search entire spreadsheet
            meta = service.spreadsheets().get(spreadsheetId=spreadsheet_id).execute()
            sheets = meta.get("sheets", [])

            # Filter to specific sheet if provided
            if sheet_id >= 0:
                sheets = [
                    s
                    for s in sheets
                    if s.get("properties", {}).get("sheetId") == sheet_id
                ]

            # Search each sheet
            for sheet in sheets:
                sheet_name = sheet.get("properties", {}).get("title", "")
                sheet_range = f"'{sheet_name}'"
                self._search_range(
                    service,
                    spreadsheet_id,
                    sheet_range,
                    sheet_name,
                    find_text,
                    match_case,
                    match_entire_cell,
                    find_all,
                    locations,
                )
                if not find_all and locations:
                    break
        else:
            # Search specific range
            sheet_name, cell_range = parse_a1_notation(search_range)
            if not sheet_name:
                # Get first sheet name if not specified
                meta = (
                    service.spreadsheets().get(spreadsheetId=spreadsheet_id).execute()
                )
                sheet_name = (
                    meta.get("sheets", [{}])[0]
                    .get("properties", {})
                    .get("title", "Sheet1")
                )
                search_range = f"'{sheet_name}'!{search_range}"

            self._search_range(
                service,
                spreadsheet_id,
                search_range,
                sheet_name,
                find_text,
                match_case,
                match_entire_cell,
                find_all,
                locations,
            )

        return {"locations": locations, "count": len(locations)}

    def _search_range(
        self,
        service,
        spreadsheet_id: str,
        range_name: str,
        sheet_name: str,
        find_text: str,
        match_case: bool,
        match_entire_cell: bool,
        find_all: bool,
        locations: list,
    ):
        """Search within a specific range and add results to locations list."""
        values_result = (
            service.spreadsheets()
            .values()
            .get(spreadsheetId=spreadsheet_id, range=range_name)
            .execute()
        )
        values = values_result.get("values", [])

        # Parse range to get starting position
        _, cell_range = parse_a1_notation(range_name)
        start_col = 0
        start_row = 0

        if cell_range and ":" in cell_range:
            start_cell = cell_range.split(":")[0]
            # Parse A1 notation (e.g., "B3" -> col=1, row=2)
            col_part = ""
            row_part = ""
            for char in start_cell:
                if char.isalpha():
                    col_part += char
                elif char.isdigit():
                    row_part += char

            if col_part:
                start_col = ord(col_part.upper()) - ord("A")
            if row_part:
                start_row = int(row_part) - 1

        # Search through values
        for row_idx, row in enumerate(values):
            for col_idx, cell_value in enumerate(row):
                if cell_value is None:
                    continue

                cell_str = str(cell_value)

                # Apply search criteria
                search_text = find_text if match_case else find_text.lower()
                cell_text = cell_str if match_case else cell_str.lower()

                found = False
                if match_entire_cell:
                    found = cell_text == search_text
                else:
                    found = search_text in cell_text

                if found:
                    # Calculate actual spreadsheet position
                    actual_row = start_row + row_idx + 1
                    actual_col = start_col + col_idx + 1
                    col_letter = chr(ord("A") + start_col + col_idx)
                    address = f"{col_letter}{actual_row}"

                    location = {
                        "sheet": sheet_name,
                        "row": actual_row,
                        "column": actual_col,
                        "address": address,
                        "value": cell_str,
                    }
                    locations.append(location)

                    # Stop after first match if find_all is False
                    if not find_all:
                        return
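One caveat in `_search_range` above: `ord(col_part.upper()) - ord("A")` and the matching `col_letter` computation only handle single-letter columns (A-Z). A sketch of a base-26 version that would also cover AA, AB, ... (not part of the original code):

def a1_col_to_index(col_part: str) -> int:
    # Convert an A1 column string ("A", "Z", "AA") to a 0-based index.
    index = 0
    for char in col_part.upper():
        index = index * 26 + (ord(char) - ord("A") + 1)
    return index - 1

def index_to_a1_col(index: int) -> str:
    # Inverse: 0 -> "A", 25 -> "Z", 26 -> "AA".
    col = ""
    index += 1
    while index:
        index, rem = divmod(index - 1, 26)
        col = chr(ord("A") + rem) + col
    return col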
class GoogleSheetsFormatBlock(Block):
    class Input(BlockSchema):
        credentials: GoogleCredentialsInput = GoogleCredentialsField(
@@ -1,14 +0,0 @@
"""
Linear integration blocks for AutoGPT Platform.
"""

from .comment import LinearCreateCommentBlock
from .issues import LinearCreateIssueBlock, LinearSearchIssuesBlock
from .projects import LinearSearchProjectsBlock

__all__ = [
    "LinearCreateCommentBlock",
    "LinearCreateIssueBlock",
    "LinearSearchIssuesBlock",
    "LinearSearchProjectsBlock",
]
@@ -1,11 +1,16 @@
from __future__ import annotations

import json
from typing import Any, Dict, Optional, Union
from typing import Any, Dict, Optional

from backend.sdk import APIKeyCredentials, OAuth2Credentials, Requests

from .models import CreateCommentResponse, CreateIssueResponse, Issue, Project
from backend.blocks.linear._auth import LinearCredentials
from backend.blocks.linear.models import (
    CreateCommentResponse,
    CreateIssueResponse,
    Issue,
    Project,
)
from backend.util.request import Requests


class LinearAPIException(Exception):

@@ -24,12 +29,13 @@ class LinearClient:

    def __init__(
        self,
        credentials: Union[OAuth2Credentials, APIKeyCredentials, None] = None,
        credentials: LinearCredentials | None = None,
        custom_requests: Optional[Requests] = None,
    ):
        if custom_requests:
            self._requests = custom_requests
        else:

            headers: Dict[str, str] = {
                "Content-Type": "application/json",
            }
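A minimal construction sketch for the refactored client (the credential object is assumed to come from the platform's credential store; not shown in this diff):

client = LinearClient(credentials=api_key_credentials)         # api_key_credentials: APIKeyCredentials (assumed)
client = LinearClient(custom_requests=preconfigured_requests)  # or inject a ready-made Requests instance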
@@ -1,19 +1,31 @@
"""
Shared configuration for all Linear blocks using the new SDK pattern.
"""

import os
from enum import Enum
from typing import Literal

from backend.sdk import (
from pydantic import SecretStr

from backend.data.model import (
    APIKeyCredentials,
    BlockCostType,
    CredentialsField,
    CredentialsMetaInput,
    OAuth2Credentials,
    ProviderBuilder,
    SecretStr,
)
from backend.integrations.providers import ProviderName
from backend.util.settings import Secrets

secrets = Secrets()
LINEAR_OAUTH_IS_CONFIGURED = bool(
    secrets.linear_client_id and secrets.linear_client_secret
)

from ._oauth import LinearOAuthHandler
LinearCredentials = OAuth2Credentials | APIKeyCredentials
# LinearCredentialsInput = CredentialsMetaInput[
#     Literal[ProviderName.LINEAR],
#     Literal["oauth2", "api_key"] if LINEAR_OAUTH_IS_CONFIGURED else Literal["oauth2"],
# ]
LinearCredentialsInput = CredentialsMetaInput[
    Literal[ProviderName.LINEAR], Literal["oauth2"]
]


# (required) Comma separated list of scopes:

@@ -38,35 +50,21 @@ class LinearScope(str, Enum):
    ADMIN = "admin"


# Check if Linear OAuth is configured
client_id = os.getenv("LINEAR_CLIENT_ID")
client_secret = os.getenv("LINEAR_CLIENT_SECRET")
LINEAR_OAUTH_IS_CONFIGURED = bool(client_id and client_secret)
def LinearCredentialsField(scopes: list[LinearScope]) -> LinearCredentialsInput:
    """
    Creates a Linear credentials input on a block.

# Build the Linear provider
builder = (
    ProviderBuilder("linear")
    .with_api_key(env_var_name="LINEAR_API_KEY", title="Linear API Key")
    .with_base_cost(1, BlockCostType.RUN)
)

# Linear only supports OAuth authentication
if LINEAR_OAUTH_IS_CONFIGURED:
    builder = builder.with_oauth(
        LinearOAuthHandler,
        scopes=[
            LinearScope.READ,
            LinearScope.WRITE,
            LinearScope.ISSUES_CREATE,
            LinearScope.COMMENTS_CREATE,
        ],
        client_id_env_var="LINEAR_CLIENT_ID",
        client_secret_env_var="LINEAR_CLIENT_SECRET",
    Params:
        scope: The authorization scope needed for the block to work. ([list of available scopes](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/scopes-for-oauth-apps#available-scopes))
    """  # noqa
    return CredentialsField(
        required_scopes=set([LinearScope.READ.value]).union(
            set([scope.value for scope in scopes])
        ),
        description="The Linear integration can be used with OAuth, "
        "or any API key with sufficient permissions for the blocks it is used on.",
    )

# Build the provider
linear = builder.build()


TEST_CREDENTIALS_OAUTH = OAuth2Credentials(
    id="01234567-89ab-cdef-0123-456789abcdef",
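Per the body above, LinearCredentialsField always unions READ into whatever scopes a block requests. For instance (the "issues:create" value string is an assumption; only "admin" is visible in this diff):

# LinearCredentialsField(scopes=[LinearScope.ISSUES_CREATE]) would require
# {"read", "issues:create"} - READ is always included.
required = set([LinearScope.READ.value]).union(
    set(scope.value for scope in [LinearScope.ISSUES_CREATE])
)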
@@ -1,32 +1,24 @@
from backend.sdk import (
    APIKeyCredentials,
    Block,
    BlockCategory,
    BlockOutput,
    BlockSchema,
    CredentialsMetaInput,
    OAuth2Credentials,
    SchemaField,
)

from ._api import LinearAPIException, LinearClient
from ._config import (
from backend.blocks.linear._api import LinearAPIException, LinearClient
from backend.blocks.linear._auth import (
    LINEAR_OAUTH_IS_CONFIGURED,
    TEST_CREDENTIALS_INPUT_OAUTH,
    TEST_CREDENTIALS_OAUTH,
    LinearCredentials,
    LinearCredentialsField,
    LinearCredentialsInput,
    LinearScope,
    linear,
)
from .models import CreateCommentResponse
from backend.blocks.linear.models import CreateCommentResponse
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField


class LinearCreateCommentBlock(Block):
    """Block for creating comments on Linear issues"""

    class Input(BlockSchema):
        credentials: CredentialsMetaInput = linear.credentials_field(
            description="Linear credentials with comment creation permissions",
            required_scopes={LinearScope.COMMENTS_CREATE},
        credentials: LinearCredentialsInput = LinearCredentialsField(
            scopes=[LinearScope.COMMENTS_CREATE],
        )
        issue_id: str = SchemaField(description="ID of the issue to comment on")
        comment: str = SchemaField(description="Comment text to add to the issue")

@@ -63,7 +55,7 @@ class LinearCreateCommentBlock(Block):

    @staticmethod
    async def create_comment(
        credentials: OAuth2Credentials | APIKeyCredentials, issue_id: str, comment: str
        credentials: LinearCredentials, issue_id: str, comment: str
    ) -> tuple[str, str]:
        client = LinearClient(credentials=credentials)
        response: CreateCommentResponse = await client.try_create_comment(

@@ -72,11 +64,7 @@ class LinearCreateCommentBlock(Block):
        return response.comment.id, response.comment.body

    async def run(
        self,
        input_data: Input,
        *,
        credentials: OAuth2Credentials | APIKeyCredentials,
        **kwargs,
        self, input_data: Input, *, credentials: LinearCredentials, **kwargs
    ) -> BlockOutput:
        """Execute the comment creation"""
        try:
@@ -1,32 +1,24 @@
from backend.sdk import (
    APIKeyCredentials,
    Block,
    BlockCategory,
    BlockOutput,
    BlockSchema,
    CredentialsMetaInput,
    OAuth2Credentials,
    SchemaField,
)

from ._api import LinearAPIException, LinearClient
from ._config import (
from backend.blocks.linear._api import LinearAPIException, LinearClient
from backend.blocks.linear._auth import (
    LINEAR_OAUTH_IS_CONFIGURED,
    TEST_CREDENTIALS_INPUT_OAUTH,
    TEST_CREDENTIALS_OAUTH,
    LinearCredentials,
    LinearCredentialsField,
    LinearCredentialsInput,
    LinearScope,
    linear,
)
from .models import CreateIssueResponse, Issue
from backend.blocks.linear.models import CreateIssueResponse, Issue
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField


class LinearCreateIssueBlock(Block):
    """Block for creating issues on Linear"""

    class Input(BlockSchema):
        credentials: CredentialsMetaInput = linear.credentials_field(
            description="Linear credentials with issue creation permissions",
            required_scopes={LinearScope.ISSUES_CREATE},
        credentials: LinearCredentialsInput = LinearCredentialsField(
            scopes=[LinearScope.ISSUES_CREATE],
        )
        title: str = SchemaField(description="Title of the issue")
        description: str | None = SchemaField(description="Description of the issue")

@@ -76,7 +68,7 @@ class LinearCreateIssueBlock(Block):

    @staticmethod
    async def create_issue(
        credentials: OAuth2Credentials | APIKeyCredentials,
        credentials: LinearCredentials,
        team_name: str,
        title: str,
        description: str | None = None,

@@ -102,11 +94,7 @@ class LinearCreateIssueBlock(Block):
        return response.issue.identifier, response.issue.title

    async def run(
        self,
        input_data: Input,
        *,
        credentials: OAuth2Credentials,
        **kwargs,
        self, input_data: Input, *, credentials: LinearCredentials, **kwargs
    ) -> BlockOutput:
        """Execute the issue creation"""
        try:

@@ -133,9 +121,8 @@ class LinearSearchIssuesBlock(Block):

    class Input(BlockSchema):
        term: str = SchemaField(description="Term to search for issues")
        credentials: CredentialsMetaInput = linear.credentials_field(
            description="Linear credentials with read permissions",
            required_scopes={LinearScope.READ},
        credentials: LinearCredentialsInput = LinearCredentialsField(
            scopes=[LinearScope.READ],
        )

    class Output(BlockSchema):

@@ -182,7 +169,7 @@ class LinearSearchIssuesBlock(Block):

    @staticmethod
    async def search_issues(
        credentials: OAuth2Credentials | APIKeyCredentials,
        credentials: LinearCredentials,
        term: str,
    ) -> list[Issue]:
        client = LinearClient(credentials=credentials)

@@ -190,11 +177,7 @@ class LinearSearchIssuesBlock(Block):
        return response

    async def run(
        self,
        input_data: Input,
        *,
        credentials: OAuth2Credentials | APIKeyCredentials,
        **kwargs,
        self, input_data: Input, *, credentials: LinearCredentials, **kwargs
    ) -> BlockOutput:
        """Execute the issue search"""
        try:
@@ -1,4 +1,4 @@
from backend.sdk import BaseModel
from pydantic import BaseModel


class Comment(BaseModel):
@@ -1,32 +1,24 @@
from backend.sdk import (
    APIKeyCredentials,
    Block,
    BlockCategory,
    BlockOutput,
    BlockSchema,
    CredentialsMetaInput,
    OAuth2Credentials,
    SchemaField,
)

from ._api import LinearAPIException, LinearClient
from ._config import (
from backend.blocks.linear._api import LinearAPIException, LinearClient
from backend.blocks.linear._auth import (
    LINEAR_OAUTH_IS_CONFIGURED,
    TEST_CREDENTIALS_INPUT_OAUTH,
    TEST_CREDENTIALS_OAUTH,
    LinearCredentials,
    LinearCredentialsField,
    LinearCredentialsInput,
    LinearScope,
    linear,
)
from .models import Project
from backend.blocks.linear.models import Project
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField


class LinearSearchProjectsBlock(Block):
    """Block for searching projects on Linear"""

    class Input(BlockSchema):
        credentials: CredentialsMetaInput = linear.credentials_field(
            description="Linear credentials with read permissions",
            required_scopes={LinearScope.READ},
        credentials: LinearCredentialsInput = LinearCredentialsField(
            scopes=[LinearScope.READ],
        )
        term: str = SchemaField(description="Term to search for projects")

@@ -78,7 +70,7 @@ class LinearSearchProjectsBlock(Block):

    @staticmethod
    async def search_projects(
        credentials: OAuth2Credentials | APIKeyCredentials,
        credentials: LinearCredentials,
        term: str,
    ) -> list[Project]:
        client = LinearClient(credentials=credentials)

@@ -86,11 +78,7 @@ class LinearSearchProjectsBlock(Block):
        return response

    async def run(
        self,
        input_data: Input,
        *,
        credentials: OAuth2Credentials | APIKeyCredentials,
        **kwargs,
        self, input_data: Input, *, credentials: LinearCredentials, **kwargs
    ) -> BlockOutput:
        """Execute the project search"""
        try:
@@ -127,9 +127,6 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
    PERPLEXITY_LLAMA_3_1_SONAR_LARGE_128K_ONLINE = (
        "perplexity/llama-3.1-sonar-large-128k-online"
    )
    PERPLEXITY_SONAR = "perplexity/sonar"
    PERPLEXITY_SONAR_PRO = "perplexity/sonar-pro"
    PERPLEXITY_SONAR_DEEP_RESEARCH = "perplexity/sonar-deep-research"
    QWEN_QWQ_32B_PREVIEW = "qwen/qwq-32b-preview"
    NOUSRESEARCH_HERMES_3_LLAMA_3_1_405B = "nousresearch/hermes-3-llama-3.1-405b"
    NOUSRESEARCH_HERMES_3_LLAMA_3_1_70B = "nousresearch/hermes-3-llama-3.1-70b"

@@ -232,13 +229,6 @@ MODEL_METADATA = {
    LlmModel.PERPLEXITY_LLAMA_3_1_SONAR_LARGE_128K_ONLINE: ModelMetadata(
        "open_router", 127072, 127072
    ),
    LlmModel.PERPLEXITY_SONAR: ModelMetadata("open_router", 127000, 127000),
    LlmModel.PERPLEXITY_SONAR_PRO: ModelMetadata("open_router", 200000, 8000),
    LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: ModelMetadata(
        "open_router",
        128000,
        128000,
    ),
    LlmModel.QWEN_QWQ_32B_PREVIEW: ModelMetadata("open_router", 32768, 32768),
    LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_405B: ModelMetadata(
        "open_router", 131000, 4096

@@ -283,7 +273,6 @@ class LLMResponse(BaseModel):
    tool_calls: Optional[List[ToolContentBlock]] | None
    prompt_tokens: int
    completion_tokens: int
    reasoning: Optional[str] = None


def convert_openai_tool_fmt_to_anthropic(

@@ -318,46 +307,6 @@ def convert_openai_tool_fmt_to_anthropic(
    return anthropic_tools


def extract_openai_reasoning(response) -> str | None:
    """Extract reasoning from an OpenAI-compatible response if available.

    Note: this will likely not work, since the reasoning is not present in the
    Response API.
    """
    reasoning = None
    choice = response.choices[0]
    if hasattr(choice, "reasoning") and getattr(choice, "reasoning", None):
        reasoning = str(getattr(choice, "reasoning"))
    elif hasattr(response, "reasoning") and getattr(response, "reasoning", None):
        reasoning = str(getattr(response, "reasoning"))
    elif hasattr(choice.message, "reasoning") and getattr(
        choice.message, "reasoning", None
    ):
        reasoning = str(getattr(choice.message, "reasoning"))
    return reasoning


def extract_openai_tool_calls(response) -> list[ToolContentBlock] | None:
    """Extract tool calls from an OpenAI-compatible response."""
    if response.choices[0].message.tool_calls:
        return [
            ToolContentBlock(
                id=tool.id,
                type=tool.type,
                function=ToolCall(
                    name=tool.function.name,
                    arguments=tool.function.arguments,
                ),
            )
            for tool in response.choices[0].message.tool_calls
        ]
    return None
def get_parallel_tool_calls_param(llm_model: LlmModel, parallel_tool_calls):
    """Get the appropriate parallel_tool_calls parameter for OpenAI-compatible APIs."""
    if llm_model.startswith("o") or parallel_tool_calls is None:
        return openai.NOT_GIVEN
    return parallel_tool_calls
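The helper above encodes one quirk: OpenAI's "o"-prefixed reasoning models do not accept `parallel_tool_calls`, so they (and an unset value) resolve to `openai.NOT_GIVEN`. Illustrative calls (the LlmModel members named here are assumptions, not taken from this diff):

get_parallel_tool_calls_param(LlmModel.O3_MINI, True)  # openai.NOT_GIVEN (value starts with "o")
get_parallel_tool_calls_param(LlmModel.GPT4O, None)    # openai.NOT_GIVEN (unset)
get_parallel_tool_calls_param(LlmModel.GPT4O, False)   # False (passed through)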
async def llm_call(
    credentials: APIKeyCredentials,
    llm_model: LlmModel,

@@ -411,9 +360,8 @@ async def llm_call(
        oai_client = openai.AsyncOpenAI(api_key=credentials.api_key.get_secret_value())
        response_format = None

        parallel_tool_calls = get_parallel_tool_calls_param(
            llm_model, parallel_tool_calls
        )
        if llm_model.startswith("o") or parallel_tool_calls is None:
            parallel_tool_calls = openai.NOT_GIVEN

        if json_format:
            response_format = {"type": "json_object"}

@@ -427,8 +375,20 @@ async def llm_call(
            parallel_tool_calls=parallel_tool_calls,
        )

        tool_calls = extract_openai_tool_calls(response)
        reasoning = extract_openai_reasoning(response)
        if response.choices[0].message.tool_calls:
            tool_calls = [
                ToolContentBlock(
                    id=tool.id,
                    type=tool.type,
                    function=ToolCall(
                        name=tool.function.name,
                        arguments=tool.function.arguments,
                    ),
                )
                for tool in response.choices[0].message.tool_calls
            ]
        else:
            tool_calls = None

        return LLMResponse(
            raw_response=response.choices[0].message,

@@ -437,7 +397,6 @@ async def llm_call(
            tool_calls=tool_calls,
            prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
            completion_tokens=response.usage.completion_tokens if response.usage else 0,
            reasoning=reasoning,
        )
    elif provider == "anthropic":

@@ -499,12 +458,6 @@ async def llm_call(
                    f"Tool use stop reason but no tool calls found in content. {resp}"
                )

            reasoning = None
            for content_block in resp.content:
                if hasattr(content_block, "type") and content_block.type == "thinking":
                    reasoning = content_block.thinking
                    break

            return LLMResponse(
                raw_response=resp,
                prompt=prompt,

@@ -516,7 +469,6 @@ async def llm_call(
                tool_calls=tool_calls,
                prompt_tokens=resp.usage.input_tokens,
                completion_tokens=resp.usage.output_tokens,
                reasoning=reasoning,
            )
        except anthropic.APIError as e:
            error_message = f"Anthropic API error: {str(e)}"

@@ -541,7 +493,6 @@ async def llm_call(
            tool_calls=None,
            prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
            completion_tokens=response.usage.completion_tokens if response.usage else 0,
            reasoning=None,
        )
    elif provider == "ollama":
        if tools:

@@ -563,7 +514,6 @@ async def llm_call(
            tool_calls=None,
            prompt_tokens=response.get("prompt_eval_count") or 0,
            completion_tokens=response.get("eval_count") or 0,
            reasoning=None,
        )
    elif provider == "open_router":
        tools_param = tools if tools else openai.NOT_GIVEN

@@ -572,10 +522,6 @@ async def llm_call(
            api_key=credentials.api_key.get_secret_value(),
        )

        parallel_tool_calls_param = get_parallel_tool_calls_param(
            llm_model, parallel_tool_calls
        )

        response = await client.chat.completions.create(
            extra_headers={
                "HTTP-Referer": "https://agpt.co",

@@ -585,7 +531,6 @@ async def llm_call(
            messages=prompt,  # type: ignore
            max_tokens=max_tokens,
            tools=tools_param,  # type: ignore
            parallel_tool_calls=parallel_tool_calls_param,
        )

        # If there's no response, raise an error

@@ -595,8 +540,19 @@ async def llm_call(
        else:
            raise ValueError("No response from OpenRouter.")

        tool_calls = extract_openai_tool_calls(response)
        reasoning = extract_openai_reasoning(response)
        if response.choices[0].message.tool_calls:
            tool_calls = [
                ToolContentBlock(
                    id=tool.id,
                    type=tool.type,
                    function=ToolCall(
                        name=tool.function.name, arguments=tool.function.arguments
                    ),
                )
                for tool in response.choices[0].message.tool_calls
            ]
        else:
            tool_calls = None

        return LLMResponse(
            raw_response=response.choices[0].message,

@@ -605,7 +561,6 @@ async def llm_call(
            tool_calls=tool_calls,
            prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
            completion_tokens=response.usage.completion_tokens if response.usage else 0,
            reasoning=reasoning,
        )
    elif provider == "llama_api":
        tools_param = tools if tools else openai.NOT_GIVEN

@@ -614,10 +569,6 @@ async def llm_call(
            api_key=credentials.api_key.get_secret_value(),
        )

        parallel_tool_calls_param = get_parallel_tool_calls_param(
            llm_model, parallel_tool_calls
        )

        response = await client.chat.completions.create(
            extra_headers={
                "HTTP-Referer": "https://agpt.co",

@@ -627,7 +578,9 @@ async def llm_call(
            messages=prompt,  # type: ignore
            max_tokens=max_tokens,
            tools=tools_param,  # type: ignore
            parallel_tool_calls=parallel_tool_calls_param,
            parallel_tool_calls=(
                openai.NOT_GIVEN if parallel_tool_calls is None else parallel_tool_calls
            ),
        )

        # If there's no response, raise an error

@@ -637,8 +590,19 @@ async def llm_call(
        else:
            raise ValueError("No response from Llama API.")

        tool_calls = extract_openai_tool_calls(response)
        reasoning = extract_openai_reasoning(response)
        if response.choices[0].message.tool_calls:
            tool_calls = [
                ToolContentBlock(
                    id=tool.id,
                    type=tool.type,
                    function=ToolCall(
                        name=tool.function.name, arguments=tool.function.arguments
                    ),
                )
                for tool in response.choices[0].message.tool_calls
            ]
        else:
            tool_calls = None

        return LLMResponse(
            raw_response=response.choices[0].message,

@@ -647,7 +611,6 @@ async def llm_call(
            tool_calls=tool_calls,
            prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
            completion_tokens=response.usage.completion_tokens if response.usage else 0,
            reasoning=reasoning,
        )
    elif provider == "aiml_api":
        client = openai.OpenAI(

@@ -671,7 +634,6 @@ async def llm_call(
                    completion_tokens=(
                        completion.usage.completion_tokens if completion.usage else 0
                    ),
                    reasoning=None,
                )
            else:
                raise ValueError(f"Unsupported LLM provider: {provider}")

@@ -785,7 +747,6 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
                    tool_calls=None,
                    prompt_tokens=0,
                    completion_tokens=0,
                    reasoning=None,
                )
            },
        )
@@ -452,33 +452,28 @@ class SmartDecisionMakerBlock(Block):
        if pending_tool_calls and input_data.last_tool_output is None:
            raise ValueError(f"Tool call requires an output for {pending_tool_calls}")

        # Only assign the last tool output to the first pending tool call
        tool_output = []
        if pending_tool_calls and input_data.last_tool_output is not None:
            # Get the first pending tool call ID
            first_call_id = next(iter(pending_tool_calls.keys()))
            tool_output.append(
                _create_tool_response(first_call_id, input_data.last_tool_output)
            # Prefill all missing tool calls with the last tool output.
            # TODO: we need a better way to handle this.
            tool_output = [
                _create_tool_response(pending_call_id, input_data.last_tool_output)
                for pending_call_id, count in pending_tool_calls.items()
                for _ in range(count)
            ]

            # If the SDM block only calls 1 tool at a time, this should not happen.
            if len(tool_output) > 1:
                logger.warning(
                    f"[SmartDecisionMakerBlock-node_exec_id={node_exec_id}] "
                    f"Multiple pending tool calls are prefilled using a single output. "
                    f"Execution may not be accurate."
                )

            # Add tool output to prompt right away
            prompt.extend(tool_output)

            # Check if there are still pending tool calls after handling the first one
            remaining_pending_calls = get_pending_tool_calls(prompt)

            # If there are still pending tool calls, yield the conversation and return early
            if remaining_pending_calls:
                yield "conversations", prompt
                return

        # Fallback on adding tool output in the conversation history as user prompt.
        elif input_data.last_tool_output:
            logger.error(
        if len(tool_output) == 0 and input_data.last_tool_output:
            logger.warning(
                f"[SmartDecisionMakerBlock-node_exec_id={node_exec_id}] "
                f"No pending tool calls found. This may indicate an issue with the "
                f"conversation history, or the tool giving response more than once."
                f"This should not happen! Please check the conversation history for any inconsistencies."
                f"conversation history, or an LLM calling two tools at the same time."
            )
            tool_output.append(
                {

@@ -486,7 +481,8 @@ class SmartDecisionMakerBlock(Block):
                    "content": f"Last tool output: {json.dumps(input_data.last_tool_output)}",
                }
            )
            prompt.extend(tool_output)

        prompt.extend(tool_output)
        if input_data.multiple_tool_calls:
            input_data.sys_prompt += "\nYou can call a tool (different tools) multiple times in a single response."
        else:

@@ -554,11 +550,5 @@ class SmartDecisionMakerBlock(Block):
        else:
            yield f"tools_^_{tool_name}_~_{arg_name}", None

        # Add reasoning to conversation history if available
        if response.reasoning:
            prompt.append(
                {"role": "assistant", "content": f"[Reasoning]: {response.reasoning}"}
            )

        prompt.append(response.raw_response)
        yield "conversations", prompt
        response.prompt.append(response.raw_response)
        yield "conversations", response.prompt
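For context, `_create_tool_response` is not shown in this diff; based on how its results are appended to `prompt` above, it presumably builds an OpenAI-style tool message, roughly:

# Assumed shape - not the actual helper from the codebase.
def _create_tool_response(call_id: str, output) -> dict:
    return {
        "role": "tool",
        "tool_call_id": call_id,
        "content": json.dumps(output),
    }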
@@ -9,117 +9,3 @@ from backend.util.test import execute_block_test
@pytest.mark.parametrize("block", get_blocks().values(), ids=lambda b: b.name)
async def test_available_blocks(block: Type[Block]):
    await execute_block_test(block())


@pytest.mark.parametrize("block", get_blocks().values(), ids=lambda b: b.name)
async def test_block_ids_valid(block: Type[Block]):
    # add the tests here to check they are uuid4
    import uuid

    # Skip list for blocks with known invalid UUIDs
    skip_blocks = {
        "GetWeatherInformationBlock",
        "CodeExecutionBlock",
        "CountdownTimerBlock",
        "TwitterGetListTweetsBlock",
        "TwitterRemoveListMemberBlock",
        "TwitterAddListMemberBlock",
        "TwitterGetListMembersBlock",
        "TwitterGetListMembershipsBlock",
        "TwitterUnfollowListBlock",
        "TwitterFollowListBlock",
        "TwitterUnpinListBlock",
        "TwitterPinListBlock",
        "TwitterGetPinnedListsBlock",
        "TwitterDeleteListBlock",
        "TwitterUpdateListBlock",
        "TwitterCreateListBlock",
        "TwitterGetListBlock",
        "TwitterGetOwnedListsBlock",
        "TwitterGetSpacesBlock",
        "TwitterGetSpaceByIdBlock",
        "TwitterGetSpaceBuyersBlock",
        "TwitterGetSpaceTweetsBlock",
        "TwitterSearchSpacesBlock",
        "TwitterGetUserMentionsBlock",
        "TwitterGetHomeTimelineBlock",
        "TwitterGetUserTweetsBlock",
        "TwitterGetTweetBlock",
        "TwitterGetTweetsBlock",
        "TwitterGetQuoteTweetsBlock",
        "TwitterLikeTweetBlock",
        "TwitterGetLikingUsersBlock",
        "TwitterGetLikedTweetsBlock",
        "TwitterUnlikeTweetBlock",
        "TwitterBookmarkTweetBlock",
        "TwitterGetBookmarkedTweetsBlock",
        "TwitterRemoveBookmarkTweetBlock",
        "TwitterRetweetBlock",
        "TwitterRemoveRetweetBlock",
        "TwitterGetRetweetersBlock",
        "TwitterHideReplyBlock",
        "TwitterUnhideReplyBlock",
        "TwitterPostTweetBlock",
        "TwitterDeleteTweetBlock",
        "TwitterSearchRecentTweetsBlock",
        "TwitterUnfollowUserBlock",
        "TwitterFollowUserBlock",
        "TwitterGetFollowersBlock",
        "TwitterGetFollowingBlock",
        "TwitterUnmuteUserBlock",
        "TwitterGetMutedUsersBlock",
        "TwitterMuteUserBlock",
        "TwitterGetBlockedUsersBlock",
        "TwitterGetUserBlock",
        "TwitterGetUsersBlock",
        "TodoistCreateLabelBlock",
        "TodoistListLabelsBlock",
        "TodoistGetLabelBlock",
        "TodoistUpdateLabelBlock",
        "TodoistDeleteLabelBlock",
        "TodoistGetSharedLabelsBlock",
        "TodoistRenameSharedLabelsBlock",
        "TodoistRemoveSharedLabelsBlock",
        "TodoistCreateTaskBlock",
        "TodoistGetTasksBlock",
        "TodoistGetTaskBlock",
        "TodoistUpdateTaskBlock",
        "TodoistCloseTaskBlock",
        "TodoistReopenTaskBlock",
        "TodoistDeleteTaskBlock",
        "TodoistListSectionsBlock",
        "TodoistGetSectionBlock",
        "TodoistDeleteSectionBlock",
        "TodoistCreateProjectBlock",
        "TodoistGetProjectBlock",
        "TodoistUpdateProjectBlock",
        "TodoistDeleteProjectBlock",
        "TodoistListCollaboratorsBlock",
        "TodoistGetCommentsBlock",
        "TodoistGetCommentBlock",
        "TodoistUpdateCommentBlock",
        "TodoistDeleteCommentBlock",
        "GithubListStargazersBlock",
        "Slant3DSlicerBlock",
    }

    block_instance = block()

    # Skip blocks with known invalid UUIDs
    if block_instance.__class__.__name__ in skip_blocks:
        pytest.skip(
            f"Skipping UUID check for {block_instance.__class__.__name__} - known invalid UUID"
        )

    # Check that the ID is not empty
    assert block_instance.id, f"Block {block.name} has empty ID"

    # Check that the ID is a valid UUID4
    try:
        parsed_uuid = uuid.UUID(block_instance.id)
        # Verify it's specifically UUID version 4
        assert (
            parsed_uuid.version == 4
        ), f"Block {block.name} ID is UUID version {parsed_uuid.version}, expected version 4"
    except ValueError:
        pytest.fail(f"Block {block.name} has invalid UUID format: {block_instance.id}")
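The deleted test's validation logic can be condensed into a small helper. A sketch of the same UUID4 check, using only the standard library:

```python
import uuid

def is_uuid4(value: str) -> bool:
    """Return True only for well-formed version-4 UUID strings."""
    try:
        return uuid.UUID(value).version == 4
    except ValueError:
        return False

assert is_uuid4("f47ac10b-58cc-4372-a567-0e02b2c3d479")
assert not is_uuid4("not-a-uuid")
```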
@@ -1,359 +0,0 @@
import asyncio
import random
from datetime import datetime

from faker import Faker
from prisma import Prisma

faker = Faker()


async def check_cron_job(db):
    """Check if the pg_cron job for refreshing materialized views exists."""
    print("\n1. Checking pg_cron job...")
    print("-" * 40)

    try:
        # Check if pg_cron extension exists
        extension_check = await db.query_raw("CREATE EXTENSION pg_cron;")
        print(extension_check)
        extension_check = await db.query_raw(
            "SELECT COUNT(*) as count FROM pg_extension WHERE extname = 'pg_cron'"
        )
        if extension_check[0]["count"] == 0:
            print("⚠️ pg_cron extension is not installed")
            return False

        # Check if the refresh job exists
        job_check = await db.query_raw(
            """
            SELECT jobname, schedule, command
            FROM cron.job
            WHERE jobname = 'refresh-store-views'
            """
        )

        if job_check:
            job = job_check[0]
            print("✅ pg_cron job found:")
            print(f" Name: {job['jobname']}")
            print(f" Schedule: {job['schedule']} (every 15 minutes)")
            print(f" Command: {job['command']}")
            return True
        else:
            print("⚠️ pg_cron job 'refresh-store-views' not found")
            return False

    except Exception as e:
        print(f"❌ Error checking pg_cron: {e}")
        return False


async def get_materialized_view_counts(db):
    """Get current counts from materialized views."""
    print("\n2. Getting current materialized view data...")
    print("-" * 40)

    # Get counts from mv_agent_run_counts
    agent_runs = await db.query_raw(
        """
        SELECT COUNT(*) as total_agents,
               SUM(run_count) as total_runs,
               MAX(run_count) as max_runs,
               MIN(run_count) as min_runs
        FROM mv_agent_run_counts
        """
    )

    # Get counts from mv_review_stats
    review_stats = await db.query_raw(
        """
        SELECT COUNT(*) as total_listings,
               SUM(review_count) as total_reviews,
               AVG(avg_rating) as overall_avg_rating
        FROM mv_review_stats
        """
    )

    # Get sample data from StoreAgent view
    store_agents = await db.query_raw(
        """
        SELECT COUNT(*) as total_store_agents,
               AVG(runs) as avg_runs,
               AVG(rating) as avg_rating
        FROM "StoreAgent"
        """
    )

    agent_run_data = agent_runs[0] if agent_runs else {}
    review_data = review_stats[0] if review_stats else {}
    store_data = store_agents[0] if store_agents else {}

    print("📊 mv_agent_run_counts:")
    print(f" Total agents: {agent_run_data.get('total_agents', 0)}")
    print(f" Total runs: {agent_run_data.get('total_runs', 0)}")
    print(f" Max runs per agent: {agent_run_data.get('max_runs', 0)}")
    print(f" Min runs per agent: {agent_run_data.get('min_runs', 0)}")

    print("\n📊 mv_review_stats:")
    print(f" Total listings: {review_data.get('total_listings', 0)}")
    print(f" Total reviews: {review_data.get('total_reviews', 0)}")
    print(f" Overall avg rating: {review_data.get('overall_avg_rating') or 0:.2f}")

    print("\n📊 StoreAgent view:")
    print(f" Total store agents: {store_data.get('total_store_agents', 0)}")
    print(f" Average runs: {store_data.get('avg_runs') or 0:.2f}")
    print(f" Average rating: {store_data.get('avg_rating') or 0:.2f}")

    return {
        "agent_runs": agent_run_data,
        "reviews": review_data,
        "store_agents": store_data,
    }


async def add_test_data(db):
    """Add some test data to verify materialized view updates."""
    print("\n3. Adding test data...")
    print("-" * 40)

    # Get some existing data
    users = await db.user.find_many(take=5)
    graphs = await db.agentgraph.find_many(take=5)

    if not users or not graphs:
        print("❌ No existing users or graphs found. Run test_data_creator.py first.")
        return False

    # Add new executions
    print("Adding new agent graph executions...")
    new_executions = 0
    for graph in graphs:
        for _ in range(random.randint(2, 5)):
            await db.agentgraphexecution.create(
                data={
                    "agentGraphId": graph.id,
                    "agentGraphVersion": graph.version,
                    "userId": random.choice(users).id,
                    "executionStatus": "COMPLETED",
                    "startedAt": datetime.now(),
                }
            )
            new_executions += 1

    print(f"✅ Added {new_executions} new executions")

    # Check if we need to create store listings first
    store_versions = await db.storelistingversion.find_many(
        where={"submissionStatus": "APPROVED"}, take=5
    )

    if not store_versions:
        print("\nNo approved store listings found. Creating test store listings...")

        # Create store listings for existing agent graphs
        for i, graph in enumerate(graphs[:3]):  # Create up to 3 store listings
            # Create a store listing
            listing = await db.storelisting.create(
                data={
                    "slug": f"test-agent-{graph.id[:8]}",
                    "agentGraphId": graph.id,
                    "agentGraphVersion": graph.version,
                    "hasApprovedVersion": True,
                    "owningUserId": graph.userId,
                }
            )

            # Create an approved version
            version = await db.storelistingversion.create(
                data={
                    "storeListingId": listing.id,
                    "agentGraphId": graph.id,
                    "agentGraphVersion": graph.version,
                    "name": f"Test Agent {i+1}",
                    "subHeading": faker.catch_phrase(),
                    "description": faker.paragraph(nb_sentences=5),
                    "imageUrls": [faker.image_url()],
                    "categories": ["productivity", "automation"],
                    "submissionStatus": "APPROVED",
                    "submittedAt": datetime.now(),
                }
            )

            # Update listing with active version
            await db.storelisting.update(
                where={"id": listing.id}, data={"activeVersionId": version.id}
            )

        print("✅ Created test store listings")

        # Re-fetch approved versions
        store_versions = await db.storelistingversion.find_many(
            where={"submissionStatus": "APPROVED"}, take=5
        )

    # Add new reviews
    print("\nAdding new store listing reviews...")
    new_reviews = 0
    for version in store_versions:
        # Find users who haven't reviewed this version
        existing_reviews = await db.storelistingreview.find_many(
            where={"storeListingVersionId": version.id}
        )
        reviewed_user_ids = {r.reviewByUserId for r in existing_reviews}
        available_users = [u for u in users if u.id not in reviewed_user_ids]

        if available_users:
            user = random.choice(available_users)
            await db.storelistingreview.create(
                data={
                    "storeListingVersionId": version.id,
                    "reviewByUserId": user.id,
                    "score": random.randint(3, 5),
                    "comments": faker.text(max_nb_chars=100),
                }
            )
            new_reviews += 1

    print(f"✅ Added {new_reviews} new reviews")

    return True


async def refresh_materialized_views(db):
    """Manually refresh the materialized views."""
    print("\n4. Manually refreshing materialized views...")
    print("-" * 40)

    try:
        await db.execute_raw("SELECT refresh_store_materialized_views();")
        print("✅ Materialized views refreshed successfully")
        return True
    except Exception as e:
        print(f"❌ Error refreshing views: {e}")
        return False


async def compare_counts(before, after):
    """Compare counts before and after refresh."""
    print("\n5. Comparing counts before and after refresh...")
    print("-" * 40)

    # Compare agent runs
    print("🔍 Agent run changes:")
    before_runs = before["agent_runs"].get("total_runs") or 0
    after_runs = after["agent_runs"].get("total_runs") or 0
    print(
        f" Total runs: {before_runs} → {after_runs} " f"(+{after_runs - before_runs})"
    )

    # Compare reviews
    print("\n🔍 Review changes:")
    before_reviews = before["reviews"].get("total_reviews") or 0
    after_reviews = after["reviews"].get("total_reviews") or 0
    print(
        f" Total reviews: {before_reviews} → {after_reviews} "
        f"(+{after_reviews - before_reviews})"
    )

    # Compare store agents
    print("\n🔍 StoreAgent view changes:")
    before_avg_runs = before["store_agents"].get("avg_runs", 0) or 0
    after_avg_runs = after["store_agents"].get("avg_runs", 0) or 0
    print(
        f" Average runs: {before_avg_runs:.2f} → {after_avg_runs:.2f} "
        f"(+{after_avg_runs - before_avg_runs:.2f})"
    )

    # Verify changes occurred
    runs_changed = (after["agent_runs"].get("total_runs") or 0) > (
        before["agent_runs"].get("total_runs") or 0
    )
    reviews_changed = (after["reviews"].get("total_reviews") or 0) > (
        before["reviews"].get("total_reviews") or 0
    )

    if runs_changed and reviews_changed:
        print("\n✅ Materialized views are updating correctly!")
        return True
    else:
        print("\n⚠️ Some materialized views may not have updated:")
        if not runs_changed:
            print(" - Agent run counts did not increase")
        if not reviews_changed:
            print(" - Review counts did not increase")
        return False


async def main():
    db = Prisma()
    await db.connect()

    print("=" * 60)
    print("Materialized Views Test")
    print("=" * 60)

    try:
        # Check if data exists
        user_count = await db.user.count()
        if user_count == 0:
            print("❌ No data in database. Please run test_data_creator.py first.")
            await db.disconnect()
            return

        # 1. Check cron job
        cron_exists = await check_cron_job(db)

        # 2. Get initial counts
        counts_before = await get_materialized_view_counts(db)

        # 3. Add test data
        data_added = await add_test_data(db)
        refresh_success = False

        if data_added:
            # Wait a moment for data to be committed
            print("\nWaiting for data to be committed...")
            await asyncio.sleep(2)

            # 4. Manually refresh views
            refresh_success = await refresh_materialized_views(db)

            if refresh_success:
                # 5. Get counts after refresh
                counts_after = await get_materialized_view_counts(db)

                # 6. Compare results
                await compare_counts(counts_before, counts_after)

        # Summary
        print("\n" + "=" * 60)
        print("Test Summary")
        print("=" * 60)
        print(f"✓ pg_cron job exists: {'Yes' if cron_exists else 'No'}")
        print(f"✓ Test data added: {'Yes' if data_added else 'No'}")
        print(f"✓ Manual refresh worked: {'Yes' if refresh_success else 'No'}")
        print(
            f"✓ Views updated correctly: {'Yes' if data_added and refresh_success else 'Cannot verify'}"
        )

        if cron_exists:
            print(
                "\n💡 The materialized views will also refresh automatically every 15 minutes via pg_cron."
            )
        else:
            print(
                "\n⚠️ Automatic refresh is not configured. Views must be refreshed manually."
            )

    except Exception as e:
        print(f"\n❌ Test failed with error: {e}")
        import traceback

        traceback.print_exc()

    await db.disconnect()


if __name__ == "__main__":
    asyncio.run(main())
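The deleted script assumes a `refresh_store_materialized_views()` SQL function and a pg_cron job named `refresh-store-views`. For reference, a hedged sketch of how such a job could be registered, assuming pg_cron is installed and the function exists (neither is shown in this diff):

```python
from prisma import Prisma

async def schedule_view_refresh(db: Prisma) -> None:
    # cron.schedule(jobname, schedule, command) comes from the pg_cron
    # extension; '*/15 * * * *' matches the every-15-minutes cadence the
    # deleted script reports.
    await db.execute_raw(
        "SELECT cron.schedule("
        "'refresh-store-views', '*/15 * * * *', "
        "'SELECT refresh_store_materialized_views()');"
    )
```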
@@ -1,159 +0,0 @@
#!/usr/bin/env python3
"""Check store-related data in the database."""

import asyncio

from prisma import Prisma


async def check_store_data(db):
    """Check what store data exists in the database."""

    print("============================================================")
    print("Store Data Check")
    print("============================================================")

    # Check store listings
    print("\n1. Store Listings:")
    print("-" * 40)
    listings = await db.storelisting.find_many()
    print(f"Total store listings: {len(listings)}")

    if listings:
        for listing in listings[:5]:
            print(f"\nListing ID: {listing.id}")
            print(f" Name: {listing.name}")
            print(f" Status: {listing.status}")
            print(f" Slug: {listing.slug}")

    # Check store listing versions
    print("\n\n2. Store Listing Versions:")
    print("-" * 40)
    versions = await db.storelistingversion.find_many(include={"StoreListing": True})
    print(f"Total store listing versions: {len(versions)}")

    # Group by submission status
    status_counts = {}
    for version in versions:
        status = version.submissionStatus
        status_counts[status] = status_counts.get(status, 0) + 1

    print("\nVersions by status:")
    for status, count in status_counts.items():
        print(f" {status}: {count}")

    # Show approved versions
    approved_versions = [v for v in versions if v.submissionStatus == "APPROVED"]
    print(f"\nApproved versions: {len(approved_versions)}")
    if approved_versions:
        for version in approved_versions[:5]:
            print(f"\n Version ID: {version.id}")
            print(f" Listing: {version.StoreListing.name}")
            print(f" Version: {version.version}")

    # Check store listing reviews
    print("\n\n3. Store Listing Reviews:")
    print("-" * 40)
    reviews = await db.storelistingreview.find_many(
        include={"StoreListingVersion": {"include": {"StoreListing": True}}}
    )
    print(f"Total reviews: {len(reviews)}")

    if reviews:
        # Calculate average rating
        total_score = sum(r.score for r in reviews)
        avg_score = total_score / len(reviews) if reviews else 0
        print(f"Average rating: {avg_score:.2f}")

        # Show sample reviews
        print("\nSample reviews:")
        for review in reviews[:3]:
            print(f"\n Review for: {review.StoreListingVersion.StoreListing.name}")
            print(f" Score: {review.score}")
            print(f" Comments: {review.comments[:100]}...")

    # Check StoreAgent view data
    print("\n\n4. StoreAgent View Data:")
    print("-" * 40)

    # Query the StoreAgent view
    query = """
    SELECT
        sa.listing_id,
        sa.slug,
        sa.agent_name,
        sa.description,
        sa.featured,
        sa.runs,
        sa.rating,
        sa.creator_username,
        sa.categories,
        sa.updated_at
    FROM "StoreAgent" sa
    LIMIT 10;
    """

    store_agents = await db.query_raw(query)
    print(f"Total store agents in view: {len(store_agents)}")

    if store_agents:
        for agent in store_agents[:5]:
            print(f"\nStore Agent: {agent['agent_name']}")
            print(f" Slug: {agent['slug']}")
            print(f" Runs: {agent['runs']}")
            print(f" Rating: {agent['rating']}")
            print(f" Creator: {agent['creator_username']}")

    # Check the underlying data that should populate StoreAgent
    print("\n\n5. Data that should populate StoreAgent view:")
    print("-" * 40)

    # Check for any APPROVED store listing versions
    query = """
    SELECT COUNT(*) as count
    FROM "StoreListingVersion"
    WHERE "submissionStatus" = 'APPROVED'
    """

    result = await db.query_raw(query)
    approved_count = result[0]["count"] if result else 0
    print(f"Approved store listing versions: {approved_count}")

    # Check for store listings with hasApprovedVersion = true
    query = """
    SELECT COUNT(*) as count
    FROM "StoreListing"
    WHERE "hasApprovedVersion" = true AND "isDeleted" = false
    """

    result = await db.query_raw(query)
    has_approved_count = result[0]["count"] if result else 0
    print(f"Store listings with approved versions: {has_approved_count}")

    # Check agent graph executions
    query = """
    SELECT COUNT(DISTINCT "agentGraphId") as unique_agents,
           COUNT(*) as total_executions
    FROM "AgentGraphExecution"
    """

    result = await db.query_raw(query)
    if result:
        print("\nAgent Graph Executions:")
        print(f" Unique agents with executions: {result[0]['unique_agents']}")
        print(f" Total executions: {result[0]['total_executions']}")


async def main():
    """Main function."""
    db = Prisma()
    await db.connect()

    try:
        await check_store_data(db)
    finally:
        await db.disconnect()


if __name__ == "__main__":
    asyncio.run(main())
@@ -425,7 +425,28 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
            raise ValueError(f"{self.name} did not produce any output for {output}")

    def merge_stats(self, stats: NodeExecutionStats) -> NodeExecutionStats:
        self.execution_stats += stats
        stats_dict = stats.model_dump()
        current_stats = self.execution_stats.model_dump()

        for key, value in stats_dict.items():
            if key not in current_stats:
                # Field doesn't exist yet; just set it. This shouldn't normally
                # happen, but the NodeExecutionStats construction below will
                # raise on anything invalid when converting back.
                current_stats[key] = value
            elif isinstance(value, dict) and isinstance(current_stats[key], dict):
                current_stats[key].update(value)
            elif isinstance(value, (int, float)) and isinstance(
                current_stats[key], (int, float)
            ):
                current_stats[key] += value
            elif isinstance(value, list) and isinstance(current_stats[key], list):
                current_stats[key].extend(value)
            else:
                current_stats[key] = value

        self.execution_stats = NodeExecutionStats(**current_stats)

        return self.execution_stats

    @property
@@ -492,12 +513,6 @@ def get_blocks() -> dict[str, Type[Block]]:


async def initialize_blocks() -> None:
    # First, sync all provider costs to blocks
    # Imported here to avoid circular import
    from backend.sdk.cost_integration import sync_all_provider_costs

    sync_all_provider_costs()

    for cls in get_blocks().values():
        block = cls()
        existing_block = await AgentBlock.prisma().find_first(
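The merge rules in `merge_stats` above are worth isolating: dicts are merged shallowly, numbers summed, lists concatenated, and anything else overwritten. A self-contained sketch of the same logic on plain dicts:

```python
def merge_stat_dicts(current: dict, incoming: dict) -> dict:
    # Same merge rules as merge_stats above: merge dicts shallowly,
    # sum numbers, extend lists, and overwrite everything else.
    merged = dict(current)
    for key, value in incoming.items():
        if key not in merged:
            merged[key] = value
        elif isinstance(value, dict) and isinstance(merged[key], dict):
            merged[key] = {**merged[key], **value}
        elif isinstance(value, (int, float)) and isinstance(merged[key], (int, float)):
            merged[key] += value
        elif isinstance(value, list) and isinstance(merged[key], list):
            merged[key] = merged[key] + value
        else:
            merged[key] = value
    return merged

assert merge_stat_dicts({"cost": 2, "tags": ["a"]}, {"cost": 3, "tags": ["b"]}) == {
    "cost": 5,
    "tags": ["a", "b"],
}
```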
@@ -85,9 +85,6 @@ MODEL_COST: dict[LlmModel, int] = {
    LlmModel.EVA_QWEN_2_5_32B: 1,
    LlmModel.DEEPSEEK_CHAT: 2,
    LlmModel.PERPLEXITY_LLAMA_3_1_SONAR_LARGE_128K_ONLINE: 1,
    LlmModel.PERPLEXITY_SONAR: 1,
    LlmModel.PERPLEXITY_SONAR_PRO: 5,
    LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: 10,
    LlmModel.QWEN_QWQ_32B_PREVIEW: 2,
    LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_405B: 1,
    LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_70B: 1,
@@ -93,28 +93,6 @@ async def locked_transaction(key: str):
    yield tx


def get_database_schema() -> str:
    """Extract database schema from DATABASE_URL."""
    parsed_url = urlparse(DATABASE_URL)
    query_params = dict(parse_qsl(parsed_url.query))
    return query_params.get("schema", "public")


async def query_raw_with_schema(query_template: str, *args) -> list[dict]:
    """Execute raw SQL query with proper schema handling."""
    schema = get_database_schema()
    schema_prefix = f"{schema}." if schema != "public" else ""
    formatted_query = query_template.format(schema_prefix=schema_prefix)

    import prisma as prisma_module

    result = await prisma_module.get_client().query_raw(
        formatted_query, *args  # type: ignore
    )

    return result


class BaseDbModel(BaseModel):
    id: str = Field(default_factory=lambda: str(uuid4()))
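The removed `get_database_schema` reads an optional `schema` query parameter from the connection URL. A minimal standalone sketch of the same extraction, using only the standard library:

```python
from urllib.parse import parse_qsl, urlparse

def schema_from_url(database_url: str) -> str:
    """Mirror of get_database_schema above: read ?schema=... or default to public."""
    return dict(parse_qsl(urlparse(database_url).query)).get("schema", "public")

assert schema_from_url("postgresql://u:p@localhost:5432/db?schema=platform") == "platform"
assert schema_from_url("postgresql://u:p@localhost:5432/db") == "public"
```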
@@ -49,7 +49,7 @@ from .block import (
    get_io_block_ids,
    get_webhook_block_ids,
)
from .db import BaseDbModel, query_raw_with_schema
from .db import BaseDbModel
from .event_bus import AsyncRedisEventBus, RedisEventBus
from .includes import (
    EXECUTION_RESULT_INCLUDE,
@@ -68,21 +68,6 @@ config = Config()
# -------------------------- Models -------------------------- #


class BlockErrorStats(BaseModel):
    """Typed data structure for block error statistics."""

    block_id: str
    total_executions: int
    failed_executions: int

    @property
    def error_rate(self) -> float:
        """Calculate error rate as a percentage."""
        if self.total_executions == 0:
            return 0.0
        return (self.failed_executions / self.total_executions) * 100


ExecutionStatus = AgentExecutionStatus


@@ -372,7 +357,6 @@ async def get_graph_executions(
    created_time_lte: datetime | None = None,
    limit: int | None = None,
) -> list[GraphExecutionMeta]:
    """⚠️ **Optional `user_id` check**: MUST USE check in user-facing endpoints."""
    where_filter: AgentGraphExecutionWhereInput = {
        "isDeleted": False,
    }
@@ -738,7 +722,6 @@ async def delete_graph_execution(


async def get_node_execution(node_exec_id: str) -> NodeExecutionResult | None:
    """⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
    execution = await AgentNodeExecution.prisma().find_first(
        where={"id": node_exec_id},
        include=EXECUTION_RESULT_INCLUDE,
@@ -749,19 +732,15 @@ async def get_node_execution(node_exec_id: str) -> NodeExecutionResult | None:


async def get_node_executions(
    graph_exec_id: str | None = None,
    graph_exec_id: str,
    node_id: str | None = None,
    block_ids: list[str] | None = None,
    statuses: list[ExecutionStatus] | None = None,
    limit: int | None = None,
    created_time_gte: datetime | None = None,
    created_time_lte: datetime | None = None,
    include_exec_data: bool = True,
) -> list[NodeExecutionResult]:
    """⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
    where_clause: AgentNodeExecutionWhereInput = {}
    if graph_exec_id:
        where_clause["agentGraphExecutionId"] = graph_exec_id
    where_clause: AgentNodeExecutionWhereInput = {
        "agentGraphExecutionId": graph_exec_id,
    }
    if node_id:
        where_clause["agentNodeId"] = node_id
    if block_ids:
@@ -769,19 +748,9 @@ async def get_node_executions(
    if statuses:
        where_clause["OR"] = [{"executionStatus": status} for status in statuses]

    if created_time_gte or created_time_lte:
        where_clause["addedTime"] = {
            "gte": created_time_gte or datetime.min.replace(tzinfo=timezone.utc),
            "lte": created_time_lte or datetime.max.replace(tzinfo=timezone.utc),
        }

    executions = await AgentNodeExecution.prisma().find_many(
        where=where_clause,
        include=(
            EXECUTION_RESULT_INCLUDE
            if include_exec_data
            else {"Node": True, "GraphExecution": True}
        ),
        include=EXECUTION_RESULT_INCLUDE,
        order=EXECUTION_RESULT_ORDER,
        take=limit,
    )
@@ -792,7 +761,6 @@ async def get_node_executions(
async def get_latest_node_execution(
    node_id: str, graph_eid: str
) -> NodeExecutionResult | None:
    """⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
    execution = await AgentNodeExecution.prisma().find_first(
        where={
            "agentGraphExecutionId": graph_eid,
@@ -995,33 +963,3 @@ async def set_execution_kv_data(
        },
    )
    return type_utils.convert(resp.data, type[Any]) if resp and resp.data else None


async def get_block_error_stats(
    start_time: datetime, end_time: datetime
) -> list[BlockErrorStats]:
    """Get block execution stats using efficient SQL aggregation."""

    query_template = """
    SELECT
        n."agentBlockId" as block_id,
        COUNT(*) as total_executions,
        SUM(CASE WHEN ne."executionStatus" = 'FAILED' THEN 1 ELSE 0 END) as failed_executions
    FROM {schema_prefix}"AgentNodeExecution" ne
    JOIN {schema_prefix}"AgentNode" n ON ne."agentNodeId" = n.id
    WHERE ne."addedTime" >= $1::timestamp AND ne."addedTime" <= $2::timestamp
    GROUP BY n."agentBlockId"
    HAVING COUNT(*) >= 10
    """

    result = await query_raw_with_schema(query_template, start_time, end_time)

    # Convert to typed data structures
    return [
        BlockErrorStats(
            block_id=row["block_id"],
            total_executions=int(row["total_executions"]),
            failed_executions=int(row["failed_executions"]),
        )
        for row in result
    ]
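The removed `BlockErrorStats.error_rate` property guards the zero-execution case before dividing. A plain-Python sketch of the same calculation, using a dataclass in place of the pydantic model:

```python
from dataclasses import dataclass

@dataclass
class ErrorStats:
    # Plain-Python mirror of the removed BlockErrorStats model.
    total_executions: int
    failed_executions: int

    @property
    def error_rate(self) -> float:
        # Guard against division by zero, as in the original property.
        if self.total_executions == 0:
            return 0.0
        return self.failed_executions / self.total_executions * 100

assert ErrorStats(200, 15).error_rate == 7.5
assert ErrorStats(0, 0).error_rate == 0.0
```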
@@ -3,6 +3,7 @@ import uuid
from collections import defaultdict
from typing import TYPE_CHECKING, Any, Literal, Optional, cast

import prisma
from prisma import Json
from prisma.enums import SubmissionStatus
from prisma.models import AgentGraph, AgentNode, AgentNodeLink, StoreListingVersion
@@ -13,7 +14,7 @@ from prisma.types import (
    AgentNodeLinkCreateInput,
    StoreListingVersionWhereInput,
)
from pydantic import Field, JsonValue, create_model
from pydantic import JsonValue, create_model
from pydantic.fields import computed_field

from backend.blocks.agent import AgentExecutorBlock
@@ -30,7 +31,7 @@ from backend.integrations.providers import ProviderName
from backend.util import type as type_utils

from .block import Block, BlockInput, BlockSchema, BlockType, get_block, get_blocks
from .db import BaseDbModel, query_raw_with_schema, transaction
from .db import BaseDbModel, transaction
from .includes import AGENT_GRAPH_INCLUDE, AGENT_NODE_INCLUDE

if TYPE_CHECKING:
@@ -188,23 +189,6 @@ class BaseGraph(BaseDbModel):
        )
    )

    @computed_field
    @property
    def has_external_trigger(self) -> bool:
        return self.webhook_input_node is not None

    @property
    def webhook_input_node(self) -> Node | None:
        return next(
            (
                node
                for node in self.nodes
                if node.block.block_type
                in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL)
            ),
            None,
        )

    @staticmethod
    def _generate_schema(
        *props: tuple[type[AgentInputBlock.Input] | type[AgentOutputBlock.Input], dict],
@@ -342,6 +326,11 @@ class GraphModel(Graph):
    user_id: str
    nodes: list[NodeModel] = []  # type: ignore

    @computed_field
    @property
    def has_webhook_trigger(self) -> bool:
        return self.webhook_input_node is not None

    @property
    def starting_nodes(self) -> list[NodeModel]:
        outbound_nodes = {link.sink_id for link in self.links}
@@ -354,12 +343,17 @@ class GraphModel(Graph):
            if node.id not in outbound_nodes or node.id in input_nodes
        ]

    def meta(self) -> "GraphMeta":
        """
        Returns a GraphMeta object with metadata about the graph.
        This is used to return metadata about the graph without exposing nodes and links.
        """
        return GraphMeta.from_graph(self)
    @property
    def webhook_input_node(self) -> NodeModel | None:
        return next(
            (
                node
                for node in self.nodes
                if node.block.block_type
                in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL)
            ),
            None,
        )

    def reassign_ids(self, user_id: str, reassign_graph_id: bool = False):
        """
@@ -395,10 +389,8 @@ class GraphModel(Graph):

    # Reassign Link IDs
    for link in graph.links:
        if link.source_id in id_map:
            link.source_id = id_map[link.source_id]
        if link.sink_id in id_map:
            link.sink_id = id_map[link.sink_id]
        link.source_id = id_map[link.source_id]
        link.sink_id = id_map[link.sink_id]

    # Reassign User IDs for agent blocks
    for node in graph.nodes:
@@ -618,18 +610,6 @@ class GraphModel(Graph):
        )


class GraphMeta(Graph):
    user_id: str

    # Easy work-around to prevent exposing nodes and links in the API response
    nodes: list[NodeModel] = Field(default=[], exclude=True)  # type: ignore
    links: list[Link] = Field(default=[], exclude=True)

    @staticmethod
    def from_graph(graph: GraphModel) -> "GraphMeta":
        return GraphMeta(**graph.model_dump())


# --------------------- CRUD functions --------------------- #


@@ -658,10 +638,10 @@ async def set_node_webhook(node_id: str, webhook_id: str | None) -> NodeModel:
    return NodeModel.from_db(node)


async def list_graphs(
async def get_graphs(
    user_id: str,
    filter_by: Literal["active"] | None = "active",
) -> list[GraphMeta]:
) -> list[GraphModel]:
    """
    Retrieves graph metadata objects.
    Default behaviour is to get all currently active graphs.
@@ -671,7 +651,7 @@ async def list_graphs(
        user_id: The ID of the user that owns the graph.

    Returns:
        list[GraphMeta]: A list of objects representing the retrieved graphs.
        list[GraphModel]: A list of objects representing the retrieved graphs.
    """
    where_clause: AgentGraphWhereInput = {"userId": user_id}

@@ -685,13 +665,13 @@ async def list_graphs(
        include=AGENT_GRAPH_INCLUDE,
    )

    graph_models: list[GraphMeta] = []
    graph_models = []
    for graph in graphs:
        try:
            graph_meta = GraphModel.from_db(graph).meta()
            # Trigger serialization to validate that the graph is well formed
            graph_meta.model_dump()
            graph_models.append(graph_meta)
            graph_model = GraphModel.from_db(graph)
            # Trigger serialization to validate that the graph is well formed.
            graph_model.model_dump()
            graph_models.append(graph_model)
        except Exception as e:
            logger.error(f"Error processing graph {graph.id}: {e}")
            continue
@@ -1058,13 +1038,13 @@ async def fix_llm_provider_credentials():

    broken_nodes = []
    try:
        broken_nodes = await query_raw_with_schema(
        broken_nodes = await prisma.get_client().query_raw(
            """
            SELECT graph."userId" user_id,
                   node.id node_id,
                   node."constantInput" node_preset_input
            FROM {schema_prefix}"AgentNode" node
            LEFT JOIN {schema_prefix}"AgentGraph" graph
            FROM platform."AgentNode" node
            LEFT JOIN platform."AgentGraph" graph
            ON node."agentGraphId" = graph.id
            WHERE node."constantInput"::jsonb->'credentials'->>'provider' = 'llm'
            ORDER BY graph."userId";
@@ -42,9 +42,6 @@ from pydantic_core import (
from backend.integrations.providers import ProviderName
from backend.util.settings import Secrets

# Type alias for any provider name (including custom ones)
AnyProviderName = str  # Will be validated as ProviderName at runtime

if TYPE_CHECKING:
    from backend.data.block import BlockSchema

@@ -344,7 +341,7 @@ class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
    type: CT

    @classmethod
    def allowed_providers(cls) -> tuple[ProviderName, ...] | None:
    def allowed_providers(cls) -> tuple[ProviderName, ...]:
        return get_args(cls.model_fields["provider"].annotation)

    @classmethod
@@ -369,12 +366,7 @@ class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
            f"{field_schema}"
        ) from e

        providers = cls.allowed_providers()
        if (
            providers is not None
            and len(providers) > 1
            and not schema_extra.discriminator
        ):
        if len(cls.allowed_providers()) > 1 and not schema_extra.discriminator:
            raise TypeError(
                f"Multi-provider CredentialsField '{field_name}' "
                "requires discriminator!"
@@ -386,12 +378,7 @@ class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
        if hasattr(model_class, "allowed_providers") and hasattr(
            model_class, "allowed_cred_types"
        ):
            allowed_providers = model_class.allowed_providers()
            # If no specific providers (None), allow any string
            if allowed_providers is None:
                schema["credentials_provider"] = ["string"]  # Allow any string provider
            else:
                schema["credentials_provider"] = allowed_providers
            schema["credentials_provider"] = model_class.allowed_providers()
            schema["credentials_types"] = model_class.allowed_cred_types()
            # Do not return anything, just mutate schema in place

@@ -553,11 +540,6 @@ def CredentialsField(
        if v is not None
    }

    # Merge any json_schema_extra passed in kwargs
    if "json_schema_extra" in kwargs:
        extra_schema = kwargs.pop("json_schema_extra")
        field_schema_extra.update(extra_schema)

    return Field(
        title=title,
        description=description,
@@ -636,35 +618,6 @@ class NodeExecutionStats(BaseModel):
    llm_retry_count: int = 0
    input_token_count: int = 0
    output_token_count: int = 0
    extra_cost: int = 0
    extra_steps: int = 0

    def __iadd__(self, other: "NodeExecutionStats") -> "NodeExecutionStats":
        """Mutate this instance by adding another NodeExecutionStats."""
        if not isinstance(other, NodeExecutionStats):
            return NotImplemented

        stats_dict = other.model_dump()
        current_stats = self.model_dump()

        for key, value in stats_dict.items():
            if key not in current_stats:
                # Field doesn't exist yet, just set it
                setattr(self, key, value)
            elif isinstance(value, dict) and isinstance(current_stats[key], dict):
                current_stats[key].update(value)
                setattr(self, key, current_stats[key])
            elif isinstance(value, (int, float)) and isinstance(
                current_stats[key], (int, float)
            ):
                setattr(self, key, current_stats[key] + value)
            elif isinstance(value, list) and isinstance(current_stats[key], list):
                current_stats[key].extend(value)
                setattr(self, key, current_stats[key])
            else:
                setattr(self, key, value)

        return self


class GraphExecutionStats(BaseModel):
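The removed `__iadd__` is what made `stats += other` work on the model. A reduced sketch of the same operator pattern on a two-field pydantic model (field names here are illustrative, not the real `NodeExecutionStats` schema):

```python
from pydantic import BaseModel

class Stats(BaseModel):
    cost: int = 0
    walltime: float = 0.0

    def __iadd__(self, other: "Stats") -> "Stats":
        # Field-wise accumulation, as in the removed NodeExecutionStats.__iadd__.
        if not isinstance(other, Stats):
            return NotImplemented
        self.cost += other.cost
        self.walltime += other.walltime
        return self

total = Stats()
total += Stats(cost=3, walltime=1.5)
total += Stats(cost=2, walltime=0.5)
assert (total.cost, total.walltime) == (5, 2.0)
```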
@@ -5,7 +5,6 @@ from backend.data import db
from backend.data.credit import UsageTransactionMetadata, get_user_credit_model
from backend.data.execution import (
    create_graph_execution,
    get_block_error_stats,
    get_execution_kv_data,
    get_graph_execution,
    get_graph_execution_meta,
@@ -106,7 +105,6 @@ class DatabaseManager(AppService):
    upsert_execution_output = _(upsert_execution_output)
    get_execution_kv_data = _(get_execution_kv_data)
    set_execution_kv_data = _(set_execution_kv_data)
    get_block_error_stats = _(get_block_error_stats)

    # Graphs
    get_node = _(get_node)
@@ -201,9 +199,6 @@ class DatabaseManagerClient(AppServiceClient):
        d.get_user_notification_oldest_message_in_batch
    )

    # Block error monitoring
    get_block_error_stats = _(d.get_block_error_stats)


class DatabaseManagerAsyncClient(AppServiceClient):
    d = DatabaseManager
@@ -231,4 +226,3 @@ class DatabaseManagerAsyncClient(AppServiceClient):
    update_user_integrations = d.update_user_integrations
    get_execution_kv_data = d.get_execution_kv_data
    set_execution_kv_data = d.set_execution_kv_data
    get_block_error_stats = d.get_block_error_stats
@@ -207,7 +207,9 @@ async def execute_node(

    # Update execution stats
    if execution_stats is not None:
        execution_stats += node_block.execution_stats
        execution_stats = execution_stats.model_copy(
            update=node_block.execution_stats.model_dump()
        )
        execution_stats.input_size = input_size
        execution_stats.output_size = output_size

@@ -646,10 +648,9 @@ class Executor:
                return

            nonlocal execution_stats
            execution_stats.node_count += 1 + result.extra_steps
            execution_stats.node_count += 1
            execution_stats.nodes_cputime += result.cputime
            execution_stats.nodes_walltime += result.walltime
            execution_stats.cost += result.extra_cost
            if (err := result.error) and isinstance(err, Exception):
                execution_stats.node_error_count += 1
                update_node_execution_status(
@@ -876,7 +877,6 @@ class Executor:
                ExecutionStatus.QUEUED,
                ExecutionStatus.RUNNING,
            ],
            include_exec_data=False,
        )
        db_client.update_node_execution_status_batch(
            [node_exec.node_exec_id for node_exec in inflight_executions],
@@ -1,6 +1,7 @@
import asyncio
import logging
import os
from datetime import datetime, timedelta, timezone
from enum import Enum
from typing import Optional
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse
@@ -13,23 +14,25 @@ from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.triggers.cron import CronTrigger
from autogpt_libs.utils.cache import thread_cached
from dotenv import load_dotenv
from prisma.enums import NotificationType
from pydantic import BaseModel, Field, ValidationError
from sqlalchemy import MetaData, create_engine

from backend.data.block import BlockInput
from backend.data.execution import GraphExecutionWithNodes
from backend.data.execution import ExecutionStatus
from backend.data.model import CredentialsMetaInput
from backend.executor import utils as execution_utils
from backend.monitoring import (
    NotificationJobArgs,
    process_existing_batches,
    process_weekly_summary,
    report_block_error_rates,
    report_late_executions,
)
from backend.notifications.notifications import NotificationManagerClient
from backend.util.exceptions import NotAuthorizedError, NotFoundError
from backend.util.logging import PrefixFilter
from backend.util.service import AppService, AppServiceClient, endpoint_to_async, expose
from backend.util.metrics import sentry_capture_error
from backend.util.service import (
    AppService,
    AppServiceClient,
    endpoint_to_async,
    expose,
    get_service_client,
)
from backend.util.settings import Config


@@ -68,6 +71,11 @@ def job_listener(event):
        logger.info(f"Job {event.job_id} completed successfully.")


@thread_cached
def get_notification_client():
    return get_service_client(NotificationManagerClient)


@thread_cached
def get_event_loop():
    return asyncio.new_event_loop()
@@ -81,7 +89,7 @@ async def _execute_graph(**kwargs):
    args = GraphExecutionJobArgs(**kwargs)
    try:
        logger.info(f"Executing recurring job for graph #{args.graph_id}")
        graph_exec: GraphExecutionWithNodes = await execution_utils.add_graph_execution(
        await execution_utils.add_graph_execution(
            user_id=args.user_id,
            graph_id=args.graph_id,
            graph_version=args.graph_version,
@@ -89,14 +97,65 @@ async def _execute_graph(**kwargs):
            graph_credentials_inputs=args.input_credentials,
            use_db_query=False,
        )
        logger.info(
            f"Graph execution started with ID {graph_exec.id} for graph {args.graph_id}"
        )
    except Exception as e:
        logger.error(f"Error executing graph {args.graph_id}: {e}")


# Monitoring functions are now imported from monitoring module
class LateExecutionException(Exception):
    pass


def report_late_executions() -> str:
    late_executions = execution_utils.get_db_client().get_graph_executions(
        statuses=[ExecutionStatus.QUEUED],
        created_time_gte=datetime.now(timezone.utc)
        - timedelta(seconds=config.execution_late_notification_checkrange_secs),
        created_time_lte=datetime.now(timezone.utc)
        - timedelta(seconds=config.execution_late_notification_threshold_secs),
        limit=1000,
    )

    if not late_executions:
        return "No late executions detected."

    num_late_executions = len(late_executions)
    num_users = len(set([r.user_id for r in late_executions]))

    late_execution_details = [
        f"* `Execution ID: {exec.id}, Graph ID: {exec.graph_id}v{exec.graph_version}, User ID: {exec.user_id}, Created At: {exec.started_at.isoformat()}`"
        for exec in late_executions
    ]

    error = LateExecutionException(
        f"Late executions detected: {num_late_executions} late executions from {num_users} users "
        f"in the last {config.execution_late_notification_checkrange_secs} seconds. "
        f"Graph has been queued for more than {config.execution_late_notification_threshold_secs} seconds. "
        "Please check the executor status. Details:\n"
        + "\n".join(late_execution_details)
    )
    msg = str(error)
    sentry_capture_error(error)
    get_notification_client().discord_system_alert(msg)
    return msg


def process_existing_batches(**kwargs):
    args = NotificationJobArgs(**kwargs)
    try:
        logger.info(
            f"Processing existing batches for notification type {args.notification_types}"
        )
        get_notification_client().process_existing_batches(args.notification_types)
    except Exception as e:
        logger.error(f"Error processing existing batches: {e}")


def process_weekly_summary(**kwargs):
    try:
        logger.info("Processing weekly summary")
        get_notification_client().queue_weekly_summary()
    except Exception as e:
        logger.error(f"Error processing weekly summary: {e}")


class Jobstores(Enum):
@@ -131,6 +190,11 @@ class GraphExecutionJobInfo(GraphExecutionJobArgs):
    )


class NotificationJobArgs(BaseModel):
    notification_types: list[NotificationType]
    cron: str


class NotificationJobInfo(NotificationJobArgs):
    id: str
    name: str
@@ -223,16 +287,6 @@ class Scheduler(AppService):
            jobstore=Jobstores.EXECUTION.value,
        )

        # Block Error Rate Monitoring
        self.scheduler.add_job(
            report_block_error_rates,
            id="report_block_error_rates",
            trigger="interval",
            replace_existing=True,
            seconds=config.block_error_rate_check_interval_secs,
            jobstore=Jobstores.EXECUTION.value,
        )

        self.scheduler.add_listener(job_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
        self.scheduler.start()

@@ -325,10 +379,6 @@ class Scheduler(AppService):
    def execute_report_late_executions(self):
        return report_late_executions()

    @expose
    def execute_report_block_error_rates(self):
        return report_block_error_rates()


class SchedulerClient(AppServiceClient):
    @classmethod
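The removed block shows the APScheduler pattern the service uses for periodic monitoring. A self-contained sketch of registering such an interval job, with a stub in place of the real report function and an assumed interval (the real one comes from config):

```python
from apscheduler.schedulers.background import BackgroundScheduler

def report_late_executions_stub() -> str:
    # Stand-in for the real report function; returns the alert message.
    return "No late executions detected."

scheduler = BackgroundScheduler()
scheduler.add_job(
    report_late_executions_stub,
    id="report_late_executions",
    trigger="interval",
    replace_existing=True,
    seconds=300,  # assumed check interval; not the real config value
)
scheduler.start()
```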
@@ -731,7 +731,6 @@ async def stop_graph_execution(
    node_execs = await db.get_node_executions(
        graph_exec_id=graph_exec_id,
        statuses=[ExecutionStatus.QUEUED, ExecutionStatus.INCOMPLETE],
        include_exec_data=False,
    )
    await db.update_node_execution_status_batch(
        [node_exec.node_exec_id for node_exec in node_execs],
@@ -1,226 +1,29 @@
from typing import TYPE_CHECKING, Optional

from pydantic import BaseModel
from typing import TYPE_CHECKING

from backend.integrations.oauth.todoist import TodoistOAuthHandler

from .github import GitHubOAuthHandler
from .google import GoogleOAuthHandler
from .linear import LinearOAuthHandler
from .notion import NotionOAuthHandler
from .twitter import TwitterOAuthHandler

if TYPE_CHECKING:
    from ..providers import ProviderName
    from .base import BaseOAuthHandler

# --8<-- [start:HANDLERS_BY_NAMEExample]
# Build handlers dict with string keys for compatibility with SDK auto-registration
_ORIGINAL_HANDLERS = [
    GitHubOAuthHandler,
    GoogleOAuthHandler,
    NotionOAuthHandler,
    TwitterOAuthHandler,
    TodoistOAuthHandler,
]

# Start with original handlers
_handlers_dict = {
    (
        handler.PROVIDER_NAME.value
        if hasattr(handler.PROVIDER_NAME, "value")
        else str(handler.PROVIDER_NAME)
    ): handler
    for handler in _ORIGINAL_HANDLERS
HANDLERS_BY_NAME: dict["ProviderName", type["BaseOAuthHandler"]] = {
    handler.PROVIDER_NAME: handler
    for handler in [
        GitHubOAuthHandler,
        GoogleOAuthHandler,
        NotionOAuthHandler,
        TwitterOAuthHandler,
        LinearOAuthHandler,
        TodoistOAuthHandler,
    ]
}


class SDKAwareCredentials(BaseModel):
    """OAuth credentials configuration."""

    use_secrets: bool = True
    client_id_env_var: Optional[str] = None
    client_secret_env_var: Optional[str] = None


_credentials_by_provider = {}
# Add default credentials for original handlers
for handler in _ORIGINAL_HANDLERS:
    provider_name = (
        handler.PROVIDER_NAME.value
        if hasattr(handler.PROVIDER_NAME, "value")
        else str(handler.PROVIDER_NAME)
    )
    _credentials_by_provider[provider_name] = SDKAwareCredentials(
        use_secrets=True, client_id_env_var=None, client_secret_env_var=None
    )


# Create a custom dict class that includes SDK handlers
class SDKAwareHandlersDict(dict):
    """Dictionary that automatically includes SDK-registered OAuth handlers."""

    def __getitem__(self, key):
        # First try the original handlers
        if key in _handlers_dict:
            return _handlers_dict[key]

        # Then try SDK handlers
        try:
            from backend.sdk import AutoRegistry

            sdk_handlers = AutoRegistry.get_oauth_handlers()
            if key in sdk_handlers:
                return sdk_handlers[key]
        except ImportError:
            pass

        # If not found, raise KeyError
        raise KeyError(key)

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

    def __contains__(self, key):
        if key in _handlers_dict:
            return True
        try:
            from backend.sdk import AutoRegistry

            sdk_handlers = AutoRegistry.get_oauth_handlers()
            return key in sdk_handlers
        except ImportError:
            return False

    def keys(self):
        # Combine all keys into a single dict and return its keys view
        combined = dict(_handlers_dict)
        try:
            from backend.sdk import AutoRegistry

            sdk_handlers = AutoRegistry.get_oauth_handlers()
            combined.update(sdk_handlers)
        except ImportError:
            pass
        return combined.keys()

    def values(self):
        combined = dict(_handlers_dict)
        try:
            from backend.sdk import AutoRegistry

            sdk_handlers = AutoRegistry.get_oauth_handlers()
            combined.update(sdk_handlers)
        except ImportError:
            pass
        return combined.values()

    def items(self):
        combined = dict(_handlers_dict)
        try:
            from backend.sdk import AutoRegistry

            sdk_handlers = AutoRegistry.get_oauth_handlers()
            combined.update(sdk_handlers)
        except ImportError:
            pass
        return combined.items()


class SDKAwareCredentialsDict(dict):
    """Dictionary that automatically includes SDK-registered OAuth credentials."""

    def __getitem__(self, key):
        # First try the original handlers
        if key in _credentials_by_provider:
            return _credentials_by_provider[key]

        # Then try SDK credentials
        try:
            from backend.sdk import AutoRegistry

            sdk_credentials = AutoRegistry.get_oauth_credentials()
            if key in sdk_credentials:
                # Convert from SDKOAuthCredentials to SDKAwareCredentials
                sdk_cred = sdk_credentials[key]
                return SDKAwareCredentials(
                    use_secrets=sdk_cred.use_secrets,
                    client_id_env_var=sdk_cred.client_id_env_var,
                    client_secret_env_var=sdk_cred.client_secret_env_var,
                )
        except ImportError:
            pass

        # If not found, raise KeyError
        raise KeyError(key)

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

    def __contains__(self, key):
        if key in _credentials_by_provider:
            return True
        try:
            from backend.sdk import AutoRegistry

            sdk_credentials = AutoRegistry.get_oauth_credentials()
            return key in sdk_credentials
        except ImportError:
            return False

    def keys(self):
        # Combine all keys into a single dict and return its keys view
        combined = dict(_credentials_by_provider)
        try:
            from backend.sdk import AutoRegistry

            sdk_credentials = AutoRegistry.get_oauth_credentials()
            combined.update(sdk_credentials)
        except ImportError:
            pass
        return combined.keys()

    def values(self):
        combined = dict(_credentials_by_provider)
        try:
            from backend.sdk import AutoRegistry

            sdk_credentials = AutoRegistry.get_oauth_credentials()
            # Convert SDK credentials to SDKAwareCredentials
            for key, sdk_cred in sdk_credentials.items():
                combined[key] = SDKAwareCredentials(
                    use_secrets=sdk_cred.use_secrets,
                    client_id_env_var=sdk_cred.client_id_env_var,
                    client_secret_env_var=sdk_cred.client_secret_env_var,
                )
        except ImportError:
            pass
        return combined.values()

    def items(self):
        combined = dict(_credentials_by_provider)
        try:
            from backend.sdk import AutoRegistry

            sdk_credentials = AutoRegistry.get_oauth_credentials()
            # Convert SDK credentials to SDKAwareCredentials
            for key, sdk_cred in sdk_credentials.items():
                combined[key] = SDKAwareCredentials(
                    use_secrets=sdk_cred.use_secrets,
                    client_id_env_var=sdk_cred.client_id_env_var,
                    client_secret_env_var=sdk_cred.client_secret_env_var,
                )
        except ImportError:
            pass
        return combined.items()


HANDLERS_BY_NAME: dict[str, type["BaseOAuthHandler"]] = SDKAwareHandlersDict()
CREDENTIALS_BY_PROVIDER: dict[str, SDKAwareCredentials] = SDKAwareCredentialsDict()
# --8<-- [end:HANDLERS_BY_NAMEExample]

__all__ = ["HANDLERS_BY_NAME"]
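The removed `SDKAware*Dict` classes all implement the same idea: consult a static mapping first, then fall back to a dynamically supplied registry. A reduced sketch of that pattern using `__missing__` (the real classes also override `get`, `__contains__`, `keys`, `values`, and `items` to merge both sources):

```python
class FallbackDict(dict):
    """Minimal version of the SDKAware*Dict pattern above: static entries
    first, then a dynamically supplied registry on lookup miss."""

    def __init__(self, static: dict, registry_lookup):
        super().__init__(static)
        self._registry_lookup = registry_lookup  # callable returning a dict

    def __missing__(self, key):
        # dict.__getitem__ calls this only when the static entries miss.
        registry = self._registry_lookup()
        if key in registry:
            return registry[key]
        raise KeyError(key)

handlers = FallbackDict({"github": object}, lambda: {"my_sdk_provider": object})
assert handlers["github"] is object
assert handlers["my_sdk_provider"] is object  # resolved via the registry
```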
@@ -11,7 +11,7 @@ logger = logging.getLogger(__name__)

class BaseOAuthHandler(ABC):
    # --8<-- [start:BaseOAuthHandler1]
    PROVIDER_NAME: ClassVar[ProviderName | str]
    PROVIDER_NAME: ClassVar[ProviderName]
    DEFAULT_SCOPES: ClassVar[list[str]] = []
    # --8<-- [end:BaseOAuthHandler1]

@@ -81,6 +81,8 @@ class BaseOAuthHandler(ABC):
        """Handles the default scopes for the provider"""
        # If scopes are empty, use the default scopes for the provider
        if not scopes:
            logger.debug(f"Using default scopes for provider {str(self.PROVIDER_NAME)}")
            logger.debug(
                f"Using default scopes for provider {self.PROVIDER_NAME.value}"
            )
            scopes = self.DEFAULT_SCOPES
        return scopes
@@ -1,27 +1,15 @@
"""
Linear OAuth handler implementation.
"""

import json
from typing import Optional
from urllib.parse import urlencode

from backend.sdk import (
    APIKeyCredentials,
    BaseOAuthHandler,
    OAuth2Credentials,
    ProviderName,
    Requests,
    SecretStr,
)
from pydantic import SecretStr

from backend.blocks.linear._api import LinearAPIException
from backend.data.model import APIKeyCredentials, OAuth2Credentials
from backend.integrations.providers import ProviderName
from backend.util.request import Requests

class LinearAPIException(Exception):
    """Exception for Linear API errors."""

    def __init__(self, message: str, status_code: int):
        super().__init__(message)
        self.status_code = status_code
from .base import BaseOAuthHandler


class LinearOAuthHandler(BaseOAuthHandler):
@@ -29,9 +17,7 @@ class LinearOAuthHandler(BaseOAuthHandler):
    OAuth2 handler for Linear.
    """

    # Provider name will be set dynamically by the SDK when registered
    # We use a placeholder that will be replaced by AutoRegistry.register_provider()
    PROVIDER_NAME = ProviderName("linear")
    PROVIDER_NAME = ProviderName.LINEAR

    def __init__(self, client_id: str, client_secret: str, redirect_uri: str):
        self.client_id = client_id
@@ -44,6 +30,7 @@ class LinearOAuthHandler(BaseOAuthHandler):
    def get_login_url(
        self, scopes: list[str], state: str, code_challenge: Optional[str]
    ) -> str:

        params = {
            "client_id": self.client_id,
            "redirect_uri": self.redirect_uri,
@@ -152,10 +139,9 @@ class LinearOAuthHandler(BaseOAuthHandler):

    async def _request_username(self, access_token: str) -> Optional[str]:
        # Use the LinearClient to fetch user details using GraphQL
        from ._api import LinearClient
        from backend.blocks.linear._api import LinearClient

        try:
            # Create a temporary OAuth2Credentials object for the LinearClient
            linear_client = LinearClient(
                APIKeyCredentials(
                    api_key=SecretStr(access_token),
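For context, get_login_url assembles the authorize URL from the params dict shown above. A rough sketch of that assembly under assumed parameter names; the endpoint URL and the fields beyond client_id/redirect_uri are placeholders, not Linear's documented values:

from urllib.parse import urlencode

def build_login_url(base: str, client_id: str, redirect_uri: str,
                    scopes: list[str], state: str) -> str:
    # Mirrors the params-dict pattern above; field names beyond
    # client_id/redirect_uri are assumptions for illustration.
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": ",".join(scopes),
        "state": state,
    }
    return f"{base}?{urlencode(params)}"

url = build_login_url("https://example.com/oauth/authorize",
                      "my-client", "https://app.example.com/callback",
                      ["read"], "xyz")
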
@@ -1,16 +1,8 @@
from enum import Enum
from typing import Any


# --8<-- [start:ProviderName]
class ProviderName(str, Enum):
    """
    Provider names for integrations.

    This enum extends str to accept any string value while maintaining
    backward compatibility with existing provider constants.
    """

    AIML_API = "aiml_api"
    ANTHROPIC = "anthropic"
    APOLLO = "apollo"
@@ -18,7 +10,9 @@ class ProviderName(str, Enum):
    DISCORD = "discord"
    D_ID = "d_id"
    E2B = "e2b"
    EXA = "exa"
    FAL = "fal"
    GENERIC_WEBHOOK = "generic_webhook"
    GITHUB = "github"
    GOOGLE = "google"
    GOOGLE_MAPS = "google_maps"
@@ -27,6 +21,7 @@ class ProviderName(str, Enum):
    HUBSPOT = "hubspot"
    IDEOGRAM = "ideogram"
    JINA = "jina"
    LINEAR = "linear"
    LLAMA_API = "llama_api"
    MEDIUM = "medium"
    MEM0 = "mem0"
@@ -48,57 +43,4 @@ class ProviderName(str, Enum):
    TODOIST = "todoist"
    UNREAL_SPEECH = "unreal_speech"
    ZEROBOUNCE = "zerobounce"

    @classmethod
    def _missing_(cls, value: Any) -> "ProviderName":
        """
        Allow any string value to be used as a ProviderName.
        This enables SDK users to define custom providers without
        modifying the enum.
        """
        if isinstance(value, str):
            # Create a pseudo-member that behaves like an enum member
            pseudo_member = str.__new__(cls, value)
            pseudo_member._name_ = value.upper()
            pseudo_member._value_ = value
            return pseudo_member
        return None  # type: ignore

    @classmethod
    def __get_pydantic_json_schema__(cls, schema, handler):
        """
        Custom JSON schema generation that allows any string value,
        not just the predefined enum values.
        """
        # Get the default schema
        json_schema = handler(schema)

        # Remove the enum constraint to allow any string
        if "enum" in json_schema:
            del json_schema["enum"]

        # Keep the type as string
        json_schema["type"] = "string"

        # Update description to indicate custom providers are allowed
        json_schema["description"] = (
            "Provider name for integrations. "
            "Can be any string value, including custom provider names."
        )

        return json_schema

    @classmethod
    def __get_pydantic_core_schema__(cls, source_type, handler):
        """
        Pydantic v2 core schema that allows any string value.
        """
        from pydantic_core import core_schema

        # Create a string schema that validates any string
        return core_schema.no_info_after_validator_function(
            cls,
            core_schema.str_schema(),
        )

# --8<-- [end:ProviderName]

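The _missing_ hook shown above (removed by this diff) is what lets arbitrary strings round-trip through the enum. The same mechanism in isolation, with a throwaway Name enum:

from enum import Enum
from typing import Any

class Name(str, Enum):
    GITHUB = "github"

    @classmethod
    def _missing_(cls, value: Any) -> "Name":
        if isinstance(value, str):
            # Build a pseudo-member for any unknown string value.
            member = str.__new__(cls, value)
            member._name_ = value.upper()
            member._value_ = value
            return member
        return None  # type: ignore

assert Name("github") is Name.GITHUB           # known value -> real member
assert Name("my_custom").value == "my_custom"  # unknown value -> pseudo-member
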
@@ -12,6 +12,7 @@ def load_webhook_managers() -> dict["ProviderName", type["BaseWebhooksManager"]]
    webhook_managers = {}

    from .compass import CompassWebhookManager
    from .generic import GenericWebhooksManager
    from .github import GithubWebhooksManager
    from .slant3d import Slant3DWebhooksManager

@@ -22,6 +23,7 @@ def load_webhook_managers() -> dict["ProviderName", type["BaseWebhooksManager"]]
                CompassWebhookManager,
                GithubWebhooksManager,
                Slant3DWebhooksManager,
                GenericWebhooksManager,
            ]
        }
    )

@@ -3,7 +3,10 @@ import logging
from fastapi import Request
from strenum import StrEnum

from backend.sdk import ManualWebhookManagerBase, Webhook
from backend.data import integrations
from backend.integrations.providers import ProviderName

from ._manual_base import ManualWebhookManagerBase

logger = logging.getLogger(__name__)

@@ -13,11 +16,12 @@ class GenericWebhookType(StrEnum):


class GenericWebhooksManager(ManualWebhookManagerBase):
    PROVIDER_NAME = ProviderName.GENERIC_WEBHOOK
    WebhookType = GenericWebhookType

    @classmethod
    async def validate_payload(
        cls, webhook: Webhook, request: Request
        cls, webhook: integrations.Webhook, request: Request
    ) -> tuple[dict, str]:
        payload = await request.json()
        event_type = GenericWebhookType.PLAIN
@@ -1,24 +0,0 @@
"""Monitoring module for platform health and alerting."""

from .block_error_monitor import BlockErrorMonitor, report_block_error_rates
from .late_execution_monitor import (
    LateExecutionException,
    LateExecutionMonitor,
    report_late_executions,
)
from .notification_monitor import (
    NotificationJobArgs,
    process_existing_batches,
    process_weekly_summary,
)

__all__ = [
    "BlockErrorMonitor",
    "LateExecutionMonitor",
    "LateExecutionException",
    "NotificationJobArgs",
    "report_block_error_rates",
    "report_late_executions",
    "process_existing_batches",
    "process_weekly_summary",
]
@@ -1,291 +0,0 @@
"""Block error rate monitoring module."""

import logging
import re
from datetime import datetime, timedelta, timezone

from pydantic import BaseModel

from backend.data.block import get_block
from backend.data.execution import ExecutionStatus, NodeExecutionResult
from backend.executor import utils as execution_utils
from backend.notifications.notifications import NotificationManagerClient
from backend.util.metrics import sentry_capture_error
from backend.util.service import get_service_client
from backend.util.settings import Config

logger = logging.getLogger(__name__)
config = Config()


class BlockStatsWithSamples(BaseModel):
    """Enhanced block stats with error samples."""

    block_id: str
    block_name: str
    total_executions: int
    failed_executions: int
    error_samples: list[str] = []

    @property
    def error_rate(self) -> float:
        """Calculate error rate as a percentage."""
        if self.total_executions == 0:
            return 0.0
        return (self.failed_executions / self.total_executions) * 100


class BlockErrorMonitor:
    """Monitor block error rates and send alerts when thresholds are exceeded."""

    def __init__(self, include_top_blocks: int | None = None):
        self.config = config
        self.notification_client = get_service_client(NotificationManagerClient)
        self.include_top_blocks = (
            include_top_blocks
            if include_top_blocks is not None
            else config.block_error_include_top_blocks
        )

    def check_block_error_rates(self) -> str:
        """Check block error rates and send Discord alerts if thresholds are exceeded."""
        try:
            logger.info("Checking block error rates")

            # Get executions from the last 24 hours
            end_time = datetime.now(timezone.utc)
            start_time = end_time - timedelta(hours=24)

            # Use SQL aggregation to efficiently count totals and failures by block
            block_stats = self._get_block_stats_from_db(start_time, end_time)

            # For blocks with high error rates, fetch error samples
            threshold = self.config.block_error_rate_threshold
            for block_name, stats in block_stats.items():
                if stats.total_executions >= 10 and stats.error_rate >= threshold * 100:
                    # Only fetch error samples for blocks that exceed threshold
                    error_samples = self._get_error_samples_for_block(
                        stats.block_id, start_time, end_time, limit=3
                    )
                    stats.error_samples = error_samples

            # Check thresholds and send alerts
            critical_alerts = self._generate_critical_alerts(block_stats, threshold)

            if critical_alerts:
                msg = "Block Error Rate Alert:\n\n" + "\n\n".join(critical_alerts)
                self.notification_client.discord_system_alert(msg)
                logger.info(
                    f"Sent block error rate alert for {len(critical_alerts)} blocks"
                )
                return f"Alert sent for {len(critical_alerts)} blocks with high error rates"

            # If no critical alerts, check if we should show top blocks
            if self.include_top_blocks > 0:
                top_blocks_msg = self._generate_top_blocks_alert(
                    block_stats, start_time, end_time
                )
                if top_blocks_msg:
                    self.notification_client.discord_system_alert(top_blocks_msg)
                    logger.info("Sent top blocks summary")
                    return "Sent top blocks summary"

            logger.info("No blocks exceeded error rate threshold")
            return "No errors reported for today"

        except Exception as e:
            logger.exception(f"Error checking block error rates: {e}")

            error = Exception(f"Error checking block error rates: {e}")
            msg = str(error)
            sentry_capture_error(error)
            self.notification_client.discord_system_alert(msg)
            return msg

    def _get_block_stats_from_db(
        self, start_time: datetime, end_time: datetime
    ) -> dict[str, BlockStatsWithSamples]:
        """Get block execution stats using efficient SQL aggregation."""

        result = execution_utils.get_db_client().get_block_error_stats(
            start_time, end_time
        )

        block_stats = {}
        for stats in result:
            block_name = b.name if (b := get_block(stats.block_id)) else "Unknown"

            block_stats[block_name] = BlockStatsWithSamples(
                block_id=stats.block_id,
                block_name=block_name,
                total_executions=stats.total_executions,
                failed_executions=stats.failed_executions,
                error_samples=[],
            )

        return block_stats

    def _generate_critical_alerts(
        self, block_stats: dict[str, BlockStatsWithSamples], threshold: float
    ) -> list[str]:
        """Generate alerts for blocks that exceed the error rate threshold."""
        alerts = []

        for block_name, stats in block_stats.items():
            if stats.total_executions >= 10 and stats.error_rate >= threshold * 100:
                error_groups = self._group_similar_errors(stats.error_samples)

                alert_msg = (
                    f"🚨 Block '{block_name}' has {stats.error_rate:.1f}% error rate "
                    f"({stats.failed_executions}/{stats.total_executions}) in the last 24 hours"
                )

                if error_groups:
                    alert_msg += "\n\n📊 Error Types:"
                    for error_pattern, count in error_groups.items():
                        alert_msg += f"\n• {error_pattern} ({count}x)"

                alerts.append(alert_msg)

        return alerts

    def _generate_top_blocks_alert(
        self,
        block_stats: dict[str, BlockStatsWithSamples],
        start_time: datetime,
        end_time: datetime,
    ) -> str | None:
        """Generate top blocks summary when no critical alerts exist."""
        top_error_blocks = sorted(
            [
                (name, stats)
                for name, stats in block_stats.items()
                if stats.total_executions >= 10 and stats.failed_executions > 0
            ],
            key=lambda x: x[1].failed_executions,
            reverse=True,
        )[: self.include_top_blocks]

        if not top_error_blocks:
            return "✅ No errors reported for today - all blocks are running smoothly!"

        # Get error samples for top blocks
        for block_name, stats in top_error_blocks:
            if not stats.error_samples:
                stats.error_samples = self._get_error_samples_for_block(
                    stats.block_id, start_time, end_time, limit=2
                )

        count_text = (
            f"top {self.include_top_blocks}" if self.include_top_blocks > 1 else "top"
        )
        alert_msg = f"📊 Daily Error Summary - {count_text} blocks with most errors:"
        for block_name, stats in top_error_blocks:
            alert_msg += f"\n• {block_name}: {stats.failed_executions} errors ({stats.error_rate:.1f}% of {stats.total_executions})"

            if stats.error_samples:
                error_groups = self._group_similar_errors(stats.error_samples)
                if error_groups:
                    # Show most common error
                    most_common_error = next(iter(error_groups.items()))
                    alert_msg += f"\n └ Most common: {most_common_error[0]}"

        return alert_msg

    def _get_error_samples_for_block(
        self, block_id: str, start_time: datetime, end_time: datetime, limit: int = 3
    ) -> list[str]:
        """Get error samples for a specific block - just a few recent ones."""
        # Only fetch a small number of recent failed executions for this specific block
        executions = execution_utils.get_db_client().get_node_executions(
            block_ids=[block_id],
            statuses=[ExecutionStatus.FAILED],
            created_time_gte=start_time,
            created_time_lte=end_time,
            limit=limit,  # Just get the limit we need
        )

        error_samples = []
        for execution in executions:
            if error_message := self._extract_error_message(execution):
                masked_error = self._mask_sensitive_data(error_message)
                error_samples.append(masked_error)

                if len(error_samples) >= limit:  # Stop once we have enough samples
                    break

        return error_samples

    def _extract_error_message(self, execution: NodeExecutionResult) -> str | None:
        """Extract error message from execution output."""
        try:
            if execution.output_data and (
                error_msg := execution.output_data.get("error")
            ):
                return str(error_msg[0])
            return None
        except Exception:
            return None

    def _mask_sensitive_data(self, error_message):
        """Mask sensitive data in error messages to enable grouping."""
        if not error_message:
            return ""

        # Convert to string if not already
        error_str = str(error_message)

        # Mask numbers (replace with X)
        error_str = re.sub(r"\d+", "X", error_str)

        # Mask all caps words (likely constants/IDs)
        error_str = re.sub(r"\b[A-Z_]{3,}\b", "MASKED", error_str)

        # Mask words with underscores (likely internal variables)
        error_str = re.sub(r"\b\w*_\w*\b", "MASKED", error_str)

        # Mask UUIDs and long alphanumeric strings
        error_str = re.sub(
            r"\b[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}\b",
            "UUID",
            error_str,
        )
        error_str = re.sub(r"\b[a-f0-9]{20,}\b", "HASH", error_str)

        # Mask file paths
        error_str = re.sub(r"(/[^/\s]+)+", "/MASKED/path", error_str)

        # Mask URLs
        error_str = re.sub(r"https?://[^\s]+", "URL", error_str)

        # Mask email addresses
        error_str = re.sub(
            r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b", "EMAIL", error_str
        )

        # Truncate if too long
        if len(error_str) > 100:
            error_str = error_str[:97] + "..."

        return error_str.strip()

    def _group_similar_errors(self, error_samples):
        """Group similar error messages and return counts."""
        if not error_samples:
            return {}

        error_groups = {}
        for error in error_samples:
            if error in error_groups:
                error_groups[error] += 1
            else:
                error_groups[error] = 1

        # Sort by frequency, most common first
        return dict(sorted(error_groups.items(), key=lambda x: x[1], reverse=True))


def report_block_error_rates(include_top_blocks: int | None = None):
    """Check block error rates and send Discord alerts if thresholds are exceeded."""
    monitor = BlockErrorMonitor(include_top_blocks=include_top_blocks)
    return monitor.check_block_error_rates()
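_mask_sensitive_data normalizes dynamic fragments so distinct occurrences of the same failure collapse into one group key. A condensed demonstration of the first two substitutions on invented messages:

import re

def mask(error: str) -> str:
    error = re.sub(r"\d+", "X", error)                  # numbers -> X
    error = re.sub(r"\b[A-Z_]{3,}\b", "MASKED", error)  # ALL_CAPS tokens
    return error

a = mask("Timeout after 30s in STEP_RUNNER (attempt 2)")
b = mask("Timeout after 45s in STEP_RUNNER (attempt 7)")
assert a == b  # both collapse to the same group key
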
@@ -1,71 +0,0 @@
"""Late execution monitoring module."""

import logging
from datetime import datetime, timedelta, timezone

from backend.data.execution import ExecutionStatus
from backend.executor import utils as execution_utils
from backend.notifications.notifications import NotificationManagerClient
from backend.util.metrics import sentry_capture_error
from backend.util.service import get_service_client
from backend.util.settings import Config

logger = logging.getLogger(__name__)
config = Config()


class LateExecutionException(Exception):
    """Exception raised when late executions are detected."""

    pass


class LateExecutionMonitor:
    """Monitor late executions and send alerts when thresholds are exceeded."""

    def __init__(self):
        self.config = config
        self.notification_client = get_service_client(NotificationManagerClient)

    def check_late_executions(self) -> str:
        """Check for late executions and send alerts if found."""
        late_executions = execution_utils.get_db_client().get_graph_executions(
            statuses=[ExecutionStatus.QUEUED],
            created_time_gte=datetime.now(timezone.utc)
            - timedelta(
                seconds=self.config.execution_late_notification_checkrange_secs
            ),
            created_time_lte=datetime.now(timezone.utc)
            - timedelta(seconds=self.config.execution_late_notification_threshold_secs),
            limit=1000,
        )

        if not late_executions:
            return "No late executions detected."

        num_late_executions = len(late_executions)
        num_users = len(set([r.user_id for r in late_executions]))

        late_execution_details = [
            f"* `Execution ID: {exec.id}, Graph ID: {exec.graph_id}v{exec.graph_version}, User ID: {exec.user_id}, Created At: {exec.started_at.isoformat()}`"
            for exec in late_executions
        ]

        error = LateExecutionException(
            f"Late executions detected: {num_late_executions} late executions from {num_users} users "
            f"in the last {self.config.execution_late_notification_checkrange_secs} seconds. "
            f"Graph has been queued for more than {self.config.execution_late_notification_threshold_secs} seconds. "
            "Please check the executor status. Details:\n"
            + "\n".join(late_execution_details)
        )
        msg = str(error)

        sentry_capture_error(error)
        self.notification_client.discord_system_alert(msg)
        return msg


def report_late_executions() -> str:
    """Check for late executions and send Discord alerts if found."""
    monitor = LateExecutionMonitor()
    return monitor.check_late_executions()
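The query above selects runs that entered QUEUED inside a sliding window: older than the alert threshold but still within the check range. The window arithmetic, sketched with illustrative values rather than the real config defaults:

from datetime import datetime, timedelta, timezone

checkrange_secs = 3600  # illustrative values, not the real defaults
threshold_secs = 300

now = datetime.now(timezone.utc)
window_start = now - timedelta(seconds=checkrange_secs)
window_end = now - timedelta(seconds=threshold_secs)

# A run created inside [window_start, window_end] has been queued for
# more than threshold_secs but less than checkrange_secs -> "late".
created_at = now - timedelta(seconds=600)
assert window_start <= created_at <= window_end
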
@@ -1,39 +0,0 @@
"""Notification processing monitoring module."""

import logging

from prisma.enums import NotificationType
from pydantic import BaseModel

from backend.notifications.notifications import NotificationManagerClient
from backend.util.service import get_service_client

logger = logging.getLogger(__name__)


class NotificationJobArgs(BaseModel):
    notification_types: list[NotificationType]
    cron: str


def process_existing_batches(**kwargs):
    """Process existing notification batches."""
    args = NotificationJobArgs(**kwargs)
    try:
        logging.info(
            f"Processing existing batches for notification type {args.notification_types}"
        )
        get_service_client(NotificationManagerClient).process_existing_batches(
            args.notification_types
        )
    except Exception as e:
        logger.exception(f"Error processing existing batches: {e}")


def process_weekly_summary(**kwargs):
    """Process weekly summary notifications."""
    try:
        logging.info("Processing weekly summary")
        get_service_client(NotificationManagerClient).queue_weekly_summary()
    except Exception as e:
        logger.exception(f"Error processing weekly summary: {e}")
@@ -1,169 +0,0 @@
"""
AutoGPT Platform Block Development SDK

Complete re-export of all dependencies needed for block development.
Usage: from backend.sdk import *

This module provides:
- All block base classes and types
- All credential and authentication components
- All cost tracking components
- All webhook components
- All utility functions
- Auto-registration decorators
"""

# Third-party imports
from pydantic import BaseModel, Field, SecretStr

# === CORE BLOCK SYSTEM ===
from backend.data.block import (
    Block,
    BlockCategory,
    BlockManualWebhookConfig,
    BlockOutput,
    BlockSchema,
    BlockType,
    BlockWebhookConfig,
)
from backend.data.integrations import Webhook
from backend.data.model import APIKeyCredentials, Credentials, CredentialsField
from backend.data.model import CredentialsMetaInput as _CredentialsMetaInput
from backend.data.model import (
    NodeExecutionStats,
    OAuth2Credentials,
    SchemaField,
    UserPasswordCredentials,
)

# === INTEGRATIONS ===
from backend.integrations.providers import ProviderName
from backend.sdk.builder import ProviderBuilder
from backend.sdk.cost_integration import cost
from backend.sdk.provider import Provider

# === NEW SDK COMPONENTS (imported early for patches) ===
from backend.sdk.registry import AutoRegistry, BlockConfiguration

# === UTILITIES ===
from backend.util import json
from backend.util.request import Requests

# === OPTIONAL IMPORTS WITH TRY/EXCEPT ===
# Webhooks
try:
    from backend.integrations.webhooks._base import BaseWebhooksManager
except ImportError:
    BaseWebhooksManager = None

try:
    from backend.integrations.webhooks._manual_base import ManualWebhookManagerBase
except ImportError:
    ManualWebhookManagerBase = None

# Cost System
try:
    from backend.data.cost import BlockCost, BlockCostType
except ImportError:
    from backend.data.block_cost_config import BlockCost, BlockCostType

try:
    from backend.data.credit import UsageTransactionMetadata
except ImportError:
    UsageTransactionMetadata = None

try:
    from backend.executor.utils import block_usage_cost
except ImportError:
    block_usage_cost = None

# Utilities
try:
    from backend.util.file import store_media_file
except ImportError:
    store_media_file = None

try:
    from backend.util.type import MediaFileType, convert
except ImportError:
    MediaFileType = None
    convert = None

try:
    from backend.util.text import TextFormatter
except ImportError:
    TextFormatter = None

try:
    from backend.util.logging import TruncatedLogger
except ImportError:
    TruncatedLogger = None


# OAuth handlers
try:
    from backend.integrations.oauth.base import BaseOAuthHandler
except ImportError:
    BaseOAuthHandler = None


# Credential type with proper provider name
from typing import Literal as _Literal

CredentialsMetaInput = _CredentialsMetaInput[
    ProviderName, _Literal["api_key", "oauth2", "user_password"]
]


# === COMPREHENSIVE __all__ EXPORT ===
__all__ = [
    # Core Block System
    "Block",
    "BlockCategory",
    "BlockOutput",
    "BlockSchema",
    "BlockType",
    "BlockWebhookConfig",
    "BlockManualWebhookConfig",
    # Schema and Model Components
    "SchemaField",
    "Credentials",
    "CredentialsField",
    "CredentialsMetaInput",
    "APIKeyCredentials",
    "OAuth2Credentials",
    "UserPasswordCredentials",
    "NodeExecutionStats",
    # Cost System
    "BlockCost",
    "BlockCostType",
    "UsageTransactionMetadata",
    "block_usage_cost",
    # Integrations
    "ProviderName",
    "BaseWebhooksManager",
    "ManualWebhookManagerBase",
    "Webhook",
    # Provider-Specific (when available)
    "BaseOAuthHandler",
    # Utilities
    "json",
    "store_media_file",
    "MediaFileType",
    "convert",
    "TextFormatter",
    "TruncatedLogger",
    "BaseModel",
    "Field",
    "SecretStr",
    "Requests",
    # SDK Components
    "AutoRegistry",
    "BlockConfiguration",
    "Provider",
    "ProviderBuilder",
    "cost",
]

# Remove None values from __all__
__all__ = [name for name in __all__ if globals().get(name) is not None]
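The try/except imports above degrade gracefully when optional backend modules are absent, and the closing comprehension prunes unavailable names from __all__. The pattern in miniature (some_optional_module is hypothetical):

try:
    from some_optional_module import helper  # hypothetical module
except ImportError:
    helper = None

__all__ = ["helper"]
# Drop names whose import failed so `from pkg import *` stays clean.
__all__ = [name for name in __all__ if globals().get(name) is not None]
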
@@ -1,161 +0,0 @@
"""
Builder class for creating provider configurations with a fluent API.
"""

import os
from typing import Callable, List, Optional, Type

from pydantic import SecretStr

from backend.data.cost import BlockCost, BlockCostType
from backend.data.model import APIKeyCredentials, Credentials, UserPasswordCredentials
from backend.integrations.oauth.base import BaseOAuthHandler
from backend.integrations.webhooks._base import BaseWebhooksManager
from backend.sdk.provider import OAuthConfig, Provider
from backend.sdk.registry import AutoRegistry
from backend.util.settings import Settings


class ProviderBuilder:
    """Builder for creating provider configurations."""

    def __init__(self, name: str):
        self.name = name
        self._oauth_config: Optional[OAuthConfig] = None
        self._webhook_manager: Optional[Type[BaseWebhooksManager]] = None
        self._default_credentials: List[Credentials] = []
        self._base_costs: List[BlockCost] = []
        self._supported_auth_types: set = set()
        self._api_client_factory: Optional[Callable] = None
        self._error_handler: Optional[Callable[[Exception], str]] = None
        self._default_scopes: Optional[List[str]] = None
        self._client_id_env_var: Optional[str] = None
        self._client_secret_env_var: Optional[str] = None
        self._extra_config: dict = {}

    def with_oauth(
        self,
        handler_class: Type[BaseOAuthHandler],
        scopes: Optional[List[str]] = None,
        client_id_env_var: Optional[str] = None,
        client_secret_env_var: Optional[str] = None,
    ) -> "ProviderBuilder":
        """Add OAuth support."""
        self._oauth_config = OAuthConfig(
            oauth_handler=handler_class,
            scopes=scopes,
            client_id_env_var=client_id_env_var,
            client_secret_env_var=client_secret_env_var,
        )
        self._supported_auth_types.add("oauth2")
        return self

    def with_api_key(self, env_var_name: str, title: str) -> "ProviderBuilder":
        """Add API key support with environment variable name."""
        self._supported_auth_types.add("api_key")

        # Register the API key mapping
        AutoRegistry.register_api_key(self.name, env_var_name)

        # Check if API key exists in environment
        api_key = os.getenv(env_var_name)
        if api_key:
            self._default_credentials.append(
                APIKeyCredentials(
                    id=f"{self.name}-default",
                    provider=self.name,
                    api_key=SecretStr(api_key),
                    title=title,
                )
            )
        return self

    def with_api_key_from_settings(
        self, settings_attr: str, title: str
    ) -> "ProviderBuilder":
        """Use existing API key from settings."""
        self._supported_auth_types.add("api_key")

        # Try to get the API key from settings
        settings = Settings()
        api_key = getattr(settings.secrets, settings_attr, None)
        if api_key:
            self._default_credentials.append(
                APIKeyCredentials(
                    id=f"{self.name}-default",
                    provider=self.name,
                    api_key=api_key,
                    title=title,
                )
            )
        return self

    def with_user_password(
        self, username_env_var: str, password_env_var: str, title: str
    ) -> "ProviderBuilder":
        """Add username/password support with environment variable names."""
        self._supported_auth_types.add("user_password")

        # Check if credentials exist in environment
        username = os.getenv(username_env_var)
        password = os.getenv(password_env_var)
        if username and password:
            self._default_credentials.append(
                UserPasswordCredentials(
                    id=f"{self.name}-default",
                    provider=self.name,
                    username=SecretStr(username),
                    password=SecretStr(password),
                    title=title,
                )
            )
        return self

    def with_webhook_manager(
        self, manager_class: Type[BaseWebhooksManager]
    ) -> "ProviderBuilder":
        """Register webhook manager for this provider."""
        self._webhook_manager = manager_class
        return self

    def with_base_cost(
        self, amount: int, cost_type: BlockCostType
    ) -> "ProviderBuilder":
        """Set base cost for all blocks using this provider."""
        self._base_costs.append(BlockCost(cost_amount=amount, cost_type=cost_type))
        return self

    def with_api_client(self, factory: Callable) -> "ProviderBuilder":
        """Register API client factory."""
        self._api_client_factory = factory
        return self

    def with_error_handler(
        self, handler: Callable[[Exception], str]
    ) -> "ProviderBuilder":
        """Register error handler for provider-specific errors."""
        self._error_handler = handler
        return self

    def with_config(self, **kwargs) -> "ProviderBuilder":
        """Add additional configuration options."""
        self._extra_config.update(kwargs)
        return self

    def build(self) -> Provider:
        """Build and register the provider configuration."""
        provider = Provider(
            name=self.name,
            oauth_config=self._oauth_config,
            webhook_manager=self._webhook_manager,
            default_credentials=self._default_credentials,
            base_costs=self._base_costs,
            supported_auth_types=self._supported_auth_types,
            api_client_factory=self._api_client_factory,
            error_handler=self._error_handler,
            **self._extra_config,
        )

        # Auto-registration happens here
        AutoRegistry.register_provider(provider)
        return provider
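A typical call chain for this builder, sketched for an imagined provider; "my_service" and the env var name are placeholders, and the snippet assumes the imports at the top of this file are in scope:

# Hypothetical usage; "my_service" and MY_SERVICE_API_KEY are placeholders.
my_service = (
    ProviderBuilder("my_service")
    .with_api_key("MY_SERVICE_API_KEY", title="My Service API key")
    .with_base_cost(1, BlockCostType.RUN)
    .build()  # registers the provider via AutoRegistry as a side effect
)
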
@@ -1,163 +0,0 @@
"""
Integration between SDK provider costs and the execution cost system.

This module provides the glue between provider-defined base costs and the
BLOCK_COSTS configuration used by the execution system.
"""

import logging
from typing import List, Type

from backend.data.block import Block
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.cost import BlockCost
from backend.sdk.registry import AutoRegistry

logger = logging.getLogger(__name__)


def register_provider_costs_for_block(block_class: Type[Block]) -> None:
    """
    Register provider base costs for a specific block in BLOCK_COSTS.

    This function checks if the block uses credentials from a provider that has
    base costs defined, and automatically registers those costs for the block.

    Args:
        block_class: The block class to register costs for
    """
    # Skip if block already has custom costs defined
    if block_class in BLOCK_COSTS:
        logger.debug(
            f"Block {block_class.__name__} already has costs defined, skipping provider costs"
        )
        return

    # Get the block's input schema
    # We need to instantiate the block to get its input schema
    try:
        block_instance = block_class()
        input_schema = block_instance.input_schema
    except Exception as e:
        logger.debug(f"Block {block_class.__name__} cannot be instantiated: {e}")
        return

    # Look for credentials fields
    # The cost system works by filtering on credentials fields;
    # without credentials fields, we cannot apply costs
    # TODO: Improve cost system to allow for costs without a provider
    credentials_fields = input_schema.get_credentials_fields()
    if not credentials_fields:
        logger.debug(f"Block {block_class.__name__} has no credentials fields")
        return

    # Get provider information from credentials fields
    for field_name, field_info in credentials_fields.items():
        # Get the field schema to extract provider information
        field_schema = input_schema.get_field_schema(field_name)

        # Extract provider names from json_schema_extra
        providers = field_schema.get("credentials_provider", [])
        if not providers:
            continue

        # For each provider, check if it has base costs
        block_costs: List[BlockCost] = []
        for provider_name in providers:
            provider = AutoRegistry.get_provider(provider_name)
            if not provider:
                logger.debug(f"Provider {provider_name} not found in registry")
                continue

            # Add provider's base costs to the block
            if provider.base_costs:
                logger.info(
                    f"Registering {len(provider.base_costs)} base costs from provider {provider_name} for block {block_class.__name__}"
                )
                block_costs.extend(provider.base_costs)

        # Register costs if any were found
        if block_costs:
            BLOCK_COSTS[block_class] = block_costs
            logger.info(
                f"Registered {len(block_costs)} total costs for block {block_class.__name__}"
            )


def sync_all_provider_costs() -> None:
    """
    Sync all provider base costs to blocks that use them.

    This should be called after all providers and blocks are registered,
    typically during application startup.
    """
    from backend.blocks import load_all_blocks

    logger.info("Syncing provider costs to blocks...")

    blocks_with_costs = 0
    total_costs = 0

    for block_id, block_class in load_all_blocks().items():
        initial_count = len(BLOCK_COSTS.get(block_class, []))
        register_provider_costs_for_block(block_class)
        final_count = len(BLOCK_COSTS.get(block_class, []))

        if final_count > initial_count:
            blocks_with_costs += 1
            total_costs += final_count - initial_count

    logger.info(f"Synced {total_costs} costs to {blocks_with_costs} blocks")


def get_block_costs(block_class: Type[Block]) -> List[BlockCost]:
    """
    Get all costs for a block, including both explicit and provider costs.

    Args:
        block_class: The block class to get costs for

    Returns:
        List of BlockCost objects for the block
    """
    # First ensure provider costs are registered
    register_provider_costs_for_block(block_class)

    # Return all costs for the block
    return BLOCK_COSTS.get(block_class, [])


def cost(*costs: BlockCost):
    """
    Decorator to set custom costs for a block.

    This decorator allows blocks to define their own costs, which will override
    any provider base costs. Multiple costs can be specified with different
    filters for different pricing tiers (e.g., different models).

    Example:
        @cost(
            BlockCost(cost_type=BlockCostType.RUN, cost_amount=10),
            BlockCost(
                cost_type=BlockCostType.RUN,
                cost_amount=20,
                cost_filter={"model": "premium"}
            )
        )
        class MyBlock(Block):
            ...

    Args:
        *costs: Variable number of BlockCost objects
    """

    def decorator(block_class: Type[Block]) -> Type[Block]:
        # Register the costs for this block
        if costs:
            BLOCK_COSTS[block_class] = list(costs)
            logger.info(
                f"Registered {len(costs)} custom costs for block {block_class.__name__}"
            )
        return block_class

    return decorator
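Note the precedence encoded above: register_provider_costs_for_block returns early when a block already appears in BLOCK_COSTS, so @cost-decorated blocks always shadow provider base costs. The rule reduced to a self-contained check (the dicts and cost strings are invented):

# Schematic precedence check; the data here is invented for illustration.
def effective_costs(block_class, explicit, provider_base):
    # Explicit (decorator) costs shadow provider base costs entirely.
    return explicit.get(block_class) or provider_base.get(block_class, [])

explicit = {"MyBlock": ["run:20"]}
provider_base = {"MyBlock": ["run:10"], "OtherBlock": ["run:10"]}
assert effective_costs("MyBlock", explicit, provider_base) == ["run:20"]
assert effective_costs("OtherBlock", explicit, provider_base) == ["run:10"]
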
@@ -1,114 +0,0 @@
"""
Provider configuration class that holds all provider-related settings.
"""

from typing import Any, Callable, List, Optional, Set, Type

from pydantic import BaseModel

from backend.data.cost import BlockCost
from backend.data.model import Credentials, CredentialsField, CredentialsMetaInput
from backend.integrations.oauth.base import BaseOAuthHandler
from backend.integrations.webhooks._base import BaseWebhooksManager


class OAuthConfig(BaseModel):
    """Configuration for OAuth authentication."""

    oauth_handler: Type[BaseOAuthHandler]
    scopes: Optional[List[str]] = None
    client_id_env_var: Optional[str] = None
    client_secret_env_var: Optional[str] = None


class Provider:
    """A configured provider that blocks can use.

    A Provider represents a service or platform that blocks can integrate with, like Linear, OpenAI, etc.
    It contains configuration for:
    - Authentication (OAuth, API keys)
    - Default credentials
    - Base costs for using the provider
    - Webhook handling
    - Error handling
    - API client factory

    Blocks use Provider instances to handle authentication, make API calls, and manage service-specific logic.
    """

    def __init__(
        self,
        name: str,
        oauth_config: Optional[OAuthConfig] = None,
        webhook_manager: Optional[Type[BaseWebhooksManager]] = None,
        default_credentials: Optional[List[Credentials]] = None,
        base_costs: Optional[List[BlockCost]] = None,
        supported_auth_types: Optional[Set[str]] = None,
        api_client_factory: Optional[Callable] = None,
        error_handler: Optional[Callable[[Exception], str]] = None,
        **kwargs,
    ):
        self.name = name
        self.oauth_config = oauth_config
        self.webhook_manager = webhook_manager
        self.default_credentials = default_credentials or []
        self.base_costs = base_costs or []
        self.supported_auth_types = supported_auth_types or set()
        self._api_client_factory = api_client_factory
        self._error_handler = error_handler

        # Store any additional configuration
        self._extra_config = kwargs

    def credentials_field(self, **kwargs) -> CredentialsMetaInput:
        """Return a CredentialsField configured for this provider."""
        # Extract known CredentialsField parameters
        title = kwargs.pop("title", None)
        description = kwargs.pop("description", f"{self.name.title()} credentials")
        required_scopes = kwargs.pop("required_scopes", set())
        discriminator = kwargs.pop("discriminator", None)
        discriminator_mapping = kwargs.pop("discriminator_mapping", None)
        discriminator_values = kwargs.pop("discriminator_values", None)

        # Create json_schema_extra with provider information
        json_schema_extra = {
            "credentials_provider": [self.name],
            "credentials_types": (
                list(self.supported_auth_types)
                if self.supported_auth_types
                else ["api_key"]
            ),
        }

        # Merge any existing json_schema_extra
        if "json_schema_extra" in kwargs:
            json_schema_extra.update(kwargs.pop("json_schema_extra"))

        # Add json_schema_extra to kwargs
        kwargs["json_schema_extra"] = json_schema_extra

        return CredentialsField(
            required_scopes=required_scopes,
            discriminator=discriminator,
            discriminator_mapping=discriminator_mapping,
            discriminator_values=discriminator_values,
            title=title,
            description=description,
            **kwargs,
        )

    def get_api(self, credentials: Credentials) -> Any:
        """Get API client instance for the given credentials."""
        if self._api_client_factory:
            return self._api_client_factory(credentials)
        raise NotImplementedError(f"No API client factory registered for {self.name}")

    def handle_error(self, error: Exception) -> str:
        """Handle provider-specific errors."""
        if self._error_handler:
            return self._error_handler(error)
        return str(error)

    def get_config(self, key: str, default: Any = None) -> Any:
        """Get additional configuration value."""
        return self._extra_config.get(key, default)
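credentials_field stamps the provider name and supported auth types into json_schema_extra so the frontend can render the right credential picker, while letting caller-supplied keys override. The merge in isolation (all names invented):

# Sketch of the json_schema_extra merge performed above; names are invented.
def merged_schema_extra(name: str, auth_types: set, user_extra: dict) -> dict:
    extra = {
        "credentials_provider": [name],
        "credentials_types": list(auth_types) if auth_types else ["api_key"],
    }
    extra.update(user_extra)  # caller-supplied keys win
    return extra

out = merged_schema_extra("my_service", {"oauth2"}, {"ui_hint": "secret"})
assert out["credentials_provider"] == ["my_service"]
assert out["ui_hint"] == "secret"
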
@@ -1,220 +0,0 @@
"""
Auto-registration system for blocks, providers, and their configurations.
"""

import logging
import threading
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Type

from pydantic import BaseModel, SecretStr

from backend.blocks.basic import Block
from backend.data.model import APIKeyCredentials, Credentials
from backend.integrations.oauth.base import BaseOAuthHandler
from backend.integrations.webhooks._base import BaseWebhooksManager

if TYPE_CHECKING:
    from backend.sdk.provider import Provider


class SDKOAuthCredentials(BaseModel):
    """OAuth credentials configuration for SDK providers."""

    use_secrets: bool = False
    client_id_env_var: Optional[str] = None
    client_secret_env_var: Optional[str] = None


class BlockConfiguration:
    """Configuration associated with a block."""

    def __init__(
        self,
        provider: str,
        costs: List[Any],
        default_credentials: List[Credentials],
        webhook_manager: Optional[Type[BaseWebhooksManager]] = None,
        oauth_handler: Optional[Type[BaseOAuthHandler]] = None,
    ):
        self.provider = provider
        self.costs = costs
        self.default_credentials = default_credentials
        self.webhook_manager = webhook_manager
        self.oauth_handler = oauth_handler


class AutoRegistry:
    """Central registry for all block-related configurations."""

    _lock = threading.Lock()
    _providers: Dict[str, "Provider"] = {}
    _default_credentials: List[Credentials] = []
    _oauth_handlers: Dict[str, Type[BaseOAuthHandler]] = {}
    _oauth_credentials: Dict[str, SDKOAuthCredentials] = {}
    _webhook_managers: Dict[str, Type[BaseWebhooksManager]] = {}
    _block_configurations: Dict[Type[Block], BlockConfiguration] = {}
    _api_key_mappings: Dict[str, str] = {}  # provider -> env_var_name

    @classmethod
    def register_provider(cls, provider: "Provider") -> None:
        """Auto-register provider and all its configurations."""
        with cls._lock:
            cls._providers[provider.name] = provider

            # Register OAuth handler if provided
            if provider.oauth_config:
                # Dynamically set PROVIDER_NAME if not already set
                if (
                    not hasattr(provider.oauth_config.oauth_handler, "PROVIDER_NAME")
                    or provider.oauth_config.oauth_handler.PROVIDER_NAME is None
                ):
                    # Import ProviderName to create dynamic enum value
                    from backend.integrations.providers import ProviderName

                    # This works because ProviderName has _missing_ method
                    provider.oauth_config.oauth_handler.PROVIDER_NAME = ProviderName(
                        provider.name
                    )
                cls._oauth_handlers[provider.name] = provider.oauth_config.oauth_handler

                # Register OAuth credentials configuration
                oauth_creds = SDKOAuthCredentials(
                    use_secrets=False,  # SDK providers use custom env vars
                    client_id_env_var=provider.oauth_config.client_id_env_var,
                    client_secret_env_var=provider.oauth_config.client_secret_env_var,
                )
                cls._oauth_credentials[provider.name] = oauth_creds

            # Register webhook manager if provided
            if provider.webhook_manager:
                # Dynamically set PROVIDER_NAME if not already set
                if (
                    not hasattr(provider.webhook_manager, "PROVIDER_NAME")
                    or provider.webhook_manager.PROVIDER_NAME is None
                ):
                    # Import ProviderName to create dynamic enum value
                    from backend.integrations.providers import ProviderName

                    # This works because ProviderName has _missing_ method
                    provider.webhook_manager.PROVIDER_NAME = ProviderName(provider.name)
                cls._webhook_managers[provider.name] = provider.webhook_manager

            # Register default credentials
            cls._default_credentials.extend(provider.default_credentials)

    @classmethod
    def register_api_key(cls, provider: str, env_var_name: str) -> None:
        """Register an environment variable as an API key for a provider."""
        with cls._lock:
            cls._api_key_mappings[provider] = env_var_name

            # Dynamically check if the env var exists and create credential
            import os

            api_key = os.getenv(env_var_name)
            if api_key:
                credential = APIKeyCredentials(
                    id=f"{provider}-default",
                    provider=provider,
                    api_key=SecretStr(api_key),
                    title=f"Default {provider} credentials",
                )
                # Check if credential already exists to avoid duplicates
                if not any(c.id == credential.id for c in cls._default_credentials):
                    cls._default_credentials.append(credential)

    @classmethod
    def get_all_credentials(cls) -> List[Credentials]:
        """Replace hardcoded get_all_creds() in credentials_store.py."""
        with cls._lock:
            return cls._default_credentials.copy()

    @classmethod
    def get_oauth_handlers(cls) -> Dict[str, Type[BaseOAuthHandler]]:
        """Replace HANDLERS_BY_NAME in oauth/__init__.py."""
        with cls._lock:
            return cls._oauth_handlers.copy()

    @classmethod
    def get_oauth_credentials(cls) -> Dict[str, SDKOAuthCredentials]:
        """Get OAuth credentials configuration for SDK providers."""
        with cls._lock:
            return cls._oauth_credentials.copy()

    @classmethod
    def get_webhook_managers(cls) -> Dict[str, Type[BaseWebhooksManager]]:
        """Replace load_webhook_managers() in webhooks/__init__.py."""
        with cls._lock:
            return cls._webhook_managers.copy()

    @classmethod
    def register_block_configuration(
        cls, block_class: Type[Block], config: BlockConfiguration
    ) -> None:
        """Register configuration for a specific block class."""
        with cls._lock:
            cls._block_configurations[block_class] = config

    @classmethod
    def get_provider(cls, name: str) -> Optional["Provider"]:
        """Get a registered provider by name."""
        with cls._lock:
            return cls._providers.get(name)

    @classmethod
    def get_all_provider_names(cls) -> List[str]:
        """Get all registered provider names."""
        with cls._lock:
            return list(cls._providers.keys())

    @classmethod
    def clear(cls) -> None:
        """Clear all registrations (useful for testing)."""
        with cls._lock:
            cls._providers.clear()
            cls._default_credentials.clear()
            cls._oauth_handlers.clear()
            cls._webhook_managers.clear()
            cls._block_configurations.clear()
            cls._api_key_mappings.clear()

    @classmethod
    def patch_integrations(cls) -> None:
        """Patch existing integration points to use AutoRegistry."""
        # OAuth handlers are handled by SDKAwareHandlersDict in oauth/__init__.py
        # No patching needed for OAuth handlers

        # Patch webhook managers
        try:
            import sys
            from typing import Any

            # Get the module from sys.modules to respect mocking
            if "backend.integrations.webhooks" in sys.modules:
                webhooks: Any = sys.modules["backend.integrations.webhooks"]
            else:
                import backend.integrations.webhooks

                webhooks: Any = backend.integrations.webhooks

            if hasattr(webhooks, "load_webhook_managers"):
                original_load = webhooks.load_webhook_managers

                def patched_load():
                    # Get original managers
                    managers = original_load()
                    # Add SDK-registered managers
                    sdk_managers = cls.get_webhook_managers()
                    if isinstance(sdk_managers, dict):
                        # Import ProviderName for conversion
                        from backend.integrations.providers import ProviderName

                        # Convert string keys to ProviderName for consistency
                        for provider_str, manager in sdk_managers.items():
                            provider_name = ProviderName(provider_str)
                            managers[provider_name] = manager
                    return managers

                webhooks.load_webhook_managers = patched_load
        except Exception as e:
            logging.warning(f"Failed to patch webhook managers: {e}")
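patch_integrations wraps the existing loader so registry-managed entries ride along with the originals. The wrap-and-merge idiom, isolated from the webhook specifics:

# The wrap-and-merge idiom used by patch_integrations, in isolation.
def load_managers():
    return {"github": "GithubManager"}

def make_patched(original_load, extra_source):
    def patched():
        managers = original_load()
        managers.update(extra_source())  # merge dynamically registered entries
        return managers
    return patched

load = make_patched(load_managers, lambda: {"my_service": "MyManager"})
assert load() == {"github": "GithubManager", "my_service": "MyManager"}
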
@@ -1,6 +1,6 @@
import logging
from collections import defaultdict
from typing import Annotated, Any, Optional, Sequence
from typing import Annotated, Any, Dict, List, Optional, Sequence

from fastapi import APIRouter, Body, Depends, HTTPException
from prisma.enums import AgentExecutionStatus, APIKeyPermission
@@ -11,6 +11,7 @@ from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.api_key import APIKey
from backend.data.block import BlockInput, CompletedBlockOutput
from backend.data.execution import NodeExecutionResult
from backend.executor.utils import add_graph_execution
from backend.server.external.middleware import require_permission
from backend.util.settings import Settings
@@ -29,19 +30,30 @@ class NodeOutput(TypedDict):
class ExecutionNode(TypedDict):
    node_id: str
    input: Any
    output: dict[str, Any]
    output: Dict[str, Any]


class ExecutionNodeOutput(TypedDict):
    node_id: str
    outputs: list[NodeOutput]
    outputs: List[NodeOutput]


class GraphExecutionResult(TypedDict):
    execution_id: str
    status: str
    nodes: list[ExecutionNode]
    output: Optional[list[dict[str, str]]]
    nodes: List[ExecutionNode]
    output: Optional[List[Dict[str, str]]]


def get_outputs_with_names(results: list[NodeExecutionResult]) -> list[dict[str, str]]:
    outputs = []
    for result in results:
        if "output" in result.output_data:
            output_value = result.output_data["output"][0]
            name = result.output_data.get("name", [None])[0]
            if output_value and name:
                outputs.append({name: output_value})
    return outputs

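get_outputs_with_names pairs each node's first "output" value with its first "name" value and drops incomplete pairs. With invented stub data (assuming the function above is in scope):

# Invented NodeExecutionResult-like stubs; only output_data matters here.
class Stub:
    def __init__(self, output_data):
        self.output_data = output_data

results = [
    Stub({"output": ["42"], "name": ["answer"]}),
    Stub({"output": ["ignored"]}),   # no name -> skipped
    Stub({"status": ["done"]}),      # no "output" key -> skipped
]
assert get_outputs_with_names(results) == [{"answer": "42"}]
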
@v1_router.get(
@@ -110,34 +122,23 @@ async def get_graph_execution_results(
    if not graph:
        raise HTTPException(status_code=404, detail=f"Graph #{graph_id} not found.")

    graph_exec = await execution_db.get_graph_execution(
        user_id=api_key.user_id,
        execution_id=graph_exec_id,
        include_node_executions=True,
    results = await execution_db.get_node_executions(graph_exec_id)
    last_result = results[-1] if results else None
    execution_status = (
        last_result.status if last_result else AgentExecutionStatus.INCOMPLETE
    )
    if not graph_exec:
        raise HTTPException(
            status_code=404, detail=f"Graph execution #{graph_exec_id} not found."
        )
    outputs = get_outputs_with_names(results)

    return GraphExecutionResult(
        execution_id=graph_exec_id,
        status=graph_exec.status.value,
        status=execution_status,
        nodes=[
            ExecutionNode(
                node_id=node_exec.node_id,
                input=node_exec.input_data.get("value", node_exec.input_data),
                output={k: v for k, v in node_exec.output_data.items()},
                node_id=result.node_id,
                input=result.input_data.get("value", result.input_data),
                output={k: v for k, v in result.output_data.items()},
            )
            for node_exec in graph_exec.node_executions
            for result in results
        ],
        output=(
            [
                {name: value}
                for name, values in graph_exec.outputs.items()
                for value in values
            ]
            if graph_exec.status == AgentExecutionStatus.COMPLETED
            else None
        ),
        output=outputs if execution_status == AgentExecutionStatus.COMPLETED else None,
    )

@@ -1,74 +0,0 @@
"""
Models for integration-related data structures that need to be exposed in the OpenAPI schema.

This module provides models that will be included in the OpenAPI schema generation,
allowing frontend code generators like Orval to create corresponding TypeScript types.
"""

from pydantic import BaseModel, Field

from backend.integrations.providers import ProviderName
from backend.sdk.registry import AutoRegistry


def get_all_provider_names() -> list[str]:
    """
    Collect all provider names from both ProviderName enum and AutoRegistry.

    This function should be called at runtime to ensure we get all
    dynamically registered providers.

    Returns:
        A sorted list of unique provider names.
    """
    # Get static providers from enum
    static_providers = [member.value for member in ProviderName]

    # Get dynamic providers from registry
    dynamic_providers = AutoRegistry.get_all_provider_names()

    # Combine and deduplicate
    all_providers = list(set(static_providers + dynamic_providers))
    all_providers.sort()

    return all_providers


# Note: We don't create a static enum here because providers are registered dynamically.
# Instead, we expose provider names through API endpoints that can be fetched at runtime.


class ProviderNamesResponse(BaseModel):
    """Response containing list of all provider names."""

    providers: list[str] = Field(
        description="List of all available provider names",
        default_factory=get_all_provider_names,
    )


class ProviderConstants(BaseModel):
    """
    Model that exposes all provider names as a constant in the OpenAPI schema.
    This is designed to be converted by Orval into a TypeScript constant.
    """

    PROVIDER_NAMES: dict[str, str] = Field(
        description="All available provider names as a constant mapping",
        default_factory=lambda: {
            name.upper().replace("-", "_"): name for name in get_all_provider_names()
        },
    )

    class Config:
        schema_extra = {
            "example": {
                "PROVIDER_NAMES": {
                    "OPENAI": "openai",
                    "ANTHROPIC": "anthropic",
                    "EXA": "exa",
                    "GEM": "gem",
                    "EXAMPLE_SERVICE": "example-service",
                }
            }
        }
@@ -1,6 +1,6 @@
import asyncio
import logging
from typing import TYPE_CHECKING, Annotated, Awaitable, List, Literal
from typing import TYPE_CHECKING, Annotated, Awaitable, Literal

from fastapi import (
    APIRouter,
@@ -30,14 +30,9 @@ from backend.data.model import (
)
from backend.executor.utils import add_graph_execution
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.oauth import CREDENTIALS_BY_PROVIDER, HANDLERS_BY_NAME
from backend.integrations.oauth import HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks import get_webhook_manager
from backend.server.integrations.models import (
    ProviderConstants,
    ProviderNamesResponse,
    get_all_provider_names,
)
from backend.server.v2.library.db import set_preset_webhook, update_preset
from backend.util.exceptions import NeedConfirmation, NotFoundError
from backend.util.settings import Settings
@@ -477,49 +472,14 @@ async def remove_all_webhooks_for_credentials(
def _get_provider_oauth_handler(
    req: Request, provider_name: ProviderName
) -> "BaseOAuthHandler":
    # Ensure blocks are loaded so SDK providers are available
    try:
        from backend.blocks import load_all_blocks

        load_all_blocks()  # This is cached, so it only runs once
    except Exception as e:
        logger.warning(f"Failed to load blocks: {e}")

    # Convert provider_name to string for lookup
    provider_key = (
        provider_name.value if hasattr(provider_name, "value") else str(provider_name)
    )

    if provider_key not in HANDLERS_BY_NAME:
    if provider_name not in HANDLERS_BY_NAME:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=f"Provider '{provider_key}' does not support OAuth",
        )

    # Check if this provider has custom OAuth credentials
    oauth_credentials = CREDENTIALS_BY_PROVIDER.get(provider_key)

    if oauth_credentials and not oauth_credentials.use_secrets:
        # SDK provider with custom env vars
        import os

        client_id = (
            os.getenv(oauth_credentials.client_id_env_var)
            if oauth_credentials.client_id_env_var
            else None
        )
        client_secret = (
            os.getenv(oauth_credentials.client_secret_env_var)
            if oauth_credentials.client_secret_env_var
            else None
        )
    else:
        # Original provider using settings.secrets
        client_id = getattr(settings.secrets, f"{provider_name.value}_client_id", None)
        client_secret = getattr(
            settings.secrets, f"{provider_name.value}_client_secret", None
            detail=f"Provider '{provider_name.value}' does not support OAuth",
        )

    client_id = getattr(settings.secrets, f"{provider_name.value}_client_id")
    client_secret = getattr(settings.secrets, f"{provider_name.value}_client_secret")
    if not (client_id and client_secret):
        logger.error(
            f"Attempt to use unconfigured {provider_name.value} OAuth integration"
@@ -532,84 +492,14 @@ def _get_provider_oauth_handler(
        },
    )

    handler_class = HANDLERS_BY_NAME[provider_key]
    frontend_base_url = settings.config.frontend_base_url

    if not frontend_base_url:
        raise HTTPException(
            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
            detail="Frontend base URL is not configured",
        )

    handler_class = HANDLERS_BY_NAME[provider_name]
    frontend_base_url = (
        settings.config.frontend_base_url
        or settings.config.platform_base_url
        or str(req.base_url)
    )
    return handler_class(
        client_id=client_id,
        client_secret=client_secret,
        redirect_uri=f"{frontend_base_url}/auth/integrations/oauth_callback",
    )


# === PROVIDER DISCOVERY ENDPOINTS ===


@router.get("/providers", response_model=List[str])
async def list_providers() -> List[str]:
    """
    Get a list of all available provider names.

    Returns both statically defined providers (from ProviderName enum)
    and dynamically registered providers (from SDK decorators).

    Note: The complete list of provider names is also available as a constant
    in the generated TypeScript client via PROVIDER_NAMES.
    """
    # Get all providers at runtime
    all_providers = get_all_provider_names()
    return all_providers


@router.get("/providers/names", response_model=ProviderNamesResponse)
async def get_provider_names() -> ProviderNamesResponse:
    """
    Get all provider names in a structured format.

    This endpoint is specifically designed to expose the provider names
    in the OpenAPI schema so that code generators like Orval can create
    appropriate TypeScript constants.
    """
    return ProviderNamesResponse()


@router.get("/providers/constants", response_model=ProviderConstants)
async def get_provider_constants() -> ProviderConstants:
    """
    Get provider names as constants.

    This endpoint returns a model with provider names as constants,
    specifically designed for OpenAPI code generation tools to create
    TypeScript constants.
    """
    return ProviderConstants()


class ProviderEnumResponse(BaseModel):
    """Response containing a provider from the enum."""

    provider: str = Field(
        description="A provider name from the complete list of providers"
    )


@router.get("/providers/enum-example", response_model=ProviderEnumResponse)
async def get_provider_enum_example() -> ProviderEnumResponse:
    """
    Example endpoint that uses the CompleteProviderNames enum.

    This endpoint exists to ensure that the CompleteProviderNames enum is included
    in the OpenAPI schema, which will cause Orval to generate it as a
    TypeScript enum/constant.
    """
    # Return the first provider as an example
    all_providers = get_all_provider_names()
    return ProviderEnumResponse(
        provider=all_providers[0] if all_providers else "openai"
    )

@@ -62,10 +62,6 @@ def launch_darkly_context():
async def lifespan_context(app: fastapi.FastAPI):
    await backend.data.db.connect()
    await backend.data.block.initialize_blocks()

    # SDK auto-registration is now handled by AutoRegistry.patch_integrations()
    # which is called when the SDK module is imported

    await backend.data.user.migrate_and_encrypt_user_integrations()
    await backend.data.graph.fix_llm_provider_credentials()
    await backend.data.graph.migrate_llm_models(LlmModel.GPT4O)

@@ -448,10 +448,10 @@ class DeleteGraphResponse(TypedDict):
    tags=["graphs"],
    dependencies=[Depends(auth_middleware)],
)
async def list_graphs(
async def get_graphs(
    user_id: Annotated[str, Depends(get_user_id)],
) -> Sequence[graph_db.GraphMeta]:
    return await graph_db.list_graphs(filter_by="active", user_id=user_id)
) -> Sequence[graph_db.GraphModel]:
    return await graph_db.get_graphs(filter_by="active", user_id=user_id)


@v1_router.get(
@@ -669,17 +669,36 @@ async def execute_graph(
)
async def stop_graph_run(
    graph_id: str, graph_exec_id: str, user_id: Annotated[str, Depends(get_user_id)]
) -> execution_db.GraphExecutionMeta | None:
) -> execution_db.GraphExecutionMeta:
    res = await _stop_graph_run(
        user_id=user_id,
        graph_id=graph_id,
        graph_exec_id=graph_exec_id,
    )
    if not res:
        return None
        raise HTTPException(
            status_code=HTTP_404_NOT_FOUND,
            detail=f"Graph execution #{graph_exec_id} not found.",
        )
    return res[0]


@v1_router.post(
    path="/executions",
    summary="Stop graph executions",
    tags=["graphs"],
    dependencies=[Depends(auth_middleware)],
)
async def stop_graph_runs(
    graph_id: str, graph_exec_id: str, user_id: Annotated[str, Depends(get_user_id)]
) -> list[execution_db.GraphExecutionMeta]:
    return await _stop_graph_run(
        user_id=user_id,
        graph_id=graph_id,
        graph_exec_id=graph_exec_id,
    )


async def _stop_graph_run(
    user_id: str,
    graph_id: Optional[str] = None,

@@ -270,7 +270,7 @@ def test_get_graphs(
    )

    mocker.patch(
        "backend.server.routers.v1.graph_db.list_graphs",
        "backend.server.routers.v1.graph_db.get_graphs",
        return_value=[mock_graph],
    )

@@ -187,7 +187,7 @@ async def get_library_agent(id: str, user_id: str) -> library_model.LibraryAgent
async def get_library_agent_by_store_version_id(
    store_listing_version_id: str,
    user_id: str,
) -> library_model.LibraryAgent | None:
):
    """
    Get the library agent metadata for a given store listing version ID and user ID.
    """
@@ -202,7 +202,7 @@ async def get_library_agent_by_store_version_id(
    )
    if not store_listing_version:
        logger.warning(f"Store listing version not found: {store_listing_version_id}")
        raise NotFoundError(
        raise store_exceptions.AgentNotFoundError(
            f"Store listing version {store_listing_version_id} not found or invalid"
        )

@@ -214,9 +214,12 @@ async def get_library_agent_by_store_version_id(
            "agentGraphVersion": store_listing_version.agentGraphVersion,
            "isDeleted": False,
        },
        include=library_agent_include(user_id),
        include={"AgentGraph": True},
    )
    return library_model.LibraryAgent.from_db(agent) if agent else None
    if agent:
        return library_model.LibraryAgent.from_db(agent)
    else:
        return None


async def get_library_agent_by_graph_id(

@@ -127,9 +127,9 @@ class LibraryAgent(pydantic.BaseModel):
            description=graph.description,
            input_schema=graph.input_schema,
            credentials_input_schema=(
                graph.credentials_input_schema if sub_graphs is not None else None
                graph.credentials_input_schema if sub_graphs else None
            ),
            has_external_trigger=graph.has_external_trigger,
            has_external_trigger=graph.has_webhook_trigger,
            trigger_setup_info=(
                LibraryAgentTriggerInfo(
                    provider=trigger_block.webhook_config.provider,
@@ -262,19 +262,6 @@ class LibraryAgentPresetUpdatable(pydantic.BaseModel):
    is_active: Optional[bool] = None


class TriggeredPresetSetupRequest(pydantic.BaseModel):
    name: str
    description: str = ""

    graph_id: str
    graph_version: int

    trigger_config: dict[str, Any]
    agent_credentials: dict[str, CredentialsMetaInput] = pydantic.Field(
        default_factory=dict
    )


class LibraryAgentPreset(LibraryAgentPresetCreatable):
    """Represents a preset configuration for a library agent."""

@@ -1,13 +1,18 @@
import logging
from typing import Optional
from typing import Any, Optional

import autogpt_libs.auth as autogpt_auth_lib
from fastapi import APIRouter, Body, Depends, HTTPException, Query, status
from fastapi import APIRouter, Body, Depends, HTTPException, Path, Query, status
from fastapi.responses import Response
from pydantic import BaseModel, Field

import backend.server.v2.library.db as library_db
import backend.server.v2.library.model as library_model
import backend.server.v2.store.exceptions as store_exceptions
from backend.data.graph import get_graph
from backend.data.model import CredentialsMetaInput
from backend.executor.utils import make_node_credentials_input_map
from backend.integrations.webhooks.utils import setup_webhook_for_block
from backend.util.exceptions import NotFoundError

logger = logging.getLogger(__name__)
@@ -108,11 +113,12 @@ async def get_library_agent_by_graph_id(
    "/marketplace/{store_listing_version_id}",
    summary="Get Agent By Store ID",
    tags=["store, library"],
    response_model=library_model.LibraryAgent | None,
)
async def get_library_agent_by_store_listing_version_id(
    store_listing_version_id: str,
    user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
) -> library_model.LibraryAgent | None:
):
    """
    Get Library Agent from Store Listing Version ID.
    """
@@ -289,3 +295,81 @@ async def fork_library_agent(
        library_agent_id=library_agent_id,
        user_id=user_id,
    )


class TriggeredPresetSetupParams(BaseModel):
    name: str
    description: str = ""

    trigger_config: dict[str, Any]
    agent_credentials: dict[str, CredentialsMetaInput] = Field(default_factory=dict)


@router.post("/{library_agent_id}/setup-trigger")
async def setup_trigger(
    library_agent_id: str = Path(..., description="ID of the library agent"),
    params: TriggeredPresetSetupParams = Body(),
    user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
) -> library_model.LibraryAgentPreset:
    """
    Sets up a webhook-triggered `LibraryAgentPreset` for a `LibraryAgent`.
    Returns the correspondingly created `LibraryAgentPreset` with `webhook_id` set.
    """
    library_agent = await library_db.get_library_agent(
        id=library_agent_id, user_id=user_id
    )
    if not library_agent:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=f"Library agent #{library_agent_id} not found",
        )

    graph = await get_graph(
        library_agent.graph_id, version=library_agent.graph_version, user_id=user_id
    )
    if not graph:
        raise HTTPException(
            status.HTTP_410_GONE,
            f"Graph #{library_agent.graph_id} not accessible (anymore)",
        )
    if not (trigger_node := graph.webhook_input_node):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=f"Graph #{library_agent.graph_id} does not have a webhook node",
        )

    trigger_config_with_credentials = {
        **params.trigger_config,
        **(
            make_node_credentials_input_map(graph, params.agent_credentials).get(
                trigger_node.id
            )
            or {}
        ),
    }

    new_webhook, feedback = await setup_webhook_for_block(
        user_id=user_id,
        trigger_block=trigger_node.block,
        trigger_config=trigger_config_with_credentials,
    )
    if not new_webhook:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=f"Could not set up webhook: {feedback}",
        )

    new_preset = await library_db.create_preset(
        user_id=user_id,
        preset=library_model.LibraryAgentPresetCreatable(
            graph_id=library_agent.graph_id,
            graph_version=library_agent.graph_version,
            name=params.name,
            description=params.description,
            inputs=trigger_config_with_credentials,
            credentials=params.agent_credentials,
            webhook_id=new_webhook.id,
            is_active=True,
        ),
    )
    return new_preset

@@ -138,66 +138,6 @@ async def create_preset(
    )


@router.post("/presets/setup-trigger")
async def setup_trigger(
    params: models.TriggeredPresetSetupRequest = Body(),
    user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
) -> models.LibraryAgentPreset:
    """
    Sets up a webhook-triggered `LibraryAgentPreset` for a `LibraryAgent`.
    Returns the correspondingly created `LibraryAgentPreset` with `webhook_id` set.
    """
    graph = await get_graph(
        params.graph_id, version=params.graph_version, user_id=user_id
    )
    if not graph:
        raise HTTPException(
            status.HTTP_410_GONE,
            f"Graph #{params.graph_id} not accessible (anymore)",
        )
    if not (trigger_node := graph.webhook_input_node):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=f"Graph #{params.graph_id} does not have a webhook node",
        )

    trigger_config_with_credentials = {
        **params.trigger_config,
        **(
            make_node_credentials_input_map(graph, params.agent_credentials).get(
                trigger_node.id
            )
            or {}
        ),
    }

    new_webhook, feedback = await setup_webhook_for_block(
        user_id=user_id,
        trigger_block=trigger_node.block,
        trigger_config=trigger_config_with_credentials,
    )
    if not new_webhook:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=f"Could not set up webhook: {feedback}",
        )

    new_preset = await db.create_preset(
        user_id=user_id,
        preset=models.LibraryAgentPresetCreatable(
            graph_id=params.graph_id,
            graph_version=params.graph_version,
            name=params.name,
            description=params.description,
            inputs=trigger_config_with_credentials,
            credentials=params.agent_credentials,
            webhook_id=new_webhook.id,
            is_active=True,
        ),
    )
    return new_preset


@router.patch(
    "/presets/{preset_id}",
    summary="Update an existing preset",

@@ -7,15 +7,10 @@ import prisma.errors
import prisma.models
import prisma.types

import backend.data.graph
import backend.server.v2.store.exceptions
import backend.server.v2.store.model
from backend.data.graph import (
    GraphMeta,
    GraphModel,
    get_graph,
    get_graph_as_admin,
    get_sub_graphs,
)
from backend.data.graph import GraphModel, get_sub_graphs
from backend.data.includes import AGENT_GRAPH_INCLUDE

logger = logging.getLogger(__name__)
@@ -198,7 +193,9 @@ async def get_store_agent_details(
        ) from e


async def get_available_graph(store_listing_version_id: str) -> GraphMeta:
async def get_available_graph(
    store_listing_version_id: str,
):
    try:
        # Get available, non-deleted store listing version
        store_listing_version = (
@@ -218,7 +215,18 @@ async def get_available_graph(store_listing_version_id: str) -> GraphMeta:
                detail=f"Store listing version {store_listing_version_id} not found",
            )

        return GraphModel.from_db(store_listing_version.AgentGraph).meta()
        graph = GraphModel.from_db(store_listing_version.AgentGraph)
        # We return the graph meta without nodes; the nodes cannot simply be
        # removed, because then input_schema would be empty
        return {
            "id": graph.id,
            "version": graph.version,
            "is_active": graph.is_active,
            "name": graph.name,
            "description": graph.description,
            "input_schema": graph.input_schema,
            "output_schema": graph.output_schema,
        }

    except Exception as e:
        logger.error(f"Error getting agent: {e}")
@@ -1016,7 +1024,7 @@ async def get_agent(
    if not store_listing_version:
        raise ValueError(f"Store listing version {store_listing_version_id} not found")

    graph = await get_graph(
    graph = await backend.data.graph.get_graph(
        user_id=user_id,
        graph_id=store_listing_version.agentGraphId,
        version=store_listing_version.agentGraphVersion,
@@ -1375,7 +1383,7 @@ async def get_agent_as_admin(
    if not store_listing_version:
        raise ValueError(f"Store listing version {store_listing_version_id} not found")

    graph = await get_graph_as_admin(
    graph = await backend.data.graph.get_graph_as_admin(
        user_id=user_id,
        graph_id=store_listing_version.agentGraphId,
        version=store_listing_version.agentGraphVersion,

@@ -12,7 +12,7 @@ from backend.util import json

def _tok_len(text: str, enc) -> int:
    """True token length of *text* in tokenizer *enc* (no wrapper cost)."""
    return len(enc.encode(str(text)))
    return len(enc.encode(text))


def _msg_tokens(msg: dict, enc) -> int:
@@ -29,7 +29,7 @@ def _truncate_middle_tokens(text: str, enc, max_tok: int) -> str:
    Return *text* shortened to ≈max_tok tokens by keeping the head & tail
    and inserting an ellipsis token in the middle.
    """
    ids = enc.encode(str(text))
    ids = enc.encode(text)
    if len(ids) <= max_tok:
        return text  # nothing to do


@@ -124,19 +124,6 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
        description="Time in seconds for how far back to check for the late executions.",
    )

    block_error_rate_threshold: float = Field(
        default=0.5,
        description="Error rate threshold (0.0-1.0) for triggering block error alerts.",
    )
    block_error_rate_check_interval_secs: int = Field(
        default=24 * 60 * 60,  # 24 hours
        description="Interval in seconds between block error rate checks.",
    )
    block_error_include_top_blocks: int = Field(
        default=3,
        description="Number of top blocks with most errors to show when no blocks exceed threshold (0 to disable).",
    )

    model_config = SettingsConfigDict(
        env_file=".env",
        extra="allow",
@@ -276,11 +263,6 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
        description="Whether to mark failed scans as clean or not",
    )

    enable_example_blocks: bool = Field(
        default=False,
        description="Whether to enable example blocks in production",
    )

    @field_validator("platform_base_url", "frontend_base_url")
    @classmethod
    def validate_platform_base_url(cls, v: str, info: ValidationInfo) -> str:

@@ -1,21 +1,3 @@
"""
Test Data Creator for AutoGPT Platform

This script creates test data for the AutoGPT platform database.

Image/Video URL Domains Used:
- Images: picsum.photos (for all image URLs - avatars, store listing images, etc.)
- Videos: youtube.com (for store listing video URLs)

Add these domains to your Next.js config:
```javascript
// next.config.js
images: {
  domains: ['picsum.photos'],
}
```
"""

import asyncio
import random
from datetime import datetime
@@ -32,7 +14,6 @@ from prisma.types import (
    AnalyticsMetricsCreateInput,
    APIKeyCreateInput,
    CreditTransactionCreateInput,
    IntegrationWebhookCreateInput,
    ProfileCreateInput,
    StoreListingReviewCreateInput,
    UserCreateInput,
@@ -72,26 +53,10 @@ MAX_REVIEWS_PER_VERSION = 5  # Total reviews depends on number of versions creat


def get_image():
    """Generate a consistent image URL using picsum.photos service."""
    width = random.choice([200, 300, 400, 500, 600, 800])
    height = random.choice([200, 300, 400, 500, 600, 800])
    # Use a random seed to get different images
    seed = random.randint(1, 1000)
    return f"https://picsum.photos/seed/{seed}/{width}/{height}"


def get_video_url():
    """Generate a consistent video URL using a placeholder service."""
    # Using YouTube as a consistent source for video URLs
    video_ids = [
        "dQw4w9WgXcQ",  # Example video IDs
        "9bZkp7q19f0",
        "kJQP7kiw5Fk",
        "RgKAFK5djSk",
        "L_jWHffIx5E",
    ]
    video_id = random.choice(video_ids)
    return f"https://www.youtube.com/watch?v={video_id}"
    url = faker.image_url()
    while "placekitten.com" in url:
        url = faker.image_url()
    return url


async def main():
@@ -182,27 +147,12 @@ async def main():
        )
        agent_presets.append(preset)

    # Insert Profiles first (before LibraryAgents)
    profiles = []
    print(f"Inserting {NUM_USERS} profiles")
    for user in users:
        profile = await db.profile.create(
            data=ProfileCreateInput(
                userId=user.id,
                name=user.name or faker.name(),
                username=faker.unique.user_name(),
                description=faker.text(),
                links=[faker.url() for _ in range(3)],
                avatarUrl=get_image(),
            )
        )
        profiles.append(profile)

    # Insert LibraryAgents
    library_agents = []
    print("Inserting library agents")
    # Insert UserAgents
    user_agents = []
    print(f"Inserting {NUM_USERS * MAX_AGENTS_PER_USER} user agents")
    for user in users:
        num_agents = random.randint(MIN_AGENTS_PER_USER, MAX_AGENTS_PER_USER)

        # Get a shuffled list of graphs to ensure uniqueness per user
        available_graphs = agent_graphs.copy()
        random.shuffle(available_graphs)
@@ -212,27 +162,18 @@ async def main():

        for i in range(num_agents):
            graph = available_graphs[i]  # Use unique graph for each library agent

            # Get creator profile for this graph's owner
            creator_profile = next(
                (p for p in profiles if p.userId == graph.userId), None
            )

            library_agent = await db.libraryagent.create(
            user_agent = await db.libraryagent.create(
                data={
                    "userId": user.id,
                    "agentGraphId": graph.id,
                    "agentGraphVersion": graph.version,
                    "creatorId": creator_profile.id if creator_profile else None,
                    "imageUrl": get_image() if random.random() < 0.5 else None,
                    "useGraphIsActiveVersion": random.choice([True, False]),
                    "isFavorite": random.choice([True, False]),
                    "isCreatedByUser": random.choice([True, False]),
                    "isArchived": random.choice([True, False]),
                    "isDeleted": random.choice([True, False]),
                }
            )
            library_agents.append(library_agent)
            user_agents.append(user_agent)

    # Insert AgentGraphExecutions
    agent_graph_executions = []
@@ -384,9 +325,25 @@ async def main():
            )
        )

    # Insert Profiles
    profiles = []
    print(f"Inserting {NUM_USERS} profiles")
    for user in users:
        profile = await db.profile.create(
            data=ProfileCreateInput(
                userId=user.id,
                name=user.name or faker.name(),
                username=faker.unique.user_name(),
                description=faker.text(),
                links=[faker.url() for _ in range(3)],
                avatarUrl=get_image(),
            )
        )
        profiles.append(profile)

    # Insert StoreListings
    store_listings = []
    print("Inserting store listings")
    print(f"Inserting {NUM_USERS} store listings")
    for graph in agent_graphs:
        user = random.choice(users)
        slug = faker.slug()
@@ -403,7 +360,7 @@ async def main():

    # Insert StoreListingVersions
    store_listing_versions = []
    print("Inserting store listing versions")
    print(f"Inserting {NUM_USERS} store listing versions")
    for listing in store_listings:
        graph = [g for g in agent_graphs if g.id == listing.agentGraphId][0]
        version = await db.storelistingversion.create(
@@ -412,7 +369,7 @@ async def main():
                "agentGraphVersion": graph.version,
                "name": graph.name or faker.sentence(nb_words=3),
                "subHeading": faker.sentence(),
                "videoUrl": get_video_url() if random.random() < 0.3 else None,
                "videoUrl": faker.url(),
                "imageUrls": [get_image() for _ in range(3)],
                "description": faker.text(),
                "categories": [faker.word() for _ in range(3)],
@@ -431,7 +388,7 @@ async def main():
        store_listing_versions.append(version)

    # Insert StoreListingReviews
    print("Inserting store listing reviews")
    print(f"Inserting {NUM_USERS * MAX_REVIEWS_PER_VERSION} store listing reviews")
    for version in store_listing_versions:
        # Create a copy of users list and shuffle it to avoid duplicates
        available_reviewers = users.copy()
@@ -454,92 +411,26 @@ async def main():
            )
        )

    # Insert UserOnboarding for some users
    print("Inserting user onboarding data")
    for user in random.sample(
        users, k=int(NUM_USERS * 0.7)
    ):  # 70% of users have onboarding data
        completed_steps = []
        possible_steps = list(prisma.enums.OnboardingStep)
        # Randomly complete some steps
        if random.random() < 0.8:
            num_steps = random.randint(1, len(possible_steps))
            completed_steps = random.sample(possible_steps, k=num_steps)

        try:
            await db.useronboarding.create(
                data={
                    "userId": user.id,
                    "completedSteps": completed_steps,
                    "notificationDot": random.choice([True, False]),
                    "notified": (
                        random.sample(completed_steps, k=min(3, len(completed_steps)))
                        if completed_steps
                        else []
                    ),
                    "rewardedFor": (
                        random.sample(completed_steps, k=min(2, len(completed_steps)))
                        if completed_steps
                        else []
                    ),
                    "usageReason": (
                        random.choice(["personal", "business", "research", "learning"])
                        if random.random() < 0.7
                        else None
                    ),
                    "integrations": random.sample(
                        ["github", "google", "discord", "slack"], k=random.randint(0, 2)
                    ),
                    "otherIntegrations": (
                        faker.word() if random.random() < 0.2 else None
                    ),
                    "selectedStoreListingVersionId": (
                        random.choice(store_listing_versions).id
                        if store_listing_versions and random.random() < 0.5
                        else None
                    ),
                    "agentInput": (
                        Json({"test": "data"}) if random.random() < 0.3 else None
                    ),
                    "onboardingAgentExecutionId": (
                        random.choice(agent_graph_executions).id
                        if agent_graph_executions and random.random() < 0.3
                        else None
                    ),
                    "agentRuns": random.randint(0, 10),
                }
            )
        except Exception as e:
            print(f"Error creating onboarding for user {user.id}: {e}")
            # Try simpler version
            await db.useronboarding.create(
                data={
                    "userId": user.id,
                }
            )

    # Insert IntegrationWebhooks for some users
    print("Inserting integration webhooks")
    for user in random.sample(
        users, k=int(NUM_USERS * 0.3)
    ):  # 30% of users have webhooks
        for _ in range(random.randint(1, 3)):
            await db.integrationwebhook.create(
                data=IntegrationWebhookCreateInput(
                    userId=user.id,
                    provider=random.choice(["github", "slack", "discord"]),
                    credentialsId=str(faker.uuid4()),
                    webhookType=random.choice(["repo", "channel", "server"]),
                    resource=faker.slug(),
                    events=[
                        random.choice(["created", "updated", "deleted"])
                        for _ in range(random.randint(1, 3))
                    ],
                    config=prisma.Json({"url": faker.url()}),
                    secret=str(faker.sha256()),
                    providerWebhookId=str(faker.uuid4()),
                )
            )
    # Update StoreListingVersions with submission status (StoreListingSubmissions table no longer exists)
    print(f"Updating {NUM_USERS} store listing versions with submission status")
    for version in store_listing_versions:
        reviewer = random.choice(users)
        status: prisma.enums.SubmissionStatus = random.choice(
            [
                prisma.enums.SubmissionStatus.PENDING,
                prisma.enums.SubmissionStatus.APPROVED,
                prisma.enums.SubmissionStatus.REJECTED,
            ]
        )
        await db.storelistingversion.update(
            where={"id": version.id},
            data={
                "submissionStatus": status,
                "Reviewer": {"connect": {"id": reviewer.id}},
                "reviewComments": faker.text(),
                "reviewedAt": datetime.now(),
            },
        )

    # Insert APIKeys
    print(f"Inserting {NUM_USERS} api keys")
@@ -560,12 +451,7 @@ async def main():
            )
        )

    # Refresh materialized views
    print("Refreshing materialized views...")
    await db.execute_raw("SELECT refresh_store_materialized_views();")

    await db.disconnect()
    print("Test data creation completed successfully!")


if __name__ == "__main__":
@@ -1,6 +1,5 @@
import json
import types
from typing import Any, Type, TypeVar, Union, cast, get_args, get_origin, overload
from typing import Any, Type, TypeVar, cast, get_args, get_origin

from prisma import Json as PrismaJson

@@ -105,37 +104,9 @@ def __convert_bool(value: Any) -> bool:
    return bool(value)


def _try_convert(value: Any, target_type: Any, raise_on_mismatch: bool) -> Any:
def _try_convert(value: Any, target_type: Type, raise_on_mismatch: bool) -> Any:
    origin = get_origin(target_type)
    args = get_args(target_type)

    # Handle Union types (including Optional which is Union[T, None])
    if origin is Union or origin is types.UnionType:
        # Handle None values for Optional types
        if value is None:
            if type(None) in args:
                return None
            elif raise_on_mismatch:
                raise TypeError(f"Value {value} is not of expected type {target_type}")
            else:
                return value

        # Try to convert to each type in the union, excluding None
        non_none_types = [arg for arg in args if arg is not type(None)]

        # Try each type in the union, using the original raise_on_mismatch behavior
        for arg_type in non_none_types:
            try:
                return _try_convert(value, arg_type, raise_on_mismatch)
            except (TypeError, ValueError, ConversionError):
                continue

        # If no conversion succeeded
        if raise_on_mismatch:
            raise TypeError(f"Value {value} is not of expected type {target_type}")
        else:
            return value

    if origin is None:
        origin = target_type
    if origin not in [list, dict, tuple, str, set, int, float, bool]:
@@ -218,19 +189,11 @@ def type_match(value: Any, target_type: Type[T]) -> T:
    return cast(T, _try_convert(value, target_type, raise_on_mismatch=True))


@overload
def convert(value: Any, target_type: Type[T]) -> T: ...


@overload
def convert(value: Any, target_type: Any) -> Any: ...


def convert(value: Any, target_type: Any) -> Any:
def convert(value: Any, target_type: Type[T]) -> T:
    try:
        if isinstance(value, PrismaJson):
            value = value.data
        return _try_convert(value, target_type, raise_on_mismatch=False)
        return cast(T, _try_convert(value, target_type, raise_on_mismatch=False))
    except Exception as e:
        raise ConversionError(f"Failed to convert {value} to {target_type}") from e

@@ -240,7 +203,6 @@ class FormattedStringType(str):

    @classmethod
    def __get_pydantic_core_schema__(cls, source_type, handler):
        _ = source_type  # unused parameter required by pydantic
        return handler(str)

    @classmethod

@@ -1,5 +1,3 @@
from typing import List, Optional

from backend.util.type import convert


@@ -7,8 +5,6 @@ def test_type_conversion():
    assert convert(5.5, int) == 5
    assert convert("5.5", int) == 5
    assert convert([1, 2, 3], int) == 3
    assert convert("7", Optional[int]) == 7
    assert convert("7", int | None) == 7

    assert convert("5.5", float) == 5.5
    assert convert(5, float) == 5.0
@@ -29,6 +25,8 @@ def test_type_conversion():
    assert convert([1, 2, 3], dict) == {0: 1, 1: 2, 2: 3}
    assert convert((1, 2, 3), dict) == {0: 1, 1: 2, 2: 3}

    from typing import List

    assert convert("5", List[int]) == [5]
    assert convert("[5,4,2]", List[int]) == [5, 4, 2]
    assert convert([5, 4, 2], List[str]) == ["5", "4", "2"]

@@ -1,101 +0,0 @@
#!/usr/bin/env python3
"""
Clean the test database by removing all data while preserving the schema.

Usage:
    poetry run python clean_test_db.py [--yes]

Options:
    --yes    Skip confirmation prompt
"""

import asyncio
import sys

from prisma import Prisma


async def main():
    db = Prisma()
    await db.connect()

    print("=" * 60)
    print("Cleaning Test Database")
    print("=" * 60)
    print()

    # Get initial counts
    user_count = await db.user.count()
    agent_count = await db.agentgraph.count()

    print(f"Current data: {user_count} users, {agent_count} agent graphs")

    if user_count == 0 and agent_count == 0:
        print("Database is already clean!")
        await db.disconnect()
        return

    # Check for --yes flag
    skip_confirm = "--yes" in sys.argv

    if not skip_confirm:
        response = input("\nDo you want to clean all data? (yes/no): ")
        if response.lower() != "yes":
            print("Aborted.")
            await db.disconnect()
            return

    print("\nCleaning database...")

    # Delete in reverse order of dependencies
    tables = [
        ("UserNotificationBatch", db.usernotificationbatch),
        ("NotificationEvent", db.notificationevent),
        ("CreditRefundRequest", db.creditrefundrequest),
        ("StoreListingReview", db.storelistingreview),
        ("StoreListingVersion", db.storelistingversion),
        ("StoreListing", db.storelisting),
        ("AgentNodeExecutionInputOutput", db.agentnodeexecutioninputoutput),
        ("AgentNodeExecution", db.agentnodeexecution),
        ("AgentGraphExecution", db.agentgraphexecution),
        ("AgentNodeLink", db.agentnodelink),
        ("LibraryAgent", db.libraryagent),
        ("AgentPreset", db.agentpreset),
        ("IntegrationWebhook", db.integrationwebhook),
        ("AgentNode", db.agentnode),
        ("AgentGraph", db.agentgraph),
        ("AgentBlock", db.agentblock),
        ("APIKey", db.apikey),
        ("CreditTransaction", db.credittransaction),
        ("AnalyticsMetrics", db.analyticsmetrics),
        ("AnalyticsDetails", db.analyticsdetails),
        ("Profile", db.profile),
        ("UserOnboarding", db.useronboarding),
        ("User", db.user),
    ]

    for table_name, table in tables:
        try:
            count = await table.count()
            if count > 0:
                await table.delete_many()
                print(f"✓ Deleted {count} records from {table_name}")
        except Exception as e:
            print(f"⚠ Error cleaning {table_name}: {e}")

    # Refresh materialized views (they should be empty now)
    try:
        await db.execute_raw("SELECT refresh_store_materialized_views();")
        print("\n✓ Refreshed materialized views")
    except Exception as e:
        print(f"\n⚠ Could not refresh materialized views: {e}")

    await db.disconnect()

    print("\n" + "=" * 60)
    print("Database cleaned successfully!")
    print("=" * 60)


if __name__ == "__main__":
    asyncio.run(main())
@@ -1,60 +1,35 @@
networks:
  app-network:
    name: app-network
  shared-network:
    name: shared-network

volumes:
  supabase-config:

x-agpt-services:
  &agpt-services
  networks:
    - app-network
    - shared-network

x-supabase-services:
  &supabase-services
  networks:
    - app-network
    - shared-network


volumes:
  clamav-data:

services:

  db:
    <<: *supabase-services
    extends:
      file: ../db/docker/docker-compose.yml
      service: db
  postgres-test:
    image: ankane/pgvector:latest
    environment:
      - POSTGRES_USER=${DB_USER:-postgres}
      - POSTGRES_PASSWORD=${DB_PASS:-postgres}
      - POSTGRES_DB=${DB_NAME:-postgres}
      - POSTGRES_PORT=${DB_PORT:-5432}
    healthcheck:
      test: pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB
      interval: 10s
      timeout: 5s
      retries: 5
    ports:
      - ${POSTGRES_PORT}:5432  # We don't use Supavisor locally, so we expose the db directly.

  vector:
    <<: *supabase-services
    extends:
      file: ../db/docker/docker-compose.yml
      service: vector

  redis:
    <<: *agpt-services
      - "${DB_PORT:-5432}:5432"
    networks:
      - app-network-test
  redis-test:
    image: redis:latest
    command: redis-server --requirepass password
    ports:
      - "6379:6379"
    networks:
      - app-network-test
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  rabbitmq:
    <<: *agpt-services
  rabbitmq-test:
    image: rabbitmq:management
    container_name: rabbitmq
    container_name: rabbitmq-test
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 30s
@@ -63,28 +38,11 @@ services:
      start_period: 10s
    environment:
      - RABBITMQ_DEFAULT_USER=rabbitmq_user_default
      - RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
      - RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7 # CHANGE THIS TO A RANDOM PASSWORD IN PRODUCTION -- everywhere lol
    ports:
      - "5672:5672"
      - "15672:15672"
  clamav:
    image: clamav/clamav-debian:latest
    ports:
      - "3310:3310"
    volumes:
      - clamav-data:/var/lib/clamav
    environment:
      - CLAMAV_NO_FRESHCLAMD=false
      - CLAMD_CONF_StreamMaxLength=50M
      - CLAMD_CONF_MaxFileSize=100M
      - CLAMD_CONF_MaxScanSize=100M
      - CLAMD_CONF_MaxThreads=12
      - CLAMD_CONF_ReadTimeout=300
    healthcheck:
      test: ["CMD-SHELL", "clamdscan --version || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3

networks:
  app-network-test:
    driver: bridge

@@ -1,254 +0,0 @@
-- This migration creates materialized views for performance optimization
--
-- IMPORTANT: For production environments, pg_cron is REQUIRED for automatic refresh
-- Prerequisites for production:
-- 1. pg_cron extension must be installed: CREATE EXTENSION pg_cron;
-- 2. pg_cron must be configured in postgresql.conf:
--    shared_preload_libraries = 'pg_cron'
--    cron.database_name = 'your_database_name'
--
-- For development environments without pg_cron:
-- The migration will succeed but you must manually refresh views with:
--   SELECT refresh_store_materialized_views();

-- Check if pg_cron extension is installed and set a flag
DO $$
DECLARE
    has_pg_cron BOOLEAN;
BEGIN
    SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'pg_cron') INTO has_pg_cron;

    IF NOT has_pg_cron THEN
        RAISE WARNING 'pg_cron extension is not installed!';
        RAISE WARNING 'Materialized views will be created but WILL NOT refresh automatically.';
        RAISE WARNING 'For production use, install pg_cron with: CREATE EXTENSION pg_cron;';
        RAISE WARNING 'For development, manually refresh with: SELECT refresh_store_materialized_views();';

        -- For production deployments, uncomment the following line to make pg_cron mandatory:
        -- RAISE EXCEPTION 'pg_cron is required for production deployments';
    END IF;

    -- Store the flag for later use in the migration
    PERFORM set_config('migration.has_pg_cron', has_pg_cron::text, false);
END
$$;

-- CreateIndex
-- Optimized: Only include owningUserId in index columns since isDeleted and hasApprovedVersion are in WHERE clause
CREATE INDEX IF NOT EXISTS "idx_store_listing_approved" ON "StoreListing"("owningUserId") WHERE "isDeleted" = false AND "hasApprovedVersion" = true;

-- CreateIndex
-- Optimized: Only include storeListingId since submissionStatus is in WHERE clause
CREATE INDEX IF NOT EXISTS "idx_store_listing_version_status" ON "StoreListingVersion"("storeListingId") WHERE "submissionStatus" = 'APPROVED';

-- CreateIndex
CREATE INDEX IF NOT EXISTS "idx_slv_categories_gin" ON "StoreListingVersion" USING GIN ("categories") WHERE "submissionStatus" = 'APPROVED';

-- CreateIndex
CREATE INDEX IF NOT EXISTS "idx_slv_agent" ON "StoreListingVersion"("agentGraphId", "agentGraphVersion") WHERE "submissionStatus" = 'APPROVED';

-- CreateIndex
CREATE INDEX IF NOT EXISTS "idx_store_listing_review_version" ON "StoreListingReview"("storeListingVersionId");

-- CreateIndex
CREATE INDEX IF NOT EXISTS "idx_agent_graph_execution_agent" ON "AgentGraphExecution"("agentGraphId");

-- CreateIndex
CREATE INDEX IF NOT EXISTS "idx_profile_user" ON "Profile"("userId");

-- Additional performance indexes
CREATE INDEX IF NOT EXISTS "idx_store_listing_version_approved_listing" ON "StoreListingVersion"("storeListingId", "version") WHERE "submissionStatus" = 'APPROVED';

-- Create materialized view for agent run counts
CREATE MATERIALIZED VIEW IF NOT EXISTS "mv_agent_run_counts" AS
SELECT
    "agentGraphId",
    COUNT(*) AS run_count
FROM "AgentGraphExecution"
GROUP BY "agentGraphId";

-- CreateIndex
CREATE UNIQUE INDEX IF NOT EXISTS "idx_mv_agent_run_counts" ON "mv_agent_run_counts"("agentGraphId");

-- Create materialized view for review statistics
CREATE MATERIALIZED VIEW IF NOT EXISTS "mv_review_stats" AS
SELECT
    sl.id AS "storeListingId",
    COUNT(sr.id) AS review_count,
    AVG(sr.score::numeric) AS avg_rating
FROM "StoreListing" sl
JOIN "StoreListingVersion" slv ON slv."storeListingId" = sl.id
LEFT JOIN "StoreListingReview" sr ON sr."storeListingVersionId" = slv.id
WHERE sl."isDeleted" = false
    AND slv."submissionStatus" = 'APPROVED'
GROUP BY sl.id;

-- CreateIndex
CREATE UNIQUE INDEX IF NOT EXISTS "idx_mv_review_stats" ON "mv_review_stats"("storeListingId");

-- DropForeignKey (if any exist on the views)
-- None needed as views don't have foreign keys

-- DropView
DROP VIEW IF EXISTS "Creator";

-- DropView
DROP VIEW IF EXISTS "StoreAgent";

-- CreateView
CREATE OR REPLACE VIEW "StoreAgent" AS
WITH agent_versions AS (
    SELECT
        "storeListingId",
        array_agg(DISTINCT version::text ORDER BY version::text) AS versions
    FROM "StoreListingVersion"
    WHERE "submissionStatus" = 'APPROVED'
    GROUP BY "storeListingId"
)
SELECT
    sl.id AS listing_id,
    slv.id AS "storeListingVersionId",
    slv."createdAt" AS updated_at,
    sl.slug,
    COALESCE(slv.name, '') AS agent_name,
    slv."videoUrl" AS agent_video,
    COALESCE(slv."imageUrls", ARRAY[]::text[]) AS agent_image,
    slv."isFeatured" AS featured,
    p.username AS creator_username,
    p."avatarUrl" AS creator_avatar,
    slv."subHeading" AS sub_heading,
    slv.description,
    slv.categories,
    COALESCE(ar.run_count, 0::bigint) AS runs,
    COALESCE(rs.avg_rating, 0.0)::double precision AS rating,
    COALESCE(av.versions, ARRAY[slv.version::text]) AS versions
FROM "StoreListing" sl
INNER JOIN "StoreListingVersion" slv
    ON slv."storeListingId" = sl.id
    AND slv."submissionStatus" = 'APPROVED'
JOIN "AgentGraph" a
    ON slv."agentGraphId" = a.id
    AND slv."agentGraphVersion" = a.version
LEFT JOIN "Profile" p
    ON sl."owningUserId" = p."userId"
LEFT JOIN "mv_review_stats" rs
    ON sl.id = rs."storeListingId"
LEFT JOIN "mv_agent_run_counts" ar
    ON a.id = ar."agentGraphId"
LEFT JOIN agent_versions av
    ON sl.id = av."storeListingId"
WHERE sl."isDeleted" = false
    AND sl."hasApprovedVersion" = true;

-- CreateView
CREATE OR REPLACE VIEW "Creator" AS
WITH creator_listings AS (
    SELECT
        sl."owningUserId",
        sl.id AS listing_id,
        slv."agentGraphId",
        slv.categories,
        sr.score,
        ar.run_count
    FROM "StoreListing" sl
    INNER JOIN "StoreListingVersion" slv
        ON slv."storeListingId" = sl.id
        AND slv."submissionStatus" = 'APPROVED'
    LEFT JOIN "StoreListingReview" sr
        ON sr."storeListingVersionId" = slv.id
    LEFT JOIN "mv_agent_run_counts" ar
        ON ar."agentGraphId" = slv."agentGraphId"
    WHERE sl."isDeleted" = false
        AND sl."hasApprovedVersion" = true
),
creator_stats AS (
    SELECT
        cl."owningUserId",
        COUNT(DISTINCT cl.listing_id) AS num_agents,
        AVG(COALESCE(cl.score, 0)::numeric) AS agent_rating,
        SUM(DISTINCT COALESCE(cl.run_count, 0)) AS agent_runs,
        array_agg(DISTINCT cat ORDER BY cat) FILTER (WHERE cat IS NOT NULL) AS all_categories
    FROM creator_listings cl
    LEFT JOIN LATERAL unnest(COALESCE(cl.categories, ARRAY[]::text[])) AS cat ON true
    GROUP BY cl."owningUserId"
)
SELECT
    p.username,
    p.name,
    p."avatarUrl" AS avatar_url,
    p.description,
    cs.all_categories AS top_categories,
    p.links,
    p."isFeatured" AS is_featured,
    COALESCE(cs.num_agents, 0::bigint) AS num_agents,
    COALESCE(cs.agent_rating, 0.0) AS agent_rating,
    COALESCE(cs.agent_runs, 0::numeric) AS agent_runs
FROM "Profile" p
LEFT JOIN creator_stats cs ON cs."owningUserId" = p."userId";

-- Create refresh function that works with the current schema
CREATE OR REPLACE FUNCTION refresh_store_materialized_views()
RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
    current_schema_name text;
BEGIN
    -- Get the current schema
    current_schema_name := current_schema();

    -- Use CONCURRENTLY for better performance during refresh
    EXECUTE format('REFRESH MATERIALIZED VIEW CONCURRENTLY %I."mv_agent_run_counts"', current_schema_name);
    EXECUTE format('REFRESH MATERIALIZED VIEW CONCURRENTLY %I."mv_review_stats"', current_schema_name);
    RAISE NOTICE 'Materialized views refreshed in schema % at %', current_schema_name, NOW();
EXCEPTION
    WHEN OTHERS THEN
        -- Fallback to non-concurrent refresh if concurrent fails
        EXECUTE format('REFRESH MATERIALIZED VIEW %I."mv_agent_run_counts"', current_schema_name);
        EXECUTE format('REFRESH MATERIALIZED VIEW %I."mv_review_stats"', current_schema_name);
        RAISE NOTICE 'Materialized views refreshed (non-concurrent) in schema % at % due to: %', current_schema_name, NOW(), SQLERRM;
END;
$$;

-- Initial refresh of materialized views
SELECT refresh_store_materialized_views();

-- Schedule automatic refresh every 15 minutes (only if pg_cron is available)
DO $$
DECLARE
    has_pg_cron BOOLEAN;
    current_schema_name text;
    job_name text;
BEGIN
    -- Get the flag we set earlier
    has_pg_cron := current_setting('migration.has_pg_cron', true)::boolean;

    -- Get current schema name
    current_schema_name := current_schema();

    -- Create a unique job name for this schema
    job_name := format('refresh-store-views-%s', current_schema_name);

    IF has_pg_cron THEN
        -- Try to unschedule existing job (ignore errors if it doesn't exist)
        BEGIN
            PERFORM cron.unschedule(job_name);
        EXCEPTION WHEN OTHERS THEN
            -- Job doesn't exist, that's fine
            NULL;
        END;

        -- Schedule the refresh job with schema-specific command
        PERFORM cron.schedule(
            job_name,
            '*/15 * * * *',
            format('SELECT %I.refresh_store_materialized_views();', current_schema_name)
        );
        RAISE NOTICE 'Scheduled automatic refresh of materialized views every 15 minutes for schema %', current_schema_name;
    ELSE
        RAISE WARNING '⚠️ Automatic refresh NOT configured - pg_cron is not available';
        RAISE WARNING '⚠️ You must manually refresh views with: SELECT refresh_store_materialized_views();';
        RAISE WARNING '⚠️ Or install pg_cron for automatic refresh in production';
    END IF;
END;
$$;
@@ -1,155 +0,0 @@
-- Unschedule cron job (if it exists)
DO $$
BEGIN
    IF EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'pg_cron') THEN
        PERFORM cron.unschedule('refresh-store-views');
        RAISE NOTICE 'Unscheduled automatic refresh of materialized views';
    END IF;
EXCEPTION
    WHEN OTHERS THEN
        RAISE NOTICE 'Could not unschedule cron job (may not exist): %', SQLERRM;
END;
$$;

-- DropView
DROP VIEW IF EXISTS "Creator";

-- DropView
DROP VIEW IF EXISTS "StoreAgent";

-- CreateView (restore original StoreAgent)
CREATE VIEW "StoreAgent" AS
WITH reviewstats AS (
    SELECT sl_1.id AS "storeListingId",
        count(sr.id) AS review_count,
        avg(sr.score::numeric) AS avg_rating
    FROM "StoreListing" sl_1
    JOIN "StoreListingVersion" slv_1
        ON slv_1."storeListingId" = sl_1.id
    JOIN "StoreListingReview" sr
        ON sr."storeListingVersionId" = slv_1.id
    WHERE sl_1."isDeleted" = false
    GROUP BY sl_1.id
), agentruns AS (
    SELECT "AgentGraphExecution"."agentGraphId",
        count(*) AS run_count
    FROM "AgentGraphExecution"
    GROUP BY "AgentGraphExecution"."agentGraphId"
)
SELECT sl.id AS listing_id,
    slv.id AS "storeListingVersionId",
    slv."createdAt" AS updated_at,
    sl.slug,
    COALESCE(slv.name, '') AS agent_name,
    slv."videoUrl" AS agent_video,
    COALESCE(slv."imageUrls", ARRAY[]::text[]) AS agent_image,
    slv."isFeatured" AS featured,
    p.username AS creator_username,
    p."avatarUrl" AS creator_avatar,
    slv."subHeading" AS sub_heading,
    slv.description,
    slv.categories,
    COALESCE(ar.run_count, 0::bigint) AS runs,
    COALESCE(rs.avg_rating, 0.0)::double precision AS rating,
    array_agg(DISTINCT slv.version::text) AS versions
FROM "StoreListing" sl
JOIN "StoreListingVersion" slv
    ON slv."storeListingId" = sl.id
JOIN "AgentGraph" a
    ON slv."agentGraphId" = a.id
    AND slv."agentGraphVersion" = a.version
LEFT JOIN "Profile" p
    ON sl."owningUserId" = p."userId"
LEFT JOIN reviewstats rs
    ON sl.id = rs."storeListingId"
LEFT JOIN agentruns ar
    ON a.id = ar."agentGraphId"
WHERE sl."isDeleted" = false
    AND sl."hasApprovedVersion" = true
    AND slv."submissionStatus" = 'APPROVED'
GROUP BY sl.id, slv.id, sl.slug, slv."createdAt", slv.name, slv."videoUrl",
    slv."imageUrls", slv."isFeatured", p.username, p."avatarUrl",
    slv."subHeading", slv.description, slv.categories, ar.run_count,
    rs.avg_rating;

-- CreateView (restore original Creator)
CREATE VIEW "Creator" AS
WITH agentstats AS (
    SELECT p_1.username,
        count(DISTINCT sl.id) AS num_agents,
        avg(COALESCE(sr.score, 0)::numeric) AS agent_rating,
        sum(COALESCE(age.run_count, 0::bigint)) AS agent_runs
    FROM "Profile" p_1
    LEFT JOIN "StoreListing" sl
        ON sl."owningUserId" = p_1."userId"
    LEFT JOIN "StoreListingVersion" slv
        ON slv."storeListingId" = sl.id
    LEFT JOIN "StoreListingReview" sr
        ON sr."storeListingVersionId" = slv.id
    LEFT JOIN (
        SELECT "AgentGraphExecution"."agentGraphId",
            count(*) AS run_count
        FROM "AgentGraphExecution"
        GROUP BY "AgentGraphExecution"."agentGraphId"
    ) age ON age."agentGraphId" = slv."agentGraphId"
    WHERE sl."isDeleted" = false
        AND sl."hasApprovedVersion" = true
        AND slv."submissionStatus" = 'APPROVED'
    GROUP BY p_1.username
)
SELECT p.username,
    p.name,
    p."avatarUrl" AS avatar_url,
    p.description,
    array_agg(DISTINCT cats.c) FILTER (WHERE cats.c IS NOT NULL) AS top_categories,
    p.links,
    p."isFeatured" AS is_featured,
    COALESCE(ast.num_agents, 0::bigint) AS num_agents,
    COALESCE(ast.agent_rating, 0.0) AS agent_rating,
    COALESCE(ast.agent_runs, 0::numeric) AS agent_runs
FROM "Profile" p
LEFT JOIN agentstats ast
    ON ast.username = p.username
LEFT JOIN LATERAL (
    SELECT unnest(slv.categories) AS c
    FROM "StoreListing" sl
    JOIN "StoreListingVersion" slv
        ON slv."storeListingId" = sl.id
    WHERE sl."owningUserId" = p."userId"
        AND sl."isDeleted" = false
        AND sl."hasApprovedVersion" = true
        AND slv."submissionStatus" = 'APPROVED'
) cats ON true
GROUP BY p.username, p.name, p."avatarUrl", p.description, p.links,
    p."isFeatured", ast.num_agents, ast.agent_rating, ast.agent_runs;

-- Drop function
DROP FUNCTION IF EXISTS platform.refresh_store_materialized_views();

-- Drop materialized views
DROP MATERIALIZED VIEW IF EXISTS "mv_review_stats";
DROP MATERIALIZED VIEW IF EXISTS "mv_agent_run_counts";

-- DropIndex
DROP INDEX IF EXISTS "idx_profile_user";

-- DropIndex
DROP INDEX IF EXISTS "idx_agent_graph_execution_agent";

-- DropIndex
DROP INDEX IF EXISTS "idx_store_listing_review_version";

-- DropIndex
DROP INDEX IF EXISTS "idx_slv_agent";

-- DropIndex
DROP INDEX IF EXISTS "idx_slv_categories_gin";

-- DropIndex
DROP INDEX IF EXISTS "idx_store_listing_version_status";

-- DropIndex
DROP INDEX IF EXISTS "idx_store_listing_approved";

-- DropIndex
DROP INDEX IF EXISTS "idx_store_listing_version_approved_listing";
425
autogpt_platform/backend/poetry.lock
generated
@@ -31,18 +31,18 @@ files = [

[[package]]
name = "aiodns"
version = "3.5.0"
version = "3.4.0"
description = "Simple DNS resolver for asyncio"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "aiodns-3.5.0-py3-none-any.whl", hash = "sha256:6d0404f7d5215849233f6ee44854f2bb2481adf71b336b2279016ea5990ca5c5"},
    {file = "aiodns-3.5.0.tar.gz", hash = "sha256:11264edbab51896ecf546c18eb0dd56dff0428c6aa6d2cd87e643e07300eb310"},
    {file = "aiodns-3.4.0-py3-none-any.whl", hash = "sha256:4da2b25f7475343f3afbb363a2bfe46afa544f2b318acb9a945065e622f4ed24"},
    {file = "aiodns-3.4.0.tar.gz", hash = "sha256:24b0ae58410530367f21234d0c848e4de52c1f16fbddc111726a4ab536ec1b2f"},
]

[package.dependencies]
pycares = ">=4.9.0"
pycares = ">=4.0.0"

[[package]]
name = "aiofiles"
@@ -222,14 +222,14 @@ files = [

[[package]]
name = "anthropic"
version = "0.57.1"
version = "0.51.0"
description = "The official Python library for the anthropic API"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "anthropic-0.57.1-py3-none-any.whl", hash = "sha256:33afc1f395af207d07ff1bffc0a3d1caac53c371793792569c5d2f09283ea306"},
    {file = "anthropic-0.57.1.tar.gz", hash = "sha256:7815dd92245a70d21f65f356f33fc80c5072eada87fb49437767ea2918b2c4b0"},
    {file = "anthropic-0.51.0-py3-none-any.whl", hash = "sha256:b8b47d482c9aa1f81b923555cebb687c2730309a20d01be554730c8302e0f62a"},
    {file = "anthropic-0.51.0.tar.gz", hash = "sha256:6f824451277992af079554430d5b2c8ff5bc059cc2c968cdc3f06824437da201"},
]

[package.dependencies]
@@ -242,7 +242,6 @@ sniffio = "*"
typing-extensions = ">=4.10,<5"

[package.extras]
aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.6)"]
bedrock = ["boto3 (>=1.28.57)", "botocore (>=1.31.57)"]
vertex = ["google-auth[requests] (>=2,<3)"]

@@ -1006,14 +1005,14 @@ pgp = ["gpg"]

[[package]]
name = "e2b"
version = "1.5.4"
version = "1.5.0"
description = "E2B SDK that give agents cloud environments"
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
    {file = "e2b-1.5.4-py3-none-any.whl", hash = "sha256:9c8d22f9203311dff890e037823596daaba3d793300238117f2efc5426888f2c"},
    {file = "e2b-1.5.4.tar.gz", hash = "sha256:49f1c115d0198244beef5854d19cc857fda9382e205f137b98d3dae0e7e0b2d2"},
    {file = "e2b-1.5.0-py3-none-any.whl", hash = "sha256:875a843d1d314a9945e24bfb78c9b1b5cac7e2ecb1e799664d827a26a0b2276a"},
    {file = "e2b-1.5.0.tar.gz", hash = "sha256:905730eea5c07f271d073d4b5d2a9ef44c8ac04b9b146a99fa0235db77bf6854"},
]

[package.dependencies]
@@ -1027,19 +1026,19 @@ typing-extensions = ">=4.1.0"

[[package]]
name = "e2b-code-interpreter"
version = "1.5.2"
version = "1.5.0"
description = "E2B Code Interpreter - Stateful code execution"
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
    {file = "e2b_code_interpreter-1.5.2-py3-none-any.whl", hash = "sha256:5c3188d8f25226b28fef4b255447cc6a4c36afb748bdd5180b45be486d5169f3"},
    {file = "e2b_code_interpreter-1.5.2.tar.gz", hash = "sha256:3bd6ea70596290e85aaf0a2f19f28bf37a5e73d13086f5e6a0080bb591c5a547"},
    {file = "e2b_code_interpreter-1.5.0-py3-none-any.whl", hash = "sha256:299f5641a3754264a07f8edc3cccb744d6b009f10dc9285789a9352e24989a9b"},
    {file = "e2b_code_interpreter-1.5.0.tar.gz", hash = "sha256:cd6028b6f20c4231e88a002de86484b9d4a99ea588b5be183b9ec7189a0f3cf6"},
]

[package.dependencies]
attrs = ">=21.3.0"
e2b = ">=1.5.4,<2.0.0"
e2b = ">=1.4.0,<2.0.0"
httpx = ">=0.20.0,<1.0.0"

[[package]]
@@ -1110,14 +1109,14 @@ typing-extensions = "*"

[[package]]
name = "fastapi"
version = "0.115.14"
version = "0.115.12"
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "fastapi-0.115.14-py3-none-any.whl", hash = "sha256:6c0c8bf9420bd58f565e585036d971872472b4f7d3f6c73b698e10cffdefb3ca"},
    {file = "fastapi-0.115.14.tar.gz", hash = "sha256:b1de15cdc1c499a4da47914db35d0e4ef8f1ce62b624e94e0e5824421df99739"},
    {file = "fastapi-0.115.12-py3-none-any.whl", hash = "sha256:e94613d6c05e27be7ffebdd6ea5f388112e5e430c8f7d6494a9d1d88d43e814d"},
    {file = "fastapi-0.115.12.tar.gz", hash = "sha256:1e2c2a2646905f9e83d32f04a3f86aff4a286669c6c950ca95b5fd68c2602681"},
]

[package.dependencies]
@@ -1193,20 +1192,20 @@ packaging = ">=20"

[[package]]
name = "flake8"
version = "7.3.0"
version = "7.2.0"
description = "the modular source code checker: pep8 pyflakes and co"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "flake8-7.3.0-py2.py3-none-any.whl", hash = "sha256:b9696257b9ce8beb888cdbe31cf885c90d31928fe202be0889a7cdafad32f01e"},
    {file = "flake8-7.3.0.tar.gz", hash = "sha256:fe044858146b9fc69b551a4b490d69cf960fcb78ad1edcb84e7fbb1b4a8e3872"},
    {file = "flake8-7.2.0-py2.py3-none-any.whl", hash = "sha256:93b92ba5bdb60754a6da14fa3b93a9361fd00a59632ada61fd7b130436c40343"},
    {file = "flake8-7.2.0.tar.gz", hash = "sha256:fa558ae3f6f7dbf2b4f22663e5343b6b6023620461f8d4ff2019ef4b5ee70426"},
]

[package.dependencies]
mccabe = ">=0.7.0,<0.8.0"
pycodestyle = ">=2.14.0,<2.15.0"
pyflakes = ">=3.4.0,<3.5.0"
pycodestyle = ">=2.13.0,<2.14.0"
pyflakes = ">=3.3.0,<3.4.0"

[[package]]
name = "frozenlist"
@@ -1357,14 +1356,14 @@ grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0.0)"]

[[package]]
name = "google-api-python-client"
version = "2.176.0"
version = "2.170.0"
description = "Google API Client Library for Python"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
    {file = "google_api_python_client-2.176.0-py3-none-any.whl", hash = "sha256:e22239797f1d085341e12cd924591fc65c56d08e0af02549d7606092e6296510"},
    {file = "google_api_python_client-2.176.0.tar.gz", hash = "sha256:2b451cdd7fd10faeb5dd20f7d992f185e1e8f4124c35f2cdcc77c843139a4cf1"},
    {file = "google_api_python_client-2.170.0-py3-none-any.whl", hash = "sha256:7bf518a0527ad23322f070fa69f4f24053170d5c766821dc970ff0571ec22748"},
    {file = "google_api_python_client-2.170.0.tar.gz", hash = "sha256:75f3a1856f11418ea3723214e0abc59d9b217fd7ed43dcf743aab7f06ab9e2b1"},
]

[package.dependencies]
@@ -1517,27 +1516,27 @@ protobuf = ">=3.20.2,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4

[[package]]
name = "google-cloud-storage"
version = "3.2.0"
version = "3.1.0"
description = "Google Cloud Storage API client library"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
    {file = "google_cloud_storage-3.2.0-py3-none-any.whl", hash = "sha256:ff7a9a49666954a7c3d1598291220c72d3b9e49d9dfcf9dfaecb301fc4fb0b24"},
    {file = "google_cloud_storage-3.2.0.tar.gz", hash = "sha256:decca843076036f45633198c125d1861ffbf47ebf5c0e3b98dcb9b2db155896c"},
    {file = "google_cloud_storage-3.1.0-py2.py3-none-any.whl", hash = "sha256:eaf36966b68660a9633f03b067e4a10ce09f1377cae3ff9f2c699f69a81c66c6"},
    {file = "google_cloud_storage-3.1.0.tar.gz", hash = "sha256:944273179897c7c8a07ee15f2e6466a02da0c7c4b9ecceac2a26017cb2972049"},
]

[package.dependencies]
google-api-core = ">=2.15.0,<3.0.0"
google-auth = ">=2.26.1,<3.0.0"
google-cloud-core = ">=2.4.2,<3.0.0"
google-crc32c = ">=1.1.3,<2.0.0"
google-resumable-media = ">=2.7.2,<3.0.0"
requests = ">=2.22.0,<3.0.0"
google-api-core = ">=2.15.0,<3.0.0dev"
google-auth = ">=2.26.1,<3.0dev"
google-cloud-core = ">=2.4.2,<3.0dev"
google-crc32c = ">=1.0,<2.0dev"
google-resumable-media = ">=2.7.2"
requests = ">=2.18.0,<3.0.0dev"

[package.extras]
protobuf = ["protobuf (>=3.20.2,<7.0.0)"]
tracing = ["opentelemetry-api (>=1.1.0,<2.0.0)"]
protobuf = ["protobuf (<6.0.0dev)"]
tracing = ["opentelemetry-api (>=1.1.0)"]

[[package]]
name = "google-crc32c"
@@ -1745,14 +1744,14 @@ test = ["objgraph", "psutil"]

[[package]]
name = "groq"
version = "0.29.0"
version = "0.24.0"
description = "The official Python library for the groq API"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "groq-0.29.0-py3-none-any.whl", hash = "sha256:03515ec46be1ef1feef0cd9d876b6f30a39ee2742e76516153d84acd7c97f23a"},
    {file = "groq-0.29.0.tar.gz", hash = "sha256:109dc4d696c05d44e4c2cd157652c4c6600c3e96f093f6e158facb5691e37847"},
    {file = "groq-0.24.0-py3-none-any.whl", hash = "sha256:0020e6b0b2b267263c9eb7c318deef13c12f399c6525734200b11d777b00088e"},
    {file = "groq-0.24.0.tar.gz", hash = "sha256:e821559de8a77fb81d2585b3faec80ff923d6d64fd52339b33f6c94997d6f7f5"},
]

[package.dependencies]
@@ -1763,9 +1762,6 @@ pydantic = ">=1.9.0,<3"
sniffio = "*"
typing-extensions = ">=4.10,<5"

[package.extras]
aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.6)"]

[[package]]
name = "grpc-google-iam-v1"
version = "0.14.2"
@@ -2552,14 +2548,14 @@ files = [

[[package]]
name = "mem0ai"
version = "0.1.114"
version = "0.1.102"
description = "Long-term memory for AI Agents"
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
    {file = "mem0ai-0.1.114-py3-none-any.whl", hash = "sha256:dfb7f0079ee282f5d9782e220f6f09707bcf5e107925d1901dbca30d8dd83f9b"},
    {file = "mem0ai-0.1.114.tar.gz", hash = "sha256:b27886132eaec78544e8b8b54f0b14a36728f3c99da54cb7cb417150e2fad7e1"},
    {file = "mem0ai-0.1.102-py3-none-any.whl", hash = "sha256:1401ccfd2369e2182ce78abb61b817e739fe49508b5a8ad98abcd4f8ad4db0b4"},
    {file = "mem0ai-0.1.102.tar.gz", hash = "sha256:7358dba4fbe954b9c3f33204c14df7babaf9067e2eb48241d89a32e6bc774988"},
]

[package.dependencies]
@@ -2572,11 +2568,8 @@ sqlalchemy = ">=2.0.31"

[package.extras]
dev = ["isort (>=5.13.2)", "pytest (>=8.2.2)", "ruff (>=0.6.5)"]
extras = ["boto3 (>=1.34.0)", "elasticsearch (>=8.0.0)", "langchain-community (>=0.0.0)", "langchain-memgraph (>=0.1.0)", "opensearch-py (>=2.0.0)", "sentence-transformers (>=5.0.0)"]
graph = ["langchain-aws (>=0.2.23)", "langchain-neo4j (>=0.4.0)", "neo4j (>=5.23.1)", "rank-bm25 (>=0.2.2)"]
llms = ["google-genai (>=1.0.0)", "google-generativeai (>=0.3.0)", "groq (>=0.3.0)", "litellm (>=0.1.0)", "ollama (>=0.1.0)", "together (>=0.2.10)", "vertexai (>=0.1.0)"]
graph = ["langchain-neo4j (>=0.4.0)", "neo4j (>=5.23.1)", "rank-bm25 (>=0.2.2)"]
test = ["pytest (>=8.2.2)", "pytest-asyncio (>=0.23.7)", "pytest-mock (>=3.14.0)"]
vector-stores = ["azure-search-documents (>=11.4.0b8)", "chromadb (>=0.4.24)", "faiss-cpu (>=1.7.4)", "pinecone (<=7.3.0)", "pinecone-text (>=0.10.0)", "pymochow (>=2.2.9)", "pymongo (>=4.13.2)", "upstash-vector (>=0.1.0)", "vecs (>=0.4.0)", "weaviate-client (>=4.4.0)"]

[[package]]
name = "more-itertools"
@@ -2915,14 +2908,14 @@ signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]

[[package]]
name = "ollama"
version = "0.5.1"
version = "0.4.9"
description = "The official Python client for Ollama."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "ollama-0.5.1-py3-none-any.whl", hash = "sha256:4c8839f35bc173c7057b1eb2cbe7f498c1a7e134eafc9192824c8aecb3617506"},
    {file = "ollama-0.5.1.tar.gz", hash = "sha256:5a799e4dc4e7af638b11e3ae588ab17623ee019e496caaf4323efbaa8feeff93"},
    {file = "ollama-0.4.9-py3-none-any.whl", hash = "sha256:18c8c85358c54d7f73d6a66cda495b0e3ba99fdb88f824ae470d740fbb211a50"},
    {file = "ollama-0.4.9.tar.gz", hash = "sha256:5266d4d29b5089a01489872b8e8f980f018bccbdd1082b3903448af1d5615ce7"},
]

[package.dependencies]
@@ -2931,14 +2924,14 @@ pydantic = ">=2.9"

[[package]]
name = "openai"
version = "1.93.2"
version = "1.82.1"
description = "The official Python library for the openai API"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "openai-1.93.2-py3-none-any.whl", hash = "sha256:5adbbebd48eae160e6d68efc4c0a4f7cb1318a44c62d9fc626cec229f418eab4"},
    {file = "openai-1.93.2.tar.gz", hash = "sha256:4a7312b426b5e4c98b78dfa1148b5683371882de3ad3d5f7c8e0c74f3cc90778"},
    {file = "openai-1.82.1-py3-none-any.whl", hash = "sha256:334eb5006edf59aa464c9e932b9d137468d810b2659e5daea9b3a8c39d052395"},
    {file = "openai-1.82.1.tar.gz", hash = "sha256:ffc529680018e0417acac85f926f92aa0bbcbc26e82e2621087303c66bc7f95d"},
]

[package.dependencies]
@@ -2952,7 +2945,6 @@ tqdm = ">4"
typing-extensions = ">=4.11,<5"

[package.extras]
aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.6)"]
datalib = ["numpy (>=1)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)"]
realtime = ["websockets (>=13,<16)"]
voice-helpers = ["numpy (>=2.0.2)", "sounddevice (>=0.5.1)"]
@@ -3267,14 +3259,14 @@ testing = ["coverage", "pytest", "pytest-benchmark"]

[[package]]
name = "poethepoet"
version = "0.36.0"
description = "A task runner that works well with poetry and uv."
version = "0.34.0"
description = "A task runner that works well with poetry."
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
    {file = "poethepoet-0.36.0-py3-none-any.whl", hash = "sha256:693e3c1eae9f6731d3613c3c0c40f747d3c5c68a375beda42e590a63c5623308"},
    {file = "poethepoet-0.36.0.tar.gz", hash = "sha256:2217b49cb4e4c64af0b42ff8c4814b17f02e107d38bc461542517348ede25663"},
    {file = "poethepoet-0.34.0-py3-none-any.whl", hash = "sha256:c472d6f0fdb341b48d346f4ccd49779840c15b30dfd6bc6347a80d6274b5e34e"},
    {file = "poethepoet-0.34.0.tar.gz", hash = "sha256:86203acce555bbfe45cb6ccac61ba8b16a5784264484195874da457ddabf5850"},
]

[package.dependencies]
@@ -3500,14 +3492,14 @@ tqdm = "*"

[[package]]
name = "prometheus-client"
version = "0.22.1"
version = "0.21.1"
description = "Python client for the Prometheus monitoring system."
optional = false
python-versions = ">=3.9"
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "prometheus_client-0.22.1-py3-none-any.whl", hash = "sha256:cca895342e308174341b2cbf99a56bef291fbc0ef7b9e5412a0f26d653ba7094"},
    {file = "prometheus_client-0.22.1.tar.gz", hash = "sha256:190f1331e783cf21eb60bca559354e0a4d4378facecf78f5428c39b675d20d28"},
    {file = "prometheus_client-0.21.1-py3-none-any.whl", hash = "sha256:594b45c410d6f4f8888940fe80b5cc2521b305a1fafe1c58609ef715a001f301"},
    {file = "prometheus_client-0.21.1.tar.gz", hash = "sha256:252505a722ac04b0456be05c05f75f45d760c2911ffc45f2a06bcaed9f3ae3fb"},
]

[package.extras]
@@ -3791,88 +3783,83 @@ pyasn1 = ">=0.6.1,<0.7.0"

[[package]]
name = "pycares"
version = "4.9.0"
version = "4.8.0"
description = "Python interface for c-ares"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "pycares-4.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0b8bd9a3ee6e9bc990e1933dc7e7e2f44d4184f49a90fa444297ac12ab6c0c84"},
    {file = "pycares-4.9.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:417a5c20861f35977240ad4961479a6778125bcac21eb2ad1c3aad47e2ff7fab"},
    {file = "pycares-4.9.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ab290faa4ea53ce53e3ceea1b3a42822daffce2d260005533293a52525076750"},
    {file = "pycares-4.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7b1df81193084c9717734e4615e8c5074b9852478c9007d1a8bb242f7f580e67"},
    {file = "pycares-4.9.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:20c7a6af0c2ccd17cc5a70d76e299a90e7ebd6c4d8a3d7fff5ae533339f61431"},
    {file = "pycares-4.9.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:370f41442a5b034aebdb2719b04ee04d3e805454a20d3f64f688c1c49f9137c3"},
    {file = "pycares-4.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:340e4a3bbfd14d73c01ec0793a321b8a4a93f64c508225883291078b7ee17ac8"},
    {file = "pycares-4.9.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:f0ec94785856ea4f5556aa18f4c027361ba4b26cb36c4ad97d2105ef4eec68ba"},
    {file = "pycares-4.9.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:dd6b7e23a4a9e2039b5d67dfa0499d2d5f114667dc13fb5d7d03eed230c7ac4f"},
    {file = "pycares-4.9.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:490c978b0be9d35a253a5e31dd598f6d66b453625f0eb7dc2d81b22b8c3bb3f4"},
    {file = "pycares-4.9.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:e433faaf07f44e44f1a1b839fee847480fe3db9431509dafc9f16d618d491d0f"},
    {file = "pycares-4.9.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:cf6d8851a06b79d10089962c9dadcb34dad00bf027af000f7102297a54aaff2e"},
    {file = "pycares-4.9.0-cp310-cp310-win32.whl", hash = "sha256:4f803e7d66ac7d8342998b8b07393788991353a46b05bbaad0b253d6f3484ea8"},
    {file = "pycares-4.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:8e17bd32267e3870855de3baed7d0efa6337344d68f44853fd9195c919f39400"},
    {file = "pycares-4.9.0-cp310-cp310-win_arm64.whl", hash = "sha256:6b74f75d8e430f9bb11a1cc99b2e328eed74b17d8d4b476de09126f38d419eb9"},
    {file = "pycares-4.9.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:16a97ee83ec60d35c7f716f117719932c27d428b1bb56b242ba1c4aa55521747"},
    {file = "pycares-4.9.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:78748521423a211ce699a50c27cc5c19e98b7db610ccea98daad652ace373990"},
    {file = "pycares-4.9.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8818b2c7a57d9d6d41e8b64d9ff87992b8ea2522fc0799686725228bc3cff6c5"},
    {file = "pycares-4.9.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:96df8990f16013ca5194d6ece19dddb4ef9cd7c3efaab9f196ec3ccd44b40f8d"},
    {file = "pycares-4.9.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:61af86fd58b8326e723b0d20fb96b56acaec2261c3a7c9a1c29d0a79659d613a"},
    {file = "pycares-4.9.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ec72edb276bda559813cc807bc47b423d409ffab2402417a5381077e9c2c6be1"},
    {file = "pycares-4.9.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:832fb122c7376c76cab62f8862fa5e398b9575fb7c9ff6bc9811086441ee64ca"},
    {file = "pycares-4.9.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:cdcfaef24f771a471671470ccfd676c0366ab6b0616fd8217b8f356c40a02b83"},
    {file = "pycares-4.9.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:52cb056d06ff55d78a8665b97ae948abaaba2ca200ca59b10346d4526bce1e7d"},
    {file = "pycares-4.9.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:54985ed3f2e8a87315269f24cb73441622857a7830adfc3a27c675a94c3261c1"},
    {file = "pycares-4.9.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:08048e223615d4aef3dac81fe0ea18fb18d6fc97881f1eb5be95bb1379969b8d"},
    {file = "pycares-4.9.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:cc60037421ce05a409484287b2cd428e1363cca73c999b5f119936bb8f255208"},
    {file = "pycares-4.9.0-cp311-cp311-win32.whl", hash = "sha256:62b86895b60cfb91befb3086caa0792b53f949231c6c0c3053c7dfee3f1386ab"},
    {file = "pycares-4.9.0-cp311-cp311-win_amd64.whl", hash = "sha256:7046b3c80954beaabf2db52b09c3d6fe85f6c4646af973e61be79d1c51589932"},
    {file = "pycares-4.9.0-cp311-cp311-win_arm64.whl", hash = "sha256:fcbda3fdf44e94d3962ca74e6ba3dc18c0d7029106f030d61c04c0876f319403"},
    {file = "pycares-4.9.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:d68ca2da1001aeccdc81c4a2fb1f1f6cfdafd3d00e44e7c1ed93e3e05437f666"},
    {file = "pycares-4.9.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4f0c8fa5a384d79551a27eafa39eed29529e66ba8fa795ee432ab88d050432a3"},
    {file = "pycares-4.9.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0eb8c428cf3b9c6ff9c641ba50ab6357b4480cd737498733e6169b0ac8a1a89b"},
    {file = "pycares-4.9.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6845bd4a43abf6dab7fedbf024ef458ac3750a25b25076ea9913e5ac5fec4548"},
    {file = "pycares-4.9.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5e28f4acc3b97e46610cf164665ebf914f709daea6ced0ca4358ce55bc1c3d6b"},
    {file = "pycares-4.9.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9464a39861840ce35a79352c34d653a9db44f9333af7c9feddb97998d3e00c07"},
    {file = "pycares-4.9.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e0611c1bd46d1fc6bdd9305b8850eb84c77df485769f72c574ed7b8389dfbee2"},
    {file = "pycares-4.9.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d4fb5a38a51d03b75ac4320357e632c2e72e03fdeb13263ee333a40621415fdc"},
    {file = "pycares-4.9.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:df5edae05fb3e1370ab7639e67e8891fdaa9026cb10f05dbd57893713f7a9cfe"},
    {file = "pycares-4.9.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:397123ea53d261007bb0aa7e767ef238778f45026db40bed8196436da2cc73de"},
    {file = "pycares-4.9.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:bb0d874d0b131b29894fd8a0f842be91ac21d50f90ec04cff4bb3f598464b523"},
    {file = "pycares-4.9.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:497cc03a61ec1585eb17d2cb086a29a6a67d24babf1e9be519b47222916a3b06"},
    {file = "pycares-4.9.0-cp312-cp312-win32.whl", hash = "sha256:b46e46313fdb5e82da15478652aac0fd15e1c9f33e08153bad845aa4007d6f84"},
    {file = "pycares-4.9.0-cp312-cp312-win_amd64.whl", hash = "sha256:12547a06445777091605a7581da15a0da158058beb8a05a3ebbf7301fd1f58d4"},
    {file = "pycares-4.9.0-cp312-cp312-win_arm64.whl", hash = "sha256:f1e10bf1e8eb80b08e5c828627dba1ebc4acd54803bd0a27d92b9063b6aa99d8"},
    {file = "pycares-4.9.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:574d815112a95ab09d75d0a9dc7dea737c06985e3125cf31c32ba6a3ed6ca006"},
    {file = "pycares-4.9.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:50e5ab06361d59625a27a7ad93d27e067dc7c9f6aa529a07d691eb17f3b43605"},
    {file = "pycares-4.9.0-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:785f5fd11ff40237d9bc8afa441551bb449e2812c74334d1d10859569e07515c"},
    {file = "pycares-4.9.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e194a500e403eba89b91fb863c917495c5b3dfcd1ce0ee8dc3a6f99a1360e2fc"},
    {file = "pycares-4.9.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:112dd49cdec4e6150a8d95b197e8b6b7b4468a3170b30738ed9b248cb2240c04"},
    {file = "pycares-4.9.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:94aa3c2f3eb0aa69160137134775501f06c901188e722aac63d2a210d4084f99"},
    {file = "pycares-4.9.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b510d71255cf5a92ccc2643a553548fcb0623d6ed11c8c633b421d99d7fa4167"},
    {file = "pycares-4.9.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:5c6aa30b1492b8130f7832bf95178642c710ce6b7ba610c2b17377f77177e3cd"},
    {file = "pycares-4.9.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:e5767988e044faffe2aff6a76aa08df99a8b6ef2641be8b00ea16334ce5dea93"},
    {file = "pycares-4.9.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:b9928a942820a82daa3207509eaba9e0fa9660756ac56667ec2e062815331fcb"},
    {file = "pycares-4.9.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:556c854174da76d544714cdfab10745ed5d4b99eec5899f7b13988cd26ff4763"},
    {file = "pycares-4.9.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:d42e2202ca9aa9a0a9a6e43a4a4408bbe0311aaa44800fa27b8fd7f82b20152a"},
    {file = "pycares-4.9.0-cp313-cp313-win32.whl", hash = "sha256:cce8ef72c9ed4982c84114e6148a4e42e989d745de7862a0ad8b3f1cdc05def2"},
    {file = "pycares-4.9.0-cp313-cp313-win_amd64.whl", hash = "sha256:318cdf24f826f1d2f0c5a988730bd597e1683296628c8f1be1a5b96643c284fe"},
    {file = "pycares-4.9.0-cp313-cp313-win_arm64.whl", hash = "sha256:faa9de8e647ed06757a2c117b70a7645a755561def814da6aca0d766cf71a402"},
    {file = "pycares-4.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8310d27d68fa25be9781ce04d330f4860634a2ac34dd9265774b5f404679b41f"},
    {file = "pycares-4.9.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:99cf98452d3285307eec123049f2c9c50b109e06751b0727c6acefb6da30c6a0"},
    {file = "pycares-4.9.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ffd6e8c8250655504602b076f106653e085e6b1e15318013442558101aa4777"},
    {file = "pycares-4.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a4065858d8c812159c9a55601fda73760d9e5e3300f7868d9e546eab1084f36c"},
    {file = "pycares-4.9.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91ee6818113faf9013945c2b54bcd6b123d0ac192ae3099cf4288cedaf2dbb25"},
    {file = "pycares-4.9.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:21f0602059ec11857ab7ad608c7ec8bc6f7a302c04559ec06d33e82f040585f8"},
    {file = "pycares-4.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e22e5b46ed9b12183091da56e4a5a20813b5436c4f13135d7a1c20a84027ca8a"},
    {file = "pycares-4.9.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:9eded8649867bfd7aea7589c5755eae4d37686272f6ed7a995da40890d02de71"},
    {file = "pycares-4.9.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:f71d31cbbe066657a2536c98aad850724a9ab7b1cd2624f491832ae9667ea8e7"},
    {file = "pycares-4.9.0-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:2b30945982ab4741f097efc5b0853051afc3c11df26996ed53a700c7575175af"},
    {file = "pycares-4.9.0-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:54a8f1f067d64810426491d33033f5353b54f35e5339126440ad4e6afbf3f149"},
    {file = "pycares-4.9.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:41556a269a192349e92eee953f62eddd867e9eddb27f444b261e2c1c4a4a9eff"},
    {file = "pycares-4.9.0-cp39-cp39-win32.whl", hash = "sha256:524d6c14eaa167ed098a4fe54856d1248fa20c296cdd6976f9c1b838ba32d014"},
    {file = "pycares-4.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:15f930c733d36aa487b4ad60413013bd811281b5ea4ca620070fa38505d84df4"},
    {file = "pycares-4.9.0-cp39-cp39-win_arm64.whl", hash = "sha256:79b7addb2a41267d46650ac0d9c4f3b3233b036f186b85606f7586881dfb4b69"},
    {file = "pycares-4.9.0.tar.gz", hash = "sha256:8ee484ddb23dbec4d88d14ed5b6d592c1960d2e93c385d5e52b6fad564d82395"},
    {file = "pycares-4.8.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f40d9f4a8de398b110fdf226cdfadd86e8c7eb71d5298120ec41cf8d94b0012f"},
    {file = "pycares-4.8.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:339de06fc849a51015968038d2bbed68fc24047522404af9533f32395ca80d25"},
    {file = "pycares-4.8.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:372a236c1502b9056b0bea195c64c329603b4efa70b593a33b7ae37fbb7fad00"},
    {file = "pycares-4.8.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03f66a5e143d102ccc204bd4e29edd70bed28420f707efd2116748241e30cb73"},
    {file = "pycares-4.8.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ef50504296cd5fc58cfd6318f82e20af24fbe2c83004f6ff16259adb13afdf14"},
    {file = "pycares-4.8.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d1bc541b627c7951dd36136b18bd185c5244a0fb2af5b1492ffb8acaceec1c5b"},
    {file = "pycares-4.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:938d188ed6bed696099be67ebdcdf121827b9432b17a9ea9e40dc35fd9d85363"},
    {file = "pycares-4.8.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:327837ffdc0c7adda09c98e1263c64b2aff814eea51a423f66733c75ccd9a642"},
    {file = "pycares-4.8.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:a6b9b8d08c4508c45bd39e0c74e9e7052736f18ca1d25a289365bb9ac36e5849"},
    {file = "pycares-4.8.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:feac07d5e6d2d8f031c71237c21c21b8c995b41a1eba64560e8cf1e42ac11bc6"},
    {file = "pycares-4.8.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:5bcdbf37012fd2323ca9f2a1074421a9ccf277d772632f8f0ce8c46ec7564250"},
    {file = "pycares-4.8.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:e3ebb692cb43fcf34fe0d26f2cf9a0ea53fdfb136463845b81fad651277922db"},
    {file = "pycares-4.8.0-cp310-cp310-win32.whl", hash = "sha256:d98447ec0efff3fa868ccc54dcc56e71faff498f8848ecec2004c3108efb4da2"},
    {file = "pycares-4.8.0-cp310-cp310-win_amd64.whl", hash = "sha256:1abb8f40917960ead3c2771277f0bdee1967393b0fdf68743c225b606787da68"},
    {file = "pycares-4.8.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5e25db89005ddd8d9c5720293afe6d6dd92e682fc6bc7a632535b84511e2060d"},
    {file = "pycares-4.8.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6f9665ef116e6ee216c396f5f927756c2164f9f3316aec7ff1a9a1e1e7ec9b2a"},
    {file = "pycares-4.8.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:54a96893133471f6889b577147adcc21a480dbe316f56730871028379c8313f3"},
    {file = "pycares-4.8.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:51024b3a69762bd3100d94986a29922be15e13f56f991aaefb41f5bcd3d7f0bb"},
    {file = "pycares-4.8.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:47ff9db50c599e4d965ae3bec99cc30941c1d2b0f078ec816680b70d052dd54a"},
    {file = "pycares-4.8.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:27ef8ff4e0f60ea6769a60d1c3d1d2aefed1d832e7bb83fc3934884e2dba5cdd"},
    {file = "pycares-4.8.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63511af7a3f9663f562fbb6bfa3591a259505d976e2aba1fa2da13dde43c6ca7"},
    {file = "pycares-4.8.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:73c3219b47616e6a5ad1810de96ed59721c7751f19b70ae7bf24997a8365408f"},
    {file = "pycares-4.8.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:da42a45207c18f37be5e491c14b6d1063cfe1e46620eb661735d0cedc2b59099"},
    {file = "pycares-4.8.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:8a068e898bb5dd09cd654e19cd2abf20f93d0cc59d5d955135ed48ea0f806aa1"},
    {file = "pycares-4.8.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:962aed95675bb66c0b785a2fbbd1bb58ce7f009e283e4ef5aaa4a1f2dc00d217"},
    {file = "pycares-4.8.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ce8b1a16c1e4517a82a0ebd7664783a327166a3764d844cf96b1fb7b9dd1e493"},
    {file = "pycares-4.8.0-cp311-cp311-win32.whl", hash = "sha256:b3749ddbcbd216376c3b53d42d8b640b457133f1a12b0e003f3838f953037ae7"},
    {file = "pycares-4.8.0-cp311-cp311-win_amd64.whl", hash = "sha256:5ce8a4e1b485b2360ab666c4ea1db97f57ede345a3b566d80bfa52b17e616610"},
    {file = "pycares-4.8.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:3273e01a75308ed06d2492d83c7ba476e579a60a24d9f20fe178ce5e9d8d028b"},
    {file = "pycares-4.8.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fcedaadea1f452911fd29935749f98d144dae758d6003b7e9b6c5d5bd47d1dff"},
    {file = "pycares-4.8.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aae6cb33e287e06a4aabcbc57626df682c9a4fa8026207f5b498697f1c2fb562"},
    {file = "pycares-4.8.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25038b930e5be82839503fb171385b2aefd6d541bc5b7da0938bdb67780467d2"},
    {file = "pycares-4.8.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cc8499b6e7dfbe4af65f6938db710ce9acd1debf34af2cbb93b898b1e5da6a5a"},
    {file = "pycares-4.8.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c4e1c6a68ef56a7622f6176d9946d4e51f3c853327a0123ef35a5380230c84cd"},
    {file = "pycares-4.8.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7cc8c3c9114b9c84e4062d25ca9b4bddc80a65d0b074c7cb059275273382f89"},
    {file = "pycares-4.8.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:4404014069d3e362abf404c9932d4335bb9c07ba834cfe7d683c725b92e0f9da"},
    {file = "pycares-4.8.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:ee0a58c32ec2a352cef0e1d20335a7caf9871cd79b73be2ca2896fe70f09c9d7"},
    {file = "pycares-4.8.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:35f32f52b486b8fede3cbebf088f30b01242d0321b5216887c28e80490595302"},
    {file = "pycares-4.8.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:ecbb506e27a3b3a2abc001c77beeccf265475c84b98629a6b3e61bd9f2987eaa"},
    {file = "pycares-4.8.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:9392b2a34adbf60cb9e38f4a0d363413ecea8d835b5a475122f50f76676d59dd"},
    {file = "pycares-4.8.0-cp312-cp312-win32.whl", hash = "sha256:f0fbefe68403ffcff19c869b8d621c88a6d2cef18d53cf0dab0fa9458a6ca712"},
    {file = "pycares-4.8.0-cp312-cp312-win_amd64.whl", hash = "sha256:fa8aab6085a2ddfb1b43a06ddf1b498347117bb47cd620d9b12c43383c9c2737"},
    {file = "pycares-4.8.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:358a9a2c6fed59f62788e63d88669224955443048a1602016d4358e92aedb365"},
    {file = "pycares-4.8.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0e3e1278967fa8d4a0056be3fcc8fc551b8bad1fc7d0e5172196dccb8ddb036a"},
    {file = "pycares-4.8.0-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:79befb773e370a8f97de9f16f5ea2c7e7fa0e3c6c74fbea6d332bf58164d7d06"},
    {file = "pycares-4.8.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2b00d3695db64ce98a34e632e1d53f5a1cdb25451489f227bec2a6c03ff87ee8"},
    {file = "pycares-4.8.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:37bdc4f2ff0612d60fc4f7547e12ff02cdcaa9a9e42e827bb64d4748994719f1"},
    {file = "pycares-4.8.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd92c44498ec7a6139888b464b28c49f7ba975933689bd67ea8d572b94188404"},
    {file = "pycares-4.8.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2665a0d810e2bbc41e97f3c3e5ea7950f666b3aa19c5f6c99d6b018ccd2e0052"},
    {file = "pycares-4.8.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:45a629a6470a33478514c566bce50c63f1b17d1c5f2f964c9a6790330dc105fb"},
    {file = "pycares-4.8.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:47bb378f1773f41cca8e31dcdf009ce4a9b8aff8a30c7267aaff9a099c407ba5"},
    {file = "pycares-4.8.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:fb3feae38458005cc101956e38f16eb3145fff8cd793e35cd4bdef6bf1aa2623"},
    {file = "pycares-4.8.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:14bc28aeaa66b0f4331ac94455e8043c8a06b3faafd78cc49d4b677bae0d0b08"},
    {file = "pycares-4.8.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:62c82b871470f2864a1febf7b96bb1d108ce9063e6d3d43727e8a46f0028a456"},
    {file = "pycares-4.8.0-cp313-cp313-win32.whl", hash = "sha256:01afa8964c698c8f548b46d726f766aa7817b2d4386735af1f7996903d724920"},
    {file = "pycares-4.8.0-cp313-cp313-win_amd64.whl", hash = "sha256:22f86f81b12ab17b0a7bd0da1e27938caaed11715225c1168763af97f8bb51a7"},
    {file = "pycares-4.8.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:61325d13a95255e858f42a7a1a9e482ff47ef2233f95ad9a4f308a3bd8ecf903"},
    {file = "pycares-4.8.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:dfec3a7d42336fa46a1e7e07f67000fd4b97860598c59a894c08f81378629e4e"},
    {file = "pycares-4.8.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b65067e4b4f5345688817fff6be06b9b1f4ec3619b0b9ecc639bc681b73f646b"},
    {file = "pycares-4.8.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0322ad94bbaa7016139b5bbdcd0de6f6feb9d146d69e03a82aaca342e06830a6"},
    {file = "pycares-4.8.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:456c60f170c997f9a43c7afa1085fced8efb7e13ae49dd5656f998ae13c4bdb4"},
    {file = "pycares-4.8.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:57a2c4c9ce423a85b0e0227409dbaf0d478f5e0c31d9e626768e77e1e887d32f"},
    {file = "pycares-4.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:478d9c479108b7527266864c0affe3d6e863492c9bc269217e36100c8fd89b91"},
    {file = "pycares-4.8.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:aed56bca096990ca0aa9bbf95761fc87e02880e04b0845922b5c12ea9abe523f"},
    {file = "pycares-4.8.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:ef265a390928ee2f77f8901c2273c53293157860451ad453ce7f45dd268b72f9"},
    {file = "pycares-4.8.0-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:a5f17d7a76d8335f1c90a8530c8f1e8bb22e9a1d70a96f686efaed946de1c908"},
    {file = "pycares-4.8.0-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:891f981feb2ef34367378f813fc17b3d706ce95b6548eeea0c9fe7705d7e54b1"},
    {file = "pycares-4.8.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:4102f6d9117466cc0a1f527907a1454d109cc9e8551b8074888071ef16050fe3"},
    {file = "pycares-4.8.0-cp39-cp39-win32.whl", hash = "sha256:d6775308659652adc88c82c53eda59b5e86a154aaba5ad1e287bbb3e0be77076"},
    {file = "pycares-4.8.0-cp39-cp39-win_amd64.whl", hash = "sha256:8bc05462aa44788d48544cca3d2532466fed2cdc5a2f24a43a92b620a61c9d19"},
    {file = "pycares-4.8.0.tar.gz", hash = "sha256:2fc2ebfab960f654b3e3cf08a732486950da99393a657f8b44618ad3ed2d39c1"},
]

[package.dependencies]
@@ -3883,14 +3870,14 @@ idna = ["idna (>=2.1)"]

[[package]]
name = "pycodestyle"
version = "2.14.0"
version = "2.13.0"
description = "Python style guide checker"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "pycodestyle-2.14.0-py2.py3-none-any.whl", hash = "sha256:dd6bf7cb4ee77f8e016f9c8e74a35ddd9f67e1d5fd4184d86c3b98e07099f42d"},
    {file = "pycodestyle-2.14.0.tar.gz", hash = "sha256:c4b5b517d278089ff9d0abdec919cd97262a3367449ea1c8b49b91529167b783"},
    {file = "pycodestyle-2.13.0-py2.py3-none-any.whl", hash = "sha256:35863c5974a271c7a726ed228a14a4f6daf49df369d8c50cd9a6f58a5e143ba9"},
    {file = "pycodestyle-2.13.0.tar.gz", hash = "sha256:c8415bf09abe81d9c7f872502a6eee881fbe85d8763dd5b9924bb0a01d67efae"},
]

[[package]]
@@ -3907,14 +3894,14 @@ files = [

[[package]]
name = "pydantic"
version = "2.11.7"
version = "2.11.5"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "pydantic-2.11.7-py3-none-any.whl", hash = "sha256:dde5df002701f6de26248661f6835bbe296a47bf73990135c7d07ce741b9623b"},
    {file = "pydantic-2.11.7.tar.gz", hash = "sha256:d989c3c6cb79469287b1569f7447a17848c998458d49ebe294e975b9baf0f0db"},
    {file = "pydantic-2.11.5-py3-none-any.whl", hash = "sha256:f9c26ba06f9747749ca1e5c94d6a85cb84254577553c8785576fd38fa64dc0f7"},
    {file = "pydantic-2.11.5.tar.gz", hash = "sha256:7f853db3d0ce78ce8bbb148c401c2cdd6431b3473c0cdff2755c7690952a7b7a"},
]

[package.dependencies]
@@ -4042,14 +4029,14 @@ typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0"

[[package]]
name = "pydantic-settings"
version = "2.10.1"
version = "2.9.1"
description = "Settings management using Pydantic"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "pydantic_settings-2.10.1-py3-none-any.whl", hash = "sha256:a60952460b99cf661dc25c29c0ef171721f98bfcb52ef8d9ea4c943d7c8cc796"},
    {file = "pydantic_settings-2.10.1.tar.gz", hash = "sha256:06f0062169818d0f5524420a360d632d5857b83cffd4d42fe29597807a1614ee"},
    {file = "pydantic_settings-2.9.1-py3-none-any.whl", hash = "sha256:59b4f431b1defb26fe620c71a7d3968a710d719f5f4cdbbdb7926edeb770f6ef"},
    {file = "pydantic_settings-2.9.1.tar.gz", hash = "sha256:c509bf79d27563add44e8446233359004ed85066cd096d8b510f715e6ef5d268"},
]

[package.dependencies]
@@ -4066,31 +4053,16 @@ yaml = ["pyyaml (>=6.0.1)"]

[[package]]
name = "pyflakes"
version = "3.4.0"
version = "3.3.2"
description = "passive checker of Python programs"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "pyflakes-3.4.0-py2.py3-none-any.whl", hash = "sha256:f742a7dbd0d9cb9ea41e9a24a918996e8170c799fa528688d40dd582c8265f4f"},
    {file = "pyflakes-3.4.0.tar.gz", hash = "sha256:b24f96fafb7d2ab0ec5075b7350b3d2d2218eab42003821c06344973d3ea2f58"},
    {file = "pyflakes-3.3.2-py2.py3-none-any.whl", hash = "sha256:5039c8339cbb1944045f4ee5466908906180f13cc99cc9949348d10f82a5c32a"},
    {file = "pyflakes-3.3.2.tar.gz", hash = "sha256:6dfd61d87b97fba5dcfaaf781171ac16be16453be6d816147989e7f6e6a9576b"},
]

[[package]]
name = "pygments"
version = "2.19.2"
description = "Pygments is a syntax highlighting package written in Python."
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
    {file = "pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"},
    {file = "pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887"},
]

[package.extras]
windows-terminal = ["colorama (>=0.4.6)"]

[[package]]
name = "pyjwt"
version = "2.10.1"
@@ -4150,14 +4122,14 @@ files = [

[[package]]
name = "pyright"
version = "1.1.402"
version = "1.1.401"
description = "Command line wrapper for pyright"
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
    {file = "pyright-1.1.402-py3-none-any.whl", hash = "sha256:2c721f11869baac1884e846232800fe021c33f1b4acb3929cff321f7ea4e2982"},
    {file = "pyright-1.1.402.tar.gz", hash = "sha256:85a33c2d40cd4439c66aa946fd4ce71ab2f3f5b8c22ce36a623f59ac22937683"},
    {file = "pyright-1.1.401-py3-none-any.whl", hash = "sha256:6fde30492ba5b0d7667c16ecaf6c699fab8d7a1263f6a18549e0b00bf7724c06"},
    {file = "pyright-1.1.401.tar.gz", hash = "sha256:788a82b6611fa5e34a326a921d86d898768cddf59edde8e93e56087d277cc6f1"},
]

[package.dependencies]
@@ -4171,27 +4143,26 @@ nodejs = ["nodejs-wheel-binaries"]

[[package]]
name = "pytest"
version = "8.4.1"
version = "8.3.5"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.9"
python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
    {file = "pytest-8.4.1-py3-none-any.whl", hash = "sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7"},
    {file = "pytest-8.4.1.tar.gz", hash = "sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c"},
    {file = "pytest-8.3.5-py3-none-any.whl", hash = "sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820"},
    {file = "pytest-8.3.5.tar.gz", hash = "sha256:f4efe70cc14e511565ac476b57c279e12a855b11f48f212af1080ef2263d3845"},
]

[package.dependencies]
colorama = {version = ">=0.4", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1", markers = "python_version < \"3.11\""}
iniconfig = ">=1"
packaging = ">=20"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=1.5,<2"
pygments = ">=2.7.2"
tomli = {version = ">=1", markers = "python_version < \"3.11\""}

[package.extras]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "requests", "setuptools", "xmlschema"]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"]

[[package]]
name = "pytest-asyncio"
@@ -4278,14 +4249,14 @@ six = ">=1.5"

[[package]]
name = "python-dotenv"
version = "1.1.1"
version = "1.1.0"
description = "Read key-value pairs from a .env file and set them as environment variables"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "python_dotenv-1.1.1-py3-none-any.whl", hash = "sha256:31f23644fe2602f88ff55e1f5c79ba497e01224ee7737937930c448e4d0e24dc"},
    {file = "python_dotenv-1.1.1.tar.gz", hash = "sha256:a8a6399716257f45be6a007360200409fce5cda2661e3dec71d23dc15f6189ab"},
    {file = "python_dotenv-1.1.0-py3-none-any.whl", hash = "sha256:d7c01d9e2293916c18baf562d95698754b0dbbb5e74d457c45d4f6561fb9d55d"},
    {file = "python_dotenv-1.1.0.tar.gz", hash = "sha256:41f90bc6f5f177fb41f53e87666db362025010eb28f60a01c9143bfa33a2b2d5"},
]

[package.extras]
@@ -4731,19 +4702,19 @@ typing_extensions = ">=4.5.0"

[[package]]
name = "requests"
version = "2.32.4"
version = "2.32.3"
description = "Python HTTP for Humans."
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
    {file = "requests-2.32.4-py3-none-any.whl", hash = "sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c"},
    {file = "requests-2.32.4.tar.gz", hash = "sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422"},
    {file = "requests-2.32.3-py3-none-any.whl", hash = "sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6"},
    {file = "requests-2.32.3.tar.gz", hash = "sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760"},
]

[package.dependencies]
certifi = ">=2017.4.17"
charset_normalizer = ">=2,<4"
charset-normalizer = ">=2,<4"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<3"

@@ -4929,30 +4900,30 @@ pyasn1 = ">=0.1.3"

[[package]]
name = "ruff"
version = "0.12.2"
version = "0.11.12"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
    {file = "ruff-0.12.2-py3-none-linux_armv6l.whl", hash = "sha256:093ea2b221df1d2b8e7ad92fc6ffdca40a2cb10d8564477a987b44fd4008a7be"},
    {file = "ruff-0.12.2-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:09e4cf27cc10f96b1708100fa851e0daf21767e9709e1649175355280e0d950e"},
    {file = "ruff-0.12.2-py3-none-macosx_11_0_arm64.whl", hash = "sha256:8ae64755b22f4ff85e9c52d1f82644abd0b6b6b6deedceb74bd71f35c24044cc"},
    {file = "ruff-0.12.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3eb3a6b2db4d6e2c77e682f0b988d4d61aff06860158fdb413118ca133d57922"},
    {file = "ruff-0.12.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:73448de992d05517170fc37169cbca857dfeaeaa8c2b9be494d7bcb0d36c8f4b"},
    {file = "ruff-0.12.2-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3b8b94317cbc2ae4a2771af641739f933934b03555e51515e6e021c64441532d"},
    {file = "ruff-0.12.2-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:45fc42c3bf1d30d2008023a0a9a0cfb06bf9835b147f11fe0679f21ae86d34b1"},
    {file = "ruff-0.12.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ce48f675c394c37e958bf229fb5c1e843e20945a6d962cf3ea20b7a107dcd9f4"},
    {file = "ruff-0.12.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:793d8859445ea47591272021a81391350205a4af65a9392401f418a95dfb75c9"},
    {file = "ruff-0.12.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6932323db80484dda89153da3d8e58164d01d6da86857c79f1961934354992da"},
    {file = "ruff-0.12.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:6aa7e623a3a11538108f61e859ebf016c4f14a7e6e4eba1980190cacb57714ce"},
    {file = "ruff-0.12.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:2a4a20aeed74671b2def096bdf2eac610c7d8ffcbf4fb0e627c06947a1d7078d"},
    {file = "ruff-0.12.2-py3-none-musllinux_1_2_i686.whl", hash = "sha256:71a4c550195612f486c9d1f2b045a600aeba851b298c667807ae933478fcef04"},
    {file = "ruff-0.12.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:4987b8f4ceadf597c927beee65a5eaf994c6e2b631df963f86d8ad1bdea99342"},
    {file = "ruff-0.12.2-py3-none-win32.whl", hash = "sha256:369ffb69b70cd55b6c3fc453b9492d98aed98062db9fec828cdfd069555f5f1a"},
    {file = "ruff-0.12.2-py3-none-win_amd64.whl", hash = "sha256:dca8a3b6d6dc9810ed8f328d406516bf4d660c00caeaef36eb831cf4871b0639"},
    {file = "ruff-0.12.2-py3-none-win_arm64.whl", hash = "sha256:48d6c6bfb4761df68bc05ae630e24f506755e702d4fb08f08460be778c7ccb12"},
    {file = "ruff-0.12.2.tar.gz", hash = "sha256:d7b4f55cd6f325cb7621244f19c873c565a08aff5a4ba9c69aa7355f3f7afd3e"},
    {file = "ruff-0.11.12-py3-none-linux_armv6l.whl", hash = "sha256:c7680aa2f0d4c4f43353d1e72123955c7a2159b8646cd43402de6d4a3a25d7cc"},
    {file = "ruff-0.11.12-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:2cad64843da9f134565c20bcc430642de897b8ea02e2e79e6e02a76b8dcad7c3"},
    {file = "ruff-0.11.12-py3-none-macosx_11_0_arm64.whl", hash = "sha256:9b6886b524a1c659cee1758140138455d3c029783d1b9e643f3624a5ee0cb0aa"},
    {file = "ruff-0.11.12-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3cc3a3690aad6e86c1958d3ec3c38c4594b6ecec75c1f531e84160bd827b2012"},
    {file = "ruff-0.11.12-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f97fdbc2549f456c65b3b0048560d44ddd540db1f27c778a938371424b49fe4a"},
    {file = "ruff-0.11.12-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:74adf84960236961090e2d1348c1a67d940fd12e811a33fb3d107df61eef8fc7"},
    {file = "ruff-0.11.12-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:b56697e5b8bcf1d61293ccfe63873aba08fdbcbbba839fc046ec5926bdb25a3a"},
    {file = "ruff-0.11.12-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4d47afa45e7b0eaf5e5969c6b39cbd108be83910b5c74626247e366fd7a36a13"},
    {file = "ruff-0.11.12-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:692bf9603fe1bf949de8b09a2da896f05c01ed7a187f4a386cdba6760e7f61be"},
    {file = "ruff-0.11.12-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:08033320e979df3b20dba567c62f69c45e01df708b0f9c83912d7abd3e0801cd"},
    {file = "ruff-0.11.12-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:929b7706584f5bfd61d67d5070f399057d07c70585fa8c4491d78ada452d3bef"},
    {file = "ruff-0.11.12-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:7de4a73205dc5756b8e09ee3ed67c38312dce1aa28972b93150f5751199981b5"},
    {file = "ruff-0.11.12-py3-none-musllinux_1_2_i686.whl", hash = "sha256:2635c2a90ac1b8ca9e93b70af59dfd1dd2026a40e2d6eebaa3efb0465dd9cf02"},
    {file = "ruff-0.11.12-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:d05d6a78a89166f03f03a198ecc9d18779076ad0eec476819467acb401028c0c"},
    {file = "ruff-0.11.12-py3-none-win32.whl", hash = "sha256:f5a07f49767c4be4772d161bfc049c1f242db0cfe1bd976e0f0886732a4765d6"},
    {file = "ruff-0.11.12-py3-none-win_amd64.whl", hash = "sha256:5a4d9f8030d8c3a45df201d7fb3ed38d0219bccd7955268e863ee4a115fa0832"},
    {file = "ruff-0.11.12-py3-none-win_arm64.whl", hash = "sha256:65194e37853158d368e333ba282217941029a28ea90913c67e558c611d04daa5"},
    {file = "ruff-0.11.12.tar.gz", hash = "sha256:43cf7f69c7d7c7d7513b9d59c5d8cafd704e05944f978614aa9faff6ac202603"},
]

[[package]]
@@ -4986,14 +4957,14 @@ files = [

[[package]]
name = "sentry-sdk"
version = "2.32.0"
version = "2.29.1"
description = "Python client for Sentry (https://sentry.io)"
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
    {file = "sentry_sdk-2.32.0-py2.py3-none-any.whl", hash = "sha256:6cf51521b099562d7ce3606da928c473643abe99b00ce4cb5626ea735f4ec345"},
    {file = "sentry_sdk-2.32.0.tar.gz", hash = "sha256:9016c75d9316b0f6921ac14c8cd4fb938f26002430ac5be9945ab280f78bec6b"},
    {file = "sentry_sdk-2.29.1-py2.py3-none-any.whl", hash = "sha256:90862fe0616ded4572da6c9dadb363121a1ae49a49e21c418f0634e9d10b4c19"},
    {file = "sentry_sdk-2.29.1.tar.gz", hash = "sha256:8d4a0206b95fa5fe85e5e7517ed662e3888374bdc342c00e435e10e6d831aa6d"},
]

[package.dependencies]
@@ -5280,23 +5251,23 @@ typing-extensions = {version = ">=4.5.0", markers = "python_version >= \"3.7\""}

[[package]]
name = "supabase"
version = "2.16.0"
version = "2.15.1"
description = "Supabase client for Python."
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
    {file = "supabase-2.16.0-py3-none-any.whl", hash = "sha256:99065caab3d90a56650bf39fbd0e49740995da3738ab28706c61bd7f2401db55"},
    {file = "supabase-2.16.0.tar.gz", hash = "sha256:98f3810158012d4ec0e3083f2e5515f5e10b32bd71e7d458662140e963c1d164"},
    {file = "supabase-2.15.1-py3-none-any.whl", hash = "sha256:749299cdd74ecf528f52045c1e60d9dba81cc2054656f754c0ca7fba0dd34827"},
    {file = "supabase-2.15.1.tar.gz", hash = "sha256:66e847dab9346062aa6a25b4e81ac786b972c5d4299827c57d1d5bd6a0346070"},
]

[package.dependencies]
gotrue = ">=2.11.0,<3.0.0"
httpx = ">=0.26,<0.29"
postgrest = ">0.19,<1.2"
realtime = ">=2.4.0,<2.6.0"
storage3 = ">=0.10,<0.13"
supafunc = ">=0.9,<0.11"
postgrest = ">0.19,<1.1"
realtime = ">=2.4.0,<2.5.0"
storage3 = ">=0.10,<0.12"
supafunc = ">=0.9,<0.10"

[[package]]
name = "supafunc"
@@ -5503,14 +5474,14 @@ files = [

[[package]]
name = "tweepy"
version = "4.16.0"
description = "Library for accessing the X API (Twitter)"
version = "4.15.0"
description = "Twitter library for Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "tweepy-4.16.0-py3-none-any.whl", hash = "sha256:48d1a1eb311d2c4b8990abcfa6f9fa2b2ad61be05c723b1a9b4f242656badae2"},
    {file = "tweepy-4.16.0.tar.gz", hash = "sha256:1d95cbdc50bf6353a387f881f2584eaf60d14e00dbbdd8872a73de79c66878e3"},
    {file = "tweepy-4.15.0-py3-none-any.whl", hash = "sha256:64adcea317158937059e4e2897b3ceb750b0c2dd5df58938c2da8f7eb3b88e6a"},
    {file = "tweepy-4.15.0.tar.gz", hash = "sha256:1345cbcdf0a75e2d89f424c559fd49fda4d8cd7be25cd5131e3b57bad8a21d76"},
]

[package.dependencies]
@@ -5521,6 +5492,8 @@ requests-oauthlib = ">=1.2.0,<3"
|
||||
[package.extras]
|
||||
async = ["aiohttp (>=3.7.3,<4)", "async-lru (>=1.0.3,<3)"]
|
||||
dev = ["coverage (>=4.4.2)", "coveralls (>=2.1.0)", "tox (>=3.21.0)"]
|
||||
docs = ["myst-parser (==0.15.2)", "readthedocs-sphinx-search (==0.1.1)", "sphinx (==4.2.0)", "sphinx-hoverxref (==0.7b1)", "sphinx-tabs (==3.2.0)", "sphinx_rtd_theme (==1.0.0)"]
|
||||
socks = ["requests[socks] (>=2.27.0,<3)"]
|
||||
test = ["urllib3 (<2)", "vcrpy (>=1.10.3)"]
|
||||
|
||||
[[package]]
|
||||
@@ -6280,14 +6253,14 @@ requests = "*"
|
||||
|
||||
[[package]]
|
||||
name = "zerobouncesdk"
|
||||
version = "1.1.2"
|
||||
version = "1.1.1"
|
||||
description = "ZeroBounce Python API - https://www.zerobounce.net."
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "zerobouncesdk-1.1.2-py3-none-any.whl", hash = "sha256:a89febfb3adade01c314e6bad2113ad093f1e1cca6ddf9fcf445a8b2a9a458b4"},
|
||||
{file = "zerobouncesdk-1.1.2.tar.gz", hash = "sha256:24810a2e39c963bc75b4732356b0fc8b10091f2c892f0c8b08fbb32640fdccaf"},
|
||||
{file = "zerobouncesdk-1.1.1-py3-none-any.whl", hash = "sha256:9fb9dfa44fe4ce35d6f2e43d5144c31ca03544a3317d75643cb9f86b0c028675"},
|
||||
{file = "zerobouncesdk-1.1.1.tar.gz", hash = "sha256:00aa537263d5bc21534c0007dd9f94ce8e0986caa530c5a0bbe0bd917451f236"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -6429,4 +6402,4 @@ cffi = ["cffi (>=1.11)"]
|
||||
[metadata]
|
||||
lock-version = "2.1"
|
||||
python-versions = ">=3.10,<3.13"
|
||||
content-hash = "476228d2bf59b90edc5425c462c1263cbc1f2d346f79a826ac5e7efe7823aaa6"
|
||||
content-hash = "b5c1201f27ee8d05d5d8c89702123df4293f124301d1aef7451591a351872260"
|
||||
|
||||
@@ -10,61 +10,61 @@ packages = [{ include = "backend", format = "sdist" }]
[tool.poetry.dependencies]
python = ">=3.10,<3.13"
aio-pika = "^9.5.5"
aiodns = "^3.5.0"
anthropic = "^0.57.1"
aiodns = "^3.1.1"
anthropic = "^0.51.0"
apscheduler = "^3.11.0"
autogpt-libs = { path = "../autogpt_libs", develop = true }
bleach = { extras = ["css"], version = "^6.2.0" }
click = "^8.2.0"
cryptography = "^43.0"
discord-py = "^2.5.2"
e2b-code-interpreter = "^1.5.2"
fastapi = "^0.115.14"
e2b-code-interpreter = "^1.5.0"
fastapi = "^0.115.12"
feedparser = "^6.0.11"
flake8 = "^7.3.0"
google-api-python-client = "^2.176.0"
flake8 = "^7.2.0"
google-api-python-client = "^2.169.0"
google-auth-oauthlib = "^1.2.2"
google-cloud-storage = "^3.2.0"
google-cloud-storage = "^3.1.0"
googlemaps = "^4.10.0"
gravitasml = "^0.1.3"
groq = "^0.29.0"
groq = "^0.24.0"
jinja2 = "^3.1.6"
jsonref = "^1.1.0"
jsonschema = "^4.22.0"
launchdarkly-server-sdk = "^9.11.0"
mem0ai = "^0.1.114"
mem0ai = "^0.1.98"
moviepy = "^2.1.2"
ollama = "^0.5.1"
openai = "^1.93.2"
ollama = "^0.4.8"
openai = "^1.78.1"
pika = "^1.3.2"
pinecone = "^5.3.1"
poetry = "2.1.1" # CHECK DEPENDABOT SUPPORT BEFORE UPGRADING
postmarker = "^1.0"
praw = "~7.8.1"
prisma = "^0.15.0"
prometheus-client = "^0.22.1"
prometheus-client = "^0.21.1"
psutil = "^7.0.0"
psycopg2-binary = "^2.9.10"
pydantic = { extras = ["email"], version = "^2.11.7" }
pydantic-settings = "^2.10.1"
pytest = "^8.4.1"
pydantic = { extras = ["email"], version = "^2.11.4" }
pydantic-settings = "^2.9.1"
pytest = "^8.3.5"
pytest-asyncio = "^0.26.0"
python-dotenv = "^1.1.1"
python-dotenv = "^1.1.0"
python-multipart = "^0.0.20"
redis = "^5.2.0"
replicate = "^1.0.6"
sentry-sdk = {extras = ["anthropic", "fastapi", "launchdarkly", "openai", "sqlalchemy"], version = "^2.32.0"}
sentry-sdk = {extras = ["anthropic", "fastapi", "launchdarkly", "openai", "sqlalchemy"], version = "^2.28.0"}
sqlalchemy = "^2.0.40"
strenum = "^0.4.9"
stripe = "^11.5.0"
supabase = "2.16.0"
supabase = "2.15.1"
tenacity = "^9.1.2"
todoist-api-python = "^2.1.7"
tweepy = "^4.16.0"
tweepy = "^4.14.0"
uvicorn = { extras = ["standard"], version = "^0.34.2" }
websockets = "^14.2"
youtube-transcript-api = "^0.6.2"
zerobouncesdk = "^1.1.2"
zerobouncesdk = "^1.1.1"
# NOTE: please insert new dependencies in their alphabetical location
pytest-snapshot = "^0.9.0"
aiofiles = "^24.1.0"
@@ -78,12 +78,12 @@ black = "^24.10.0"
faker = "^33.3.1"
httpx = "^0.28.1"
isort = "^5.13.2"
poethepoet = "^0.36.0"
pyright = "^1.1.402"
poethepoet = "^0.34.0"
pyright = "^1.1.400"
pytest-mock = "^3.14.0"
pytest-watcher = "^0.4.2"
requests = "^2.32.4"
ruff = "^0.12.2"
requests = "^2.32.3"
ruff = "^0.11.10"
# NOTE: please insert new dependencies in their alphabetical location

[build-system]
@@ -123,4 +123,3 @@ filterwarnings = [

[tool.ruff]
target-version = "py310"
@@ -1,110 +0,0 @@
#!/usr/bin/env python3
"""
Run test data creation and update scripts in sequence.

Usage:
    poetry run python run_test_data.py
"""

import asyncio
import subprocess
import sys
from pathlib import Path


def run_command(cmd: list[str], cwd: Path | None = None) -> bool:
    """Run a command and return True if successful."""
    try:
        result = subprocess.run(
            cmd, check=True, capture_output=True, text=True, cwd=cwd
        )
        if result.stdout:
            print(result.stdout)
        return True
    except subprocess.CalledProcessError as e:
        print(f"Error running command: {' '.join(cmd)}")
        print(f"Error: {e.stderr}")
        return False


async def main():
    """Main function to run test data scripts."""
    print("=" * 60)
    print("Running Test Data Scripts for AutoGPT Platform")
    print("=" * 60)
    print()

    # Get the backend directory
    backend_dir = Path(__file__).parent
    test_dir = backend_dir / "test"

    # Check if we're in the right directory
    if not (backend_dir / "pyproject.toml").exists():
        print("ERROR: This script must be run from the backend directory")
        sys.exit(1)

    print("1. Checking database connection...")
    print("-" * 40)

    # Import here to ensure proper environment setup
    try:
        from prisma import Prisma

        db = Prisma()
        await db.connect()
        print("✓ Database connection successful")
        await db.disconnect()
    except Exception as e:
        print(f"✗ Database connection failed: {e}")
        print("\nPlease ensure:")
        print("1. The database services are running (docker compose up -d)")
        print("2. The DATABASE_URL in .env is correct")
        print("3. Migrations have been run (poetry run prisma migrate deploy)")
        sys.exit(1)

    print()
    print("2. Running test data creator...")
    print("-" * 40)

    # Run test_data_creator.py
    if run_command(["poetry", "run", "python", "test_data_creator.py"], cwd=test_dir):
        print()
        print("✅ Test data created successfully!")

        print()
        print("3. Running test data updater...")
        print("-" * 40)

        # Run test_data_updater.py
        if run_command(
            ["poetry", "run", "python", "test_data_updater.py"], cwd=test_dir
        ):
            print()
            print("✅ Test data updated successfully!")
        else:
            print()
            print("❌ Test data updater failed!")
            sys.exit(1)
    else:
        print()
        print("❌ Test data creator failed!")
        sys.exit(1)

    print()
    print("=" * 60)
    print("Test data setup completed successfully!")
    print("=" * 60)
    print()
    print("The materialized views have been populated with test data:")
    print("- mv_agent_run_counts: Agent execution statistics")
    print("- mv_review_stats: Store listing review statistics")
    print()
    print("You can now:")
    print("1. Run tests: poetry run test")
    print("2. Start the backend: poetry run serve")
    print("3. View data in the database")
    print()


if __name__ == "__main__":
    asyncio.run(main())
@@ -13,10 +13,8 @@ def wait_for_postgres(max_retries=5, delay=5):
                "compose",
                "-f",
                "docker-compose.test.yaml",
                "--env-file",
                "../.env",
                "exec",
                "db",
                "postgres-test",
                "pg_isready",
                "-U",
                "postgres",
@@ -53,8 +51,6 @@ def test():
        "compose",
        "-f",
        "docker-compose.test.yaml",
        "--env-file",
        "../.env",
        "up",
        "-d",
    ]
@@ -78,20 +74,11 @@ def test():
    # to their development database, running tests would wipe their local data!
    test_env = os.environ.copy()

    # Load database configuration from .env file
    dotenv_path = os.path.join(os.path.dirname(__file__), "../.env")
    if os.path.exists(dotenv_path):
        with open(dotenv_path) as f:
            for line in f:
                if line.strip() and not line.startswith("#"):
                    key, value = line.strip().split("=", 1)
                    os.environ[key] = value

    # Get database config from environment (now populated from .env)
    db_user = os.getenv("POSTGRES_USER", "postgres")
    db_pass = os.getenv("POSTGRES_PASSWORD", "postgres")
    db_name = os.getenv("POSTGRES_DB", "postgres")
    db_port = os.getenv("POSTGRES_PORT", "5432")
    # Use environment variables if set, otherwise use defaults that match docker-compose.test.yaml
    db_user = os.getenv("DB_USER", "postgres")
    db_pass = os.getenv("DB_PASS", "postgres")
    db_name = os.getenv("DB_NAME", "postgres")
    db_port = os.getenv("DB_PORT", "5432")

    # Construct the test database URL - this ensures we're always pointing to the test container
    test_env["DATABASE_URL"] = (
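The hunk above is cut off mid-assignment. Purely for orientation, a plausible shape for the truncated expression, assuming a standard Postgres DSN aimed at the test container on localhost (the real code is not shown in this diff), would be:

    # Hypothetical reconstruction of the truncated line; the host and exact
    # DSN format are assumptions, not taken from this hunk.
    test_env["DATABASE_URL"] = (
        f"postgresql://{db_user}:{db_pass}@localhost:{db_port}/{db_name}"
    )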
@@ -599,23 +599,7 @@ view Creator {
  agent_runs  Int
  is_featured Boolean

  // Note: Prisma doesn't support indexes on views, but the following indexes exist in the database:
  //
  // Optimized indexes (partial indexes to reduce size and improve performance):
  // - idx_profile_user on Profile(userId)
  // - idx_store_listing_approved on StoreListing(owningUserId) WHERE isDeleted = false AND hasApprovedVersion = true
  // - idx_store_listing_version_status on StoreListingVersion(storeListingId) WHERE submissionStatus = 'APPROVED'
  // - idx_slv_categories_gin - GIN index on StoreListingVersion(categories) WHERE submissionStatus = 'APPROVED'
  // - idx_slv_agent on StoreListingVersion(agentGraphId, agentGraphVersion) WHERE submissionStatus = 'APPROVED'
  // - idx_store_listing_review_version on StoreListingReview(storeListingVersionId)
  // - idx_store_listing_version_approved_listing on StoreListingVersion(storeListingId, version) WHERE submissionStatus = 'APPROVED'
  // - idx_agent_graph_execution_agent on AgentGraphExecution(agentGraphId)
  //
  // Materialized views used (refreshed every 15 minutes via pg_cron):
  // - mv_agent_run_counts - Pre-aggregated agent execution counts by agentGraphId
  // - mv_review_stats - Pre-aggregated review statistics (count, avg rating) by storeListingId
  //
  // Query strategy: Uses CTEs to efficiently aggregate creator statistics leveraging materialized views
  // Index or unique are not applied to views
}

view StoreAgent {
@@ -638,30 +622,7 @@ view StoreAgent {
  rating   Float
  versions String[]

  // Note: Prisma doesn't support indexes on views, but the following indexes exist in the database:
  //
  // Optimized indexes (partial indexes to reduce size and improve performance):
  // - idx_store_listing_approved on StoreListing(owningUserId) WHERE isDeleted = false AND hasApprovedVersion = true
  // - idx_store_listing_version_status on StoreListingVersion(storeListingId) WHERE submissionStatus = 'APPROVED'
  // - idx_slv_categories_gin - GIN index on StoreListingVersion(categories) WHERE submissionStatus = 'APPROVED' for array searches
  // - idx_slv_agent on StoreListingVersion(agentGraphId, agentGraphVersion) WHERE submissionStatus = 'APPROVED'
  // - idx_store_listing_review_version on StoreListingReview(storeListingVersionId)
  // - idx_store_listing_version_approved_listing on StoreListingVersion(storeListingId, version) WHERE submissionStatus = 'APPROVED'
  // - idx_agent_graph_execution_agent on AgentGraphExecution(agentGraphId)
  // - idx_profile_user on Profile(userId)
  //
  // Additional indexes from earlier migrations:
  // - StoreListing_agentId_owningUserId_idx
  // - StoreListing_isDeleted_isApproved_idx (replaced by idx_store_listing_approved)
  // - StoreListing_isDeleted_idx
  // - StoreListing_agentId_key (unique on agentGraphId)
  // - StoreListingVersion_agentId_agentVersion_isDeleted_idx
  //
  // Materialized views used (refreshed every 15 minutes via pg_cron):
  // - mv_agent_run_counts - Pre-aggregated agent execution counts by agentGraphId
  // - mv_review_stats - Pre-aggregated review statistics (count, avg rating) by storeListingId
  //
  // Query strategy: Uses CTE for version aggregation and joins with materialized views for performance
  // Index or unique are not applied to views
}

view StoreSubmission {
@@ -688,33 +649,6 @@ view StoreSubmission {
  // Index or unique are not applied to views
}

// Note: This is actually a MATERIALIZED VIEW in the database
// Refreshed automatically every 15 minutes via pg_cron (with fallback to manual refresh)
view mv_agent_run_counts {
  agentGraphId String @unique
  run_count    Int

  // Pre-aggregated count of AgentGraphExecution records by agentGraphId
  // Used by StoreAgent and Creator views for performance optimization
  // Unique index created automatically on agentGraphId for fast lookups
  // Refresh uses CONCURRENTLY to avoid blocking reads
}

// Note: This is actually a MATERIALIZED VIEW in the database
// Refreshed automatically every 15 minutes via pg_cron (with fallback to manual refresh)
view mv_review_stats {
  storeListingId String @unique
  review_count   Int
  avg_rating     Float

  // Pre-aggregated review statistics from StoreListingReview
  // Includes count of reviews and average rating per StoreListing
  // Only includes approved versions (submissionStatus = 'APPROVED') and non-deleted listings
  // Used by StoreAgent view for performance optimization
  // Unique index created automatically on storeListingId for fast lookups
  // Refresh uses CONCURRENTLY to avoid blocking reads
}

model StoreListing {
  id        String   @id @default(uuid())
  createdAt DateTime @default(now())
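The schema comments above describe mv_agent_run_counts and mv_review_stats as materialized views refreshed every 15 minutes via pg_cron, with a manual fallback. A minimal sketch of what such a fallback could look like from Python using Prisma Client Python's raw SQL escape hatch (the script shape and function name are illustrative, not the platform's actual refresh job):

    import asyncio

    from prisma import Prisma


    async def refresh_store_views() -> None:
        """Manually refresh the store materialized views (illustrative sketch)."""
        db = Prisma()
        await db.connect()
        try:
            # CONCURRENTLY avoids blocking readers, but requires the unique
            # indexes the schema comments mention on each view.
            await db.execute_raw(
                'REFRESH MATERIALIZED VIEW CONCURRENTLY "mv_agent_run_counts"'
            )
            await db.execute_raw(
                'REFRESH MATERIALIZED VIEW CONCURRENTLY "mv_review_stats"'
            )
        finally:
            await db.disconnect()


    if __name__ == "__main__":
        asyncio.run(refresh_store_views())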
@@ -7,7 +7,7 @@
  "description": "A test graph",
  "forked_from_id": null,
  "forked_from_version": null,
  "has_external_trigger": false,
  "has_webhook_trigger": false,
  "id": "graph-123",
  "input_schema": {
    "properties": {},

@@ -8,7 +8,7 @@
  "description": "A test graph",
  "forked_from_id": null,
  "forked_from_version": null,
  "has_external_trigger": false,
  "has_webhook_trigger": false,
  "id": "graph-123",
  "input_schema": {
    "properties": {},
@@ -16,7 +16,9 @@
    "type": "object"
  },
  "is_active": true,
  "links": [],
  "name": "Test Graph",
  "nodes": [],
  "output_schema": {
    "properties": {},
    "required": [],

@@ -1 +0,0 @@
"""SDK test module."""
@@ -1,20 +0,0 @@
"""
Shared configuration for SDK test providers using the SDK pattern.
"""

from backend.sdk import BlockCostType, ProviderBuilder

# Configure test providers
test_api = (
    ProviderBuilder("test_api")
    .with_api_key("TEST_API_KEY", "Test API Key")
    .with_base_cost(5, BlockCostType.RUN)
    .build()
)

test_service = (
    ProviderBuilder("test_service")
    .with_api_key("TEST_SERVICE_API_KEY", "Test Service API Key")
    .with_base_cost(10, BlockCostType.RUN)
    .build()
)
@@ -1,29 +0,0 @@
"""
Configuration for SDK tests.

This conftest.py file provides basic test setup for SDK unit tests
without requiring the full server infrastructure.
"""

from unittest.mock import MagicMock

import pytest


@pytest.fixture(scope="session")
def server():
    """Mock server fixture for SDK tests."""
    mock_server = MagicMock()
    mock_server.agent_server = MagicMock()
    mock_server.agent_server.test_create_graph = MagicMock()
    return mock_server


@pytest.fixture(autouse=True)
def reset_registry():
    """Reset the AutoRegistry before each test."""
    from backend.sdk.registry import AutoRegistry

    AutoRegistry.clear()
    yield
    AutoRegistry.clear()
@@ -1,914 +0,0 @@
"""
Tests for creating blocks using the SDK.

This test suite verifies that blocks can be created using only SDK imports
and that they work correctly without decorators.
"""

from typing import Any, Optional, Union

import pytest

from backend.sdk import (
    APIKeyCredentials,
    Block,
    BlockCategory,
    BlockCostType,
    BlockOutput,
    BlockSchema,
    CredentialsMetaInput,
    OAuth2Credentials,
    ProviderBuilder,
    SchemaField,
    SecretStr,
)

from ._config import test_api, test_service


class TestBasicBlockCreation:
    """Test creating basic blocks using the SDK."""

    @pytest.mark.asyncio
    async def test_simple_block(self):
        """Test creating a simple block without any decorators."""

        class SimpleBlock(Block):
            """A simple test block."""

            class Input(BlockSchema):
                text: str = SchemaField(description="Input text")
                count: int = SchemaField(description="Repeat count", default=1)

            class Output(BlockSchema):
                result: str = SchemaField(description="Output result")

            def __init__(self):
                super().__init__(
                    id="simple-test-block",
                    description="A simple test block",
                    categories={BlockCategory.TEXT},
                    input_schema=SimpleBlock.Input,
                    output_schema=SimpleBlock.Output,
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                result = input_data.text * input_data.count
                yield "result", result

        # Create and test the block
        block = SimpleBlock()
        assert block.id == "simple-test-block"
        assert BlockCategory.TEXT in block.categories

        # Test execution
        outputs = []
        async for name, value in block.run(
            SimpleBlock.Input(text="Hello ", count=3),
        ):
            outputs.append((name, value))
        assert len(outputs) == 1
        assert outputs[0] == ("result", "Hello Hello Hello ")

    @pytest.mark.asyncio
    async def test_block_with_credentials(self):
        """Test creating a block that requires credentials."""

        class APIBlock(Block):
            """A block that requires API credentials."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = test_api.credentials_field(
                    description="API credentials for test service",
                )
                query: str = SchemaField(description="API query")

            class Output(BlockSchema):
                response: str = SchemaField(description="API response")
                authenticated: bool = SchemaField(description="Was authenticated")

            def __init__(self):
                super().__init__(
                    id="api-test-block",
                    description="Test block with API credentials",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=APIBlock.Input,
                    output_schema=APIBlock.Output,
                )

            async def run(
                self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
            ) -> BlockOutput:
                # Simulate API call
                api_key = credentials.api_key.get_secret_value()
                authenticated = bool(api_key)

                yield "response", f"API response for: {input_data.query}"
                yield "authenticated", authenticated

        # Create test credentials
        test_creds = APIKeyCredentials(
            id="test-creds",
            provider="test_api",
            api_key=SecretStr("test-api-key"),
            title="Test API Key",
        )

        # Create and test the block
        block = APIBlock()
        outputs = []
        async for name, value in block.run(
            APIBlock.Input(
                credentials={  # type: ignore
                    "provider": "test_api",
                    "id": "test-creds",
                    "type": "api_key",
                },
                query="test query",
            ),
            credentials=test_creds,
        ):
            outputs.append((name, value))

        assert len(outputs) == 2
        assert outputs[0] == ("response", "API response for: test query")
        assert outputs[1] == ("authenticated", True)

    @pytest.mark.asyncio
    async def test_block_with_multiple_outputs(self):
        """Test block that yields multiple outputs."""

        class MultiOutputBlock(Block):
            """Block with multiple outputs."""

            class Input(BlockSchema):
                text: str = SchemaField(description="Input text")

            class Output(BlockSchema):
                uppercase: str = SchemaField(description="Uppercase version")
                lowercase: str = SchemaField(description="Lowercase version")
                length: int = SchemaField(description="Text length")
                is_empty: bool = SchemaField(description="Is text empty")

            def __init__(self):
                super().__init__(
                    id="multi-output-block",
                    description="Block with multiple outputs",
                    categories={BlockCategory.TEXT},
                    input_schema=MultiOutputBlock.Input,
                    output_schema=MultiOutputBlock.Output,
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                text = input_data.text
                yield "uppercase", text.upper()
                yield "lowercase", text.lower()
                yield "length", len(text)
                yield "is_empty", len(text) == 0

        # Test the block
        block = MultiOutputBlock()
        outputs = []
        async for name, value in block.run(MultiOutputBlock.Input(text="Hello World")):
            outputs.append((name, value))

        assert len(outputs) == 4
        assert ("uppercase", "HELLO WORLD") in outputs
        assert ("lowercase", "hello world") in outputs
        assert ("length", 11) in outputs
        assert ("is_empty", False) in outputs

class TestBlockWithProvider:
    """Test creating blocks associated with providers."""

    @pytest.mark.asyncio
    async def test_block_using_provider(self):
        """Test block that uses a registered provider."""

        class TestServiceBlock(Block):
            """Block for test service."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = test_service.credentials_field(
                    description="Test service credentials",
                )
                action: str = SchemaField(description="Action to perform")

            class Output(BlockSchema):
                result: str = SchemaField(description="Action result")
                provider_name: str = SchemaField(description="Provider used")

            def __init__(self):
                super().__init__(
                    id="test-service-block",
                    description="Block using test service provider",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=TestServiceBlock.Input,
                    output_schema=TestServiceBlock.Output,
                )

            async def run(
                self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
            ) -> BlockOutput:
                # The provider name should match
                yield "result", f"Performed: {input_data.action}"
                yield "provider_name", credentials.provider

        # Create credentials for our provider
        creds = APIKeyCredentials(
            id="test-service-creds",
            provider="test_service",
            api_key=SecretStr("test-key"),
            title="Test Service Key",
        )

        # Test the block
        block = TestServiceBlock()
        outputs = {}
        async for name, value in block.run(
            TestServiceBlock.Input(
                credentials={  # type: ignore
                    "provider": "test_service",
                    "id": "test-service-creds",
                    "type": "api_key",
                },
                action="test action",
            ),
            credentials=creds,
        ):
            outputs[name] = value

        assert outputs["result"] == "Performed: test action"
        assert outputs["provider_name"] == "test_service"


class TestComplexBlockScenarios:
    """Test more complex block scenarios."""

    @pytest.mark.asyncio
    async def test_block_with_optional_fields(self):
        """Test block with optional input fields."""
        # Optional is already imported at the module level

        class OptionalFieldBlock(Block):
            """Block with optional fields."""

            class Input(BlockSchema):
                required_field: str = SchemaField(description="Required field")
                optional_field: Optional[str] = SchemaField(
                    description="Optional field",
                    default=None,
                )
                optional_with_default: str = SchemaField(
                    description="Optional with default",
                    default="default value",
                )

            class Output(BlockSchema):
                has_optional: bool = SchemaField(description="Has optional value")
                optional_value: Optional[str] = SchemaField(
                    description="Optional value"
                )
                default_value: str = SchemaField(description="Default value")

            def __init__(self):
                super().__init__(
                    id="optional-field-block",
                    description="Block with optional fields",
                    categories={BlockCategory.TEXT},
                    input_schema=OptionalFieldBlock.Input,
                    output_schema=OptionalFieldBlock.Output,
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                yield "has_optional", input_data.optional_field is not None
                yield "optional_value", input_data.optional_field
                yield "default_value", input_data.optional_with_default

        # Test with optional field provided
        block = OptionalFieldBlock()
        outputs = {}
        async for name, value in block.run(
            OptionalFieldBlock.Input(
                required_field="test",
                optional_field="provided",
            )
        ):
            outputs[name] = value

        assert outputs["has_optional"] is True
        assert outputs["optional_value"] == "provided"
        assert outputs["default_value"] == "default value"

        # Test without optional field
        outputs = {}
        async for name, value in block.run(
            OptionalFieldBlock.Input(
                required_field="test",
            )
        ):
            outputs[name] = value

        assert outputs["has_optional"] is False
        assert outputs["optional_value"] is None
        assert outputs["default_value"] == "default value"

    @pytest.mark.asyncio
    async def test_block_with_complex_types(self):
        """Test block with complex input/output types."""

        class ComplexBlock(Block):
            """Block with complex types."""

            class Input(BlockSchema):
                items: list[str] = SchemaField(description="List of items")
                mapping: dict[str, int] = SchemaField(
                    description="String to int mapping"
                )

            class Output(BlockSchema):
                item_count: int = SchemaField(description="Number of items")
                total_value: int = SchemaField(description="Sum of mapping values")
                combined: list[str] = SchemaField(description="Combined results")

            def __init__(self):
                super().__init__(
                    id="complex-types-block",
                    description="Block with complex types",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=ComplexBlock.Input,
                    output_schema=ComplexBlock.Output,
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                yield "item_count", len(input_data.items)
                yield "total_value", sum(input_data.mapping.values())

                # Combine items with their mapping values
                combined = []
                for item in input_data.items:
                    value = input_data.mapping.get(item, 0)
                    combined.append(f"{item}: {value}")

                yield "combined", combined

        # Test the block
        block = ComplexBlock()
        outputs = {}
        async for name, value in block.run(
            ComplexBlock.Input(
                items=["apple", "banana", "orange"],
                mapping={"apple": 5, "banana": 3, "orange": 4},
            )
        ):
            outputs[name] = value

        assert outputs["item_count"] == 3
        assert outputs["total_value"] == 12
        assert outputs["combined"] == ["apple: 5", "banana: 3", "orange: 4"]

    @pytest.mark.asyncio
    async def test_block_error_handling(self):
        """Test block error handling."""

        class ErrorHandlingBlock(Block):
            """Block that demonstrates error handling."""

            class Input(BlockSchema):
                value: int = SchemaField(description="Input value")
                should_error: bool = SchemaField(
                    description="Whether to trigger an error",
                    default=False,
                )

            class Output(BlockSchema):
                result: int = SchemaField(description="Result")
                error_message: Optional[str] = SchemaField(
                    description="Error if any", default=None
                )

            def __init__(self):
                super().__init__(
                    id="error-handling-block",
                    description="Block with error handling",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=ErrorHandlingBlock.Input,
                    output_schema=ErrorHandlingBlock.Output,
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                if input_data.should_error:
                    raise ValueError("Intentional error triggered")

                if input_data.value < 0:
                    yield "error_message", "Value must be non-negative"
                    yield "result", 0
                else:
                    yield "result", input_data.value * 2
                    yield "error_message", None

        # Test normal operation
        block = ErrorHandlingBlock()
        outputs = {}
        async for name, value in block.run(
            ErrorHandlingBlock.Input(value=5, should_error=False)
        ):
            outputs[name] = value

        assert outputs["result"] == 10
        assert outputs["error_message"] is None

        # Test with negative value
        outputs = {}
        async for name, value in block.run(
            ErrorHandlingBlock.Input(value=-5, should_error=False)
        ):
            outputs[name] = value

        assert outputs["result"] == 0
        assert outputs["error_message"] == "Value must be non-negative"

        # Test with error
        with pytest.raises(ValueError, match="Intentional error triggered"):
            async for _ in block.run(
                ErrorHandlingBlock.Input(value=5, should_error=True)
            ):
                pass

class TestAuthenticationVariants:
    """Test complex authentication scenarios including OAuth, API keys, and scopes."""

    @pytest.mark.asyncio
    async def test_oauth_block_with_scopes(self):
        """Test creating a block that uses OAuth2 with scopes."""
        from backend.sdk import OAuth2Credentials, ProviderBuilder

        # Create a test OAuth provider with scopes
        # For testing, we don't need an actual OAuth handler
        # In real usage, you would provide a proper OAuth handler class
        oauth_provider = (
            ProviderBuilder("test_oauth_provider")
            .with_api_key("TEST_OAUTH_API", "Test OAuth API")
            .with_base_cost(5, BlockCostType.RUN)
            .build()
        )

        class OAuthScopedBlock(Block):
            """Block requiring OAuth2 with specific scopes."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = oauth_provider.credentials_field(
                    description="OAuth2 credentials with scopes",
                    scopes=["read:user", "write:data"],
                )
                resource: str = SchemaField(description="Resource to access")

            class Output(BlockSchema):
                data: str = SchemaField(description="Retrieved data")
                scopes_used: list[str] = SchemaField(
                    description="Scopes that were used"
                )
                token_info: dict[str, Any] = SchemaField(
                    description="Token information"
                )

            def __init__(self):
                super().__init__(
                    id="oauth-scoped-block",
                    description="Test OAuth2 with scopes",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=OAuthScopedBlock.Input,
                    output_schema=OAuthScopedBlock.Output,
                )

            async def run(
                self, input_data: Input, *, credentials: OAuth2Credentials, **kwargs
            ) -> BlockOutput:
                # Simulate OAuth API call with scopes
                token = credentials.access_token.get_secret_value()

                yield "data", f"OAuth data for {input_data.resource}"
                yield "scopes_used", credentials.scopes or []
                yield "token_info", {
                    "has_token": bool(token),
                    "has_refresh": credentials.refresh_token is not None,
                    "provider": credentials.provider,
                    "expires_at": credentials.access_token_expires_at,
                }

        # Create test OAuth credentials
        test_oauth_creds = OAuth2Credentials(
            id="test-oauth-creds",
            provider="test_oauth_provider",
            access_token=SecretStr("test-access-token"),
            refresh_token=SecretStr("test-refresh-token"),
            scopes=["read:user", "write:data"],
            title="Test OAuth Credentials",
        )

        # Test the block
        block = OAuthScopedBlock()
        outputs = {}
        async for name, value in block.run(
            OAuthScopedBlock.Input(
                credentials={  # type: ignore
                    "provider": "test_oauth_provider",
                    "id": "test-oauth-creds",
                    "type": "oauth2",
                },
                resource="user/profile",
            ),
            credentials=test_oauth_creds,
        ):
            outputs[name] = value

        assert outputs["data"] == "OAuth data for user/profile"
        assert set(outputs["scopes_used"]) == {"read:user", "write:data"}
        assert outputs["token_info"]["has_token"] is True
        assert outputs["token_info"]["expires_at"] is None
        assert outputs["token_info"]["has_refresh"] is True

    @pytest.mark.asyncio
    async def test_mixed_auth_block(self):
        """Test block that supports both OAuth2 and API key authentication."""
        # No need to import these again, already imported at top

        # Create provider supporting both auth types
        # Create provider supporting API key auth
        # In real usage, you would add OAuth support with .with_oauth()
        mixed_provider = (
            ProviderBuilder("mixed_auth_provider")
            .with_api_key("MIXED_API_KEY", "Mixed Provider API Key")
            .with_base_cost(8, BlockCostType.RUN)
            .build()
        )

        class MixedAuthBlock(Block):
            """Block supporting multiple authentication methods."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = mixed_provider.credentials_field(
                    description="API key or OAuth2 credentials",
                    supported_credential_types=["api_key", "oauth2"],
                )
                operation: str = SchemaField(description="Operation to perform")

            class Output(BlockSchema):
                result: str = SchemaField(description="Operation result")
                auth_type: str = SchemaField(description="Authentication type used")
                auth_details: dict[str, Any] = SchemaField(description="Auth details")

            def __init__(self):
                super().__init__(
                    id="mixed-auth-block",
                    description="Block supporting OAuth2 and API key",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=MixedAuthBlock.Input,
                    output_schema=MixedAuthBlock.Output,
                )

            async def run(
                self,
                input_data: Input,
                *,
                credentials: Union[APIKeyCredentials, OAuth2Credentials],
                **kwargs,
            ) -> BlockOutput:
                # Handle different credential types
                if isinstance(credentials, APIKeyCredentials):
                    auth_type = "api_key"
                    auth_details = {
                        "has_key": bool(credentials.api_key.get_secret_value()),
                        "key_prefix": credentials.api_key.get_secret_value()[:5]
                        + "...",
                    }
                elif isinstance(credentials, OAuth2Credentials):
                    auth_type = "oauth2"
                    auth_details = {
                        "has_token": bool(credentials.access_token.get_secret_value()),
                        "scopes": credentials.scopes or [],
                    }
                else:
                    auth_type = "unknown"
                    auth_details = {}

                yield "result", f"Performed {input_data.operation} with {auth_type}"
                yield "auth_type", auth_type
                yield "auth_details", auth_details

        # Test with API key
        api_creds = APIKeyCredentials(
            id="mixed-api-creds",
            provider="mixed_auth_provider",
            api_key=SecretStr("sk-1234567890"),
            title="Mixed API Key",
        )

        block = MixedAuthBlock()
        outputs = {}
        async for name, value in block.run(
            MixedAuthBlock.Input(
                credentials={  # type: ignore
                    "provider": "mixed_auth_provider",
                    "id": "mixed-api-creds",
                    "type": "api_key",
                },
                operation="fetch_data",
            ),
            credentials=api_creds,
        ):
            outputs[name] = value

        assert outputs["auth_type"] == "api_key"
        assert outputs["result"] == "Performed fetch_data with api_key"
        assert outputs["auth_details"]["key_prefix"] == "sk-12..."

        # Test with OAuth2
        oauth_creds = OAuth2Credentials(
            id="mixed-oauth-creds",
            provider="mixed_auth_provider",
            access_token=SecretStr("oauth-token-123"),
            scopes=["full_access"],
            title="Mixed OAuth",
        )

        outputs = {}
        async for name, value in block.run(
            MixedAuthBlock.Input(
                credentials={  # type: ignore
                    "provider": "mixed_auth_provider",
                    "id": "mixed-oauth-creds",
                    "type": "oauth2",
                },
                operation="update_data",
            ),
            credentials=oauth_creds,
        ):
            outputs[name] = value

        assert outputs["auth_type"] == "oauth2"
        assert outputs["result"] == "Performed update_data with oauth2"
        assert outputs["auth_details"]["scopes"] == ["full_access"]
    @pytest.mark.asyncio
    async def test_multiple_credentials_block(self):
        """Test block requiring multiple different credentials."""
        from backend.sdk import ProviderBuilder

        # Create multiple providers
        primary_provider = (
            ProviderBuilder("primary_service")
            .with_api_key("PRIMARY_API_KEY", "Primary Service Key")
            .build()
        )

        # For testing purposes, using API key instead of OAuth handler
        secondary_provider = (
            ProviderBuilder("secondary_service")
            .with_api_key("SECONDARY_API_KEY", "Secondary Service Key")
            .build()
        )

        class MultiCredentialBlock(Block):
            """Block requiring credentials from multiple services."""

            class Input(BlockSchema):
                primary_credentials: CredentialsMetaInput = (
                    primary_provider.credentials_field(
                        description="Primary service API key"
                    )
                )
                secondary_credentials: CredentialsMetaInput = (
                    secondary_provider.credentials_field(
                        description="Secondary service OAuth"
                    )
                )
                merge_data: bool = SchemaField(
                    description="Whether to merge data from both services",
                    default=True,
                )

            class Output(BlockSchema):
                primary_data: str = SchemaField(description="Data from primary service")
                secondary_data: str = SchemaField(
                    description="Data from secondary service"
                )
                merged_result: Optional[str] = SchemaField(
                    description="Merged data if requested"
                )

            def __init__(self):
                super().__init__(
                    id="multi-credential-block",
                    description="Block using multiple credentials",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=MultiCredentialBlock.Input,
                    output_schema=MultiCredentialBlock.Output,
                )

            async def run(
                self,
                input_data: Input,
                *,
                primary_credentials: APIKeyCredentials,
                secondary_credentials: OAuth2Credentials,
                **kwargs,
            ) -> BlockOutput:
                # Simulate fetching data with primary API key
                primary_data = f"Primary data using {primary_credentials.provider}"
                yield "primary_data", primary_data

                # Simulate fetching data with secondary OAuth
                secondary_data = f"Secondary data with {len(secondary_credentials.scopes or [])} scopes"
                yield "secondary_data", secondary_data

                # Merge if requested
                if input_data.merge_data:
                    merged = f"{primary_data} + {secondary_data}"
                    yield "merged_result", merged
                else:
                    yield "merged_result", None

        # Create test credentials
        primary_creds = APIKeyCredentials(
            id="primary-creds",
            provider="primary_service",
            api_key=SecretStr("primary-key-123"),
            title="Primary Key",
        )

        secondary_creds = OAuth2Credentials(
            id="secondary-creds",
            provider="secondary_service",
            access_token=SecretStr("secondary-token"),
            scopes=["read", "write"],
            title="Secondary OAuth",
        )

        # Test the block
        block = MultiCredentialBlock()
        outputs = {}

        # Note: In real usage, the framework would inject the correct credentials
        # based on the field names. Here we simulate that behavior.
        async for name, value in block.run(
            MultiCredentialBlock.Input(
                primary_credentials={  # type: ignore
                    "provider": "primary_service",
                    "id": "primary-creds",
                    "type": "api_key",
                },
                secondary_credentials={  # type: ignore
                    "provider": "secondary_service",
                    "id": "secondary-creds",
                    "type": "oauth2",
                },
                merge_data=True,
            ),
            primary_credentials=primary_creds,
            secondary_credentials=secondary_creds,
        ):
            outputs[name] = value

        assert outputs["primary_data"] == "Primary data using primary_service"
        assert outputs["secondary_data"] == "Secondary data with 2 scopes"
        assert "Primary data" in outputs["merged_result"]
        assert "Secondary data" in outputs["merged_result"]
    @pytest.mark.asyncio
    async def test_oauth_scope_validation(self):
        """Test OAuth scope validation and handling."""
        from backend.sdk import OAuth2Credentials, ProviderBuilder

        # Provider with specific required scopes
        # For testing OAuth scope validation
        scoped_provider = (
            ProviderBuilder("scoped_oauth_service")
            .with_api_key("SCOPED_OAUTH_KEY", "Scoped OAuth Service")
            .build()
        )

        class ScopeValidationBlock(Block):
            """Block that validates OAuth scopes."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = scoped_provider.credentials_field(
                    description="OAuth credentials with specific scopes",
                    scopes=["user:read", "user:write"],  # Required scopes
                )
                require_admin: bool = SchemaField(
                    description="Whether admin scopes are required",
                    default=False,
                )

            class Output(BlockSchema):
                allowed_operations: list[str] = SchemaField(
                    description="Operations allowed with current scopes"
                )
                missing_scopes: list[str] = SchemaField(
                    description="Scopes that are missing for full access"
                )
                has_required_scopes: bool = SchemaField(
                    description="Whether all required scopes are present"
                )

            def __init__(self):
                super().__init__(
                    id="scope-validation-block",
                    description="Block that validates OAuth scopes",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=ScopeValidationBlock.Input,
                    output_schema=ScopeValidationBlock.Output,
                )

            async def run(
                self, input_data: Input, *, credentials: OAuth2Credentials, **kwargs
            ) -> BlockOutput:
                current_scopes = set(credentials.scopes or [])
                required_scopes = {"user:read", "user:write"}

                if input_data.require_admin:
                    required_scopes.update({"admin:read", "admin:write"})

                # Determine allowed operations based on scopes
                allowed_ops = []
                if "user:read" in current_scopes:
                    allowed_ops.append("read_user_data")
                if "user:write" in current_scopes:
                    allowed_ops.append("update_user_data")
                if "admin:read" in current_scopes:
                    allowed_ops.append("read_admin_data")
                if "admin:write" in current_scopes:
                    allowed_ops.append("update_admin_data")

                missing = list(required_scopes - current_scopes)
                has_required = len(missing) == 0

                yield "allowed_operations", allowed_ops
                yield "missing_scopes", missing
                yield "has_required_scopes", has_required

        # Test with partial scopes
        partial_creds = OAuth2Credentials(
            id="partial-oauth",
            provider="scoped_oauth_service",
            access_token=SecretStr("partial-token"),
            scopes=["user:read"],  # Only one of the required scopes
            title="Partial OAuth",
        )

        block = ScopeValidationBlock()
        outputs = {}
        async for name, value in block.run(
            ScopeValidationBlock.Input(
                credentials={  # type: ignore
                    "provider": "scoped_oauth_service",
                    "id": "partial-oauth",
                    "type": "oauth2",
                },
                require_admin=False,
            ),
            credentials=partial_creds,
        ):
            outputs[name] = value

        assert outputs["allowed_operations"] == ["read_user_data"]
        assert "user:write" in outputs["missing_scopes"]
        assert outputs["has_required_scopes"] is False

        # Test with all required scopes
        full_creds = OAuth2Credentials(
            id="full-oauth",
            provider="scoped_oauth_service",
            access_token=SecretStr("full-token"),
            scopes=["user:read", "user:write", "admin:read"],
            title="Full OAuth",
        )

        outputs = {}
        async for name, value in block.run(
            ScopeValidationBlock.Input(
                credentials={  # type: ignore
                    "provider": "scoped_oauth_service",
                    "id": "full-oauth",
                    "type": "oauth2",
                },
                require_admin=False,
            ),
            credentials=full_creds,
        ):
            outputs[name] = value

        assert set(outputs["allowed_operations"]) == {
            "read_user_data",
            "update_user_data",
            "read_admin_data",
        }
        assert outputs["missing_scopes"] == []
        assert outputs["has_required_scopes"] is True


if __name__ == "__main__":
    pytest.main([__file__, "-v"])
@@ -1,150 +0,0 @@
"""
Tests for the SDK's integration patching mechanism.

This test suite verifies that the AutoRegistry correctly patches
existing integration points to include SDK-registered components.
"""

from unittest.mock import MagicMock, Mock, patch

import pytest

from backend.integrations.providers import ProviderName
from backend.sdk import (
    AutoRegistry,
    BaseOAuthHandler,
    BaseWebhooksManager,
    ProviderBuilder,
)


class MockOAuthHandler(BaseOAuthHandler):
    """Mock OAuth handler for testing."""

    PROVIDER_NAME = ProviderName.GITHUB

    @classmethod
    async def authorize(cls, *args, **kwargs):
        return "mock_auth"


class MockWebhookManager(BaseWebhooksManager):
    """Mock webhook manager for testing."""

    PROVIDER_NAME = ProviderName.GITHUB

    @classmethod
    async def validate_payload(cls, webhook, request):
        return {}, "test_event"

    async def _register_webhook(self, *args, **kwargs):
        return "mock_webhook_id", {}

    async def _deregister_webhook(self, *args, **kwargs):
        pass


class TestWebhookPatching:
    """Test webhook manager patching functionality."""

    def setup_method(self):
        """Clear registry."""
        AutoRegistry.clear()

    def test_webhook_manager_patching(self):
        """Test that webhook managers are correctly patched."""

        # Mock the original load_webhook_managers function
        def mock_load_webhook_managers():
            return {
                "existing_webhook": Mock(spec=BaseWebhooksManager),
            }

        # Register a provider with webhooks
        (
            ProviderBuilder("webhook_provider")
            .with_webhook_manager(MockWebhookManager)
            .build()
        )

        # Mock the webhooks module
        mock_webhooks_module = MagicMock()
        mock_webhooks_module.load_webhook_managers = mock_load_webhook_managers

        with patch.dict(
            "sys.modules", {"backend.integrations.webhooks": mock_webhooks_module}
        ):
            AutoRegistry.patch_integrations()

            # Call the patched function
            result = mock_webhooks_module.load_webhook_managers()

            # Original webhook should still exist
            assert "existing_webhook" in result

            # New webhook should be added
            assert "webhook_provider" in result
            assert result["webhook_provider"] == MockWebhookManager

    def test_webhook_patching_no_original_function(self):
        """Test webhook patching when load_webhook_managers doesn't exist."""
        # Mock webhooks module without load_webhook_managers
        mock_webhooks_module = MagicMock(spec=[])

        # Register a provider
        (
            ProviderBuilder("test_provider")
            .with_webhook_manager(MockWebhookManager)
            .build()
        )

        with patch.dict(
            "sys.modules", {"backend.integrations.webhooks": mock_webhooks_module}
        ):
            # Should not raise an error
            AutoRegistry.patch_integrations()

            # Function should not be added if it didn't exist
            assert not hasattr(mock_webhooks_module, "load_webhook_managers")


class TestPatchingIntegration:
    """Test the complete patching integration flow."""

    def setup_method(self):
        """Clear registry."""
        AutoRegistry.clear()

    def test_complete_provider_registration_and_patching(self):
        """Test the complete flow from provider registration to patching."""
        # Mock webhooks module
        mock_webhooks = MagicMock()
        mock_webhooks.load_webhook_managers = lambda: {"original": Mock()}

        # Create a fully featured provider
        (
            ProviderBuilder("complete_provider")
            .with_api_key("COMPLETE_KEY", "Complete API Key")
            .with_oauth(MockOAuthHandler, scopes=["read", "write"])
            .with_webhook_manager(MockWebhookManager)
            .build()
        )

        # Apply patches
        with patch.dict(
            "sys.modules",
            {
                "backend.integrations.webhooks": mock_webhooks,
            },
        ):
            AutoRegistry.patch_integrations()

            # Verify webhook patching
            webhook_result = mock_webhooks.load_webhook_managers()
            assert "complete_provider" in webhook_result
            assert webhook_result["complete_provider"] == MockWebhookManager
            assert "original" in webhook_result  # Original preserved


if __name__ == "__main__":
    pytest.main([__file__, "-v"])
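The tests above pin down the observable behavior of AutoRegistry.patch_integrations for webhooks: the patched load_webhook_managers returns the original managers plus every SDK-registered one, and nothing is added when the original function is absent. A minimal sketch of that wrap-and-merge pattern, assuming the real implementation in backend/sdk/registry.py works roughly this way (it may differ in detail):

    # Illustrative sketch only; not the actual AutoRegistry implementation.
    def patch_load_webhook_managers(webhooks_module, registered_managers: dict) -> None:
        original = getattr(webhooks_module, "load_webhook_managers", None)
        if original is None:
            # Mirrors test_webhook_patching_no_original_function: nothing to wrap.
            return

        def patched():
            managers = original()                 # keep existing managers
            managers.update(registered_managers)  # add SDK-registered ones
            return managers

        webhooks_module.load_webhook_managers = patched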
@@ -1,482 +0,0 @@
"""
Tests for the SDK auto-registration system via AutoRegistry.

This test suite verifies:
1. Provider registration and retrieval
2. OAuth handler registration via patches
3. Webhook manager registration via patches
4. Credential registration and management
5. Block configuration association
"""

from unittest.mock import MagicMock, Mock, patch

import pytest

from backend.integrations.providers import ProviderName
from backend.sdk import (
    APIKeyCredentials,
    AutoRegistry,
    BaseOAuthHandler,
    BaseWebhooksManager,
    Block,
    BlockConfiguration,
    Provider,
    ProviderBuilder,
)


class TestAutoRegistry:
    """Test the AutoRegistry functionality."""

    def setup_method(self):
        """Clear registry before each test."""
        AutoRegistry.clear()

    def test_provider_registration(self):
        """Test that providers can be registered and retrieved."""
        # Create a test provider
        provider = Provider(
            name="test_provider",
            oauth_handler=None,
            webhook_manager=None,
            default_credentials=[],
            base_costs=[],
            supported_auth_types={"api_key"},
        )

        # Register it
        AutoRegistry.register_provider(provider)

        # Verify it's registered
        assert "test_provider" in AutoRegistry._providers
        assert AutoRegistry.get_provider("test_provider") == provider

    def test_provider_with_oauth(self):
        """Test provider registration with OAuth handler."""

        # Create a mock OAuth handler
        class TestOAuthHandler(BaseOAuthHandler):
            PROVIDER_NAME = ProviderName.GITHUB

        from backend.sdk.provider import OAuthConfig

        provider = Provider(
            name="oauth_provider",
            oauth_config=OAuthConfig(oauth_handler=TestOAuthHandler),
            webhook_manager=None,
            default_credentials=[],
            base_costs=[],
            supported_auth_types={"oauth2"},
        )

        AutoRegistry.register_provider(provider)

        # Verify OAuth handler is registered
        assert "oauth_provider" in AutoRegistry._oauth_handlers
        assert AutoRegistry._oauth_handlers["oauth_provider"] == TestOAuthHandler

    def test_provider_with_webhook_manager(self):
        """Test provider registration with webhook manager."""

        # Create a mock webhook manager
        class TestWebhookManager(BaseWebhooksManager):
            PROVIDER_NAME = ProviderName.GITHUB

        provider = Provider(
            name="webhook_provider",
            oauth_handler=None,
            webhook_manager=TestWebhookManager,
            default_credentials=[],
            base_costs=[],
            supported_auth_types={"api_key"},
        )

        AutoRegistry.register_provider(provider)

        # Verify webhook manager is registered
        assert "webhook_provider" in AutoRegistry._webhook_managers
        assert AutoRegistry._webhook_managers["webhook_provider"] == TestWebhookManager

    def test_default_credentials_registration(self):
        """Test that default credentials are registered."""
        # Create test credentials
        from backend.sdk import SecretStr

        cred1 = APIKeyCredentials(
            id="test-cred-1",
            provider="test_provider",
            api_key=SecretStr("test-key-1"),
            title="Test Credential 1",
        )
        cred2 = APIKeyCredentials(
            id="test-cred-2",
            provider="test_provider",
            api_key=SecretStr("test-key-2"),
            title="Test Credential 2",
        )

        provider = Provider(
            name="test_provider",
            oauth_handler=None,
            webhook_manager=None,
            default_credentials=[cred1, cred2],
            base_costs=[],
            supported_auth_types={"api_key"},
        )

        AutoRegistry.register_provider(provider)

        # Verify credentials are registered
        all_creds = AutoRegistry.get_all_credentials()
        assert cred1 in all_creds
        assert cred2 in all_creds

    def test_api_key_registration(self):
        """Test API key environment variable registration."""
        import os

        # Set up a test environment variable
        os.environ["TEST_API_KEY"] = "test-api-key-value"

        try:
            AutoRegistry.register_api_key("test_provider", "TEST_API_KEY")

            # Verify the mapping is stored
            assert AutoRegistry._api_key_mappings["test_provider"] == "TEST_API_KEY"

            # Verify a credential was created
            all_creds = AutoRegistry.get_all_credentials()
            test_cred = next(
                (c for c in all_creds if c.id == "test_provider-default"), None
            )
            assert test_cred is not None
            assert test_cred.provider == "test_provider"
            assert test_cred.api_key.get_secret_value() == "test-api-key-value"  # type: ignore

        finally:
            # Clean up
            del os.environ["TEST_API_KEY"]

    def test_get_oauth_handlers(self):
        """Test retrieving all OAuth handlers."""

        # Register multiple providers with OAuth
        class TestOAuth1(BaseOAuthHandler):
            PROVIDER_NAME = ProviderName.GITHUB

        class TestOAuth2(BaseOAuthHandler):
            PROVIDER_NAME = ProviderName.GOOGLE

        from backend.sdk.provider import OAuthConfig

        provider1 = Provider(
            name="provider1",
            oauth_config=OAuthConfig(oauth_handler=TestOAuth1),
            webhook_manager=None,
            default_credentials=[],
            base_costs=[],
            supported_auth_types={"oauth2"},
        )

        provider2 = Provider(
            name="provider2",
            oauth_config=OAuthConfig(oauth_handler=TestOAuth2),
            webhook_manager=None,
            default_credentials=[],
            base_costs=[],
            supported_auth_types={"oauth2"},
        )

        AutoRegistry.register_provider(provider1)
        AutoRegistry.register_provider(provider2)

        handlers = AutoRegistry.get_oauth_handlers()
        assert "provider1" in handlers
        assert "provider2" in handlers
        assert handlers["provider1"] == TestOAuth1
        assert handlers["provider2"] == TestOAuth2

    def test_block_configuration_registration(self):
        """Test registering block configuration."""

        # Create a test block class
        class TestBlock(Block):
            pass

        config = BlockConfiguration(
            provider="test_provider",
            costs=[],
            default_credentials=[],
            webhook_manager=None,
            oauth_handler=None,
        )

        AutoRegistry.register_block_configuration(TestBlock, config)

        # Verify it's registered
        assert TestBlock in AutoRegistry._block_configurations
        assert AutoRegistry._block_configurations[TestBlock] == config

    def test_clear_registry(self):
        """Test clearing all registrations."""
        # Add some registrations
        provider = Provider(
            name="test_provider",
            oauth_handler=None,
            webhook_manager=None,
            default_credentials=[],
            base_costs=[],
            supported_auth_types={"api_key"},
        )
        AutoRegistry.register_provider(provider)
        AutoRegistry.register_api_key("test", "TEST_KEY")

        # Clear everything
        AutoRegistry.clear()

        # Verify everything is cleared
        assert len(AutoRegistry._providers) == 0
        assert len(AutoRegistry._default_credentials) == 0
        assert len(AutoRegistry._oauth_handlers) == 0
        assert len(AutoRegistry._webhook_managers) == 0
        assert len(AutoRegistry._block_configurations) == 0
        assert len(AutoRegistry._api_key_mappings) == 0


class TestAutoRegistryPatching:
    """Test the integration patching functionality."""

    def setup_method(self):
        """Clear registry before each test."""
        AutoRegistry.clear()

    @patch("backend.integrations.webhooks.load_webhook_managers")
    def test_webhook_manager_patching(self, mock_load_managers):
        """Test that webhook managers are patched into the system."""
        # Set up the mock to return an empty dict
        mock_load_managers.return_value = {}

        # Create a test webhook manager
        class TestWebhookManager(BaseWebhooksManager):
            PROVIDER_NAME = ProviderName.GITHUB

        # Register a provider with webhooks
        provider = Provider(
            name="webhook_provider",
            oauth_handler=None,
            webhook_manager=TestWebhookManager,
            default_credentials=[],
            base_costs=[],
            supported_auth_types={"api_key"},
        )

        AutoRegistry.register_provider(provider)

        # Mock the webhooks module
        mock_webhooks = MagicMock()
        mock_webhooks.load_webhook_managers = mock_load_managers

        with patch.dict(
            "sys.modules", {"backend.integrations.webhooks": mock_webhooks}
        ):
            # Apply patches
            AutoRegistry.patch_integrations()

            # Call the patched function
            result = mock_webhooks.load_webhook_managers()

            # Verify our webhook manager is included
            assert "webhook_provider" in result
            assert result["webhook_provider"] == TestWebhookManager


class TestProviderBuilder:
    """Test the ProviderBuilder fluent API."""

    def setup_method(self):
        """Clear registry before each test."""
        AutoRegistry.clear()

    def test_basic_provider_builder(self):
        """Test building a basic provider."""
        provider = (
            ProviderBuilder("test_provider")
            .with_api_key("TEST_API_KEY", "Test API Key")
            .build()
        )

        assert provider.name == "test_provider"
        assert "api_key" in provider.supported_auth_types
        assert AutoRegistry.get_provider("test_provider") == provider

    def test_provider_builder_with_oauth(self):
        """Test building a provider with OAuth."""

        class TestOAuth(BaseOAuthHandler):
            PROVIDER_NAME = ProviderName.GITHUB

        provider = (
            ProviderBuilder("oauth_test")
            .with_oauth(TestOAuth, scopes=["read", "write"])
            .build()
        )

        assert provider.oauth_config is not None
        assert provider.oauth_config.oauth_handler == TestOAuth
        assert "oauth2" in provider.supported_auth_types

    def test_provider_builder_with_webhook(self):
        """Test building a provider with webhook manager."""

        class TestWebhook(BaseWebhooksManager):
            PROVIDER_NAME = ProviderName.GITHUB

        provider = (
            ProviderBuilder("webhook_test").with_webhook_manager(TestWebhook).build()
        )

        assert provider.webhook_manager == TestWebhook

    def test_provider_builder_with_base_cost(self):
        """Test building a provider with base costs."""
        from backend.data.cost import BlockCostType

        provider = (
            ProviderBuilder("cost_test")
            .with_base_cost(10, BlockCostType.RUN)
            .with_base_cost(5, BlockCostType.BYTE)
            .build()
        )

        assert len(provider.base_costs) == 2
        assert provider.base_costs[0].cost_amount == 10
        assert provider.base_costs[0].cost_type == BlockCostType.RUN
        assert provider.base_costs[1].cost_amount == 5
        assert provider.base_costs[1].cost_type == BlockCostType.BYTE

    def test_provider_builder_with_api_client(self):
        """Test building a provider with API client factory."""

        def mock_client_factory():
            return Mock()

        provider = (
            ProviderBuilder("client_test").with_api_client(mock_client_factory).build()
        )

        assert provider._api_client_factory == mock_client_factory

    def test_provider_builder_with_error_handler(self):
        """Test building a provider with error handler."""

        def mock_error_handler(exc: Exception) -> str:
            return f"Error: {str(exc)}"

        provider = (
            ProviderBuilder("error_test").with_error_handler(mock_error_handler).build()
        )

        assert provider._error_handler == mock_error_handler

    def test_provider_builder_complete_example(self):
        """Test building a complete provider with all features."""
        from backend.data.cost import BlockCostType

        class TestOAuth(BaseOAuthHandler):
            PROVIDER_NAME = ProviderName.GITHUB

        class TestWebhook(BaseWebhooksManager):
            PROVIDER_NAME = ProviderName.GITHUB

        def client_factory():
            return Mock()

        def error_handler(exc):
            return str(exc)

        provider = (
            ProviderBuilder("complete_test")
            .with_api_key("COMPLETE_API_KEY", "Complete API Key")
            .with_oauth(TestOAuth, scopes=["read"])
            .with_webhook_manager(TestWebhook)
            .with_base_cost(100, BlockCostType.RUN)
            .with_api_client(client_factory)
            .with_error_handler(error_handler)
            .with_config(custom_setting="value")
            .build()
        )

        # Verify all settings
        assert provider.name == "complete_test"
        assert "api_key" in provider.supported_auth_types
        assert "oauth2" in provider.supported_auth_types
        assert provider.oauth_config is not None
        assert provider.oauth_config.oauth_handler == TestOAuth
        assert provider.webhook_manager == TestWebhook
        assert len(provider.base_costs) == 1
        assert provider._api_client_factory == client_factory
        assert provider._error_handler == error_handler
        assert provider.get_config("custom_setting") == "value"  # from with_config

        # Verify it's registered
        assert AutoRegistry.get_provider("complete_test") == provider
        assert "complete_test" in AutoRegistry._oauth_handlers
        assert "complete_test" in AutoRegistry._webhook_managers


class TestSDKImports:
    """Test that all expected exports are available from the SDK."""

    def test_core_block_imports(self):
        """Test core block system imports."""
        from backend.sdk import Block, BlockCategory

        # Just verify they're importable
        assert Block is not None
        assert BlockCategory is not None

    def test_schema_imports(self):
        """Test schema and model imports."""
        from backend.sdk import APIKeyCredentials, SchemaField

        assert SchemaField is not None
        assert APIKeyCredentials is not None

    def test_type_alias_imports(self):
        """Test type alias imports are removed."""
        # Type aliases have been removed from SDK
        # Users should import from typing or use built-in types directly
        pass

    def test_cost_system_imports(self):
        """Test cost system imports."""
        from backend.sdk import BlockCost, BlockCostType

        assert BlockCost is not None
        assert BlockCostType is not None

    def test_utility_imports(self):
        """Test utility imports."""
        from backend.sdk import BaseModel, Requests, json

        assert json is not None
        assert BaseModel is not None
        assert Requests is not None

    def test_integration_imports(self):
        """Test integration imports."""
        from backend.sdk import ProviderName

        assert ProviderName is not None

    def test_sdk_component_imports(self):
        """Test SDK-specific component imports."""
        from backend.sdk import AutoRegistry, ProviderBuilder

        assert AutoRegistry is not None
        assert ProviderBuilder is not None


if __name__ == "__main__":
    pytest.main([__file__, "-v"])
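
The `TestProviderBuilder` cases above lean on a fluent, self-registering builder: every `with_*` call returns the builder, and `build()` registers the provider as a side effect. A minimal sketch of that pattern, with `MiniProvider`/`MiniBuilder` as hypothetical stand-ins for the real `backend.sdk` classes:

```python
# Illustrative fluent-builder sketch; MiniProvider and MiniBuilder are
# hypothetical stand-ins, not backend.sdk's Provider/ProviderBuilder.
from dataclasses import dataclass, field


@dataclass
class MiniProvider:
    name: str
    api_key_env: str | None = None
    webhook_manager: type | None = None
    supported_auth_types: set[str] = field(default_factory=set)


class MiniBuilder:
    _registry: dict[str, MiniProvider] = {}

    def __init__(self, name: str):
        self._provider = MiniProvider(name=name)

    def with_api_key(self, env_var: str, title: str) -> "MiniBuilder":
        self._provider.api_key_env = env_var
        self._provider.supported_auth_types.add("api_key")
        return self  # returning self is what makes the API chainable

    def with_webhook_manager(self, manager: type) -> "MiniBuilder":
        self._provider.webhook_manager = manager
        return self

    def build(self) -> MiniProvider:
        # build() registers as a side effect, so tests can assert on the registry
        MiniBuilder._registry[self._provider.name] = self._provider
        return self._provider


provider = MiniBuilder("demo").with_api_key("DEMO_KEY", "Demo Key").build()
assert MiniBuilder._registry["demo"] is provider
```
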
@@ -1,506 +0,0 @@
"""
Tests for SDK webhook functionality.

This test suite verifies webhook blocks and webhook manager integration.
"""

from enum import Enum

import pytest

from backend.integrations.providers import ProviderName
from backend.sdk import (
    APIKeyCredentials,
    AutoRegistry,
    BaseModel,
    BaseWebhooksManager,
    Block,
    BlockCategory,
    BlockOutput,
    BlockSchema,
    BlockWebhookConfig,
    CredentialsField,
    CredentialsMetaInput,
    Field,
    ProviderBuilder,
    SchemaField,
    SecretStr,
)


class TestWebhookTypes(str, Enum):
    """Test webhook event types."""

    CREATED = "created"
    UPDATED = "updated"
    DELETED = "deleted"


class TestWebhooksManager(BaseWebhooksManager):
    """Test webhook manager implementation."""

    PROVIDER_NAME = ProviderName.GITHUB  # Reuse for testing

    class WebhookType(str, Enum):
        TEST = "test"

    @classmethod
    async def validate_payload(cls, webhook, request):
        """Validate incoming webhook payload."""
        # Mock implementation
        payload = {"test": "data"}
        event_type = "test_event"
        return payload, event_type

    async def _register_webhook(
        self,
        credentials,
        webhook_type: str,
        resource: str,
        events: list[str],
        ingress_url: str,
        secret: str,
    ) -> tuple[str, dict]:
        """Register webhook with external service."""
        # Mock implementation
        webhook_id = f"test_webhook_{resource}"
        config = {
            "webhook_type": webhook_type,
            "resource": resource,
            "events": events,
            "url": ingress_url,
        }
        return webhook_id, config

    async def _deregister_webhook(self, webhook, credentials) -> None:
        """Deregister webhook from external service."""
        # Mock implementation
        pass

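
# Note on the manager contract this mock exercises (inferred from the code
# above; the real BaseWebhooksManager in backend.integrations.webhooks may
# enforce more than the mock shows):
# - validate_payload() is a classmethod that returns a (payload, event_type)
#   tuple for an incoming request;
# - _register_webhook() returns a (webhook_id, config) tuple after registering
#   with the external service;
# - _deregister_webhook() returns None and is expected to clean up remotely.
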

class TestWebhookBlock(Block):
    """Test webhook block implementation."""

    class Input(BlockSchema):
        credentials: CredentialsMetaInput = CredentialsField(
            provider="test_webhooks",
            supported_credential_types={"api_key"},
            description="Webhook service credentials",
        )
        webhook_url: str = SchemaField(
            description="URL to receive webhooks",
        )
        resource_id: str = SchemaField(
            description="Resource to monitor",
        )
        events: list[TestWebhookTypes] = SchemaField(
            description="Events to listen for",
            default=[TestWebhookTypes.CREATED],
        )
        payload: dict = SchemaField(
            description="Webhook payload",
            default={},
        )

    class Output(BlockSchema):
        webhook_id: str = SchemaField(description="Registered webhook ID")
        is_active: bool = SchemaField(description="Webhook is active")
        event_count: int = SchemaField(description="Number of events configured")

    def __init__(self):
        super().__init__(
            id="test-webhook-block",
            description="Test webhook block",
            categories={BlockCategory.DEVELOPER_TOOLS},
            input_schema=TestWebhookBlock.Input,
            output_schema=TestWebhookBlock.Output,
            webhook_config=BlockWebhookConfig(
                provider="test_webhooks",  # type: ignore
                webhook_type="test",
                resource_format="{resource_id}",
            ),
        )

    async def run(
        self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
    ) -> BlockOutput:
        # Simulate webhook registration
        webhook_id = f"webhook_{input_data.resource_id}"

        yield "webhook_id", webhook_id
        yield "is_active", True
        yield "event_count", len(input_data.events)


class TestWebhookBlockCreation:
    """Test creating webhook blocks with the SDK."""

    def setup_method(self):
        """Set up test environment."""
        AutoRegistry.clear()

        # Register a provider with webhook support
        self.provider = (
            ProviderBuilder("test_webhooks")
            .with_api_key("TEST_WEBHOOK_KEY", "Test Webhook API Key")
            .with_webhook_manager(TestWebhooksManager)
            .build()
        )

    @pytest.mark.asyncio
    async def test_basic_webhook_block(self):
        """Test creating a basic webhook block."""
        block = TestWebhookBlock()

        # Verify block configuration
        assert block.webhook_config is not None
        assert block.webhook_config.provider == "test_webhooks"
        assert block.webhook_config.webhook_type == "test"
        assert "{resource_id}" in block.webhook_config.resource_format  # type: ignore

        # Test block execution
        test_creds = APIKeyCredentials(
            id="test-webhook-creds",
            provider="test_webhooks",
            api_key=SecretStr("test-key"),
            title="Test Webhook Key",
        )

        outputs = {}
        async for name, value in block.run(
            TestWebhookBlock.Input(
                credentials={  # type: ignore
                    "provider": "test_webhooks",
                    "id": "test-webhook-creds",
                    "type": "api_key",
                },
                webhook_url="https://example.com/webhook",
                resource_id="resource_123",
                events=[TestWebhookTypes.CREATED, TestWebhookTypes.UPDATED],
            ),
            credentials=test_creds,
        ):
            outputs[name] = value

        assert outputs["webhook_id"] == "webhook_resource_123"
        assert outputs["is_active"] is True
        assert outputs["event_count"] == 2

    @pytest.mark.asyncio
    async def test_webhook_block_with_filters(self):
        """Test webhook block with event filters."""

        class EventFilterModel(BaseModel):
            include_system: bool = Field(default=False)
            severity_levels: list[str] = Field(
                default_factory=lambda: ["info", "warning"]
            )

        class FilteredWebhookBlock(Block):
            """Webhook block with filtering."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = CredentialsField(
                    provider="test_webhooks",
                    supported_credential_types={"api_key"},
                )
                resource: str = SchemaField(description="Resource to monitor")
                filters: EventFilterModel = SchemaField(
                    description="Event filters",
                    default_factory=EventFilterModel,
                )
                payload: dict = SchemaField(
                    description="Webhook payload",
                    default={},
                )

            class Output(BlockSchema):
                webhook_active: bool = SchemaField(description="Webhook active")
                filter_summary: str = SchemaField(description="Active filters")

            def __init__(self):
                super().__init__(
                    id="filtered-webhook-block",
                    description="Webhook with filters",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=FilteredWebhookBlock.Input,
                    output_schema=FilteredWebhookBlock.Output,
                    webhook_config=BlockWebhookConfig(
                        provider="test_webhooks",  # type: ignore
                        webhook_type="filtered",
                        resource_format="{resource}",
                    ),
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                filters = input_data.filters
                filter_parts = []

                if filters.include_system:
                    filter_parts.append("system events")

                filter_parts.append(f"{len(filters.severity_levels)} severity levels")

                yield "webhook_active", True
                yield "filter_summary", ", ".join(filter_parts)

        # Test the block
        block = FilteredWebhookBlock()

        test_creds = APIKeyCredentials(
            id="test-creds",
            provider="test_webhooks",
            api_key=SecretStr("key"),
            title="Test Key",
        )

        # Test with default filters
        outputs = {}
        async for name, value in block.run(
            FilteredWebhookBlock.Input(
                credentials={  # type: ignore
                    "provider": "test_webhooks",
                    "id": "test-creds",
                    "type": "api_key",
                },
                resource="test_resource",
            ),
            credentials=test_creds,
        ):
            outputs[name] = value

        assert outputs["webhook_active"] is True
        assert "2 severity levels" in outputs["filter_summary"]

        # Test with custom filters
        custom_filters = EventFilterModel(
            include_system=True,
            severity_levels=["error", "critical"],
        )

        outputs = {}
        async for name, value in block.run(
            FilteredWebhookBlock.Input(
                credentials={  # type: ignore
                    "provider": "test_webhooks",
                    "id": "test-creds",
                    "type": "api_key",
                },
                resource="test_resource",
                filters=custom_filters,
            ),
            credentials=test_creds,
        ):
            outputs[name] = value

        assert "system events" in outputs["filter_summary"]
        assert "2 severity levels" in outputs["filter_summary"]


class TestWebhookManagerIntegration:
    """Test webhook manager integration with AutoRegistry."""

    def setup_method(self):
        """Clear registry."""
        AutoRegistry.clear()

    def test_webhook_manager_registration(self):
        """Test that webhook managers are properly registered."""

        # Create multiple webhook managers
        class WebhookManager1(BaseWebhooksManager):
            PROVIDER_NAME = ProviderName.GITHUB

        class WebhookManager2(BaseWebhooksManager):
            PROVIDER_NAME = ProviderName.GOOGLE

        # Register providers with webhook managers
        (
            ProviderBuilder("webhook_service_1")
            .with_webhook_manager(WebhookManager1)
            .build()
        )

        (
            ProviderBuilder("webhook_service_2")
            .with_webhook_manager(WebhookManager2)
            .build()
        )

        # Verify registration
        managers = AutoRegistry.get_webhook_managers()
        assert "webhook_service_1" in managers
        assert "webhook_service_2" in managers
        assert managers["webhook_service_1"] == WebhookManager1
        assert managers["webhook_service_2"] == WebhookManager2

    @pytest.mark.asyncio
    async def test_webhook_block_with_provider_manager(self):
        """Test webhook block using a provider's webhook manager."""
        # Register provider with webhook manager
        (
            ProviderBuilder("integrated_webhooks")
            .with_api_key("INTEGRATED_KEY", "Integrated Webhook Key")
            .with_webhook_manager(TestWebhooksManager)
            .build()
        )

        # Create a block that uses this provider
        class IntegratedWebhookBlock(Block):
            """Block using integrated webhook manager."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = CredentialsField(
                    provider="integrated_webhooks",
                    supported_credential_types={"api_key"},
                )
                target: str = SchemaField(description="Webhook target")
                payload: dict = SchemaField(
                    description="Webhook payload",
                    default={},
                )

            class Output(BlockSchema):
                status: str = SchemaField(description="Webhook status")
                manager_type: str = SchemaField(description="Manager type used")

            def __init__(self):
                super().__init__(
                    id="integrated-webhook-block",
                    description="Uses integrated webhook manager",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=IntegratedWebhookBlock.Input,
                    output_schema=IntegratedWebhookBlock.Output,
                    webhook_config=BlockWebhookConfig(
                        provider="integrated_webhooks",  # type: ignore
                        webhook_type=TestWebhooksManager.WebhookType.TEST,
                        resource_format="{target}",
                    ),
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                # Get the webhook manager for this provider
                managers = AutoRegistry.get_webhook_managers()
                manager_class = managers.get("integrated_webhooks")

                yield "status", "configured"
                yield "manager_type", (
                    manager_class.__name__ if manager_class else "none"
                )

        # Test the block
        block = IntegratedWebhookBlock()

        test_creds = APIKeyCredentials(
            id="integrated-creds",
            provider="integrated_webhooks",
            api_key=SecretStr("key"),
            title="Integrated Key",
        )

        outputs = {}
        async for name, value in block.run(
            IntegratedWebhookBlock.Input(
                credentials={  # type: ignore
                    "provider": "integrated_webhooks",
                    "id": "integrated-creds",
                    "type": "api_key",
                },
                target="test_target",
            ),
            credentials=test_creds,
        ):
            outputs[name] = value

        assert outputs["status"] == "configured"
        assert outputs["manager_type"] == "TestWebhooksManager"


class TestWebhookEventHandling:
    """Test webhook event handling in blocks."""

    @pytest.mark.asyncio
    async def test_webhook_event_processing_block(self):
        """Test a block that processes webhook events."""

        class WebhookEventBlock(Block):
            """Block that processes webhook events."""

            class Input(BlockSchema):
                event_type: str = SchemaField(description="Type of webhook event")
                payload: dict = SchemaField(description="Webhook payload")
                verify_signature: bool = SchemaField(
                    description="Whether to verify webhook signature",
                    default=True,
                )

            class Output(BlockSchema):
                processed: bool = SchemaField(description="Event was processed")
                event_summary: str = SchemaField(description="Summary of event")
                action_required: bool = SchemaField(description="Action required")

            def __init__(self):
                super().__init__(
                    id="webhook-event-processor",
                    description="Processes incoming webhook events",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=WebhookEventBlock.Input,
                    output_schema=WebhookEventBlock.Output,
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                # Process based on event type
                event_type = input_data.event_type
                payload = input_data.payload

                if event_type == "created":
                    summary = f"New item created: {payload.get('id', 'unknown')}"
                    action_required = True
                elif event_type == "updated":
                    summary = f"Item updated: {payload.get('id', 'unknown')}"
                    action_required = False
                elif event_type == "deleted":
                    summary = f"Item deleted: {payload.get('id', 'unknown')}"
                    action_required = True
                else:
                    summary = f"Unknown event: {event_type}"
                    action_required = False

                yield "processed", True
                yield "event_summary", summary
                yield "action_required", action_required

        # Test the block with different events
        block = WebhookEventBlock()

        # Test created event
        outputs = {}
        async for name, value in block.run(
            WebhookEventBlock.Input(
                event_type="created",
                payload={"id": "123", "name": "Test Item"},
            )
        ):
            outputs[name] = value

        assert outputs["processed"] is True
        assert "New item created: 123" in outputs["event_summary"]
        assert outputs["action_required"] is True

        # Test updated event
        outputs = {}
        async for name, value in block.run(
            WebhookEventBlock.Input(
                event_type="updated",
                payload={"id": "456", "changes": ["name", "status"]},
            )
        ):
            outputs[name] = value

        assert outputs["processed"] is True
        assert "Item updated: 456" in outputs["event_summary"]
        assert outputs["action_required"] is False


if __name__ == "__main__":
    pytest.main([__file__, "-v"])
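
The tests above repeat the same `async for name, value in block.run(...)` collection loop. If you write similar tests, a small helper keeps them terse. A sketch, assuming only that `run()` is an async generator yielding `(name, value)` pairs; `collect_outputs` is a hypothetical helper, not part of `backend.sdk`:

```python
# Hypothetical test helper: collect an async generator of (name, value)
# pairs into a dict, as the tests above do inline.
import asyncio
from typing import AsyncIterator


async def collect_outputs(stream: AsyncIterator[tuple[str, object]]) -> dict:
    outputs = {}
    async for name, value in stream:
        outputs[name] = value  # later yields with the same name overwrite earlier ones
    return outputs


async def demo() -> None:
    async def fake_run():
        yield "processed", True
        yield "event_summary", "Item updated: 456"

    outputs = await collect_outputs(fake_run())
    assert outputs == {"processed": True, "event_summary": "Item updated: 456"}


asyncio.run(demo())
```
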
@@ -1,323 +0,0 @@
#!/usr/bin/env python3
"""
Test Data Updater for Store Materialized Views

This script updates existing test data to trigger changes in the materialized views:
- mv_agent_run_counts: Updated by creating new AgentGraphExecution records
- mv_review_stats: Updated by creating new StoreListingReview records

Run this after test_data_creator.py to test that materialized views update correctly.
"""

import asyncio
import random
from datetime import datetime, timedelta

import prisma.enums
from faker import Faker
from prisma import Json, Prisma

faker = Faker()


async def main():
    db = Prisma()
    await db.connect()

    print("Starting test data updates for materialized views...")
    print("=" * 60)

    # Get existing data
    users = await db.user.find_many(take=50)
    agent_graphs = await db.agentgraph.find_many(where={"isActive": True}, take=50)
    store_listings = await db.storelisting.find_many(
        where={"hasApprovedVersion": True}, include={"Versions": True}, take=30
    )
    agent_nodes = await db.agentnode.find_many(take=100)

    if not all([users, agent_graphs, store_listings]):
        print(
            "ERROR: Not enough test data found. Please run test_data_creator.py first."
        )
        await db.disconnect()
        return

    print(
        f"Found {len(users)} users, {len(agent_graphs)} graphs, {len(store_listings)} store listings"
    )
    print()

    # 1. Add new AgentGraphExecutions to update mv_agent_run_counts
    print("1. Adding new agent graph executions...")
    print("-" * 40)

    new_executions_count = 0
    execution_data = []

    for graph in random.sample(agent_graphs, min(20, len(agent_graphs))):
        # Add 5-15 new executions per selected graph
        num_new_executions = random.randint(5, 15)
        for _ in range(num_new_executions):
            user = random.choice(users)
            execution_data.append(
                {
                    "agentGraphId": graph.id,
                    "agentGraphVersion": graph.version,
                    "userId": user.id,
                    "executionStatus": random.choice(
                        [
                            prisma.enums.AgentExecutionStatus.COMPLETED,
                            prisma.enums.AgentExecutionStatus.FAILED,
                            prisma.enums.AgentExecutionStatus.RUNNING,
                        ]
                    ),
                    "startedAt": faker.date_time_between(
                        start_date="-7d", end_date="now"
                    ),
                    "stats": Json(
                        {
                            "duration": random.randint(100, 5000),
                            "blocks_executed": random.randint(1, 10),
                        }
                    ),
                }
            )
            new_executions_count += 1

    # Batch create executions
    await db.agentgraphexecution.create_many(data=execution_data)
    print(f"✓ Created {new_executions_count} new executions")

    # Get the created executions for node executions
    recent_executions = await db.agentgraphexecution.find_many(
        take=new_executions_count, order={"createdAt": "desc"}
    )

    # 2. Add corresponding AgentNodeExecutions
    print("\n2. Adding agent node executions...")
    print("-" * 40)

    node_execution_data = []
    for execution in recent_executions:
        # Get nodes for this graph
        graph_nodes = [
            n for n in agent_nodes if n.agentGraphId == execution.agentGraphId
        ]
        if graph_nodes:
            for node in random.sample(graph_nodes, min(3, len(graph_nodes))):
                node_execution_data.append(
                    {
                        "agentGraphExecutionId": execution.id,
                        "agentNodeId": node.id,
                        "executionStatus": execution.executionStatus,
                        "addedTime": datetime.now(),
                        "startedTime": datetime.now()
                        - timedelta(minutes=random.randint(1, 10)),
                        "endedTime": (
                            datetime.now()
                            if execution.executionStatus
                            == prisma.enums.AgentExecutionStatus.COMPLETED
                            else None
                        ),
                    }
                )

    await db.agentnodeexecution.create_many(data=node_execution_data)
    print(f"✓ Created {len(node_execution_data)} node executions")

    # 3. Add new StoreListingReviews to update mv_review_stats
    print("\n3. Adding new store listing reviews...")
    print("-" * 40)

    new_reviews_count = 0

    for listing in store_listings:
        if not listing.Versions:
            continue

        # Get approved versions
        approved_versions = [
            v
            for v in listing.Versions
            if v.submissionStatus == prisma.enums.SubmissionStatus.APPROVED
        ]
        if not approved_versions:
            continue

        # Pick a version to add reviews to
        version = random.choice(approved_versions)

        # Get existing reviews for this version to avoid duplicates
        existing_reviews = await db.storelistingreview.find_many(
            where={"storeListingVersionId": version.id}
        )
        existing_reviewer_ids = {r.reviewByUserId for r in existing_reviews}

        # Find users who haven't reviewed this version yet
        available_reviewers = [u for u in users if u.id not in existing_reviewer_ids]

        if available_reviewers:
            # Add 2-5 new reviews
            num_new_reviews = min(random.randint(2, 5), len(available_reviewers))
            selected_reviewers = random.sample(available_reviewers, num_new_reviews)

            for reviewer in selected_reviewers:
                # Bias towards positive reviews (4-5 stars)
                score = random.choices([1, 2, 3, 4, 5], weights=[5, 10, 20, 40, 25])[0]
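                # With weights [5, 10, 20, 40, 25] the expected score is
                # (1*5 + 2*10 + 3*20 + 4*40 + 5*25) / 100 = 3.7, i.e. skewed
                # towards 4-5 star reviews as the comment above says.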

                await db.storelistingreview.create(
                    data={
                        "storeListingVersionId": version.id,
                        "reviewByUserId": reviewer.id,
                        "score": score,
                        "comments": (
                            faker.text(max_nb_chars=200)
                            if random.random() < 0.7
                            else None
                        ),
                    }
                )
                new_reviews_count += 1

    print(f"✓ Created {new_reviews_count} new reviews")

    # 4. Update some store listing versions (change categories, featured status)
    print("\n4. Updating store listing versions...")
    print("-" * 40)

    updates_count = 0
    for listing in random.sample(store_listings, min(10, len(store_listings))):
        if listing.Versions:
            version = random.choice(listing.Versions)
            if version.submissionStatus == prisma.enums.SubmissionStatus.APPROVED:
                # Toggle featured status or update categories
                new_categories = random.sample(
                    [
                        "productivity",
                        "ai",
                        "automation",
                        "data",
                        "social",
                        "marketing",
                        "development",
                        "analytics",
                    ],
                    k=random.randint(2, 4),
                )

                await db.storelistingversion.update(
                    where={"id": version.id},
                    data={
                        "isFeatured": (
                            not version.isFeatured
                            if random.random() < 0.3
                            else version.isFeatured
                        ),
                        "categories": new_categories,
                        "updatedAt": datetime.now(),
                    },
                )
                updates_count += 1

    print(f"✓ Updated {updates_count} store listing versions")

    # 5. Create some new credit transactions
    print("\n5. Adding credit transactions...")
    print("-" * 40)

    transaction_count = 0
    for user in random.sample(users, min(30, len(users))):
        # Add 1-3 transactions per user
        for _ in range(random.randint(1, 3)):
            transaction_type = random.choice(
                [
                    prisma.enums.CreditTransactionType.USAGE,
                    prisma.enums.CreditTransactionType.TOP_UP,
                    prisma.enums.CreditTransactionType.GRANT,
                ]
            )

            amount = (
                random.randint(10, 500)
                if transaction_type == prisma.enums.CreditTransactionType.TOP_UP
                else -random.randint(1, 50)
            )

            await db.credittransaction.create(
                data={
                    "userId": user.id,
                    "amount": amount,
                    "type": transaction_type,
                    "metadata": Json(
                        {
                            "source": "test_updater",
                            "timestamp": datetime.now().isoformat(),
                        }
                    ),
                }
            )
            transaction_count += 1

    print(f"✓ Created {transaction_count} credit transactions")

    # 6. Refresh materialized views
    print("\n6. Refreshing materialized views...")
    print("-" * 40)

    try:
        await db.execute_raw("SELECT refresh_store_materialized_views();")
        print("✓ Materialized views refreshed successfully")
    except Exception as e:
        print(f"⚠ Warning: Could not refresh materialized views: {e}")
        print(
            "  You may need to refresh them manually with: SELECT refresh_store_materialized_views();"
        )

    # 7. Verify the updates
    print("\n7. Verifying updates...")
    print("-" * 40)

    # Check agent run counts
    run_counts = await db.query_raw(
        "SELECT COUNT(*) as view_count FROM mv_agent_run_counts"
    )
    print(f"✓ mv_agent_run_counts has {run_counts[0]['view_count']} entries")

    # Check review stats
    review_stats = await db.query_raw(
        "SELECT COUNT(*) as view_count FROM mv_review_stats"
    )
    print(f"✓ mv_review_stats has {review_stats[0]['view_count']} entries")

    # Sample some data from the views
    print("\nSample data from materialized views:")

    sample_runs = await db.query_raw(
        "SELECT * FROM mv_agent_run_counts ORDER BY run_count DESC LIMIT 5"
    )
    print("\nTop 5 agents by run count:")
    for row in sample_runs:
        print(f"  - Agent {row['agentGraphId'][:8]}...: {row['run_count']} runs")

    sample_reviews = await db.query_raw(
        "SELECT * FROM mv_review_stats ORDER BY avg_rating DESC NULLS LAST LIMIT 5"
    )
    print("\nTop 5 store listings by rating:")
    for row in sample_reviews:
        avg_rating = row["avg_rating"] if row["avg_rating"] is not None else 0.0
        print(
            f"  - Listing {row['storeListingId'][:8]}...: {avg_rating:.2f} ⭐ ({row['review_count']} reviews)"
        )

    await db.disconnect()

    print("\n" + "=" * 60)
    print("Test data update completed successfully!")
    print("The materialized views should now reflect the updated data.")
    print(
        "\nTo manually refresh views, run: SELECT refresh_store_materialized_views();"
    )


if __name__ == "__main__":
    asyncio.run(main())
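
Step 6 of this script hands the refresh off to a `refresh_store_materialized_views()` SQL function. If you need the same guarded refresh outside this script, a sketch of the pattern, assuming only a connected Prisma client and that the function exists in the database:

```python
# Sketch of the guarded refresh used in step 6 above; assumes a connected
# Prisma client and a refresh_store_materialized_views() function in Postgres.
from prisma import Prisma


async def refresh_views(db: Prisma) -> bool:
    try:
        await db.execute_raw("SELECT refresh_store_materialized_views();")
        return True
    except Exception as exc:
        # Likely causes: the function was never created in this database, or
        # the connected role lacks permission to refresh the views.
        print(f"Could not refresh materialized views: {exc}")
        return False
```
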
@@ -13,6 +13,7 @@ const config: StorybookConfig = {
    "@storybook/addon-onboarding",
    "@storybook/addon-links",
    "@storybook/addon-docs",
    "@storybook/addon-interactions",
  ],
  features: {
    experimentalRSC: true,
@@ -60,10 +60,11 @@ Every time a new Front-end dependency is added by you or others, you will need t
- `pnpm start` - Start production server
- `pnpm lint` - Run ESLint and Prettier checks
- `pnpm format` - Format code with Prettier
- `pnpm type-check` - Run TypeScript type checking
- `pnpm types` - Run TypeScript type checking
- `pnpm test` - Run Playwright tests
- `pnpm test-ui` - Run Playwright tests with UI
- `pnpm fetch:openapi` - Fetch OpenAPI spec from backend
- `pnpm test:ui` - Run Playwright tests with UI
- `pnpm test:unit` - Run unit tests (Vitest)
- `pnpm test:unit:watch` - Run unit tests (Vitest) in watch mode
- `pnpm generate:api-client` - Generate API client from OpenAPI spec
- `pnpm generate:api-all` - Fetch OpenAPI spec and generate API client
@@ -237,17 +238,10 @@ Storybook is a powerful development environment for UI components. It allows you
   To build a static version of Storybook for deployment, use:

   ```bash
   pnpm build-storybook
   pnpm storybook:build
   ```

3. **Running Storybook Tests**:
   Storybook tests can be run using:

   ```bash
   pnpm test-storybook
   ```

4. **Writing Stories**:
3. **Writing Stories**:
   Create `.stories.tsx` files alongside your components to define different states and variations of your components.

By integrating Storybook into our development workflow, we can streamline UI development, improve component reusability, and maintain a consistent design system across the project.
@@ -11,6 +11,8 @@ const nextConfig = {

      "ideogram.ai", // for generated images
      "picsum.photos", // for placeholder images
      "dummyimage.com", // for placeholder images
      "placekitten.com", // for placeholder images
    ],
  },
  output: "standalone",
@@ -28,11 +30,6 @@ export default isDevelopmentBuild
    org: "significant-gravitas",
    project: "builder",

    // Expose Vercel env to the client
    env: {
      NEXT_PUBLIC_VERCEL_ENV: process.env.VERCEL_ENV,
    },

    // Only print logs for uploading source maps in CI
    silent: !process.env.CI,
@@ -9,15 +9,14 @@
    "start:standalone": "cd .next/standalone && node server.js",
    "lint": "next lint && prettier --check .",
    "format": "prettier --write .",
    "type-check": "tsc --noEmit",
    "types": "tsc --noEmit",
    "test": "next build --turbo && playwright test",
    "test-ui": "next build --turbo && playwright test --ui",
    "test:no-build": "playwright test",
    "test:ui": "next build --turbo && playwright test --ui",
    "test:unit": "vitest --config vitest.config.mjs --run",
    "test:unit:watch": "vitest --config vitest.config.mjs --watch",
    "gentests": "playwright codegen http://localhost:3000",
    "storybook": "storybook dev -p 6006",
    "build-storybook": "storybook build",
    "test-storybook": "test-storybook",
    "test-storybook:ci": "concurrently -k -s first -n \"SB,TEST\" -c \"magenta,blue\" \"pnpm run build-storybook -- --quiet && npx http-server storybook-static --port 6006 --silent\" \"wait-on tcp:6006 && pnpm run test-storybook\"",
    "storybook:build": "storybook build",
    "fetch:openapi": "curl http://localhost:8006/openapi.json > ./src/app/api/openapi.json && prettier --write ./src/app/api/openapi.json",
    "generate:api-client": "orval --config ./orval.config.ts",
    "generate:api-all": "pnpm run fetch:openapi && pnpm run generate:api-client"
@@ -26,9 +25,9 @@
    "defaults"
  ],
  "dependencies": {
    "@faker-js/faker": "9.9.0",
    "@faker-js/faker": "9.8.0",
    "@hookform/resolvers": "5.1.1",
    "@next/third-parties": "15.3.5",
    "@next/third-parties": "15.3.4",
    "@phosphor-icons/react": "2.1.10",
    "@radix-ui/react-alert-dialog": "1.1.14",
    "@radix-ui/react-avatar": "1.1.10",
@@ -49,13 +48,13 @@
    "@radix-ui/react-tabs": "1.1.12",
    "@radix-ui/react-toast": "1.2.14",
    "@radix-ui/react-tooltip": "1.2.7",
    "@sentry/nextjs": "9.35.0",
    "@sentry/nextjs": "9.33.0",
    "@supabase/ssr": "0.6.1",
    "@supabase/supabase-js": "2.50.3",
    "@tanstack/react-query": "5.81.5",
    "@supabase/supabase-js": "2.50.2",
    "@tanstack/react-query": "5.81.2",
    "@tanstack/react-table": "8.21.3",
    "@types/jaro-winkler": "0.2.4",
    "@xyflow/react": "12.8.1",
    "@xyflow/react": "12.8.0",
    "ajv": "8.17.1",
    "boring-avatars": "1.11.2",
    "class-variance-authority": "0.7.1",
@@ -63,74 +62,78 @@
    "cmdk": "1.1.1",
    "cookie": "1.0.2",
    "date-fns": "4.1.0",
    "dotenv": "16.5.0",
    "dotenv": "16.6.0",
    "elliptic": "6.6.1",
    "embla-carousel-react": "8.6.0",
    "framer-motion": "12.23.0",
    "framer-motion": "12.19.2",
    "geist": "1.4.2",
    "jaro-winkler": "0.2.8",
    "launchdarkly-react-client-sdk": "3.8.1",
    "lodash": "4.17.21",
    "lucide-react": "0.525.0",
    "lucide-react": "0.524.0",
    "moment": "2.30.1",
    "next": "15.3.5",
    "next": "15.3.4",
    "next-themes": "0.4.6",
    "nuqs": "2.4.3",
    "party-js": "2.2.0",
    "react": "18.3.1",
    "react-day-picker": "9.8.0",
    "react-day-picker": "9.7.0",
    "react-dom": "18.3.1",
    "react-drag-drop-files": "2.4.0",
    "react-hook-form": "7.60.0",
    "react-hook-form": "7.58.1",
    "react-icons": "5.5.0",
    "react-markdown": "9.0.3",
    "react-modal": "3.16.3",
    "react-shepherd": "6.1.8",
    "recharts": "2.15.3",
    "shepherd.js": "14.5.0",
    "sonner": "2.0.6",
    "tailwind-merge": "2.6.0",
    "tailwindcss-animate": "1.0.7",
    "uuid": "11.1.0",
    "vaul": "1.1.2",
    "zod": "3.25.76"
    "zod": "3.25.67"
  },
  "devDependencies": {
    "@chromatic-com/storybook": "4.0.1",
    "@playwright/test": "1.53.2",
    "@storybook/addon-a11y": "9.0.16",
    "@storybook/addon-docs": "9.0.16",
    "@storybook/addon-links": "9.0.16",
    "@storybook/addon-onboarding": "9.0.16",
    "@storybook/nextjs": "9.0.16",
    "@playwright/test": "1.53.1",
    "@storybook/addon-a11y": "9.0.14",
    "@storybook/addon-docs": "9.0.14",
    "@storybook/addon-links": "9.0.14",
    "@storybook/addon-onboarding": "9.0.14",
    "@storybook/nextjs": "9.0.14",
    "@tanstack/eslint-plugin-query": "5.81.2",
    "@tanstack/react-query-devtools": "5.81.5",
    "@testing-library/jest-dom": "6.6.3",
    "@testing-library/react": "16.3.0",
    "@testing-library/user-event": "14.6.1",
    "@types/canvas-confetti": "1.9.0",
    "@types/lodash": "4.17.20",
    "@types/lodash": "4.17.19",
    "@types/negotiator": "0.6.4",
    "@types/node": "22.15.30",
    "@types/react": "18.3.17",
    "@types/react-dom": "18.3.5",
    "@types/react-modal": "3.16.3",
    "@vitest/browser": "3.2.4",
    "axe-playwright": "2.1.0",
    "chromatic": "11.25.2",
    "concurrently": "9.2.0",
    "cross-env": "7.0.3",
    "eslint": "8.57.1",
    "eslint-config-next": "15.3.5",
    "eslint-plugin-storybook": "9.0.16",
    "eslint-config-next": "15.3.4",
    "eslint-plugin-storybook": "9.0.14",
    "import-in-the-middle": "1.14.2",
    "msw": "2.10.3",
    "jsdom": "26.1.0",
    "msw": "2.10.2",
    "msw-storybook-addon": "2.0.5",
    "orval": "7.10.0",
    "pbkdf2": "3.1.3",
    "postcss": "8.5.6",
    "prettier": "3.6.2",
    "prettier-plugin-tailwindcss": "0.6.13",
    "require-in-the-middle": "7.5.2",
    "storybook": "9.0.16",
    "storybook": "9.0.14",
    "tailwindcss": "3.4.17",
    "typescript": "5.8.3"
    "typescript": "5.8.3",
    "vite": "7.0.0",
    "vitest": "3.2.4"
  },
  "msw": {
    "workerDirectory": [
Some files were not shown because too many files have changed in this diff.