Compare commits


81 Commits

Author SHA1 Message Date
Swifty
50f39e55f7 added ability to disable CORS at the backend 2025-09-11 09:59:47 +02:00
Ubbe
e70c970ab6 feat(frontend): new <Avatar /> component using next/image (#10897)
## Changes 🏗️

<img width="800" height="648" alt="Screenshot 2025-09-10 at 22 00 01"
src="https://github.com/user-attachments/assets/eb396d62-01f2-45e5-9150-4e01dfcb71d0"
/><br />

Adds a new `<Avatar />` component and uses it across the app. It is a
copy of
[shadcn/avatar](https://duckduckgo.com/?q=shadcn+avatar&t=brave&ia=web)
with the following modifications:
- renders images with
[`next/image`](https://duckduckgo.com/?q=next+image&t=brave&ia=web) by
default
  - this ensures avatars rendered in the app are optimised and resized ✔️
  - it works as long as all the image domains are whitelisted in
`nextjs.config.mjs`
- allows bypassing it and using a plain `<img />` tag via an `as` prop if
needed (a rough sketch follows below)
  - sometimes we might need to render images from a dynamic CDN 🤷🏽‍♂️
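
Roughly, the pattern looks like the sketch below; the internals and the exact prop surface (other than `as`) are assumptions, not the component's actual implementation:

```tsx
import Image from "next/image";

type AvatarProps = {
  src: string;
  alt: string;
  size?: number;
  // Hypothetical escape hatch: render a plain <img /> for dynamic CDNs
  // whose domains are not whitelisted in the Next.js config.
  as?: "img";
};

export function Avatar({ src, alt, size = 40, as }: AvatarProps) {
  if (as === "img") {
    // Bypasses next/image, so no optimisation and no domain whitelist check.
    return <img src={src} alt={alt} width={size} height={size} />;
  }
  // Default path: next/image resizes and optimises the avatar, provided
  // the image's domain is whitelisted in the Next.js config.
  return <Image src={src} alt={alt} width={size} height={size} />;
}
```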

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] ...

### For configuration changes:

None
2025-09-10 13:25:09 +00:00
Abhimanyu Yadav
3bbce71678 feat(builder): Block menu redesign - part 3 (#10864)
### Changes 🏗️

#### Block Menu Redesign - Part 3

This PR continues the block menu redesign effort, implementing the new
content sections and improving the overall user experience. The changes
focus on better organization, pagination, error handling, and visual
consistency.

#### Key Features Implemented:

**1. New Content Organization**
- **All Blocks Content**: Complete listing of all available blocks with
category-based organization and infinite scroll support
(`AllBlocksContent/`; a hook sketch follows after this list)
- **My Agents Content**: Display and manage user's own agents with
pagination (`MyAgentsContent/`)
- **Marketplace Agents Content**: Browse and add marketplace agents with
improved loading states (`MarketplaceAgentsContent/`)
- **Integration Blocks**: Dedicated view for integration-specific blocks
with better filtering (`IntegrationBlocks/`)
- **Suggestion Content**: Smart suggestions based on user context and
search history (`SuggestionContent/`)
- **Integrations Content**: Browse available integrations in a dedicated
view (`IntegrationsContent/`)
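
To illustrate the infinite-scroll wiring mentioned for `AllBlocksContent/`, a minimal hook sketch using TanStack Query's `useInfiniteQuery`; the endpoint, response shape, and hook name are assumptions, not the real code:

```tsx
import { useInfiniteQuery } from "@tanstack/react-query";

// Hypothetical paged response shape and endpoint.
type BlocksPage = {
  blocks: { id: string; name: string }[];
  nextPage: number | null;
};

async function fetchBlocksPage({ pageParam }: { pageParam: number }): Promise<BlocksPage> {
  const res = await fetch(`/api/blocks?page=${pageParam}`);
  return res.json();
}

export function useAllBlocks() {
  return useInfiniteQuery({
    queryKey: ["blocks", "all"],
    queryFn: fetchBlocksPage,
    initialPageParam: 1,
    // Returning null signals that there are no more pages to fetch.
    getNextPageParam: (lastPage) => lastPage.nextPage,
  });
}
```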

**2. Enhanced UI Components**
- **Paginated Lists**: New pagination components for blocks and
integrations (`PaginatedBlocksContent/`, `PaginatedIntegrationList/`)
- **Block List**: Reusable block list component with consistent styling
(`BlockList/`)
- **Improved Error Handling**: Comprehensive error states with retry
functionality across all content types
- **Loading States**: Skeleton loaders for better perceived performance

**3. Infrastructure Improvements**
- **Centralized Styles**: New `style.ts` file for consistent styling
across components
- **Better State Management**: Enhanced context provider with improved
menu state handling
- **Mock Flag Support**: Added feature flags for testing new block
features
- **Default State Enum**: Refactored to use enums for menu default
states (sketched below)
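
For illustration only, the default-state refactor and mock flag could look like the sketch below; the names and the env-var flag are hypothetical, not the actual provider code:

```ts
// Hypothetical enum replacing bare strings for the menu's initial view.
export enum BlockMenuDefaultState {
  Suggestions = "suggestions",
  AllBlocks = "all-blocks",
  Integrations = "integrations",
}

// Hypothetical mock feature flag gating the redesigned menu during testing.
const useNewBlockMenu = process.env.NEXT_PUBLIC_NEW_BLOCK_MENU === "true";

const defaultState = useNewBlockMenu
  ? BlockMenuDefaultState.Suggestions
  : BlockMenuDefaultState.AllBlocks;
```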

**4. Visual Assets**
- Added 50+ new integration icons/logos for better visual representation
- Updated existing integration images for consistency

**5. Code Quality**
- Improved error handling with proper error cards and retry mechanisms
- Consistent formatting and import organization
- Enhanced TypeScript types and interfaces
- Better separation of concerns with dedicated hooks for each content
type

#### Technical Details:
- **Files Changed**: 96 files
- **Additions**: 1,380 lines
- **Deletions**: 162 lines
- **New Components**: 10+ new React components with dedicated hooks
- **Integration Icons**: 50+ new PNG images for various integrations

#### Breaking Changes:
None - All changes are backwards compatible

---

### Test Plan 📋

- [x] Create a new agent and verify all blocks are accessible
- [x] Test infinite scroll in "All Blocks" view
- [x] Verify pagination works correctly in marketplace agents view
- [x] Test error states by simulating network failures
- [x] Check that all new integration icons display correctly
- [x] Test adding agents from marketplace view
- [x] Ensure skeleton loaders appear during data fetching

> Generated by claude
2025-09-10 11:58:07 +00:00
Abhimanyu Yadav
34fbf4377f fix(frontend): allow lazy loading of images (#10895)
The `next/image` component has built-in lazy loading enabled by
default, but some components were bypassing it with the `priority`
flag, so this PR reverts that (see the sketch below).
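
In essence the change removes `priority` props, roughly as below; the component and image path are illustrative, not the actual files touched:

```tsx
import Image from "next/image";

export function AgentCard() {
  return (
    // Before: <Image src="/agent.png" alt="Agent" width={320} height={180} priority />
    // `priority` preloads the image eagerly, defeating lazy loading.
    // After: with no `priority`, next/image defaults to loading="lazy",
    // so offscreen images are fetched only as they approach the viewport.
    <Image src="/agent.png" alt="Agent" width={320} height={180} />
  );
}
```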

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Lazy loading is working perfectly locally.
2025-09-10 11:38:37 +00:00
dependabot[bot]
f682ef885a chore(frontend/deps-dev): Bump 16 dev dependencies to newer minor versions (#10837)
Bumps the development-dependencies group with 16 updates in the
/autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
| [@chromatic-com/storybook](https://github.com/chromaui/addon-visual-tests) | `4.1.0` | `4.1.1` |
| [@playwright/test](https://github.com/microsoft/playwright) | `1.54.2` | `1.55.0` |
| [@storybook/addon-a11y](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y) | `9.1.2` | `9.1.4` |
| [@storybook/addon-docs](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs) | `9.1.2` | `9.1.4` |
| [@storybook/addon-links](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links) | `9.1.2` | `9.1.4` |
| [@storybook/addon-onboarding](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding) | `9.1.2` | `9.1.4` |
| [@storybook/nextjs](https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs) | `9.1.2` | `9.1.4` |
| [@tanstack/eslint-plugin-query](https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query) | `5.83.1` | `5.86.0` |
| [@tanstack/react-query-devtools](https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools) | `5.84.2` | `5.86.0` |
| [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) | `24.2.1` | `24.3.0` |
| [chromatic](https://github.com/chromaui/chromatic-cli) | `13.1.3` | `13.1.4` |
| [concurrently](https://github.com/open-cli-tools/concurrently) | `9.2.0` | `9.2.1` |
| [eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next) | `15.4.6` | `15.5.2` |
| [eslint-plugin-storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/eslint-plugin) | `9.1.2` | `9.1.4` |
| [msw](https://github.com/mswjs/msw) | `2.10.4` | `2.11.1` |
| [storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/core) | `9.1.2` | `9.1.4` |


Updates `@chromatic-com/storybook` from 4.1.0 to 4.1.1

**Release notes** (sourced from [@chromatic-com/storybook's releases](https://github.com/chromaui/addon-visual-tests/releases); the changelog entry is identical):

v4.1.1 (Wed Aug 20 2025)

🐛 Bug Fix
- Broaden version-range for storybook peerDependency [#389](https://redirect.github.com/chromaui/addon-visual-tests/pull/389) ([@ndelangen](https://github.com/ndelangen))

Authors: 1
- Norbert de Langen ([@ndelangen](https://github.com/ndelangen))

**Commits**
- `e38f120` Bump version to: 4.1.1 [skip ci]
- `adb8570` Update CHANGELOG.md [skip ci]
- `9e38298` Merge pull request #390 from chromaui/next
- `7b3a184` Merge branch 'main' into next
- `2faff8b` Merge pull request #389 from chromaui/ndelangen/sb10-beta-compatibility
- `51015a7` Update yarn.lock to add support for Storybook 10.0.0-0
- `6f050ad` Update package.json
- See full diff in [compare view](https://github.com/chromaui/addon-visual-tests/compare/v4.1.0...v4.1.1)

Updates `@playwright/test` from 1.54.2 to 1.55.0

**Release notes** (sourced from [@playwright/test's releases](https://github.com/microsoft/playwright/releases)):

v1.55.0

New APIs
- New property [testStepInfo.titlePath](https://playwright.dev/docs/api/class-teststepinfo#test-step-info-title-path) returns the full title path starting from the test file, including test and step titles.

Codegen
- Automatic `toBeVisible()` assertions: Codegen can now generate automatic `toBeVisible()` assertions for common UI interactions. This feature can be enabled in the Codegen settings UI.

Breaking Changes
- ⚠️ Dropped support for Chromium extension manifest v2.

Miscellaneous
- Added support for Debian 13 "Trixie".

Browser Versions
- Chromium 140.0.7339.16
- Mozilla Firefox 141.0
- WebKit 26.0

This version was also tested against the following stable channels: Google Chrome 139, Microsoft Edge 139.

**Commits**
- `f992162` chore: mark v1.55.0 (#37121)
- `4a92ea0` cherry-pick(#37113): docs: add release-notes for v1.55
- `aa05507` cherry-pick(#37114): test: move browser._launchServer in child process
- `27ae7dc` test: tree gardening (#37107)
- `cd09d85` test: unflake "should pick element" (#37103)
- `72e4772` chore(trace-viewer): remove unused code (#37097)
- `5b8c7d6` chore(dotnet): float is non-nullable (#37095)
- `c7bf035` test(webkit): closing dialog > contenteditable (#37084)
- `9fd6986` test: skip debug-controller tests in driver mode (#37090)
- `4c2f44d` test(bidi): use the nightly channel only for Firefox in CI (#37086)
- Additional commits viewable in [compare view](https://github.com/microsoft/playwright/compare/v1.54.2...v1.55.0)
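
To illustrate the new `testStepInfo.titlePath` property, a small sketch, assuming the step callback receives the step-info object as documented for recent Playwright versions:

```ts
import { test, expect } from "@playwright/test";

test("homepage has title", async ({ page }) => {
  await test.step("open home page", async (step) => {
    // New in 1.55: the full title path, from the test file down
    // through the test and step titles, e.g.
    // ["example.spec.ts", "homepage has title", "open home page"]
    console.log(step.titlePath);
    await page.goto("https://example.com");
    await expect(page).toHaveTitle(/Example/);
  });
});
```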

Updates `@storybook/addon-a11y` from 9.1.2 to 9.1.4

**Release notes** (sourced from [@storybook/addon-a11y's releases](https://github.com/storybookjs/storybook/releases); the changelog entries are identical):

9.1.4
- Angular: Properly merge builder options and browserTarget options - #32272, thanks @kroeder!
- Core: Optimize bundle size by reusing internal/babel in mocking-utils - #32350, thanks @ndelangen!
- Svelte & Vue: Add framework-specific `docgen` option to disable docgen processing - #32319, thanks @copilot-swe-agent!
- Svelte: Support `@sveltejs/vite-plugin-svelte` v6 - #32320, thanks @JReinhold!

9.1.3
- Docs: Move button in ArgsTable heading to fix screen reader announcements - #32238, thanks @Sidnioulz!
- Mock: Catch errors when transforming preview files - #32216, thanks @valentinpalkovic!
- Next.js: Fix version mismatch error in Webpack - #32306, thanks @valentinpalkovic!
- Telemetry: Disambiguate traffic coming from error/upgrade links - #32287, thanks @shilman!
- Telemetry: Disambiguate unattributed traffic from Onboarding - #32286, thanks @shilman!

**Commits**
- `9f02684` Bump version from "9.1.3" to "9.1.4" [skip ci]
- `ce39157` Bump version from "9.1.2" to "9.1.3" [skip ci]
- `730bbf0` Merge pull request #32284 from storybookjs/shilman/package-json-keywords
- `2bae930` Merge pull request #32283 from storybookjs/shilman/readme-utm-params
- See full diff in [compare view](https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/a11y)

Updates `@storybook/addon-docs` from 9.1.2 to 9.1.4

**Release notes**: identical to the `@storybook/addon-a11y` 9.1.4 / 9.1.3 notes above.

**Commits**
- `9f02684` Bump version from "9.1.3" to "9.1.4" [skip ci]
- `ce39157` Bump version from "9.1.2" to "9.1.3" [skip ci]
- `730bbf0` Merge pull request #32284 from storybookjs/shilman/package-json-keywords
- `0f86613` Merge pull request #32287 from storybookjs/shilman/error-utm
- `2bae930` Merge pull request #32283 from storybookjs/shilman/readme-utm-params
- `f8ff03a` Merge pull request #32238 from storybookjs/sidnioulz/issue-31436-table
- See full diff in [compare view](https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/docs)

Updates `@storybook/addon-links` from 9.1.2 to 9.1.4

**Release notes**: identical to the `@storybook/addon-a11y` 9.1.4 / 9.1.3 notes above.

**Commits**
- `9f02684` Bump version from "9.1.3" to "9.1.4" [skip ci]
- `ce39157` Bump version from "9.1.2" to "9.1.3" [skip ci]
- `730bbf0` Merge pull request #32284 from storybookjs/shilman/package-json-keywords
- `2bae930` Merge pull request #32283 from storybookjs/shilman/readme-utm-params
- See full diff in [compare view](https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/links)

Updates `@storybook/addon-onboarding` from 9.1.2 to 9.1.4

**Release notes**: identical to the `@storybook/addon-a11y` 9.1.4 / 9.1.3 notes above.

**Commits**
- `9f02684` Bump version from "9.1.3" to "9.1.4" [skip ci]
- `ce39157` Bump version from "9.1.2" to "9.1.3" [skip ci]
- `730bbf0` Merge pull request #32284 from storybookjs/shilman/package-json-keywords
- `2bae930` Merge pull request #32283 from storybookjs/shilman/readme-utm-params
- See full diff in [compare view](https://github.com/storybookjs/storybook/commits/v9.1.4/code/addons/onboarding)

Updates `@storybook/nextjs` from 9.1.2 to 9.1.4

**Release notes**: identical to the `@storybook/addon-a11y` 9.1.4 / 9.1.3 notes above.

**Commits**
- `9f02684` Bump version from "9.1.3" to "9.1.4" [skip ci]
- `ce39157` Bump version from "9.1.2" to "9.1.3" [skip ci]
- `1dba824` Next.js: Avoid multiple webpack versions at runtime
- `a0151ef` Merge pull request #32306 from storybookjs/valentin/fix-nextjs-webpack-error
- `730bbf0` Merge pull request #32284 from storybookjs/shilman/package-json-keywords
- `bbb4ffe` Merge pull request #32286 from storybookjs/shilman/configure-utm
- `2bae930` Merge pull request #32283 from storybookjs/shilman/readme-utm-params
- See full diff in [compare view](https://github.com/storybookjs/storybook/commits/v9.1.4/code/frameworks/nextjs)

Updates `@tanstack/eslint-plugin-query` from 5.83.1 to 5.86.0

**Release notes** (sourced from [TanStack Query's releases](https://github.com/TanStack/query/releases)):

v5.86.0 (9/4/25, Manual Release)

Note: this release contains BREAKING CHANGES for the `experimental_streamedQuery` API:

Breaking Changes
- query-core: add custom reducer support to streamedQuery (#9532) (8f24580) by @marcog83
  - BREAKING CHANGE: the `maxChunks` parameter has been removed from `streamedQuery`; use a custom `reducer` function to control data aggregation behavior instead.
  - BREAKING CHANGE: when using a custom reducer function with `streamedQuery`, the `initialValue` parameter is now required and must be provided.
- rename queryFn to streamFn in streamedQuery (#9606) (b25412a) by Dominik Dorfmeister
  - BREAKING CHANGE: `queryFn` has been renamed to `streamFn`.

Chore
- tsconfig.json: simplify "include" patterns by consolidating file extensions and directory paths (#9547) (7306474) by @sukvvon

Test
- react-query/useMutationState: clarify assertions and improve code formatting (#9611) (43049c5) by @sukvvon

Other
- (c75a994) by BennettLiam

Packages: all TanStack Query packages were released at `5.86.0` (`@tanstack/eslint-plugin-query`, `@tanstack/query-async-storage-persister`, `@tanstack/query-broadcast-client-experimental`, `@tanstack/query-core`, `@tanstack/query-devtools`, `@tanstack/query-persist-client-core`, `@tanstack/query-sync-storage-persister`, `@tanstack/react-query`, `@tanstack/react-query-devtools`, `@tanstack/react-query-persist-client`, `@tanstack/react-query-next-experimental`, `@tanstack/solid-query`, `@tanstack/solid-query-devtools`, `@tanstack/solid-query-persist-client`, `@tanstack/svelte-query`).

... (truncated)

**Commits**
- `1599bb4` release: v5.86.0
- `7306474` chore(tsconfig.json): simplify 'include' patterns by consolidating file exten...
- See full diff in [compare view](https://github.com/TanStack/query/commits/v5.86.0/packages/eslint-plugin-query)
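
A hedged sketch of what the post-5.86 `streamedQuery` shape implies for callers, inferred only from the breaking-change notes above; the `fetchChunks` helper is hypothetical:

```ts
import {
  experimental_streamedQuery as streamedQuery,
  useQuery,
} from "@tanstack/react-query";

// Hypothetical async generator yielding chunks from a stream.
async function* fetchChunks(): AsyncGenerator<string> {
  yield "hello";
  yield "world";
}

function useStreamedGreeting() {
  return useQuery({
    queryKey: ["greeting"],
    queryFn: streamedQuery({
      streamFn: () => fetchChunks(), // was `queryFn` before 5.86
      // `maxChunks` is gone; aggregation is controlled by a custom reducer...
      reducer: (acc: string[], chunk: string) => [...acc, chunk],
      // ...and with a custom reducer, `initialValue` is now required.
      initialValue: [] as string[],
    }),
  });
}
```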

Updates `@tanstack/react-query-devtools` from 5.84.2 to 5.86.0

**Release notes**: identical to the `@tanstack/eslint-plugin-query` v5.86.0 notes above.

**Commits**
- `1599bb4` release: v5.86.0
- `7306474` chore(tsconfig.json): simplify 'include' patterns by consolidating file exten...
- `0a35234` release: v5.85.9
- `aec19c9` release: v5.85.8
- `a978b34` release: v5.85.7
- `b998f68` release: v5.85.6
- `0a0e014` release: v5.85.5
- `ef0a9d2` release: v5.85.4
- `b6516bd` release: v5.85.3
- `e14f39c` release: v5.85.2
- Additional commits viewable in [compare view](https://github.com/TanStack/query/commits/v5.86.0/packages/react-query-devtools)

Updates `@types/node` from 24.2.1 to 24.3.0

**Commits**
- See full diff in [compare view](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

Updates `chromatic` from 13.1.3 to 13.1.4

**Release notes** (sourced from [chromatic's releases](https://github.com/chromaui/chromatic-cli/releases); the changelog entry is identical):

v13.1.4 (Fri Aug 29 2025)

🐛 Bug Fix
- Fix outdated and incorrect links in the CLI [#1202](https://redirect.github.com/chromaui/chromatic-cli/pull/1202) ([@jonniebigodes](https://github.com/jonniebigodes))
- Show setup URL on build errors when onboarding [#1201](https://redirect.github.com/chromaui/chromatic-cli/pull/1201) ([@jmhobbs](https://github.com/jmhobbs))

Authors: 2
- [@jonniebigodes](https://github.com/jonniebigodes)
- John Hobbs ([@jmhobbs](https://github.com/jmhobbs))

**Commits**
- `c852593` Bump version to: 13.1.4 [skip ci]
- `aae7c41` Update CHANGELOG.md [skip ci]
- `062d1fa` Merge pull request #1202 from chromaui/feat_fix_links
- `15f72ae` Removes references to the viewlayer used by noCSFGlobs
- `1a0e200` Remove viewlayer from nocsf file
- `f4c52ab` Attempt to fix linting errors
- `0db944b` Feat: Fix outdated and incorrect links in the CLI
- `8879737` Merge pull request #1201 from chromaui/jmhobbs/cap-3287-chromatic-cicli-issue...
- `d3c25a2` Improve links during onboarding with system errors
- `cce1b29` Show setup URL on build errors when onboarding.
- See full diff in [compare view](https://github.com/chromaui/chromatic-cli/compare/v13.1.3...v13.1.4)

Updates `concurrently` from 9.2.0 to 9.2.1

**Release notes** (sourced from [concurrently's releases](https://github.com/open-cli-tools/concurrently/releases)):

v9.2.1 — What's Changed
- chore: update eslint-plugin-simple-import-sort from v10 to v12 by @noritaka1166 in open-cli-tools/concurrently#551
- chore: update eslint-config-prettier from v9 to v10 by @noritaka1166 in open-cli-tools/concurrently#552
- Remove lodash by @gustavohenke in open-cli-tools/concurrently#555
- chore: update coveralls-next from v4 to v5 by @noritaka1166 in open-cli-tools/concurrently#557
- Replace jest with vitest by @gustavohenke in open-cli-tools/concurrently#554
- Upgrade to pnpm v10 by @paescuj in open-cli-tools/concurrently#558
- chore: remove unused eslint-plugin-jest by @noritaka1166 in open-cli-tools/concurrently#559
- Minor dependency updates by @paescuj in open-cli-tools/concurrently#560
- Migrate to ESLint v9 by @paescuj in open-cli-tools/concurrently#561
- Update shell-quote to 1.8.3 by @paescuj in open-cli-tools/concurrently#562
- Full coverage by @paescuj in open-cli-tools/concurrently#563
- Update GH actions/workflows, enable NPM provenance by @paescuj in open-cli-tools/concurrently#564

**Full Changelog**: https://github.com/open-cli-tools/concurrently/compare/v9.2.0...v9.2.1

**Commits**
- `414cd01` 9.2.1
- `0dfedb0` Update GH actions/workflows, enable npm provenance (#564)
- `ee81511` Remove obsolete tsdk config
- `09d3d7b` Full coverage (#563)
- `8cfc6a6` Update shell-quote to 1.8.3 (#562)
- `4c403f8` Migrate to ESLint v9 (#561)
- `8bfcaf7` Minor dependency updates (#560)
- `389fec4` Enable watch mode & coverage for unit tests by default
- `7993ce6` chore: remove unused eslint-plugin-jest (#559)
- `58300f4` Remove obsolete .npmrc file
- Additional commits viewable in [compare view](https://github.com/open-cli-tools/concurrently/compare/v9.2.0...v9.2.1)

Updates `eslint-config-next` from 15.4.6 to 15.5.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vercel/next.js/releases">eslint-config-next's
releases</a>.</em></p>
<blockquote>
<h2>v15.5.2</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>fix: disable unknownatrules lint rule entirely (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83059">#83059</a>)</li>
<li>revert: add ?dpl to fonts in /_next/static/media (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83062">#83062</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/bgub"><code>@​bgub</code></a> and <a
href="https://github.com/ztanner"><code>@​ztanner</code></a> for
helping!</p>
<h2>v15.5.1</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>fix: aliased navigations should apply scroll handling (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82900">#82900</a>)</li>
<li>Turbopack: fix invalid NFT entry with file behind symlink (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82887">#82887</a>)</li>
<li>fix: typesafe linking to route handlers and pages API routes (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82858">#82858</a>)</li>
<li>fix: change &quot;noUnknownAtRules&quot; to &quot;warn&quot; for
Biome (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82974">#82974</a>)</li>
<li>fix: add path normalization to getRelativePath for Windows (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82918">#82918</a>)</li>
<li>feat: add typesafety with config.typedRoutes to redirect() and
permanentRedirect() (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82860">#82860</a>)</li>
<li>fix: avoid importing types that will be unused (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82856">#82856</a>)</li>
<li>fix: update the config.api.responseLimit type (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82852">#82852</a>)</li>
<li>fix: update validation return types (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/82854">#82854</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/bgub"><code>@​bgub</code></a>, <a
href="https://github.com/mischnic"><code>@​mischnic</code></a>, and <a
href="https://github.com/ztanner"><code>@​ztanner</code></a> for
helping!</p>
<h2>v15.5.1-canary.26</h2>
<h3>Core Changes</h3>
<ul>
<li>Upgrade React from <code>2805f0ed-20250903</code> to
<code>3302d1f7-20250903</code>: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83400">#83400</a></li>
</ul>
<h3>Misc Changes</h3>
<ul>
<li>Fixed broken link in backend-for-frontend.mdx: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83310">#83310</a></li>
<li>Update next-response.mdx: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83302">#83302</a></li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/geraldochristiano"><code>@​geraldochristiano</code></a>
and <a href="https://github.com/blntgvn42"><code>@​blntgvn42</code></a>
for helping!</p>
<h2>v15.5.1-canary.25</h2>
<h3>Core Changes</h3>
<ul>
<li>fix(ppr): fix prerender info matching for rewritten paths: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83359">#83359</a></li>
<li>fix: shrink bottom stack container height: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/83377">#83377</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="497ec6aa08"><code>497ec6a</code></a>
v15.5.2</li>
<li><a
href="cc68ced552"><code>cc68ced</code></a>
v15.5.1</li>
<li><a
href="7e08c8223d"><code>7e08c82</code></a>
v15.5.0</li>
<li>...

_Description has been truncated_

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-10 10:21:53 +00:00
Reinier van der Leer
2ffd249aac fix(backend/external-api): Improve security & reliability of API key storage (#10796)
Our API key generation, storage, and verification system has a couple of
issues that need to be ironed out before full-scale deployment.

### Changes 🏗️

- Move from unsalted SHA256 to salted Scrypt hashing for API keys
- Avoid false-negative API key validation due to prefix collision
- Refactor API key management code for clarity
- [refactor(backend): Clean up API key DB & API code
(#10797)](https://github.com/Significant-Gravitas/AutoGPT/pull/10797)
  - Rename models and properties in `backend.data.api_key` for clarity
- Eliminate redundant/custom/boilerplate error handling/wrapping in API
key endpoint call stack
- Remove redundant/inaccurate `response_model` declarations from API key
endpoints

Dependencies for `autogpt_libs`:
- Add `cryptography` as a dependency
- Add `pyright` as a dev dependency
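
For illustration, a minimal sketch of salted-Scrypt hashing/verification with the `cryptography` package (names and parameters here are hypothetical; the real implementation lives in `backend.data.api_key`):

```python
import os

from cryptography.exceptions import InvalidKey
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt


def hash_api_key(key: str) -> tuple[bytes, bytes]:
    """Derive a salted Scrypt hash for storage; returns (salt, hash)."""
    salt = os.urandom(16)
    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    return salt, kdf.derive(key.encode())


def verify_api_key(key: str, salt: bytes, stored_hash: bytes) -> bool:
    """Re-derive with the stored salt; Scrypt.verify raises on mismatch."""
    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    try:
        kdf.verify(key.encode(), stored_hash)
        return True
    except InvalidKey:
        return False
```

Unlike unsalted SHA256, identical keys now hash to different stored values, and the Scrypt work factor slows brute-forcing of leaked hashes.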

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Performing these actions through the UI (still) works:
    - [x] Creating an API key
    - [x] Listing owned API keys
    - [x] Deleting an owned API key
  - [x] Newly created API key can be used in Swagger UI
  - [x] Existing API key can be used in Swagger UI
  - [x] Existing API key is re-encrypted with salt on use
2025-09-10 09:34:49 +00:00
Ubbe
986245ec43 feat(frontend): run agent page improvements (#10879)
## Changes 🏗️

- Add all the cron scheduling options ( _yearly, monthly, weekly,
custom, etc..._ ) using the new Design System components
- Add missing agent/run actions: export agent + delete agent

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with `new-agent-runs` enabled
  - [x] Test the above 

### For configuration changes:

None
2025-09-10 07:44:51 +00:00
Bentlybro
f89717153f Merge branch 'master' into dev 2025-09-10 08:52:35 +01:00
Nicholas Tindle
5da41e0753 fix(backend): Add Airtable record normalization + find/create base (#10891)
## Summary
Fixes a critical issue with the Airtable API where empty/false fields are
completely omitted from responses, causing inconsistent data structures.
Also improves the create-base block to prevent duplicate bases.

<!-- Clearly explain the need for these changes: -->
The Airtable API has a problematic behavior where it omits fields with
"empty" values from responses:
- Unchecked checkboxes are missing entirely instead of returning `false`
- Empty number fields are missing instead of returning `0`
- This makes it impossible to distinguish between "field doesn't exist"
and "field is false/empty"
- Users were getting inconsistent record structures that broke their
workflows

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

#### 1. **Added Record Normalization**
(`backend/blocks/airtable/_api.py`)
- New `get_table_schema()` function to fetch table field definitions
- New `get_empty_value_for_field()` to determine appropriate empty
values per field type
- New `normalize_records()` to fill in missing fields with proper
defaults:
  - Checkbox → `false`
  - Number/Currency/Percent/Duration/Rating → `0`
  - Text fields → `""`
  - Multiple selects/attachments/collaborators → `[]`
  - Dates/Single selects → `null`
- New `get_base_tables()` to fetch tables for a base
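
A rough sketch of what that normalization can look like (field-type table abbreviated; helper names mirror the list above but are illustrative):

```python
# Type-appropriate empty values per Airtable field type (abbreviated set)
EMPTY_VALUES = {
    "checkbox": False,
    "number": 0, "currency": 0, "percent": 0, "duration": 0, "rating": 0,
    "singleLineText": "", "multilineText": "",
    "multipleSelects": [], "multipleAttachments": [], "multipleCollaborators": [],
    "date": None, "singleSelect": None,
}


def normalize_records(records: list[dict], schema_fields: list[dict]) -> list[dict]:
    """Fill in fields Airtable omitted with type-appropriate empty values."""
    for record in records:
        fields = record.setdefault("fields", {})
        for field in schema_fields:
            if field["name"] not in fields:
                empty = EMPTY_VALUES.get(field["type"])
                # Copy list defaults so records don't share one mutable object
                fields[field["name"]] = list(empty) if isinstance(empty, list) else empty
    return records
```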

#### 2. **Enhanced List and Get Record Blocks**
(`backend/blocks/airtable/records.py`)
- Added `normalize_output` parameter (defaults to `true`) - ensures all
fields are present
- Added `include_field_metadata` parameter to optionally include field
type information
- When normalization is enabled, fetches schema once and normalizes all
records
- Can be disabled by setting `normalize_output=false` for raw Airtable
response

#### 3. **Simplified Create Records Block**
- Added `skip_normalization` parameter (default `false`) - normalized
output by default
- Records now always include all fields with proper empty values

#### 4. **Enhanced Create Base Block**
(`backend/blocks/airtable/bases.py`)
- Added `find_existing` parameter (defaults to `true`) to prevent
duplicate bases
- When finding an existing base, now fetches and returns table
information
- Added `was_created` output field to indicate whether base was created
or found

### Testing 📋

- All Airtable block tests pass
- Tested normalization with records containing missing checkbox fields
- Verified all field types get appropriate empty values
- Tested create-base find-or-create functionality
- Ran `poetry run format` and `poetry run lint`

### Migration Guide

This update makes the blocks behave more predictably:
- **List/Get Records**: All fields are now included by default. Set
`normalize_output: false` if you need the raw Airtable response
- **Create Records**: Simply creates records, no more upsert confusion
- **Create Base**: Prevents duplicate bases by default. Set
`find_existing: false` to force creation

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes were required - all changes are code-only.
2025-09-10 04:57:26 +00:00
Nicholas Tindle
cddeb185a8 feat(blocks): Add Gmail Draft Reply Block (#10882)
### Changes 🏗️

This PR adds a new `GmailDraftReplyBlock` that enables creating draft
replies to existing Gmail email threads. This block complements the
existing Gmail blocks by providing specialized functionality for
replying within email conversations.

**New Block: GmailDraftReplyBlock**
- **Purpose**: Creates draft replies to Gmail threads with intelligent
content type detection
- **Key Features**:
- Automatic HTML detection: Draft replies containing HTML tags are
formatted as text/html
- No hard-wrap for plain text: Plain text draft replies preserve
natural line flow (prevents 78-character hard-wrap issue)
- Manual content type override: Use content_type parameter to force
specific format ("auto", "plain", or "html")
- Reply-all functionality: Option to reply to all original recipients
- Thread preservation: Maintains proper email threading with
In-Reply-To and References headers
  - Full Unicode/emoji support with UTF-8 encoding
  - File attachment support

**Implementation Details**:
- Retrieves parent message metadata to build proper reply context
- Automatically determines recipients based on reply mode (reply vs
reply-all)
- Adds "Re:" prefix to subject if not already present
- Maintains email thread continuity with proper headers
- Supports OAuth scopes: `gmail.modify` and `gmail.readonly`
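
A simplified standard-library sketch of the reply construction (the real block sends through the Gmail API client; the `parent` header dict is an assumption for illustration):

```python
from email.message import EmailMessage


def build_reply(parent: dict[str, str], body: str, reply_all: bool = False) -> EmailMessage:
    msg = EmailMessage()
    subject = parent.get("Subject", "")
    # Add "Re:" only if not already present
    msg["Subject"] = subject if subject.lower().startswith("re:") else f"Re: {subject}"
    msg["To"] = parent["From"]
    if reply_all and parent.get("To"):
        msg["Cc"] = parent["To"]  # naive; the real block deduplicates recipients
    # Thread continuity: point at the parent and extend the References chain
    msg["In-Reply-To"] = parent["Message-ID"]
    msg["References"] = f"{parent.get('References', '')} {parent['Message-ID']}".strip()
    msg.set_content(body)  # UTF-8 text/plain; HTML would use subtype="html"
    return msg
```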

**Inputs**:
- `threadId`: Thread ID to reply in
- `parentMessageId`: ID of the message being replied to  
- `to`, `cc`, `bcc`: Optional recipient lists
- `replyAll`: Boolean to reply to all original recipients
- `subject`: Optional custom subject
- `body`: Email body (plain text or HTML)
- `content_type`: Optional content type override
- `attachments`: Optional file attachments

**Outputs**:
- `draftId`: Created draft ID
- `messageId`: Draft message ID
- `threadId`: Thread ID
- `status`: Draft creation status
- `error`: Error message if any

Closes #10846

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
  **Test Plan:**
  - [x] Block includes test input/output configuration
  - [x] Mock test handler implemented for unit testing
  - [x] Proper error handling included
  - [x] OAuth authentication properly configured
- [x] Content type detection logic tested (auto-detects HTML vs plain
text)
  - [x] Threading headers properly maintained for email conversations

<details>
  <summary>Additional Testing Notes</summary>
  
  - The block uses the existing Gmail authentication infrastructure
  - Test credentials and mock outputs are configured for CI/CD
- The `_make_mime_text` helper function ensures proper content
formatting
  - Reply-all logic properly deduplicates recipients
</details>

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

**Note**: No configuration changes required - uses existing Google OAuth
configuration.

<details>
  <summary>Configuration Compatibility</summary>

  - Uses existing `GOOGLE_OAUTH_IS_CONFIGURED` flag
  - Leverages existing Google OAuth credentials system
  - No new environment variables or services required
</details>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
2025-09-10 04:24:15 +00:00
Reinier van der Leer
08a3fd6d26 Revert "fix(backend): Improve GitHub PR URL validation and API URL generation" (#10890)
Reverts Significant-Gravitas/AutoGPT#10317

The changes omit the hostname from the "prepared" URL, breaking some
GitHub blocks.
2025-09-09 21:47:51 +00:00
Nicholas Tindle
39b30bc82c ci: Add 'Bash(gh pr edit:*)' to allowed tools for claude (#10885) 2025-09-09 12:36:46 -05:00
Reinier van der Leer
2df0e2b750 fix(backend/api): Fix & add tests for APIKeyAuthenticator (#10881)
- Resolves #10875

### Changes 🏗️

- Fix use of `super().__call__` in `APIKeyAuthenticator.__call__`
- Fix non-ASCII API key validation
- Add tests for `APIKeyAuthenticator`
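
For instance, the non-ASCII guard could be as simple as the following sketch (illustrative, not the exact code):

```python
def has_valid_format(api_key: str) -> bool:
    """Reject keys containing non-ASCII or non-printable characters
    before any comparison or hashing is attempted."""
    return api_key.isascii() and api_key.isprintable()
```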

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test implementations have been verified manually
  - [x] All the new tests pass
2025-09-09 16:09:51 +00:00
Ubbe
925f249ce1 feat(frontend): use new output renderers on new run page (#10876)
## Changes 🏗️

Integrating the great work @ntindle did on the rich agent output
renderers into the new Agent run page in the library 💜

- Implemented enhanced output rendering in `<RunDetails />` using the
shared output-renderers
- Added `<RunOutputs />` sub-component at
`RunDetails/components/RunOutputs.tsx` that:
- [x] builds items from `run.outputs`, extracts metadata, picks a
renderer via `globalRegistry`, and falls back to `TextRenderer`
- [x] renders `<OutputActions />` for copy/download and a list of
`<OutputItems />`.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run an agent on the view which outputs rich content
- [x] See the output use the new renderers; for example, code is
highlighted

### For configuration changes:

None
2025-09-09 05:44:37 +00:00
Ubbe
e8cf3edbf4 feat(frontend): add timezone to new library agent page (#10874)
## Changes 🏗️

<img width="800" height="756" alt="Screenshot 2025-09-09 at 14 03 24"
src="https://github.com/user-attachments/assets/65f3e3a8-1ce0-491c-824a-601a494d3a36"
/>

<img width="600" height="493" alt="Screenshot 2025-09-09 at 14 03 28"
src="https://github.com/user-attachments/assets/457b37a3-6b3b-49b8-912c-c72cf06e8e58"
/>

Following the nice changes @ntindle did regarding timezones, this brings
them into the new page:
- display the timezone when scheduling an agent on the new modal
- display the timezone for a schedule on the new schedule details view

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with `agent-new-runs` flag ON
  - [x] Open an agent on the new page
- [x] On the new modal, create a schedule; it displays the timezone alert
  - [x] Once created, on the schedule view, it displays the timezone   

### For configuration changes:

None
2025-09-09 05:37:48 +00:00
Nicholas Tindle
dc03ea718c fix(backend): Report process errors to Sentry before cleanup (#10873)
Adds error reporting to Sentry for exceptions (excluding
KeyboardInterrupt and SystemExit) before executing process cleanup.
Silently ignores failures if Sentry is unavailable.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Adds cleanup for Sentry
Adds the ability to disable Sentry
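
A minimal sketch of the reporting pattern, assuming `sentry_sdk` (the `main`/`cleanup` wiring is illustrative):

```python
import sentry_sdk


def run_service(main, cleanup) -> None:
    try:
        main()
    except (KeyboardInterrupt, SystemExit):
        raise  # normal shutdown paths are not reported
    except BaseException as exc:
        try:
            sentry_sdk.capture_exception(exc)  # report before cleanup runs
        except Exception:
            pass  # silently ignore if Sentry is unavailable
        raise
    finally:
        cleanup()
```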
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test all services with manual exception raising
  - [x] Remove those exceptions
  - [x] Make sure they show up in Sentry
2025-09-09 04:56:51 +00:00
Nicholas Tindle
dbee580d80 build(frontend): Increase build process memory limit to 4 GiB (#10861)
Increase memory limit of frontend build process to 4GiB to fix
out-of-memory build failures
2025-09-08 05:04:17 +00:00
Nicholas Tindle
0325ec0a2c fix(frontend): Fix environment variable handling in Docker builds for dev/prod deployments (#10859)
<!-- Clearly explain the need for these changes: -->
Sentry was not being enabled in dev/prod deployments because environment
variables were being incorrectly overwritten during the Docker build
process.

### Changes 🏗️

- Fixed Dockerfile environment variable merging logic to prevent
`.env.default` from overwriting `.env.production` values
- Added `NODE_ENV=production` to build stage to ensure Next.js looks for
production env files
- Updated env file merging to only run when not in CI/CD (when
`.env.production` doesn't exist)
- When `.env.production` exists (CI/CD), now merges defaults with
production values properly

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] Verify local Docker builds still work with `docker compose up`
- [ ] Verify dev deployment has `NEXT_PUBLIC_APP_ENV=dev` in built
JavaScript
- [ ] Verify prod deployment has `NEXT_PUBLIC_APP_ENV=prod` in built
JavaScript
- [ ] Verify Sentry is enabled in dev/prod deployments
(`isProdOrDev=true`)

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

### Technical Details

**Root Cause:**
1. CI/CD workflow creates `.env.production` with correct values (e.g.,
`NEXT_PUBLIC_APP_ENV=dev`)
2. Dockerfile's env merging logic always created `.env` from
`.env.default`
3. Next.js loads `.env.production` first, then `.env` second
4. Since `.env` is loaded after `.env.production`, it overwrites the
values
5. `.env.default` has `NEXT_PUBLIC_APP_ENV=local`, causing `getAppEnv()`
to return "local" instead of "dev"/"prod"
6. This made `isProdOrDev` evaluate to `false`, disabling Sentry

**Solution:**
The Dockerfile now checks if `.env.production` exists:
- If yes (CI/CD): Merges `.env.default` + `.env.production` →
`.env.production` (production values take precedence)
- If no (local): Merges `.env.default` + `.env` → `.env` (user values
take precedence)

This ensures production deployments get the correct environment
variables while preserving local development workflow.
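
The merge semantics reduce to an ordered overlay, sketched here in Python for clarity (the actual logic is shell in the Dockerfile):

```python
def merge_env(defaults: dict[str, str], overrides: dict[str, str]) -> dict[str, str]:
    """Later source wins: start from defaults, let overrides take precedence."""
    return {**defaults, **overrides}

# CI/CD:  merge_env(dotenv_default, dotenv_production) -> .env.production
# local:  merge_env(dotenv_default, dotenv_user)       -> .env
```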

🤖 Description generated + Investigation assisted with [Claude
Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-06 17:05:45 +00:00
Ubbe
3952a1a226 feat(frontend): smol improvements on new run view (#10854)
## Changes 🏗️

<img width="800" height="790" alt="Screenshot 2025-09-05 at 17 22 36"
src="https://github.com/user-attachments/assets/8b22424c-1968-4c4f-9eed-3d5d5185751d"
/>

- Make a nicer empty state and display it when there are no
runs/schedules
- Rename search param to `executionId` to mirror what was on the old
page
- Reduce polling while an execution is happening to 1.5s (3s is too slow,
maybe...)
- Make sure the run details page also updates when a run is happening

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app
  - [x] Tested the above 

### For configuration changes:

None

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>
2025-09-06 09:02:25 +00:00
Krzysztof Czerwinski
cfc975d39b feat(backend): Type for API block data response (#10763)
Moving to auto-generated frontend types caused returned block data to
no longer have proper typing.

### Changes 🏗️

- Add `BlockInfo` model and `get_info` function that returns it to the
`Block` class, including costs
- Move `BlockCost` and `BlockCostType` to `block.py` to prevent circular
imports
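
A hypothetical Pydantic sketch of such a response model (the field set is illustrative; the real definitions live in `block.py`):

```python
from pydantic import BaseModel


class BlockCost(BaseModel):
    cost_amount: int
    cost_filter: dict = {}  # hypothetical: which inputs the cost applies to


class BlockInfo(BaseModel):
    id: str
    name: str
    description: str
    costs: list[BlockCost] = []
```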

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Endpoints using the new type work correctly

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-09-06 04:21:48 +00:00
Zamil Majdy
46e0f6cc45 feat(platform): Add recommended run schedule for agent execution (#10827)
## Summary

<img width="1000" alt="Screenshot 2025-09-02 at 9 46 49 PM"
src="https://github.com/user-attachments/assets/d78100c7-7974-4d37-a788-757764d8b6b7"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 20 24 PM"
src="https://github.com/user-attachments/assets/cd092963-8e26-4198-b65a-4416b2307a50"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 22 30 PM"
src="https://github.com/user-attachments/assets/e16b3bdb-c48c-4dec-9281-b2a35b3e21d0"
/>

<img width="1000" alt="Screenshot 2025-09-02 at 9 20 38 PM"
src="https://github.com/user-attachments/assets/11d74a39-f4b4-4fce-8d30-0e6a925f3a9b"
/>

• Added recommended schedule cron expression as an optional input
throughout the platform
• Implemented complete data flow from builder → store submission → agent
library → run page
• Fixed UI layout issues including button text overflow and ensured
proper component reusability

## Changes

### Backend
- Added `recommended_schedule_cron` field to `AgentGraph` schema and
database migration
- Updated API models (`LibraryAgent`, `MyAgent`,
`StoreSubmissionRequest`) to include the new field
- Enhanced store submission approval flow to persist recommended
schedule to database

### Frontend
- Added recommended schedule input to builder page (SaveControl
component) with overflow-safe styling
- Updated store submission modal (PublishAgentModal) with schedule
configuration
- Enhanced agent run page with schedule tip display and pre-filled
schedule dialog
- Refactored `CronSchedulerDialog` with discriminated union types for
better reusability
- Fixed layout issues including button text truncation and popover width
constraints
- Implemented robust cron expression parsing with 100% reversibility
between UI and cron format
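
The reversibility idea in miniature: each UI preset maps to exactly one cron string, so the preset can be recovered from a saved expression (presets and times below are hypothetical):

```python
PRESETS = {
    "daily": "0 9 * * *",    # every day at 09:00
    "weekly": "0 9 * * 1",   # every Monday at 09:00
    "monthly": "0 9 1 * *",  # first of the month at 09:00
}


def to_cron(preset: str) -> str:
    return PRESETS[preset]


def from_cron(expr: str) -> str | None:
    """Reverse lookup; None means the expression is a custom schedule."""
    return next((name for name, cron in PRESETS.items() if cron == expr), None)
```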

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-06 03:57:03 +02:00
Nicholas Tindle
c03af5c196 feat(frontend): allow sentry disable in dev (#10858)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
Vercel is logging things it shouldn't

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Manually verified in vercel
2025-09-05 20:29:39 +00:00
Nicholas Tindle
00cbfb8f80 feat(frontend): re-enable sentry in dev (#10857)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
- We want sentry to actually work so we can do testing
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] We're just re-enabling; it works in prod
2025-09-05 19:38:10 +00:00
Ubbe
3beafae955 feat(frontend): new agent library run page (#10835)
## Changes 🏗️

This is the new **Agent Library Run** page. Sorry in advance for the
massive PR 🙏🏽. I got carried away and it has been tricky to split it
(_maybe I abused the agent too much_ 🤔)

<img width="800" height="1085" alt="Screenshot 2025-09-04 at 13 58 33"
src="https://github.com/user-attachments/assets/b709edb9-d2b5-48ad-a04d-dddf10c89af3"
/>

<img width="800" height="338" alt="Screenshot 2025-09-04 at 13 54 51"
src="https://github.com/user-attachments/assets/efa28be2-d2dd-477f-af13-33ddd1d639dd"
/>

<img width="800" height="598" alt="Screenshot 2025-09-04 at 13 54 18"
src="https://github.com/user-attachments/assets/806ab620-3492-4c5b-b4e2-f17b89756dd8"
/>

- Schedules are now on the sidebar tabbed along with runs
- The whole UI has been updated to match the new designs and design
system
- There is no more "run draft" view as the modal is in charge of new
runs now 💪🏽
- The page is responsive and mobile friendly 📱 




https://github.com/user-attachments/assets/0e483062-0e50-4fa6-aaad-a1f6766df931

### Safety

I understand this is a lot of changes. However, it is all behind a feature
flag, `new-agent-runs`; when OFF, it will display the old library agent
view. The old library agent view can still be accessed under
`/library/legacy/{id}` for reference 👍🏽

### Testing

I haven't added any tests for now... 💆🏽 I want to get this enabled on dev so
we can start running our agents there through the new page and modal and
start catching edge-cases.

Tests will come later in the form of E2E for the happy paths, and
probably I will introduce [Vitest](https://vitest.dev/) + [Testing
Library](https://testing-library.com/) for the finer details...

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test the above

### For configuration changes:

None, the feature flag is already configured 🙏🏽

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-09-05 06:39:54 +00:00
seer-by-sentry[bot]
9cd186a2f3 fix(backend): Improve GitHub PR URL validation and API URL generation (#10317)
### Changes 🏗️

Fixes
[AUTOGPT-SERVER-4EN](https://sentry.io/organizations/significant-gravitas/issues/6731949478/).
The issue was that an issue URL passed to the PR file reader failed the
regex, leading to an issue API call whose returned object was iterated as
keys, causing an AttributeError.

- Refactor `prepare_pr_api_url` to improve validation of GitHub PR URLs.
- Update regex to specifically target github.com URLs.
- Raise ValueError with a descriptive message for invalid URLs.
- Correctly construct the API URL using the extracted repository path.
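
An illustrative sketch of the validation described above (the regex is an assumption; note that this change was later reverted in #10890 because the shipped version omitted the hostname from the prepared URL):

```python
import re


def prepare_pr_api_url(url: str) -> str:
    """Validate a github.com PR URL and build the corresponding API URL."""
    match = re.match(r"^https://github\.com/([^/]+/[^/]+)/pull/(\d+)", url)
    if not match:
        raise ValueError(f"Invalid GitHub PR URL: {url!r}")
    repo_path, pr_number = match.groups()
    return f"https://api.github.com/repos/{repo_path}/pulls/{pr_number}"
```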

This fix was generated by Seer in Sentry, triggered automatically. 👁️
Run ID: 265077

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6731949478/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test plan:
- [x] Provide an invalid GitHub PR URL and verify that a ValueError is
raised with a descriptive message.
- [x] Provide a valid GitHub PR URL and verify that the API URL is
correctly constructed.

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Bently <Github@bentlybro.com>
2025-09-05 06:35:45 +00:00
Nicholas Tindle
dcf26bd3d4 ci: update Claude code action (#10851)
<!-- Clearly explain the need for these changes: -->
Claude Code now uses a prompt, not a system prompt
### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
Swaps to a prompt from a custom system prompt
### Checklist 📋

#### For code changes:
N/A
2025-09-05 06:33:09 +00:00
Bently
b97f097c9d feat(blocks): add AI Image Customizer block using Googles Nano Banana (#10845)
Add a new AutoGPT Platform block that uses the google/gemini-2.5-flash-image
model via the Replicate API.

Features:
- Text prompt input for image generation
- Optional list of image URLs as input
- Configurable output format (jpg/png, defaults to png)
- Single model option: google/gemini-2.5-flash-image
- Returns image_url output for generated images

Fixes #10815

🤖 Generated with [Claude Code](https://claude.ai/code)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] use the AI image customizer block and upload 2 images to see if it
uses them in the image generation/edits


<img width="1536" height="672" alt="tmprhzqasxz"
src="https://github.com/user-attachments/assets/39d7adbd-2847-4988-aeab-1c5453290174"
/>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-05 06:31:12 +01:00
Reinier van der Leer
ce24975a9d fix(backend): Unbreak get_webhook query (#10850)
- Resolves #10849

### Changes 🏗️

- Use `AGENT_PRESET_INCLUDE` in `INTEGRATION_WEBHOOK_INCLUDE` so the
`AgentPreset.from_db(..)` doesn't break

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Webhook ingress works
2025-09-04 19:00:05 +00:00
Bently
75c90e49ce feat(blocks): add AI Image Customizer block using Googles Nano Banana (#10845)
Add a new AutoGPT Platform block that uses the google/gemini-2.5-flash-image
model via the Replicate API.

Features:
- Text prompt input for image generation
- Optional list of image URLs as input
- Configurable output format (jpg/png, defaults to png)
- Single model option: google/gemini-2.5-flash-image
- Returns image_url output for generated images

Fixes #10815

🤖 Generated with [Claude Code](https://claude.ai/code)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] use the AI image customizer block and upload 2 images to see if it
uses them in the image generation/edits


<img width="1536" height="672" alt="tmprhzqasxz"
src="https://github.com/user-attachments/assets/39d7adbd-2847-4988-aeab-1c5453290174"
/>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-04 18:51:32 +00:00
Zamil Majdy
2e38f132e7 feat(backend/store): implement sub-agent approval support (#10842)
## Summary
Implement comprehensive sub-agent approval flow following the business
requirements from the flow diagram.

<img width="1956" height="1448" alt="image"
src="https://github.com/user-attachments/assets/8de35e5b-9d3e-4dc2-bff0-47b49dbebc83"
/>


### Key Features
- **Auto-approve sub-agents** when main agent is approved
- **Handle all scenarios**: new listings, existing versions, missing
versions
- **Transaction safety** with atomic operations via database
transactions
- **Parallel processing** using asyncio.gather for performance
optimization
- **Hidden from store** with isAvailable=false for all sub-agents

### Implementation Details
- **Replaced** `_get_missing_sub_store_listing` with comprehensive
`_handle_sub_agent_approvals`
- **Added** `_approve_sub_agent` function with early returns for clean,
readable code flow
- **Used** `transaction()` context manager to ensure data consistency
across operations
- **Process sub-agents in parallel** while maintaining transaction
integrity

### Business Logic Flow
1. **Check if sub-agent is already listed** in store
2. **If not listed**: create new store listing with `isAvailable=false`
3. **If listed but not approved**: approve the correct version  
4. **If correct version not listed**: create store listing version and
approve it
5. **If already approved**: no action needed (early return)

All sub-agents remain **hidden from public store** while being
internally approved for system use.
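
The flow above as early-return logic (store interface and model names are illustrative):

```python
async def approve_sub_agent(store, sub_agent) -> None:
    listing = await store.get_listing(sub_agent.graph_id)
    if listing is None:
        # 2. Not listed: create a hidden, approved listing
        await store.create_listing(sub_agent, is_available=False, approved=True)
        return
    version = listing.get_version(sub_agent.version)
    if version is None:
        # 4. Listed, but this version is missing: add and approve it
        await store.create_listing_version(listing, sub_agent, approved=True)
        return
    if not version.approved:
        # 3. Listed but not approved: approve the correct version
        await store.approve_version(version)
        return
    # 5. Already approved: no action needed
```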

## Files Changed
- `backend/server/v2/store/db.py` - Core implementation of sub-agent
approval logic

## Test Plan
- [ ] Verify main agent approval triggers sub-agent approvals
- [ ] Test all sub-agent scenarios: new, existing unapproved, existing
approved
- [ ] Confirm sub-agents remain hidden (`isAvailable=false`)
- [ ] Validate transaction rollback on failures
- [ ] Check parallel processing works correctly

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-09-04 17:59:37 +00:00
Nicholas Tindle
5be6987d58 ci: Add model argument to Claude workflow (#10843) 2025-09-04 10:22:53 -05:00
Nicholas Tindle
b59592be9b feat(ci): Claude workflow for Dependabot PR analysis (#10841)
Created workflow to analyze Dependabot PRs with Claude, including
detailed dependency analysis and changelog review.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Adds a workflow for Claude to analyze Dependabot PRs
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
N/A

#### For configuration changes:
N/A

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-09-04 14:48:07 +00:00
dependabot[bot]
7ea17df9ed chore(frontend/deps): Bump recharts from 2.15.3 to 3.1.2 in /autogpt_platform/frontend (#10807)
Bumps [recharts](https://github.com/recharts/recharts) from 2.15.3 to
3.1.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/recharts/recharts/releases">recharts's
releases</a>.</em></p>
<blockquote>
<h2>v3.1.2</h2>
<h2>What's Changed</h2>
<h3>Fix</h3>
<ul>
<li><code>Label/Polar Charts</code>: <code>Label</code> viewbox should
now be present in polar charts and address <a
href="https://redirect.github.com/recharts/recharts/issues/6030">recharts/recharts#6030</a>
by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6180">recharts/recharts#6180</a></li>
<li>Add LRU cache for string size measurements (<a
href="https://redirect.github.com/recharts/recharts/issues/3955">#3955</a>)
by <a
href="https://github.com/shreedharbhat98"><code>@​shreedharbhat98</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6176">recharts/recharts#6176</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/recharts/recharts/compare/v3.1.1...v3.1.2">https://github.com/recharts/recharts/compare/v3.1.1...v3.1.2</a></p>
<h2>v3.1.1</h2>
<h2>What's Changed</h2>
<h3>Fix</h3>
<ul>
<li><code>General</code>: Don't apply duplicate IDs in the DOM by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6111">recharts/recharts#6111</a></li>
<li><code>Stacked Area/Bar</code>: give all graphical items their own
unique identifier and use that to select stacked data. Fixes issue where
stacked charts could not be created from the graphical item
<code>data</code> prop <a
href="https://redirect.github.com/recharts/recharts/issues/6073">recharts/recharts#6073</a>
by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a></li>
<li><code>Stacked Area/Bar</code>: exclude stacked axis domain when not
relevant for axis by <a
href="https://github.com/rinkstiekema"><code>@​rinkstiekema</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6162">recharts/recharts#6162</a>
fixes issue where numeric stacked charts would not render correctly</li>
<li><code>Area Chart</code>: ranged area chart - show active dot on both
points instead of just the top one by <a
href="https://github.com/sroy8091"><code>@​sroy8091</code></a> in <a
href="https://redirect.github.com/recharts/recharts/pull/6116">recharts/recharts#6116</a>
fixes <a
href="https://redirect.github.com/recharts/recharts/issues/6080">#6080</a></li>
<li><code>Polar Charts/Label</code>: fix <code>Label</code> in polar
charts by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6126">recharts/recharts#6126</a></li>
<li><code>Scatter/ErrorBar</code>: choose implicit Scatter ErrorBar
direction based on chart layout (to be the same as 2.x) by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6159">recharts/recharts#6159</a></li>
<li><code>X/YAxis/Reference Components</code>: allow axis values and
reference items to render when there is no data but there is a
domain/explicit ticks set by <a
href="https://github.com/ethphan"><code>@​ethphan</code></a> in <a
href="https://redirect.github.com/recharts/recharts/pull/6161">recharts/recharts#6161</a></li>
<li><code>X/YAxis</code>: pass axis padding info to custom tick
components by <a
href="https://github.com/shreedharbhat98"><code>@​shreedharbhat98</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6163">recharts/recharts#6163</a></li>
</ul>
<h3>Chore / Testing</h3>
<ul>
<li>good progress on our journey to enable
<code>strictNullChecks</code></li>
<li>addition of playwright visual regression tests to CI</li>
<li>split <code>Animate</code> into <code>JavascriptAnimate</code> and
<code>CSSTransitionAnimate</code> by <a
href="https://github.com/PavelVanecek"><code>@​PavelVanecek</code></a>
in <a
href="https://redirect.github.com/recharts/recharts/pull/6175">recharts/recharts#6175</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/sroy8091"><code>@​sroy8091</code></a>
made their first contribution in <a
href="https://redirect.github.com/recharts/recharts/pull/6116">recharts/recharts#6116</a></li>
<li><a href="https://github.com/ethphan"><code>@​ethphan</code></a> made
their first contribution in <a
href="https://redirect.github.com/recharts/recharts/pull/6161">recharts/recharts#6161</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/recharts/recharts/compare/v3.1.0...v3.1.1">https://github.com/recharts/recharts/compare/v3.1.0...v3.1.1</a></p>
<h2>v3.1.0</h2>
<h2>What's Changed</h2>
<p>Bug fixes (old and new) and a few new hooks post 3.0 launch!</p>
<h3>Feat</h3>
<p>More hooks!</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="079545c631"><code>079545c</code></a>
3.1.2</li>
<li><a
href="c6c26b5eae"><code>c6c26b5</code></a>
Label viewbox in polar charts (<a
href="https://redirect.github.com/recharts/recharts/issues/6180">#6180</a>)</li>
<li><a
href="6c4acb7299"><code>6c4acb7</code></a>
Add CompositeAnimationManager to allow testing for multiple concurrent
animat...</li>
<li><a
href="e56913daed"><code>e56913d</code></a>
feat: implement LRU cache for string size measurements (<a
href="https://redirect.github.com/recharts/recharts/issues/3955">#3955</a>)
(<a
href="https://redirect.github.com/recharts/recharts/issues/6176">#6176</a>)</li>
<li><a
href="cf648902cd"><code>cf64890</code></a>
3.1.1</li>
<li><a
href="c4ec635cf5"><code>c4ec635</code></a>
Split Animate into JavascriptAnimate and CSSTransitionAnimate (<a
href="https://redirect.github.com/recharts/recharts/issues/6175">#6175</a>)</li>
<li><a
href="e243a12d02"><code>e243a12</code></a>
Add animation tests for Rectangle (<a
href="https://redirect.github.com/recharts/recharts/issues/6174">#6174</a>)</li>
<li><a
href="a1e07aaa1c"><code>a1e07aa</code></a>
docs: convert class components to functional components (<a
href="https://redirect.github.com/recharts/recharts/issues/6172">#6172</a>)</li>
<li><a
href="1f8d4cb84d"><code>1f8d4cb</code></a>
fix: Exclude stacked axis domain when not relevant for axis (<a
href="https://redirect.github.com/recharts/recharts/issues/6162">#6162</a>)</li>
<li><a
href="fe28f2e63a"><code>fe28f2e</code></a>
chore(deps-dev): bump the storybook group with 8 updates (<a
href="https://redirect.github.com/recharts/recharts/issues/6166">#6166</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/recharts/recharts/compare/v2.15.3...v3.1.2">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=recharts&package-manager=npm_and_yarn&previous-version=2.15.3&new-version=3.1.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 14:22:30 +00:00
Reinier van der Leer
22e692bdda fix(frontend): Update in-view run with real-time updates (#10839)
- Resolves #10838

### Changes 🏗️

- Update `selectedRun` with received graph execution update if
applicable

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Agent outputs appear in real-time
2025-09-04 13:43:44 +00:00
Nicholas Tindle
901e9eba5d feat(frontend): Enhanced output rendering system for agent runs (#10819)
## Summary
Introduces a modular, extensible output renderer system supporting
multiple content types (text, code, images, videos, JSON, markdown) for
agent run outputs. The system includes smart clipboard operations,
concatenated downloads, and rich markdown rendering with LaTeX math and
video embedding support.

## Changes 🏗️

### Core Output Rendering System
- **Added extensible renderer architecture**
(`output-renderers/types.ts`)
  - Plugin-based system with priority ordering
  - Registry pattern for automatic renderer selection
  - Support for custom metadata and MIME types
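
The actual implementation is TypeScript, but the registry-with-priority pattern looks roughly like this Python sketch (names are illustrative):

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Renderer:
    name: str
    priority: int
    can_render: Callable[[str, str], bool]  # (content, mime_type) -> bool
    render: Callable[[str], str]


@dataclass
class RendererRegistry:
    renderers: list[Renderer] = field(default_factory=list)

    def register(self, renderer: Renderer) -> None:
        self.renderers.append(renderer)
        self.renderers.sort(key=lambda r: r.priority, reverse=True)

    def pick(self, content: str, mime_type: str, fallback: Renderer) -> Renderer:
        # Highest-priority renderer that accepts the content wins
        return next(
            (r for r in self.renderers if r.can_render(content, mime_type)),
            fallback,
        )
```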

### Output Renderers
- **TextRenderer**: Plain text with proper formatting and line breaks
- **CodeRenderer**: Syntax-highlighted code blocks with language
detection
- **JSONRenderer**: Collapsible, formatted JSON with syntax highlighting
- **ImageRenderer**: Image display with support for URLs, data URIs, and
file uploads
- **VideoRenderer**: Embedded video player for YouTube, Vimeo, and
direct video files
- **MarkdownRenderer**: Rich markdown with:
  - GitHub Flavored Markdown (tables, task lists, strikethrough)
- LaTeX math rendering via KaTeX (inline `$...$` and display `$$...$$`)
  - Syntax highlighting via highlight.js
  - Video embedding (YouTube/Vimeo URLs auto-convert to embeds)
  - Clickable heading anchors
  - Dark mode support

### User Interface Components
- **OutputItem**: Individual output display with renderer selection
- **OutputActions**: Hover-based action buttons for:
  - Copy to clipboard with smart MIME type detection
- Download with intelligent concatenation (text files merge, binaries
separate)
  - Share functionality (placeholder for future implementation)
- **AgentRunOutputView**: Main output view component with feature flag
integration

### Clipboard & Download Features
- Smart clipboard operations using native ClipboardItem API
- MIME type detection and browser capability checking
- Fallback strategies for unsupported content types
- Concatenated downloads for text-based outputs
- Individual downloads for binary content

### Feature Flag Integration
- Added `ENABLE_ENHANCED_OUTPUT_HANDLING` flag to LaunchDarkly
- Backwards compatible with existing output display
- Graceful fallback for disabled feature flag

### Styling & UX
- Max width constraints (950px card, 660px content)
- Hover-based action buttons for clean interface
- Dark mode support across all renderers
- Responsive design for various content types
- Loading states and error handling

## Test Plan 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

### Test Scenarios:
- [x] **Basic Output Rendering**
  - [x] Execute agent with text output - verify proper formatting
  - [x] Execute agent with JSON output - verify collapsible tree view
  - [x] Execute agent with code output - verify syntax highlighting
  
- [x] **Rich Content**
  - [x] Test markdown rendering with headers, lists, tables
  - [x] Test LaTeX math expressions (inline and display)
  - [x] Test code blocks within markdown
  - [x] Test task lists and strikethrough
  
- [x] **Media Handling**
  - [x] Upload and display PNG/JPEG images
  - [x] Test video URL embedding (YouTube/Vimeo)
  - [x] Test direct video file playback
  
- [x] **Clipboard Operations**
  - [x] Copy plain text output
  - [x] Copy formatted code
  - [x] Copy JSON data
  - [x] Copy markdown content
  - [x] Verify fallback for unsupported MIME types
  
- [x] **Download Functionality**
  - [x] Download single text output
  - [x] Download multiple text outputs (verify concatenation)
  - [x] Download mixed content (verify separate files)
  - [x] Download images and binary content
  
- [x] **Feature Flag**
  - [x] Enable flag - verify enhanced rendering
  - [x] Disable flag - verify fallback to original view
  - [x] Check backwards compatibility
 
  
- [x] **Edge Cases**
  - [x] Large JSON objects (performance)
  - [x] Very long text outputs
  - [x] Mixed content types in single run
  - [x] Malformed markdown
  - [x] Invalid video URLs

## Dependencies Added
- `react-markdown` (9.0.3) - Already present
- `remark-gfm` (4.0.1) - GitHub Flavored Markdown
- `remark-math` (6.0.0) - LaTeX math support
- `rehype-katex` (7.0.1) - Math rendering
- `katex` (0.16.22) - Math typesetting
- `rehype-highlight` (7.0.2) - Syntax highlighting
- `highlight.js` (11.11.1) - Highlighting library
- `rehype-slug` (6.0.0) - Heading anchors
- `rehype-autolink-headings` (7.1.0) - Clickable headings

## Notes
- Mermaid diagram support was attempted but removed due to compatibility
issues
- Share functionality is stubbed out for future implementation
- PNG file upload rendering issue has logging in place for debugging
- All components follow existing UI patterns and use Tailwind CSS

## Screenshots
<img width="1656" height="1250" alt="image"
src="https://github.com/user-attachments/assets/af7542fe-db89-4521-aaf5-19e33d48a409"
/>
## Related Issues
- Implements SECRT-1209

---------

Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-09-04 13:37:26 +00:00
dependabot[bot]
bbd6709bd6 chore(backend/deps): Bump cryptography from 43.0.3 to 45.0.7 in /autogpt_platform/backend (#10810)
Bumps [cryptography](https://github.com/pyca/cryptography) from 43.0.3
to 45.0.7.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's
changelog</a>.</em></p>
<blockquote>
<p>45.0.7 - 2025-09-01</p>
<ul>
<li>Added a function to support an upcoming <code>pyOpenSSL</code> release.</li>
</ul>
<p>45.0.6 - 2025-08-05</p>
<ul>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL
3.5.2.</li>
</ul>
<p>45.0.5 - 2025-07-02</p>
<ul>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL
3.5.1.</li>
</ul>
<p>45.0.4 - 2025-06-09</p>
<ul>
<li>Fixed decrypting PKCS#8 files encrypted with SHA1-RC4. (This is not
considered secure, and is supported only for backwards
compatibility.)</li>
</ul>
<p>45.0.3 - 2025-05-25</p>
<ul>
<li>Fixed decrypting PKCS#8 files encrypted with long salts (this impacts
keys encrypted by Bouncy Castle).</li>
<li>Fixed decrypting PKCS#8 files encrypted with DES-CBC-MD5. While wildly
insecure, this remains prevalent.</li>
</ul>
<p>45.0.2 - 2025-05-17</p>
<ul>
<li>Fixed using <code>mypy</code> with <code>cryptography</code> on
older versions of Python.</li>
</ul>
<p>45.0.1 - 2025-05-17</p>
<ul>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL
3.5.0.</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f52a3e1496"><code>f52a3e1</code></a>
prep for a 45.0.7 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/13378">#13378</a>)</li>
<li><a
href="66198c23c9"><code>66198c2</code></a>
Bump for release (<a
href="https://redirect.github.com/pyca/cryptography/issues/13249">#13249</a>)</li>
<li><a
href="3e53a233b6"><code>3e53a23</code></a>
Bump for 45.0.5 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/13135">#13135</a>)</li>
<li><a
href="678c0c59f7"><code>678c0c5</code></a>
prepare for 45.0.4 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/13058">#13058</a>)</li>
<li><a
href="5038495987"><code>5038495</code></a>
backports for 45.0.3 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/12979">#12979</a>)</li>
<li><a
href="f81c07535d"><code>f81c075</code></a>
Backport mypy fixes for release (<a
href="https://redirect.github.com/pyca/cryptography/issues/12930">#12930</a>)</li>
<li><a
href="8ea28e0bc7"><code>8ea28e0</code></a>
bump for 45.0.1 (<a
href="https://redirect.github.com/pyca/cryptography/issues/12922">#12922</a>)</li>
<li><a
href="67840977c9"><code>6784097</code></a>
bump for 45 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/12886">#12886</a>)</li>
<li><a
href="2d9c1c9cbe"><code>2d9c1c9</code></a>
bump MSRV to 1.74 (<a
href="https://redirect.github.com/pyca/cryptography/issues/12919">#12919</a>)</li>
<li><a
href="6c18874cc2"><code>6c18874</code></a>
Bump BoringSSL, OpenSSL, AWS-LC in CI (<a
href="https://redirect.github.com/pyca/cryptography/issues/12918">#12918</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pyca/cryptography/compare/43.0.3...45.0.7">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=43.0.3&new-version=45.0.7)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 13:35:07 +00:00
dependabot[bot]
3ec1721d6d chore(libs/deps-dev): Bump ruff from 0.12.9 to 0.12.11 in /autogpt_platform/autogpt_libs in the development-dependencies group (#10804)
Bumps the development-dependencies group in
/autogpt_platform/autogpt_libs with 1 update:
[ruff](https://github.com/astral-sh/ruff).

Updates `ruff` from 0.12.9 to 0.12.11
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.12.11</h2>
<h2>Release Notes</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Extend <code>AIR311</code> and
<code>AIR312</code> rules (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20082">#20082</a>)</li>
<li>[<code>airflow</code>] Replace wrong path
<code>airflow.io.storage</code> with <code>airflow.io.store</code>
(<code>AIR311</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20081">#20081</a>)</li>
<li>[<code>flake8-async</code>] Implement
<code>blocking-http-call-httpx-in-async-function</code>
(<code>ASYNC212</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20091">#20091</a>)</li>
<li>[<code>flake8-logging-format</code>] Add auto-fix for f-string
logging calls (<code>G004</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19303">#19303</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Add autofix for
<code>PTH211</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20009">#20009</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Make <code>PTH100</code> fix
unsafe because it can change behavior (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20100">#20100</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>pyflakes</code>, <code>pylint</code>] Fix false positives
caused by <code>__class__</code> cell handling (<code>F841</code>,
<code>PLE0117</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20048">#20048</a>)</li>
<li>[<code>pyflakes</code>] Fix <code>allowed-unused-imports</code>
matching for top-level modules (<code>F401</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20115">#20115</a>)</li>
<li>[<code>ruff</code>] Fix false positive for t-strings in
<code>default-factory-kwarg</code> (<code>RUF026</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20032">#20032</a>)</li>
<li>[<code>ruff</code>] Preserve relative whitespace in multi-line
expressions (<code>RUF033</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19647">#19647</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>ruff</code>] Handle empty t-strings in
<code>unnecessary-empty-iterable-within-deque-call</code>
(<code>RUF037</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20045">#20045</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix incorrect <code>D413</code> links in docstrings convention FAQ
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/20089">#20089</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Update links to the table showing
the correspondence between <code>os</code> and <code>pathlib</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20103">#20103</a>)</li>
</ul>
<h2>Contributors</h2>
<ul>
<li><a
href="https://github.com/AlexWaygood"><code>@​AlexWaygood</code></a></li>
<li><a href="https://github.com/Avasam"><code>@​Avasam</code></a></li>
<li><a
href="https://github.com/BurntSushi"><code>@​BurntSushi</code></a></li>
<li><a href="https://github.com/Gankra"><code>@​Gankra</code></a></li>
<li><a
href="https://github.com/Glyphack"><code>@​Glyphack</code></a></li>
<li><a
href="https://github.com/JelleZijlstra"><code>@​JelleZijlstra</code></a></li>
<li><a href="https://github.com/Lee-W"><code>@​Lee-W</code></a></li>
<li><a
href="https://github.com/MatthewMckee4"><code>@​MatthewMckee4</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
<li><a
href="https://github.com/PrettyWood"><code>@​PrettyWood</code></a></li>
<li><a href="https://github.com/Renkai"><code>@​Renkai</code></a></li>
<li><a href="https://github.com/TaKO8Ki"><code>@​TaKO8Ki</code></a></li>
<li><a
href="https://github.com/amyreese"><code>@​amyreese</code></a></li>
<li><a href="https://github.com/carljm"><code>@​carljm</code></a></li>
<li><a
href="https://github.com/chirizxc"><code>@​chirizxc</code></a></li>
<li><a
href="https://github.com/danparizher"><code>@​danparizher</code></a></li>
<li><a
href="https://github.com/dhruvmanila"><code>@​dhruvmanila</code></a></li>
<li><a href="https://github.com/dylwil3"><code>@​dylwil3</code></a></li>
<li><a
href="https://github.com/github-actions"><code>@​github-actions</code></a></li>
<li><a
href="https://github.com/hamirmahal"><code>@​hamirmahal</code></a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.12.11</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Extend <code>AIR311</code> and
<code>AIR312</code> rules (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20082">#20082</a>)</li>
<li>[<code>airflow</code>] Replace wrong path
<code>airflow.io.storage</code> with <code>airflow.io.store</code>
(<code>AIR311</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20081">#20081</a>)</li>
<li>[<code>flake8-async</code>] Implement
<code>blocking-http-call-httpx-in-async-function</code>
(<code>ASYNC212</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20091">#20091</a>)</li>
<li>[<code>flake8-logging-format</code>] Add auto-fix for f-string
logging calls (<code>G004</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19303">#19303</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Add autofix for
<code>PTH211</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20009">#20009</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Make <code>PTH100</code> fix
unsafe because it can change behavior (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20100">#20100</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>pyflakes</code>, <code>pylint</code>] Fix false positives
caused by <code>__class__</code> cell handling (<code>F841</code>,
<code>PLE0117</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20048">#20048</a>)</li>
<li>[<code>pyflakes</code>] Fix <code>allowed-unused-imports</code>
matching for top-level modules (<code>F401</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20115">#20115</a>)</li>
<li>[<code>ruff</code>] Fix false positive for t-strings in
<code>default-factory-kwarg</code> (<code>RUF026</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20032">#20032</a>)</li>
<li>[<code>ruff</code>] Preserve relative whitespace in multi-line
expressions (<code>RUF033</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19647">#19647</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>ruff</code>] Handle empty t-strings in
<code>unnecessary-empty-iterable-within-deque-call</code>
(<code>RUF037</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20045">#20045</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix incorrect <code>D413</code> links in docstrings convention FAQ
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/20089">#20089</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Update links to the table showing
the correspondence between <code>os</code> and <code>pathlib</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20103">#20103</a>)</li>
</ul>
<h2>0.12.10</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>flake8-simplify</code>] Implement fix for
<code>maxsplit</code> without separator (<code>SIM905</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19851">#19851</a>)</li>
<li>[<code>flake8-use-pathlib</code>] Add fixes for <code>PTH102</code>
and <code>PTH103</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19514">#19514</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>isort</code>] Handle multiple continuation lines after module
docstring (<code>I002</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19818">#19818</a>)</li>
<li>[<code>pyupgrade</code>] Avoid reporting <code>__future__</code>
features as unnecessary when they are used (<code>UP010</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19769">#19769</a>)</li>
<li>[<code>pyupgrade</code>] Handle nested <code>Optional</code>s
(<code>UP045</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19770">#19770</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>pycodestyle</code>] Make <code>E731</code> fix unsafe instead
of display-only for class assignments (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19700">#19700</a>)</li>
<li>[<code>pyflakes</code>] Add secondary annotation showing previous
definition (<code>F811</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19900">#19900</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix description of global config file discovery strategy (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19188">#19188</a>)</li>
<li>Update outdated links to <a
href="https://typing.python.org/en/latest/source/stubs.html">https://typing.python.org/en/latest/source/stubs.html</a>
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/19992">#19992</a>)</li>
<li>[<code>flake8-annotations</code>] Remove unused import in example
(<code>ANN401</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20000">#20000</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c2bc15bc15"><code>c2bc15b</code></a>
Bump 0.12.11 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20136">#20136</a>)</li>
<li><a
href="e586f6dcc4"><code>e586f6d</code></a>
[ty] Benchmarks for problematic implicit instance attributes cases (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20133">#20133</a>)</li>
<li><a
href="76a6b7e3e2"><code>76a6b7e</code></a>
[<code>pyflakes</code>] Fix <code>allowed-unused-imports</code> matching
for top-level modules (`F4...</li>
<li><a
href="1ce65714c0"><code>1ce6571</code></a>
Move GitLab output rendering to <code>ruff_db</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20117">#20117</a>)</li>
<li><a
href="d9aaacd01f"><code>d9aaacd</code></a>
[ty] Evaluate reachability of non-definitely-bound to Ambiguous (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19579">#19579</a>)</li>
<li><a
href="18eaa659c1"><code>18eaa65</code></a>
[ty] Introduce a representation for the top/bottom materialization of an
inva...</li>
<li><a
href="af259faed5"><code>af259fa</code></a>
[<code>flake8-async</code>] Implement
<code>blocking-http-call-httpx</code> (<code>ASYNC212</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20091">#20091</a>)</li>
<li><a
href="d75ef3823c"><code>d75ef38</code></a>
[ty] print diagnostics with fully qualified name to disambiguate some
cases (...</li>
<li><a
href="89ca493fd9"><code>89ca493</code></a>
[<code>ruff</code>] Preserve relative whitespace in multi-line
expressions (<code>RUF033</code>) (#...</li>
<li><a
href="4b80f5fa4f"><code>4b80f5f</code></a>
[ty] Optimize TDD atom ordering (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20098">#20098</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.12.9...0.12.11">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ruff&package-manager=pip&previous-version=0.12.9&new-version=0.12.11)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 13:26:59 +00:00
Swifty
8c8a2ab0c2 feat(blocks): Add Bannerbear API block for text overlay on images (#10768)
## Summary
- Implemented a new Bannerbear API block that enables adding text
overlays to images using template designs
- Block supports customizable text styling (color, font, size, weight,
alignment)
- Always uses synchronous API mode for immediate image generation
results


[agent_ead942d9-58a2-4be6-bdb3-99010c489466.json](https://github.com/user-attachments/files/22027352/agent_ead942d9-58a2-4be6-bdb3-99010c489466.json)

<img width="140" height="572" alt="Screenshot 2025-08-28 at 16 28 35"
src="https://github.com/user-attachments/assets/096b532b-31dc-4ca6-bd68-c00b7594426c"
/>


## Features
- **Text overlay capabilities**: Add multiple text layers to images
using Bannerbear templates
- **Customizable styling**: Support for color, font family, font size,
font weight, and text alignment
- **Image support**: Optional ability to add images to templates
- **Smart field handling**: Only sends non-empty optional parameters to
the API
- **Webhook & metadata**: Advanced options for webhook notifications and
custom metadata

## Implementation Details
- Created provider configuration with API key authentication
- Implemented `BannerbearTextOverlayBlock` with proper input/output
schemas
- Extracted API calls to private method `_make_api_request()` for test
mocking support
- Follows SDK guide patterns and integrates with AutoGPT platform

## Use Case
This block will be used in the Ad generator agent for creating dynamic
marketing materials and social media graphics with text overlays.

## Test plan
- [x] Block imports successfully
- [x] Block instantiates with unique ID
- [x] Code passes linting and formatting checks
- [x] Manual testing with actual Bannerbear API key
- [x] Integration testing with Ad generator agent
2025-09-04 13:15:00 +00:00
Krzysztof Czerwinski
4041e1f39c feat(backend/infra): Cleanup platform and db docker compose files (#10830)
Supabase `db/docker/docker-compose.yml` overrides env vars set in the
`autogpt_platform/.env` file. This PR fixes that and further simplifies the
compose files.

### Changes 🏗️

`autogpt_platform/docker-compose.platform.yml`:
- Move hardcoded `DATABASE_URL` and `DIRECT_URL` to `x-backend-env` on
top as it repeats for most services.
- Remove `RABBITMQ_DEFAULT_USER` and `RABBITMQ_DEFAULT_PASS` from
`rabbitmq` service and use env files instead

`autogpt_platform/db/docker/docker-compose.yml`:
- Remove hardcoded env vars from `x-supabase-env` - these are already
defined in `.env`
- Remove env vars from services that are already defined in `.env` files

*Changes to db compose file only affect self-hosted Supabase*
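
The extension-field pattern described above looks roughly like this sketch (a
hypothetical compose file; service names and values are illustrative, not the
actual configuration):

```yaml
x-backend-env: &backend-env
  # Shared by most backend services, so defined once at the top
  DATABASE_URL: postgresql://user:pass@db:5432/platform
  DIRECT_URL: postgresql://user:pass@db:5432/platform

services:
  api:
    image: backend-api
    environment: *backend-env
  executor:
    image: backend-executor
    environment: *backend-env
  rabbitmq:
    image: rabbitmq:3-management
    # Credentials come from the env file instead of hardcoded
    # RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS entries
    env_file:
      - .env
```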

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Platform, db works when self-hosting
2025-09-04 12:01:49 +00:00
Nicholas Tindle
7cbdc1ad1a security(frontend): Upgrade next from 15.4.6 to 15.4.7 (#10820)
![snyk-top-banner](https://res.cloudinary.com/snyk/image/upload/r-d/scm-platform/snyk-pull-requests/pr-banner-default.svg)

### Snyk has created this PR to fix 1 vulnerabilities in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn
[zero-installs](https://yarnpkg.com/features/zero-installs) feature
introduced in Yarn v2, note that this PR does not update the
`.yarn/cache/` directory, meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project;
you will need to run `yarn` to update the contents of the
`.yarn/cache/` directory.
If you are not using zero-installs, you can ignore this, as your flow
should be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

| Severity | Issue |
|:-------------------------:|:-------------------------|
| ![high severity](https://res.cloudinary.com/snyk/image/upload/w_20,h_20/v1561977819/icon/h.png 'high severity') | Server-side Request Forgery (SSRF)<br/>[SNYK-JS-NEXT-12299318](https://snyk.io/vuln/SNYK-JS-NEXT-12299318) |




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Server-side Request Forgery
(SSRF)](https://learn.snyk.io/lesson/ssrf-server-side-request-forgery/?loc&#x3D;fix-pr)


---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-09-04 09:18:12 +00:00
Reinier van der Leer
2a19aa0ed3 fix(frontend/library): Show total runs count above runs list (#10832)
- Resolves #10831

### Changes 🏗️

- Show number of total runs instead of currently loaded runs
- Show loading spinner instead of zero while loading

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Counter shows number of total runs, even if it exceeds number of
currently loaded items
2025-09-04 06:43:19 +00:00
Nicholas Tindle
6d39dfe382 feat(frontend): Add admin button to user profile dropdown (#10774)
<!-- Clearly explain the need for these changes: -->

### Need 💡

This PR addresses Linear issue
[OPEN-2232](https://linear.app/autogpt/issue/OPEN-2232/add-admin-pages-in-dropdown)
by adding an "Admin" button to the user account dropdown menu. This
button is only visible to users with an "admin" role and provides direct
navigation to the admin marketplace management page, making existing
admin functionalities accessible from the new UI.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **Added Admin Icon**: Integrated `IconSliders` into the `IconType`
enum and `getAccountMenuOptionIcon` function.
- **Dynamic Menu Generation**: Introduced
`getAccountMenuItems(userRole?: string)` to dynamically construct the
account menu. This function conditionally adds an "Admin" menu item
(linking to `/admin/marketplace`) if the `userRole` is "admin".
- **Navbar Integration**: Updated `NavbarView.tsx` to utilize the
`useSupabase` hook to retrieve the current user's role and then render
the account menu using the new dynamic `getAccountMenuItems` function
for both desktop and mobile views.
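
A minimal sketch of the conditional menu construction (the item shape, base
items, and enum values are illustrative; only `getAccountMenuItems`, the
"admin" role check, and the `/admin/marketplace` link come from this PR):

```ts
// Hypothetical types; the real IconType enum and item shape live in the app.
enum IconType {
  EditProfile = "edit-profile",
  Sliders = "sliders", // backs the new Admin entry (IconSliders)
  LogOut = "log-out",
}

type AccountMenuItem = { icon: IconType; text: string; href: string };

export function getAccountMenuItems(userRole?: string): AccountMenuItem[] {
  const items: AccountMenuItem[] = [
    { icon: IconType.EditProfile, text: "Edit profile", href: "/profile" },
  ];
  // Only admins get the Admin entry, linking to the marketplace admin page
  if (userRole === "admin") {
    items.push({
      icon: IconType.Sliders,
      text: "Admin",
      href: "/admin/marketplace",
    });
  }
  items.push({ icon: IconType.LogOut, text: "Log out", href: "/logout" });
  return items;
}
```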

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Log in as an admin user and verify the "Admin" button appears in
the account dropdown.
- [x] Click the "Admin" button and confirm navigation to
`/admin/marketplace`.
- [x] Log in as a non-admin user and verify the "Admin" button does not
appear in the account dropdown.
- [x] Verify all other existing menu items (e.g., "Edit profile", "Log
out") function correctly for both admin and non-admin users.
  - [x] Test the above scenarios on both desktop and mobile views.

---
Linear Issue:
[OPEN-2232](https://linear.app/autogpt/issue/OPEN-2232/add-admin-pages-in-dropdown)

<a
href="https://cursor.com/background-agent?bcId=bc-2dceda38-31b4-4e8e-8277-fb87c8858abf">
  <picture>
<source media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-cursor-dark.svg">
<source media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-cursor-light.svg">
<img alt="Open in Cursor" src="https://cursor.com/open-in-cursor.svg">
  </picture>
</a>
<a
href="https://cursor.com/agents?id=bc-2dceda38-31b4-4e8e-8277-fb87c8858abf">
  <picture>
<source media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-web-dark.svg">
<source media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-web-light.svg">
    <img alt="Open in Web" src="https://cursor.com/open-in-web.svg">
  </picture>
</a>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-09-03 09:26:24 +00:00
Ubbe
57ecc10535 fix(frontend): typed inputs in new run modal (#10799)
## Changes 🏗️

###  Make the **Credentials Inputs** show up on the new Run Agent Modal
<img width="450" height="784" alt="Screenshot 2025-09-02 at 00 54 19"
src="https://github.com/user-attachments/assets/26ad8242-a1bc-45f6-9149-a3d207683679"
/>


### Fixes on other modals...

<img width="450" height="579" alt="Screenshot 2025-09-02 at 00 04 40"
src="https://github.com/user-attachments/assets/fa2f9ed9-207b-4599-9e60-3e37c4be6ea9"
/>
 
<img width="450" height="579" alt="Screenshot 2025-09-02 at 00 04 44"
src="https://github.com/user-attachments/assets/92e062a7-f161-423e-b6c9-f998fbdef102"
/>

<img width="450" height="634" alt="Screenshot 2025-09-02 at 00 47 06"
src="https://github.com/user-attachments/assets/93009809-0df0-44c5-b2d2-a9aa0f501312"
/>


- always use buttons/inputs in small size (_due to the tight space_)
- use components from the design system
- always use pretty scrollbars
- Configure
[`tailwind-scrollbar`](https://github.com/adoxography/tailwind-scrollbar)
for pretty scrollbars (see the config sketch below)
- prevent dialog content from overflowing when scrollable
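
Wiring up the scrollbar plugin is roughly the following (a sketch; the actual
config, content globs, and options may differ):

```ts
// tailwind.config.ts (sketch)
import type { Config } from "tailwindcss";
import scrollbar from "tailwind-scrollbar";

const config: Config = {
  content: ["./src/**/*.{ts,tsx}"],
  plugins: [
    // Enables scrollbar-* utilities such as
    // `scrollbar-thin scrollbar-thumb-zinc-300 scrollbar-track-transparent`
    scrollbar({ nocompatible: true }),
  ],
};

export default config;
```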

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with the new agent run modal flag enabled
  - [x] Check the above 

#### For configuration changes:

None
2025-09-03 06:58:07 +00:00
Reinier van der Leer
4928ce3f90 feat(library): Create presets from runs (#10823)
- Resolves #9307

### Changes 🏗️

- feat(library): Create presets from runs
  - Prevent creating preset from run with unknown credentials
- Fix running presets with credentials
  - Add `credential_inputs` parameter to `execute_preset` endpoint

API:
- Return `GraphExecutionMeta` from `*/execute` endpoints

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to `/library/agents/[id]` for an agent that *does not* require
credentials
- Click the menu on any run and select "Pin as a preset"; fill out the
dialog and submit
    - [x] -> UI works
    - [x] -> Operation succeeds and dialog closes
    - [x] -> New preset is shown at the top of the runs list
- Go to `/library/agents/[id]` for an agent that *does* require
credentials
- Click the menu on any run and select "Pin as a preset"; fill out the
dialog and submit
    - [x] -> UI works
    - [x] -> Error toast appears with descriptive message
- Initiate a new run; once finished, click "Create preset from run";
fill out the dialog and submit
    - [x] -> UI works
    - [x] -> Operation succeeds and dialog closes
    - [x] -> New preset is shown at the top of the runs list
2025-09-03 01:26:12 +00:00
Reinier van der Leer
e16e69ca55 feat(library, executor): Make "Run Again" work with credentials (#10821)
- Resolves [OPEN-2549: Make "Run again" work with credentials in
`AgentRunDetailsView`](https://linear.app/autogpt/issue/OPEN-2549/make-run-again-work-with-credentials-in-agentrundetailsview)
- Resolves #10237

### Changes 🏗️

- feat(frontend/library): Make "Run Again" button work for runs with
credentials
- feat(backend/executor): Store passed-in credentials on
`GraphExecution`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to `/library/agents/[id]` for an agent with credentials inputs
  - Run the agent manually
    - [x] -> runs successfully
- [x] -> "Run again" shows among the action buttons on the newly created
run
  - Click "Run again"
    - [x] -> runs successfully
2025-09-02 18:34:56 +00:00
Nicholas Tindle
4bcc73f784 ci: Update GitHub Actions workflow for Claude integration (#10822) 2025-09-02 17:06:49 +01:00
Nicholas Tindle
c8240a4d6b ci: Modify Claude workflow permissions and action version (#10818) 2025-09-02 08:43:54 -07:00
Ubbe
f669db4a10 fix(frontend): prevent using mock feature flags (#10792)
## Changes 🏗️

Make sure `NEXT_PUBLIC_PW_TEST` is set only when running Playwright. This
flag forces the app to use "mock" feature flags, so the tests run stably
and predictably regardless of changes in LaunchDarkly.
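
Conceptually the gate looks like this sketch (`MOCK_FLAGS` and
`readRealFlag` are hypothetical stand-ins for the app's flag plumbing):

```ts
// Deterministic values used while Playwright drives the app (hypothetical set)
const MOCK_FLAGS: Record<string, boolean> = {
  "new-block-menu": true,
};

// Stand-in for the real LaunchDarkly-backed lookup (hypothetical)
function readRealFlag(flag: string, defaultValue: boolean): boolean {
  return defaultValue;
}

export function getFeatureFlag(flag: string, defaultValue = false): boolean {
  // NEXT_PUBLIC_PW_TEST should only ever be set by the Playwright runner;
  // if it leaked into normal runs, real users would see mock flags.
  if (process.env.NEXT_PUBLIC_PW_TEST === "true") {
    return MOCK_FLAGS[flag] ?? defaultValue;
  }
  return readRealFlag(flag, defaultValue);
}
```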

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x]  should not have `PW_TEST=true` ...

### For configuration changes:

None
2025-09-02 14:18:18 +00:00
Reinier van der Leer
0e755a5c85 feat(platform/library): Support UX for manual-setup triggers (#10309)
- Resolves #10234

### Preview

#### Manual setup triggers
![preview of the setup screen for manual-setup
triggers](https://github.com/user-attachments/assets/295d2968-ad11-4291-b360-2eb2acb03397)
![preview of the view for an active manual-setup
trigger](https://github.com/user-attachments/assets/d0ae2246-2305-48f5-aea8-8adb37336401)

#### Auto-setup triggers
![preview of the view for an active auto-setup
trigger](https://github.com/user-attachments/assets/63856311-fc99-450c-ae1f-86951e40dc26)

### Changes 🏗️

- Add "Trigger status" section to `AgentRunDraftView`
- Add `AgentPreset.webhook`, so we can show webhook URL in library
  - Add `AGENT_PRESET_INCLUDE` to `backend.data.includes`
- Add `BaseGraph.trigger_setup_info` (computed field)
- Rename `LibraryAgentTriggerInfo` to `GraphTriggerInfo`; move to
`backend.data.graph`

Refactor:
- Move contents of `@/components/agents/` to
`@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/`
- Fix small type difference between legacy & generated
`LibraryAgent.image_url`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Setting up GitHub trigger works
  - [x] Setting up manual trigger works
  - [x] Enabling/disabling manual trigger through Library works
2025-09-02 10:23:32 +00:00
Reinier van der Leer
dfdc71f97f feat(backend/external-api): Make API key auth work in Swagger UI (#10783)
![Swagger UI API key auth
dialog](https://github.com/user-attachments/assets/02026802-51f9-410d-bdb8-53840d5eb17b)

- Resolves #10782

### Changes 🏗️

- Use `Security(..)` for security dependencies
- Minor tweaks to auth mechanism (similar to #10720)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] API key auth feature appears in Swagger UI
  - [ ] API key auth *works* in Swagger UI (@ntindle wanna test this?)
2025-09-02 09:24:14 +00:00
Krzysztof Czerwinski
def008408c feat(frontend): Preserve openapi.json spec file on generation failure (#10791)
The `openapi.json` file is cleared when the script fails to retrieve the API
spec from the server. This shouldn't happen, and it breaks building Docker
containers.

### Changes 🏗️

Use a temp file during generation to prevent the actual file from being
cleared on failure.
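
The write-then-rename approach is roughly this sketch (the spec URL and file
paths are assumptions, not the actual script):

```ts
import { writeFileSync, renameSync } from "node:fs";

async function fetchSpec(): Promise<string> {
  // URL is an assumption; the generator reads the spec from the running backend
  const res = await fetch("http://localhost:8006/openapi.json");
  if (!res.ok) throw new Error(`Failed to fetch API spec: ${res.status}`);
  return JSON.stringify(await res.json(), null, 2);
}

async function main(): Promise<void> {
  const spec = await fetchSpec(); // throws before anything is written
  writeFileSync("openapi.json.tmp", spec);
  // Swap in the new spec only after generation fully succeeded,
  // so a failure can never leave openapi.json empty.
  renameSync("openapi.json.tmp", "openapi.json");
}

main().catch((err) => {
  console.error(err);
  process.exit(1); // the existing openapi.json is left untouched
});
```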

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Spec file doesn't get cleared on failure
  - [x] Spec file is correctly generated
  - [x] Works when frontend is run in docker container
2025-09-02 01:37:54 +00:00
Swifty
916d0adabb feat(frontend): Add graph search functionality to builder (#10776)
## Summary
- Added search functionality to find nodes in the graph by block type,
node ID, and input/output names
- Search icon added to both new and old control panels
- Implemented node highlighting on hover and navigation on click


https://github.com/user-attachments/assets/8cc69186-5582-446d-b2cd-601de992144f



## Changes
- Created `GraphSearchMenu` component for the new control panel
- Created `GraphSearchControl` component for the old control panel  
- Added `GraphSearchContent` component with search UI similar to
BlockMenu
- Implemented `useGraphSearch` hook with fuzzy search logic
- Added node highlighting without viewport movement on hover
- Added node navigation with centering and highlighting on selection

## Features
- Search by block type name, node ID, or input/output field names
- Real-time filtering with keyboard navigation support
- Visual feedback with node highlighting on hover
- Click to navigate and center on selected node
- Consistent styling with BlockMenu including category colors
- Works in both old and new control panels

## Test plan
- [x] Test search functionality in both old and new control panels
- [x] Verify search by block type name works
- [x] Verify search by node ID works  
- [x] Verify search by input/output names works
- [x] Test keyboard navigation (arrow keys and enter)
- [x] Verify node highlighting on hover
- [x] Verify node navigation on click
- [x] Check popover alignment with control panel top
2025-09-01 18:42:13 +00:00
Abhimanyu Yadav
417ee7f0e1 feat(frontend): redesign-block-menu-part-2 (#10793)
In this PR, I have added:

- a search input
- conditional rendering of the search page and the default page
- a sidebar for the default page (with the correct data)

### Screenshot

<img width="1512" height="982" alt="Screenshot 2025-09-01 at 12 28
34 PM"
src="https://github.com/user-attachments/assets/891ab99f-dde5-47b8-a980-a700845f10c2"
/>

#### Checklist:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Everything works perfectly locally.
2025-09-01 10:04:29 +00:00
Abhimanyu Yadav
ae4c9897b4 refactor(frontend): Revamp marketplace agent page data fetching and structure (#10756)
- Updated the agent page to utilize React Query for data fetching,
improving performance and reliability.
- Removed legacy API calls and integrated prefetching for creator
details and agents.
- Introduced a new MainAgentPage component for better separation of
concerns.
- Added a hydration boundary for managing server state.

> It’s important to note that I haven’t changed any UI here, as it’s out of
scope for this PR.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have manually tested both `Add to Library` and `Download`
functions, and they are working correctly.
  - [x] All fetching functions are working perfectly.
  - [x] All end-to-end tests are also working correctly.
2025-09-01 05:40:01 +00:00
Ubbe
7544028b94 fix(frontend): use new dialog in schedule agent (#10786)
## Changes 🏗️

Should fix the issue where sometimes the schedule modal wouldn't appear
when clicking on the CTA.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Set up schedules multiple times, they look good on the modal
  - [x] Run the agent from monitor, and confirm it executes correctly

#### For configuration changes:

None
2025-08-30 04:47:12 +00:00
Ubbe
ad49946890 feat(frontend): New Run Agent Modal (2/2) (#10769)
## Changes 🏗️

<img width="400" height="821" alt="Screenshot 2025-08-28 at 23 57 41"
src="https://github.com/user-attachments/assets/f5f7c0a6-0b87-4c1f-b644-3ee2ddd1db95"
/>

<img width="400" height="822" alt="Screenshot 2025-08-28 at 23 57 47"
src="https://github.com/user-attachments/assets/120dbb60-d9e1-4a4a-a593-971badb4a97a"
/>

This is the final piece of work on the new **Run Agent Modal**... It is
all behind a feature flag, so I'm relatively comfortable it is safe. The
idea is to test with the team once it lands in dev, trying different
combinations of agent inputs, credentials, and schedules...

I have moved and tidied a lot of the original logic around running agents.
Most importantly, I have made all the dynamic inputs adhere to the
design system.

### AI changes summary  

- Allow to run schedules in the main modal body
- Integrate and tidy old logic around dynamic run agent inputs 
- Integrate and tidy old logic around credentials inputs
- Refactor: `<TypeBasedInputs />` to use Design System components
(`atoms/Input`, `atoms/Select`, `molecules/MultiToggle`, and native
date/time picker via `<Input />` using the browser's date picker )
- Added support for `type="date"` and `type="datetime-local"` to `<Input
/>` ( _for the above_ )
- On the `<Select />` component:
  - added `size` prop (`small` | `medium`).
- added rich items: `icon`, `disabled`, `separator`, `onSelect`, and
`renderItem` prop.
- stories updated/added for size variants, icons/separators, and custom
rendering.
- Added and documented to the design system:
  - `molecules/TimePicker` + story.
- `atoms/FileInput`: added `accept` and `maxFileSize` props; story
documents constraints.
- `atoms/Progress` stories (Basic, CustomMax, Sizes, Live) with
fixed-width container.
  - `atoms/Switch` stories (Basic, Disabled, WithLabel).
  - `molecules/Dialog` story: Modal-over-Modal example.
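
A hypothetical usage of the enriched `Select` (the import path and exact item
shape are assumptions; this only illustrates the `size`, `icon`, `separator`,
`disabled`, and `onSelect` additions listed above):

```tsx
import { Select } from "@/components/atoms/Select/Select"; // path assumed

function openScheduler() {
  // Hypothetical handler that would open the schedule dialog
}

export function RunModeSelect() {
  return (
    <Select
      size="small" // new size prop: "small" | "medium"
      label="Run mode"
      options={[
        { value: "now", label: "Run now" },
        { value: "divider", separator: true }, // renders a separator row
        { value: "schedule", label: "Schedule…", onSelect: openScheduler },
        { value: "soon", label: "Coming soon", disabled: true },
      ]}
    />
  );
}
```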

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Open Storybook and verify new/updated stories render correctly.
  - [x] In app, validate modals open/close correctly using DS `Dialog`.
- [x] Validate DS Select rich items (icon, separator, disabled, action)
behave as expected.
  - [x] Run lints and ensure no errors.
  - [x] Manually test File upload constraints (type/size) and progress.

### For configuration changes:
None
2025-08-29 11:38:36 +00:00
Reinier van der Leer
b333d56492 fix(frontend): Fix checking of Date and NaN input values (#10781)
Date values were being rejected as "empty" by the run input form.

![screenshot demonstrating the
issue](https://github.com/user-attachments/assets/cf89414d-9b27-4da5-8a4a-64d9439c7084)

### Changes 🏗️

- Specifically handle `Date` type values in `isEmpty`
- Specifically handle `NaN` values in `isEmpty`
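
A sketch of the fixed check (assuming `NaN` should count as empty while a
valid `Date` should not; the real helper may cover more cases):

```ts
export function isEmpty(value: unknown): boolean {
  if (value === null || value === undefined) return true;
  // A Date has no enumerable keys, so a plain object-keys check would
  // wrongly classify it as empty; treat any Date instance as a real value.
  if (value instanceof Date) return false;
  // NaN means the numeric input could not be parsed, so treat it as empty.
  if (typeof value === "number") return Number.isNaN(value);
  if (typeof value === "string") return value.trim() === "";
  if (Array.isArray(value)) return value.length === 0;
  if (typeof value === "object") return Object.keys(value).length === 0;
  return false;
}
```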

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Date values are no longer rejected as "empty"
2025-08-29 11:03:15 +00:00
Abhimanyu Yadav
b23cb14e49 test(frontend): fix library flaky e2e test (#10764)
A test in one of my PRs is failing with something like…

<img width="1044" height="452" alt="Screenshot 2025-08-28 at 9 39 07 AM"
src="https://github.com/user-attachments/assets/9c8b8996-50a2-44c6-8a2c-c3904f07ced5"
/>

That’s why I fixed it.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All E2E tests are now working correctly.
2025-08-29 10:19:01 +00:00
Swifty
e71a44521a feat(frontend): Add editable node titles with hover pencil icon (#10779)
## Summary
- Adds ability to edit custom node titles by clicking a pencil icon that
appears on hover
- Custom titles are saved in node metadata and persist across saves
- Original node type is shown in tooltip when hovering over custom
titles



https://github.com/user-attachments/assets/a0a41ac9-1ffb-44c8-9e1c-f4c42e032b49



## Changes
- **CustomNode.tsx**: 
  - Added inline title editing with pencil icon on hover
  - Implemented state management for title editing mode
  - Added tooltip to show original node type for custom titles
  - Prevents custom names from being copied when duplicating nodes

- **useAgentGraph.tsx**:
- Updated graph save/load logic to preserve metadata including custom
titles
  - Ensures metadata persistence through all node operations

## Technical Details
- Uses existing `metadata` JSON field in AgentNode model (no database
changes needed)
- Stores custom title in `metadata.customized_name`
- Backward compatible - nodes without custom titles display normally
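
A minimal sketch of how the custom title might be read and written (the node
shape is simplified; only `metadata.customized_name` and the
don't-copy-on-duplicate behavior come from this PR):

```ts
// Node shape simplified for illustration.
type NodeMetadata = { customized_name?: string; [key: string]: unknown };
type GraphNode = { id: string; blockType: string; metadata: NodeMetadata };

// Show the custom title when set, otherwise fall back to the block type.
function getDisplayTitle(node: GraphNode): string {
  return node.metadata.customized_name ?? node.blockType;
}

// Spread the existing metadata so unrelated keys persist through save/load.
function withCustomTitle(node: GraphNode, title: string): GraphNode {
  return { ...node, metadata: { ...node.metadata, customized_name: title } };
}

// Duplicating a node drops the custom name, matching the PR's behavior.
function duplicateNode(node: GraphNode, newId: string): GraphNode {
  const { customized_name: _dropped, ...rest } = node.metadata;
  return { ...node, id: newId, metadata: rest };
}
```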

## Test Plan
- [x] Hover over node title shows pencil icon
- [x] Click pencil icon to edit title
- [x] Press Enter or blur to save, Escape to cancel
- [x] Custom title persists after saving graph
- [x] Tooltip shows original node type when hovering over custom title
- [x] Copying node doesn't copy custom name
- [x] Backward compatible with existing graphs
2025-08-29 10:02:35 +00:00
Swifty
15aa091f65 feat(frontend): add drag and drop functionality from blocks menu to flow canvas (#10777)
Closes #1055
2025-08-29 10:55:43 +02:00
Swifty
3e4ca19036 docs: Add comprehensive Block SDK guide (#10767)
## Summary
- Added comprehensive Block SDK guide documenting the new SDK pattern
for creating blocks
- Integrated the guide into the documentation structure
- Updated existing documentation to reference the new guide

## Changes
- Created `docs/content/platform/block-sdk-guide.md` with detailed
instructions for:
  - Provider configuration using `ProviderBuilder`
  - Block schema definition and implementation
  - Authentication methods (API keys, OAuth, webhooks)
  - Testing and validation
  - File organization and best practices

- Updated documentation structure:
  - Added guide to `mkdocs.yml` navigation
  - Added cross-references in `new_blocks.md`
  - Added links in `blocks/blocks.md` overview
  - Updated `CLAUDE.md` with reference to the new guide

## Test plan
- [ ] Documentation builds correctly with mkdocs
- [ ] All internal links resolve properly
- [ ] Guide examples are syntactically correct
- [ ] Navigation structure is logical and accessible
2025-08-28 15:54:13 +00:00
dependabot[bot]
a595da02f7 chore(backend/deps-dev): bump the development-dependencies group across 1 directory with 6 updates (#10710)
Bumps the development-dependencies group with 6 updates in the
/autogpt_platform/backend directory:

| Package | From | To |
| --- | --- | --- |
| [faker](https://github.com/joke2k/faker) | `37.4.2` | `37.5.3` |
| [poethepoet](https://github.com/nat-n/poethepoet) | `0.36.0` |
`0.37.0` |
| [pre-commit](https://github.com/pre-commit/pre-commit) | `4.2.0` |
`4.3.0` |
| [pyright](https://github.com/RobertCraigie/pyright-python) | `1.1.403`
| `1.1.404` |
| [requests](https://github.com/psf/requests) | `2.32.4` | `2.32.5` |
| [ruff](https://github.com/astral-sh/ruff) | `0.12.4` | `0.12.9` |


Updates `faker` from 37.4.2 to 37.5.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/releases">faker's
releases</a>.</em></p>
<blockquote>
<h2>Release v37.5.3</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.5.3/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.5.2</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.5.2/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.5.1</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.5.1/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.5.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.5.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.4.3</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.4.3/CHANGELOG.md">CHANGELOG.md</a>.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/blob/master/CHANGELOG.md">faker's
changelog</a>.</em></p>
<blockquote>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.5.2...v37.5.3">v37.5.3
- 2025-07-30</a></h3>
<ul>
<li>Allow <code>Decimal</code> type for <code>min_value</code> and
<code>max_value</code> in <code>pydecimal</code>. Thanks <a
href="https://github.com/sshishov"><code>@​sshishov</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.5.1...v37.5.2">v37.5.2
- 2025-07-30</a></h3>
<ul>
<li>Fix Turkish Republic National Number (TCKN) provider. Thanks <a
href="https://github.com/fleizean"><code>@​fleizean</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.5.0...v37.5.1">v37.5.1
- 2025-07-30</a></h3>
<ul>
<li>Fix unnatural Korean company names in <code>ko_KR</code> locale.
Thanks <a
href="https://github.com/r-4bb1t"><code>@​r-4bb1t</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.4.3...v37.5.0">v37.5.0
- 2025-07-30</a></h3>
<ul>
<li>Add Spanish lorem provider for <code>es_ES</code>,
<code>es_AR</code> and <code>es_MX</code>. Thanks <a
href="https://github.com/Pandede"><code>@​Pandede</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.4.2...v37.4.3">v37.4.3
- 2025-07-30</a></h3>
<ul>
<li>Fix male names in <code>sv_SE</code> locale. Thanks <a
href="https://github.com/peterk"><code>@​peterk</code></a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c7db7f583d"><code>c7db7f5</code></a>
Bump version: 37.5.2 → 37.5.3</li>
<li><a
href="f4fbe8f933"><code>f4fbe8f</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="2a55697c46"><code>2a55697</code></a>
format code</li>
<li><a
href="614e3255e0"><code>614e325</code></a>
Placate mypy</li>
<li><a
href="f8e5d868f2"><code>f8e5d86</code></a>
fix(pydecimal): allow <code>Decimal</code> type for
<code>min_value</code> and <code>max_value</code> in `pyde...</li>
<li><a
href="4cf26710f7"><code>4cf2671</code></a>
Bump version: 37.5.1 → 37.5.2</li>
<li><a
href="fecc0373fd"><code>fecc037</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="3e94c67740"><code>3e94c67</code></a>
Fix Turkish Republic National Number (TCKN) provider (<a
href="https://redirect.github.com/joke2k/faker/issues/2232">#2232</a>)</li>
<li><a
href="867b08e984"><code>867b08e</code></a>
more samples</li>
<li><a
href="5acc936b6d"><code>5acc936</code></a>
update stubs</li>
<li>Additional commits viewable in <a
href="https://github.com/joke2k/faker/compare/v37.4.2...v37.5.3">compare
view</a></li>
</ul>
</details>
<br />

Updates `poethepoet` from 0.36.0 to 0.37.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/nat-n/poethepoet/releases">poethepoet's
releases</a>.</em></p>
<blockquote>
<h2>0.37.0</h2>
<h2>Enhancements</h2>
<ul>
<li>Support configuring task level verbosity by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/304">nat-n/poethepoet#304</a></li>
<li>Direct most non-task output to stderr by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/304">nat-n/poethepoet#304</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/nat-n/poethepoet/compare/v0.36.0...v0.37.0">https://github.com/nat-n/poethepoet/compare/v0.36.0...v0.37.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9c582c8d25"><code>9c582c8</code></a>
Bump version to 0.37.0</li>
<li><a
href="6eb522f791"><code>6eb522f</code></a>
feat: Support task level verbosity config and use stderr for most
non-task ou...</li>
<li>See full diff in <a
href="https://github.com/nat-n/poethepoet/compare/v0.36.0...v0.37.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pre-commit` from 4.2.0 to 4.3.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pre-commit/pre-commit/releases">pre-commit's
releases</a>.</em></p>
<blockquote>
<h2>pre-commit v4.3.0</h2>
<h3>Features</h3>
<ul>
<li><code>language: docker</code> / <code>language: docker_image</code>:
detect rootless docker.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3446">#3446</a>
PR by <a
href="https://github.com/matthewhughes934"><code>@​matthewhughes934</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/1243">#1243</a>
issue by <a
href="https://github.com/dkolepp"><code>@​dkolepp</code></a>.</li>
</ul>
</li>
<li><code>language: julia</code>: avoid <code>startup.jl</code> when
executing hooks.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3496">#3496</a>
PR by <a
href="https://github.com/ericphanson"><code>@​ericphanson</code></a>.</li>
</ul>
</li>
<li><code>language: dart</code>: support latest dart versions which
require a higher sdk
lower bound.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3507">#3507</a>
PR by <a
href="https://github.com/bc-lee"><code>@​bc-lee</code></a>.</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pre-commit/pre-commit/blob/main/CHANGELOG.md">pre-commit's
changelog</a>.</em></p>
<blockquote>
<h1>4.3.0 - 2025-08-09</h1>
<h3>Features</h3>
<ul>
<li><code>language: docker</code> / <code>language: docker_image</code>:
detect rootless docker.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3446">#3446</a>
PR by <a
href="https://github.com/matthewhughes934"><code>@​matthewhughes934</code></a>.</li>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/1243">#1243</a>
issue by <a
href="https://github.com/dkolepp"><code>@​dkolepp</code></a>.</li>
</ul>
</li>
<li><code>language: julia</code>: avoid <code>startup.jl</code> when
executing hooks.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3496">#3496</a>
PR by <a
href="https://github.com/ericphanson"><code>@​ericphanson</code></a>.</li>
</ul>
</li>
<li><code>language: dart</code>: support latest dart versions which
require a higher sdk
lower bound.
<ul>
<li><a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3507">#3507</a>
PR by <a
href="https://github.com/bc-lee"><code>@​bc-lee</code></a>.</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b74a22d96c"><code>b74a22d</code></a>
v4.3.0</li>
<li><a
href="cc899de192"><code>cc899de</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3507">#3507</a>
from bc-lee/dart-fix</li>
<li><a
href="2a0bcea757"><code>2a0bcea</code></a>
Downgrade Dart SDK version installed in the CI</li>
<li><a
href="f1cc7a445f"><code>f1cc7a4</code></a>
Make Dart pre-commit hook compatible with the latest Dart SDKs</li>
<li><a
href="72a3b71f0e"><code>72a3b71</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3504">#3504</a>
from pre-commit/pre-commit-ci-update-config</li>
<li><a
href="c8925a457a"><code>c8925a4</code></a>
[pre-commit.ci] pre-commit autoupdate</li>
<li><a
href="a5fe6c500c"><code>a5fe6c5</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3496">#3496</a>
from ericphanson/eph/jl-startup</li>
<li><a
href="6f1f433a9c"><code>6f1f433</code></a>
Julia language: skip startup.jl file</li>
<li><a
href="c6817210b1"><code>c681721</code></a>
Merge pull request <a
href="https://redirect.github.com/pre-commit/pre-commit/issues/3499">#3499</a>
from pre-commit/pre-commit-ci-update-config</li>
<li><a
href="4fd4537bc6"><code>4fd4537</code></a>
[pre-commit.ci] pre-commit autoupdate</li>
<li>Additional commits viewable in <a
href="https://github.com/pre-commit/pre-commit/compare/v4.2.0...v4.3.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pyright` from 1.1.403 to 1.1.404
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d393df1703"><code>d393df1</code></a>
Pyright NPM Package update to 1.1.404 (<a
href="https://redirect.github.com/RobertCraigie/pyright-python/issues/352">#352</a>)</li>
<li>See full diff in <a
href="https://github.com/RobertCraigie/pyright-python/compare/v1.1.403...v1.1.404">compare
view</a></li>
</ul>
</details>
<br />

Updates `requests` from 2.32.4 to 2.32.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/psf/requests/releases">requests's
releases</a>.</em></p>
<blockquote>
<h2>v2.32.5</h2>
<h2>2.32.5 (2025-08-18)</h2>
<p><strong>Bugfixes</strong></p>
<ul>
<li>The SSLContext caching feature originally introduced in 2.32.0 has
created
a new class of issues in Requests that have had negative impact across a
number
of use cases. The Requests team has decided to revert this feature as
long term
maintenance of it is proving to be unsustainable in its current
iteration.</li>
</ul>
<p><strong>Deprecations</strong></p>
<ul>
<li>Added support for Python 3.14.</li>
<li>Dropped support for Python 3.8 following its end of support.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's
changelog</a>.</em></p>
<blockquote>
<h2>2.32.5 (2025-08-18)</h2>
<p><strong>Bugfixes</strong></p>
<ul>
<li>The SSLContext caching feature originally introduced in 2.32.0 has
created
a new class of issues in Requests that have had negative impact across a
number
of use cases. The Requests team has decided to revert this feature as
long term
maintenance of it is proving to be unsustainable in its current
iteration.</li>
</ul>
<p><strong>Deprecations</strong></p>
<ul>
<li>Added support for Python 3.14.</li>
<li>Dropped support for Python 3.8 following its end of support.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b25c87d7cb"><code>b25c87d</code></a>
v2.32.5</li>
<li><a
href="131e506079"><code>131e506</code></a>
Merge pull request <a
href="https://redirect.github.com/psf/requests/issues/7010">#7010</a>
from psf/dependabot/github_actions/actions/checkout-...</li>
<li><a
href="b336cb2bc6"><code>b336cb2</code></a>
Bump actions/checkout from 4.2.0 to 5.0.0</li>
<li><a
href="46e939b552"><code>46e939b</code></a>
Update publish workflow to use <code>artifact-id</code> instead of
<code>name</code></li>
<li><a
href="4b9c546aa3"><code>4b9c546</code></a>
Merge pull request <a
href="https://redirect.github.com/psf/requests/issues/6999">#6999</a>
from psf/dependabot/github_actions/step-security/har...</li>
<li><a
href="7618dbef01"><code>7618dbe</code></a>
Bump step-security/harden-runner from 2.12.0 to 2.13.0</li>
<li><a
href="2edca11103"><code>2edca11</code></a>
Add support for Python 3.14 and drop support for Python 3.8 (<a
href="https://redirect.github.com/psf/requests/issues/6993">#6993</a>)</li>
<li><a
href="fec96cd597"><code>fec96cd</code></a>
Update Makefile rules (<a
href="https://redirect.github.com/psf/requests/issues/6996">#6996</a>)</li>
<li><a
href="d58d8aa2f4"><code>d58d8aa</code></a>
docs: clarify timeout parameter uses seconds in Session.request (<a
href="https://redirect.github.com/psf/requests/issues/6994">#6994</a>)</li>
<li><a
href="91a3eabd3d"><code>91a3eab</code></a>
Bump github/codeql-action from 3.28.5 to 3.29.0</li>
<li>Additional commits viewable in <a
href="https://github.com/psf/requests/compare/v2.32.4...v2.32.5">compare
view</a></li>
</ul>
</details>
<br />

Updates `ruff` from 0.12.4 to 0.12.9
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.12.9</h2>
<h2>Release Notes</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Add check for
<code>airflow.secrets.cache.SecretCache</code> (<code>AIR301</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17707">#17707</a>)</li>
<li>[<code>ruff</code>] Offer a safe fix for multi-digit zeros
(<code>RUF064</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19847">#19847</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>flake8-blind-except</code>] Fix <code>BLE001</code>
false-positive on <code>raise ... from None</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19755">#19755</a>)</li>
<li>[<code>flake8-comprehensions</code>] Fix false positive for
<code>C420</code> with attribute, subscript, or slice assignment targets
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/19513">#19513</a>)</li>
<li>[<code>flake8-simplify</code>] Fix handling of U+001C..U+001F
whitespace (<code>SIM905</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19849">#19849</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>pylint</code>] Use lowercase hex characters to match the
formatter (<code>PLE2513</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19808">#19808</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix <code>lint.future-annotations</code> link (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19876">#19876</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>
<p>Build <code>riscv64</code> binaries for release (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19819">#19819</a>)</p>
</li>
<li>
<p>Add rule code to error description in GitLab output (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19896">#19896</a>)</p>
</li>
<li>
<p>Improve rendering of the <code>full</code> output format (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19415">#19415</a>)</p>
<p>Below is an example diff for <a
href="https://docs.astral.sh/ruff/rules/unused-import/"><code>F401</code></a>:</p>
<pre lang="diff"><code>-unused.py:8:19: F401 [*] `pathlib` imported but
unused
+F401 [*] `pathlib` imported but unused
+  --&gt; unused.py:8:19
    |
  7 | # Unused, _not_ marked as required (due to the alias).
  8 | import pathlib as non_alias
-   |                   ^^^^^^^^^ F401
+   |                   ^^^^^^^^^
  9 |
 10 | # Unused, marked as required.
    |
-   = help: Remove unused import: `pathlib`
+help: Remove unused import: `pathlib`
</code></pre>
<p>For now, the primary difference is the movement of the filename, line
number, and column information to a second line in the header. This new
representation will allow us to make further additions to Ruff's
diagnostics, such as adding sub-diagnostics and multiple annotations to
the same snippet.</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.12.9</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Add check for
<code>airflow.secrets.cache.SecretCache</code> (<code>AIR301</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17707">#17707</a>)</li>
<li>[<code>ruff</code>] Offer a safe fix for multi-digit zeros
(<code>RUF064</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19847">#19847</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[<code>flake8-blind-except</code>] Fix <code>BLE001</code>
false-positive on <code>raise ... from None</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19755">#19755</a>)</li>
<li>[<code>flake8-comprehensions</code>] Fix false positive for
<code>C420</code> with attribute, subscript, or slice assignment targets
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/19513">#19513</a>)</li>
<li>[<code>flake8-simplify</code>] Fix handling of U+001C..U+001F
whitespace (<code>SIM905</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19849">#19849</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>pylint</code>] Use lowercase hex characters to match the
formatter (<code>PLE2513</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19808">#19808</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix <code>lint.future-annotations</code> link (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19876">#19876</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>
<p>Build <code>riscv64</code> binaries for release (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19819">#19819</a>)</p>
</li>
<li>
<p>Add rule code to error description in GitLab output (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19896">#19896</a>)</p>
</li>
<li>
<p>Improve rendering of the <code>full</code> output format (<a
href="https://redirect.github.com/astral-sh/ruff/pull/19415">#19415</a>)</p>
<p>Below is an example diff for <a
href="https://docs.astral.sh/ruff/rules/unused-import/"><code>F401</code></a>:</p>
<pre lang="diff"><code>-unused.py:8:19: F401 [*] `pathlib` imported but
unused
+F401 [*] `pathlib` imported but unused
+  --&gt; unused.py:8:19
    |
  7 | # Unused, _not_ marked as required (due to the alias).
  8 | import pathlib as non_alias
-   |                   ^^^^^^^^^ F401
+   |                   ^^^^^^^^^
  9 |
 10 | # Unused, marked as required.
    |
-   = help: Remove unused import: `pathlib`
+help: Remove unused import: `pathlib`
</code></pre>
<p>For now, the primary difference is the movement of the filename, line
number, and column information to a second line in the header. This new
representation will allow us to make further additions to Ruff's
diagnostics, such as adding sub-diagnostics and multiple annotations to
the same snippet.</p>
</li>
</ul>
<h2>0.12.8</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ef422460de"><code>ef42246</code></a>
Bump 0.12.9 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19917">#19917</a>)</li>
<li><a
href="dc2e8ab377"><code>dc2e8ab</code></a>
[ty] support <code>kw_only=True</code> for <code>dataclass()</code> and
<code>field()</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19677">#19677</a>)</li>
<li><a
href="9aaa82d037"><code>9aaa82d</code></a>
Feature/build riscv64 bin (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19819">#19819</a>)</li>
<li><a
href="3288ac2dfb"><code>3288ac2</code></a>
[ty] Add caching to <code>CodeGeneratorKind::matches()</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19912">#19912</a>)</li>
<li><a
href="1167ed61cf"><code>1167ed6</code></a>
[ty] Rename <code>functionArgumentNames</code> to
<code>callArgumentNames</code> inlay hint setting...</li>
<li><a
href="2ee47d87b6"><code>2ee47d8</code></a>
[ty] Default <code>ty.inlayHints.*</code> server settings to true (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19910">#19910</a>)</li>
<li><a
href="d324cedfc2"><code>d324ced</code></a>
[ty] Remove py-fuzzer skips for seeds that are no longer slow (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19906">#19906</a>)</li>
<li><a
href="5a570c8e6d"><code>5a570c8</code></a>
[ty] fix deferred name loading in PEP695 generic classes/functions (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19888">#19888</a>)</li>
<li><a
href="baadb5a78d"><code>baadb5a</code></a>
[ty] Add some additional type safety to <code>CycleDetector</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/19903">#19903</a>)</li>
<li><a
href="df0648aae0"><code>df0648a</code></a>
[<code>flake8-blind-except</code>] Fix <code>BLE001</code>
false-positive on <code>raise ... from None</code> ...</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.12.4...0.12.9">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-28 15:22:40 +00:00
Reinier van der Leer
12cdd45551 refactor(backend): Improve auth setup & OpenAPI generation (#10720)
Our current auth setup (`autogpt_libs.auth` + its usage) is quite
inconsistent and doesn't do all of its jobs properly. The 401 responses
you get when unauthenticated are not included in the OpenAPI spec,
causing these to be unaccounted for in the generated frontend API
client. The FastAPI dependencies supplied by `autogpt_libs.auth.depends`
aren't used consistently, making maintenance on them hard to oversee.
API tests work around the auth requirement in many different ways, which
is likewise hard to maintain and oversee.
This pull request aims to fix all of this and give us a consistent,
clean, and self-documenting API auth implementation.
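
For illustration, here is a minimal sketch of the pattern this aims at
(assumed names and wiring, not the actual `autogpt_libs.auth` code): one
reusable security dependency, with the 401 response declared explicitly
so it shows up in the generated OpenAPI spec.

```python
from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer(auto_error=False)


def requires_user(
    credentials: HTTPAuthorizationCredentials | None = Security(bearer),
) -> str:
    # Hypothetical stand-in for real JWT verification
    if credentials is None or credentials.credentials != "valid-token":
        raise HTTPException(status_code=401, detail="Not authenticated")
    return "user-id-from-token"


@app.get(
    "/me",
    responses={401: {"description": "Not authenticated"}},  # lands in OpenAPI
)
def me(user_id: str = Depends(requires_user)) -> dict:
    return {"user_id": user_id}
```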

- Resolves #10715

### Changes 🏗️

- Homogenize use of `autogpt_libs.auth` security dependencies throughout
the backend
- Fix OpenAPI schema generation for 401 responses
  - Handle possible 401 responses in frontend
- Tighten validation and add warnings for weak settings in
`autogpt_libs.auth.config`
- Increase test coverage for `autogpt_libs.auth` to 100%
- Standardize auth setup for API tests
- Rename `APIKeyValidator` to `APIKeyAuthenticator` and move to its own
module in `backend.server`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All tests for `autogpt_libs.auth` pass
  - [x] All tests for `backend.server` pass
  - [x] @ntindle does a security audit for these changes
- [x] OpenAPI spec for authenticated routes is generated with the
appropriate `401` response

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-08-28 14:46:50 +00:00
Reinier van der Leer
df3c81a7a6 fix(backend): Fix 4 (deprecation) warnings on startup (#10759)
Fixes these warnings on startup:
```
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_config.py:373: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
  warnings.warn(message, UserWarning)
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_config.py:323: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.11/migration/
  warnings.warn(DEPRECATION_MESSAGE, DeprecationWarning)
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:298: PydanticDeprecatedSince20: `json_encoders` is deprecated. See https://docs.pydantic.dev/2.11/concepts/serialization/#custom-serializers for alternatives. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.11/migration/
  warnings.warn(
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_fields.py:294: UserWarning: `alias` specification on field "created_at" must be set on outermost annotation to take effect.
  warnings.warn(
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_fields.py:294: UserWarning: `alias` specification on field "updated_at" must be set on outermost annotation to take effect.
  warnings.warn(
```

- Resolves #10758

### Changes 🏗️

- Fix field annotations in `backend/blocks/exa/websets.py`
- Replace deprecated JSON encoder specification in
`backend/blocks/wordpress/_api.py` by field serializer
- Move deprecated `schema_extra` example specification in
`backend/server/integrations/models.py` to `Field(examples=...)`

The two remaining warnings that appear on start-up aren't trivial to
fix.
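
For reference, a small sketch of the Pydantic v2 replacements used here
(the `Webset` model below is a made-up example, not the project's code):
`ConfigDict` plus `Field(examples=...)` instead of class-based `Config` /
`schema_extra`, and a `field_serializer` instead of `json_encoders`.

```python
from datetime import datetime

from pydantic import BaseModel, ConfigDict, Field, field_serializer


class Webset(BaseModel):
    # Pydantic v2 style: ConfigDict replaces the class-based `Config`
    model_config = ConfigDict(populate_by_name=True)

    # `examples=` replaces the deprecated `schema_extra` example spec
    name: str = Field(examples=["my-webset"])
    created_at: datetime = Field(alias="createdAt")

    # Replaces the deprecated `json_encoders` custom-encoder mechanism
    @field_serializer("created_at")
    def serialize_created_at(self, value: datetime) -> str:
        return value.isoformat()


print(Webset(name="my-webset", createdAt=datetime(2025, 8, 28)).model_dump_json())
```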

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Changes are trivial and do not require further testing
2025-08-28 11:34:58 +00:00
Ubbe
0f477e2392 feat(frontend): new run agent modal (1/2) (#10696)
## Changes 🏗️

<img width="600" height="624" alt="Screenshot 2025-08-25 at 23 22 24"
src="https://github.com/user-attachments/assets/a66b0a02-cb7a-47f3-8759-e955fb76f865"
/>

<img width="600" height="748" alt="Screenshot 2025-08-25 at 23 22 40"
src="https://github.com/user-attachments/assets/0357bd0b-9875-41a4-8752-d7dbc7a82ff6"
/>

The new **Agent Run Modal**, to be used when running agents. This is PR
1/2 ( _as I learned there is so much into running agents_ 🔮 ). The first
part sets up "the easy things":
- the run view
- the schedule run view
- the switch between them
- the agent details

On the next PR, I will add support for the current agent run inputs (
[and all their
types...](https://github.com/Significant-Gravitas/AutoGPT/blob/dev/autogpt_platform/frontend/src/components/type-based-input.tsx)
😆 ) + webhook triggers...

## Checklist 📋

### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] with the flag ON ( is now OFF in dev but ON local )
  - [x] clicking `New Run` on the new library page shows the new modal
  - [x] Details are shown on the modal header
  - [x] Agent details are shown
  - [x] You can schedule runs 
   
### For configuration changes:

None

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-08-27 10:38:12 +00:00
Swifty
82618aede0 Merge remote-tracking branch 'origin/dev' 2025-08-27 11:39:16 +02:00
Swifty
b713093276 Merge master into dev to tidy up merge history (#10753) 2025-08-27 11:14:19 +02:00
Bently
3718b948ea feat(blocks): Add Ideogram V3 model (#10752)
Adds support for the Ideogram V3 model while maintaining backward
compatibility with existing models (V1, V1_TURBO, V2, V2_TURBO).
Updates the default model to V3 and implements smart API routing to
handle Ideogram's new V3 endpoint requirements.

Changes Made

- Added V3 model support: Added V_3 to IdeogramModelName enum and set as
default
- Dual API endpoint handling:
- V3 models route to new /v1/ideogram-v3/generate endpoint with updated
payload format
- Legacy models (V1, V2, Turbo variants) continue using /generate
endpoint
- Model-specific feature filtering:
- V1 models: Basic parameters only (no style_type or color_palette
support)
- V2/V2_TURBO: Full legacy feature support including style_type and
color_palette
- V3: New endpoint with aspect ratio mapping and updated parameter
structure
- Aspect ratio compatibility: Added mapping between internal enum values
and V3's expected format
(ASPECT_1_1 → 1x1)
- Updated pricing: V3 model costs 18 credits (vs 16 for other models)
- Updated default usage: Store image generation now uses V3 by default

Technical Details

Ideogram updated their API with a separate V3 endpoint that has
different requirements:
- Different URL path (/v1/ideogram-v3/generate)
- Different aspect ratio format (e.g., 1x1 instead of ASPECT_1_1)
- Model-specific feature support (V1 models don't support style_type,
etc.)

The implementation intelligently routes requests to the appropriate
endpoint based on the selected model
while maintaining a single unified interface.
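
As a rough sketch of that routing idea (enum members and payload shapes
below are simplified assumptions, not the actual block code):

```python
from enum import Enum


class IdeogramModelName(str, Enum):
    V_1 = "V_1"
    V_2 = "V_2"
    V_3 = "V_3"


# V3 expects e.g. "1x1" where the legacy endpoint uses "ASPECT_1_1";
# only the example mapping from the description is shown here
V3_ASPECT_RATIOS = {"ASPECT_1_1": "1x1"}


def build_request(
    model: IdeogramModelName, prompt: str, aspect_ratio: str
) -> tuple[str, dict]:
    """Pick the endpoint and payload format the selected model requires."""
    if model == IdeogramModelName.V_3:
        return "/v1/ideogram-v3/generate", {
            "prompt": prompt,
            "aspect_ratio": V3_ASPECT_RATIOS.get(aspect_ratio, aspect_ratio),
        }
    # Legacy models keep the original endpoint and nested payload
    return "/generate", {
        "image_request": {
            "prompt": prompt,
            "aspect_ratio": aspect_ratio,
            "model": model.value,
        }
    }
```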

I tested all the models and they are working here
<img width="1804" height="887" alt="image"
src="https://github.com/user-attachments/assets/9f2e44ca-50a4-487f-987c-3230dd72fb5e"
/>


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test the Ideogram model block and watch as they all work!
2025-08-27 08:33:47 +00:00
Swifty
44d739386b feat(platform/blocks): Added stagehand integration (#10751)
Added basic stagehand integration:

<img width="667" height="609" alt="Screenshot 2025-08-27 at 09 20 18"
src="https://github.com/user-attachments/assets/11ab2941-0913-4346-a1d4-45980711e0f9"
/>


[stagehand_v35.json](https://github.com/user-attachments/files/22002924/stagehand_v35.json)

### Changes 🏗️

- Act Block
- Extract Block
- Observe Block

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have added a sample agent
- [x] I have created an agent that uses these blocks and ensured it runs
2025-08-27 08:33:41 +00:00
Abhimanyu Yadav
533d2d0277 refactor(frontend): Revamp creator page data fetching and structure (#10737)
### Changes 🏗️
- Updated the creator page to utilize React Query for data fetching,
improving performance and reliability.
- Removed legacy API calls and integrated prefetching for creator
details and agents.
- Introduced a new MainCreatorPage component for better separation of
concerns.
- Added a hydration boundary for managing server state.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All marketplace E2E tests are working.
- [x] I’ve tested all the links and checked if everything renders
perfectly on the marketplace page.
2025-08-27 04:04:57 +00:00
Abhimanyu Yadav
c6821484c7 fix(frontend): add min-width to agent run draft view component (#10731)
- Resolves https://github.com/Significant-Gravitas/AutoGPT/issues/10618

When a dropdown has a long description, the actions button gets pushed
out of the dialog box. I've added a temporary fix here; in the future,
we need to change the entire layout.


### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Everything works perfectly locally.
2025-08-27 04:04:16 +00:00
Reinier van der Leer
fecbd3042d fix(frontend/library): Fix new runs not appearing in list (#10750)
- Fixes #10749

### Changes 🏗️

- Fix implementation of `useAgentRunsInfinite.upsertAgentRun(..)`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] New runs appear in runs list
2025-08-26 17:44:10 +00:00
Zamil Majdy
c0172c93aa fix(backend/executor): prevent infinite requeueing of malformed messages (#10746)
### Changes 🏗️

This PR fixes an infinite loop issue in the execution manager where
malformed or unparseable messages would be continuously requeued,
causing high CPU usage and preventing the system from processing
legitimate messages.

**Key changes:**
- Modified `_ack_message()` function to accept explicit `requeue`
parameter
- Set `requeue=False` for malformed/unparseable messages that cannot be
fixed by retrying
- Set `requeue=False` for duplicate execution attempts (graph already
running)
- Kept `requeue=True` for legitimate failures that may succeed on retry
(e.g., temporary resource constraints, network issues)

**Technical details:**
The previous implementation always set `requeue=True` when rejecting
messages with `basic_nack()`. This caused problematic messages to be
immediately re-delivered to the consumer, creating an infinite loop for:
1. Messages with invalid JSON that cannot be parsed
2. Messages for executions that are already running (duplicates)

These scenarios will never succeed regardless of how many times they're
retried, so they should be rejected without requeueing to prevent
resource exhaustion.
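
For illustration, a minimal sketch of this ack/nack decision in a
`pika`-style consumer callback (`is_already_running`, `process`, and
`TemporaryResourceError` are hypothetical stand-ins, not the actual
executor code):

```python
import json


class TemporaryResourceError(Exception):
    """Hypothetical transient failure (pool full, shutting down, ...)."""


def is_already_running(graph_exec_id: str) -> bool:  # hypothetical helper
    return False


def process(message: dict) -> None:  # hypothetical helper
    ...


def on_message(channel, method, properties, body) -> None:
    try:
        message = json.loads(body)
    except json.JSONDecodeError:
        # Malformed payload: a retry can never succeed, so don't requeue
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
        return

    if is_already_running(message["graph_exec_id"]):
        # Duplicate execution attempt: reject without requeue to break the loop
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
        return

    try:
        process(message)
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except TemporaryResourceError:
        # Legitimate transient failure: requeue so it can be retried later
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
```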

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified malformed messages are rejected without requeue
- [x] Confirmed duplicate execution messages are rejected without
requeue
- [x] Ensured legitimate failures (shutdown, pool full) still requeue
properly
- [x] Tested that normal message processing continues to work correctly
2025-08-26 18:34:58 +07:00
Swifty
c4483fa6c7 Merge branch 'dev' 2025-08-20 15:22:08 +02:00
Swifty
c2af8c1a6a Reapply "Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard"
This reverts commit 260dd526c9.
2025-08-20 15:13:46 +02:00
Nicholas Tindle
483c399812 Revert "feat(platform): add ability to reject approved agents in admin dashboard"
This reverts commit 92515b3683.
2025-08-19 00:49:30 -05:00
Nicholas Tindle
260dd526c9 Revert "Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard"
This reverts commit 105d5dc7e9, reversing
changes made to 92515b3683.
2025-08-19 00:49:10 -05:00
Nicholas Tindle
75a159db01 Revert "style(frontend): format code and improve dialog titles in approve-reject buttons"
This reverts commit 62032e6584.
2025-08-19 00:48:22 -05:00
Nicholas Tindle
62032e6584 style(frontend): format code and improve dialog titles in approve-reject buttons 2025-08-18 23:32:43 -05:00
Nicholas Tindle
105d5dc7e9 Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into ntindle/secrt-1218-add-ability-to-reject-approved-agents-in-admin-dashboard 2025-08-18 22:59:27 -05:00
Nicholas Tindle
92515b3683 feat(platform): add ability to reject approved agents in admin dashboard
- Modified backend review_store_submission to handle rejecting approved agents
- Added logic to update StoreListing when rejecting approved agents
- Updated UI to show "Revoke" button for approved agents
- Only shows approve button for pending agents
- Updates dialog text appropriately for revoking vs rejecting

Fixes SECRT-1218

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-18 10:35:25 -05:00
442 changed files with 23551 additions and 5263 deletions


@@ -0,0 +1,97 @@
name: Auto Fix CI Failures
on:
workflow_run:
workflows: ["CI"]
types:
- completed
permissions:
contents: write
pull-requests: write
actions: read
issues: write
id-token: write # Required for OIDC token exchange
jobs:
auto-fix:
if: |
github.event.workflow_run.conclusion == 'failure' &&
github.event.workflow_run.pull_requests[0] &&
!startsWith(github.event.workflow_run.head_branch, 'claude-auto-fix-ci-')
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.event.workflow_run.head_branch }}
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup git identity
run: |
git config --global user.email "claude[bot]@users.noreply.github.com"
git config --global user.name "claude[bot]"
- name: Create fix branch
id: branch
run: |
BRANCH_NAME="claude-auto-fix-ci-${{ github.event.workflow_run.head_branch }}-${{ github.run_id }}"
git checkout -b "$BRANCH_NAME"
echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
- name: Get CI failure details
id: failure_details
uses: actions/github-script@v7
with:
script: |
const run = await github.rest.actions.getWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: ${{ github.event.workflow_run.id }}
});
const jobs = await github.rest.actions.listJobsForWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: ${{ github.event.workflow_run.id }}
});
const failedJobs = jobs.data.jobs.filter(job => job.conclusion === 'failure');
let errorLogs = [];
for (const job of failedJobs) {
const logs = await github.rest.actions.downloadJobLogsForWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
job_id: job.id
});
errorLogs.push({
jobName: job.name,
logs: logs.data
});
}
return {
runUrl: run.data.html_url,
failedJobs: failedJobs.map(j => j.name),
errorLogs: errorLogs
};
- name: Fix CI failures with Claude
id: claude
uses: anthropics/claude-code-action@v1
with:
prompt: |
/fix-ci
Failed CI Run: ${{ fromJSON(steps.failure_details.outputs.result).runUrl }}
Failed Jobs: ${{ join(fromJSON(steps.failure_details.outputs.result).failedJobs, ', ') }}
PR Number: ${{ github.event.workflow_run.pull_requests[0].number }}
Branch Name: ${{ steps.branch.outputs.branch_name }}
Base Branch: ${{ github.event.workflow_run.head_branch }}
Repository: ${{ github.repository }}
Error logs:
${{ toJSON(fromJSON(steps.failure_details.outputs.result).errorLogs) }}
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: "--allowedTools 'Edit,MultiEdit,Write,Read,Glob,Grep,LS,Bash(git:*),Bash(bun:*),Bash(npm:*),Bash(npx:*),Bash(gh:*)'"

.github/workflows/claude-dependabot.yml

@@ -0,0 +1,379 @@
# Claude Dependabot PR Review Workflow
#
# This workflow automatically runs Claude analysis on Dependabot PRs to:
# - Identify dependency changes and their versions
# - Look up changelogs for updated packages
# - Assess breaking changes and security impacts
# - Provide actionable recommendations for the development team
#
# Triggered on: Dependabot PRs (opened, synchronize)
# Requirements: ANTHROPIC_API_KEY secret must be configured
name: Claude Dependabot PR Review
on:
pull_request:
types: [opened, synchronize]
jobs:
dependabot-review:
# Only run on Dependabot PRs
if: github.actor == 'dependabot[bot]'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: write
pull-requests: read
issues: read
id-token: write
actions: read # Required for CI access
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry
run: |
# Extract Poetry version from backend/poetry.lock (matches CI)
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
# Install Poetry
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
# Add Poetry to PATH
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Check poetry.lock
working-directory: autogpt_platform/backend
run: |
poetry lock
if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
echo "Warning: poetry.lock not up to date, but continuing for setup"
git checkout poetry.lock # Reset for clean setup
fi
- name: Install Python dependencies
working-directory: autogpt_platform/backend
run: poetry install
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
run: pnpm install --frozen-lockfile
# Install Playwright browsers for frontend testing
# NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
# - name: Install Playwright browsers
# working-directory: autogpt_platform/frontend
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Copy default environment files
working-directory: autogpt_platform
run: |
# Copy default environment files for development
cp .env.default .env
cp backend/.env.default backend/.env
cp frontend/.env.default frontend/.env
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes
key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
restore-keys: |
docker-images-v2-${{ runner.os }}-
docker-images-v1-${{ runner.os }}-
- name: Load or pull Docker images
working-directory: autogpt_platform
run: |
mkdir -p ~/docker-cache
# Define image list for easy maintenance
IMAGES=(
"redis:latest"
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
)
# Check if any cached tar files exist (more reliable than cache-hit)
if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
echo "Docker cache found, loading images in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
if [ -f ~/docker-cache/${filename}.tar ]; then
echo "Loading $image..."
docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
fi
done
wait
echo "All cached images loaded"
else
echo "No Docker cache found, pulling images in parallel..."
# Pull all images in parallel
for image in "${IMAGES[@]}"; do
docker pull "$image" &
done
wait
# Only save cache on main branches (not PRs) to avoid cache pollution
if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
echo "Saving Docker images to cache in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
echo "Saving $image..."
docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
done
wait
echo "Docker image cache saved"
else
echo "Skipping cache save for PR/feature branch"
fi
fi
echo "Docker images ready for use"
# Phase 2: Build migrate service with GitHub Actions cache
- name: Build migrate Docker image with cache
working-directory: autogpt_platform
run: |
# Build the migrate image with buildx for GHA caching
docker buildx build \
--cache-from type=gha \
--cache-to type=gha,mode=max \
--target migrate \
--tag autogpt_platform-migrate:latest \
--load \
-f backend/Dockerfile \
..
# Start services using pre-built images
- name: Start Docker services for development
working-directory: autogpt_platform
run: |
# Start essential services (migrate image already built with correct tag)
docker compose --profile local up deps --no-build --detach
echo "Waiting for services to be ready..."
# Wait for database to be ready
echo "Checking database readiness..."
timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
echo " Waiting for database..."
sleep 2
done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
# Check migrate service status
echo "Checking migration status..."
docker compose ps migrate || echo " Migrate service not visible in ps output"
# Wait for migrate service to complete
echo "Waiting for migrations to complete..."
timeout 30 bash -c '
ATTEMPTS=0
while [ $ATTEMPTS -lt 15 ]; do
ATTEMPTS=$((ATTEMPTS + 1))
# Check using docker directly (more reliable than docker compose ps)
CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
if [ -z "$CONTAINER_STATUS" ]; then
echo " Attempt $ATTEMPTS: Migrate container not found yet..."
elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
echo "✅ Migrations completed successfully"
docker compose logs migrate --tail=5 2>/dev/null || true
exit 0
elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
echo "❌ Migrations failed with exit code: $EXIT_CODE"
echo "Migration logs:"
docker compose logs migrate --tail=20 2>/dev/null || true
exit 1
elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
else
echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
fi
sleep 2
done
echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
echo "Final container check:"
docker ps -a --filter "label=com.docker.compose.service=migrate" || true
echo "Migration logs (if available):"
docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
' || echo "⚠️ Migration check completed with warnings, continuing..."
# Brief wait for other services to stabilize
echo "Waiting 5 seconds for other services to stabilize..."
sleep 5
# Verify installations and provide environment info
- name: Verify setup and show environment info
run: |
echo "=== Python Setup ==="
python --version
poetry --version
echo "=== Node.js Setup ==="
node --version
pnpm --version
echo "=== Additional Tools ==="
docker --version
docker compose version
gh --version || true
echo "=== Services Status ==="
cd autogpt_platform
docker compose ps || true
echo "=== Backend Dependencies ==="
cd backend
poetry show | head -10 || true
echo "=== Frontend Dependencies ==="
cd ../frontend
pnpm list --depth=0 | head -10 || true
echo "=== Environment Files ==="
ls -la ../.env* || true
ls -la .env* || true
ls -la ../backend/.env* || true
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"
- name: Run Claude Dependabot Analysis
id: claude_review
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*)"
prompt: |
You are Claude, an AI assistant specialized in reviewing Dependabot dependency update PRs.
Your primary tasks are:
1. **Analyze the dependency changes** in this Dependabot PR
2. **Look up changelogs** for all updated dependencies to understand what changed
3. **Identify breaking changes** and assess potential impact on the AutoGPT codebase
4. **Provide actionable recommendations** for the development team
## Analysis Process:
1. **Identify Changed Dependencies**:
- Use git diff to see what dependencies were updated
- Parse package.json, poetry.lock, requirements files, etc.
- List all package versions: old → new
2. **Changelog Research**:
- For each updated dependency, look up its changelog/release notes
- Use WebFetch to access GitHub releases, NPM package pages, and PyPI project pages. The PR description should also have some details
- Focus on versions between the old and new versions
- Identify: breaking changes, deprecations, security fixes, new features
3. **Breaking Change Assessment**:
- Categorize changes: BREAKING, MAJOR, MINOR, PATCH, SECURITY
- Assess impact on AutoGPT's usage patterns
- Check if AutoGPT uses affected APIs/features
- Look for migration guides or upgrade instructions
4. **Codebase Impact Analysis**:
- Search the AutoGPT codebase for usage of changed APIs
- Identify files that might be affected by breaking changes
- Check test files for deprecated usage patterns
- Look for configuration changes needed
## Output Format:
Provide a comprehensive review comment with:
### 🔍 Dependency Analysis Summary
- List of updated packages with version changes
- Overall risk assessment (LOW/MEDIUM/HIGH)
### 📋 Detailed Changelog Review
For each updated dependency:
- **Package**: name (old_version → new_version)
- **Changes**: Summary of key changes
- **Breaking Changes**: List any breaking changes
- **Security Fixes**: Note security improvements
- **Migration Notes**: Any upgrade steps needed
### ⚠️ Impact Assessment
- **Breaking Changes Found**: Yes/No with details
- **Affected Files**: List AutoGPT files that may need updates
- **Test Impact**: Any tests that may need updating
- **Configuration Changes**: Required config updates
### 🛠️ Recommendations
- **Action Required**: What the team should do
- **Testing Focus**: Areas to test thoroughly
- **Follow-up Tasks**: Any additional work needed
- **Merge Recommendation**: APPROVE/REVIEW_NEEDED/HOLD
### 📚 Useful Links
- Links to relevant changelogs, migration guides, documentation
Be thorough but concise. Focus on actionable insights that help the development team make informed decisions about the dependency updates.


@@ -30,18 +30,296 @@ jobs:
github.event.issue.author_association == 'COLLABORATOR'
)
runs-on: ubuntu-latest
+timeout-minutes: 45
permissions:
-contents: read
+contents: write
pull-requests: read
issues: read
id-token: write
+actions: read # Required for CI access
steps:
-- name: Checkout repository
+- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry
run: |
# Extract Poetry version from backend/poetry.lock (matches CI)
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
# Install Poetry
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
# Add Poetry to PATH
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Check poetry.lock
working-directory: autogpt_platform/backend
run: |
poetry lock
if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
echo "Warning: poetry.lock not up to date, but continuing for setup"
git checkout poetry.lock # Reset for clean setup
fi
- name: Install Python dependencies
working-directory: autogpt_platform/backend
run: poetry install
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
run: pnpm install --frozen-lockfile
# Install Playwright browsers for frontend testing
# NOTE: Disabled to save ~1 minute of setup time. Re-enable if Copilot needs browser automation (e.g., for MCP)
# - name: Install Playwright browsers
# working-directory: autogpt_platform/frontend
# run: pnpm playwright install --with-deps chromium
# Docker setup for development environment
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Copy default environment files
working-directory: autogpt_platform
run: |
# Copy default environment files for development
cp .env.default .env
cp backend/.env.default backend/.env
cp frontend/.env.default frontend/.env
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes
key: docker-images-v2-${{ runner.os }}-${{ hashFiles('.github/workflows/copilot-setup-steps.yml') }}
restore-keys: |
docker-images-v2-${{ runner.os }}-
docker-images-v1-${{ runner.os }}-
- name: Load or pull Docker images
working-directory: autogpt_platform
run: |
mkdir -p ~/docker-cache
# Define image list for easy maintenance
IMAGES=(
"redis:latest"
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
)
# Check if any cached tar files exist (more reliable than cache-hit)
if ls ~/docker-cache/*.tar 1> /dev/null 2>&1; then
echo "Docker cache found, loading images in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
if [ -f ~/docker-cache/${filename}.tar ]; then
echo "Loading $image..."
docker load -i ~/docker-cache/${filename}.tar || echo "Warning: Failed to load $image from cache" &
fi
done
wait
echo "All cached images loaded"
else
echo "No Docker cache found, pulling images in parallel..."
# Pull all images in parallel
for image in "${IMAGES[@]}"; do
docker pull "$image" &
done
wait
# Only save cache on main branches (not PRs) to avoid cache pollution
if [[ "${{ github.ref }}" == "refs/heads/master" ]] || [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
echo "Saving Docker images to cache in parallel..."
for image in "${IMAGES[@]}"; do
# Convert image name to filename (replace : and / with -)
filename=$(echo "$image" | tr ':/' '--')
echo "Saving $image..."
docker save -o ~/docker-cache/${filename}.tar "$image" || echo "Warning: Failed to save $image" &
done
wait
echo "Docker image cache saved"
else
echo "Skipping cache save for PR/feature branch"
fi
fi
echo "Docker images ready for use"
# Phase 2: Build migrate service with GitHub Actions cache
- name: Build migrate Docker image with cache
working-directory: autogpt_platform
run: |
# Build the migrate image with buildx for GHA caching
docker buildx build \
--cache-from type=gha \
--cache-to type=gha,mode=max \
--target migrate \
--tag autogpt_platform-migrate:latest \
--load \
-f backend/Dockerfile \
..
# Start services using pre-built images
- name: Start Docker services for development
working-directory: autogpt_platform
run: |
# Start essential services (migrate image already built with correct tag)
docker compose --profile local up deps --no-build --detach
echo "Waiting for services to be ready..."
# Wait for database to be ready
echo "Checking database readiness..."
timeout 30 sh -c 'until docker compose exec -T db pg_isready -U postgres 2>/dev/null; do
echo " Waiting for database..."
sleep 2
done' && echo "✅ Database is ready" || echo "⚠️ Database ready check timeout after 30s, continuing..."
# Check migrate service status
echo "Checking migration status..."
docker compose ps migrate || echo " Migrate service not visible in ps output"
# Wait for migrate service to complete
echo "Waiting for migrations to complete..."
timeout 30 bash -c '
ATTEMPTS=0
while [ $ATTEMPTS -lt 15 ]; do
ATTEMPTS=$((ATTEMPTS + 1))
# Check using docker directly (more reliable than docker compose ps)
CONTAINER_STATUS=$(docker ps -a --filter "label=com.docker.compose.service=migrate" --format "{{.Status}}" | head -1)
if [ -z "$CONTAINER_STATUS" ]; then
echo " Attempt $ATTEMPTS: Migrate container not found yet..."
elif echo "$CONTAINER_STATUS" | grep -q "Exited (0)"; then
echo "✅ Migrations completed successfully"
docker compose logs migrate --tail=5 2>/dev/null || true
exit 0
elif echo "$CONTAINER_STATUS" | grep -q "Exited ([1-9]"; then
EXIT_CODE=$(echo "$CONTAINER_STATUS" | grep -oE "Exited \([0-9]+\)" | grep -oE "[0-9]+")
echo "❌ Migrations failed with exit code: $EXIT_CODE"
echo "Migration logs:"
docker compose logs migrate --tail=20 2>/dev/null || true
exit 1
elif echo "$CONTAINER_STATUS" | grep -q "Up"; then
echo " Attempt $ATTEMPTS: Migrate container is running... ($CONTAINER_STATUS)"
else
echo " Attempt $ATTEMPTS: Migrate container status: $CONTAINER_STATUS"
fi
sleep 2
done
echo "⚠️ Timeout: Could not determine migration status after 30 seconds"
echo "Final container check:"
docker ps -a --filter "label=com.docker.compose.service=migrate" || true
echo "Migration logs (if available):"
docker compose logs migrate --tail=10 2>/dev/null || echo " No logs available"
' || echo "⚠️ Migration check completed with warnings, continuing..."
# Brief wait for other services to stabilize
echo "Waiting 5 seconds for other services to stabilize..."
sleep 5
# Verify installations and provide environment info
- name: Verify setup and show environment info
run: |
echo "=== Python Setup ==="
python --version
poetry --version
echo "=== Node.js Setup ==="
node --version
pnpm --version
echo "=== Additional Tools ==="
docker --version
docker compose version
gh --version || true
echo "=== Services Status ==="
cd autogpt_platform
docker compose ps || true
echo "=== Backend Dependencies ==="
cd backend
poetry show | head -10 || true
echo "=== Frontend Dependencies ==="
cd ../frontend
pnpm list --depth=0 | head -10 || true
echo "=== Environment Files ==="
ls -la ../.env* || true
ls -la .env* || true
ls -la ../backend/.env* || true
echo "✅ AutoGPT Platform development environment setup complete!"
echo "🚀 Ready for development with Docker services running"
echo "📝 Backend server: poetry run serve (port 8000)"
echo "🌐 Frontend server: pnpm dev (port 3000)"
- name: Run Claude Code
id: claude
-uses: anthropics/claude-code-action@beta
+uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*), Bash(gh pr edit:*)"
--model opus
additional_permissions: |
actions: read


@@ -201,7 +201,7 @@ jobs:
DIRECT_URL: ${{ steps.supabase.outputs.DB_URL }}
SUPABASE_URL: ${{ steps.supabase.outputs.API_URL }}
SUPABASE_SERVICE_ROLE_KEY: ${{ steps.supabase.outputs.SERVICE_ROLE_KEY }}
-SUPABASE_JWT_SECRET: ${{ steps.supabase.outputs.JWT_SECRET }}
+JWT_VERIFY_KEY: ${{ steps.supabase.outputs.JWT_SECRET }}
REDIS_HOST: "localhost"
REDIS_PORT: "6379"
REDIS_PASSWORD: "testpassword"


@@ -160,7 +160,7 @@ jobs:
- name: Run docker compose
run: |
-docker compose -f ../docker-compose.yml up -d
+NEXT_PUBLIC_PW_TEST=true docker compose -f ../docker-compose.yml up -d
env:
DOCKER_BUILDKIT: 1
BUILDX_CACHE_FROM: type=local,src=/tmp/.buildx-cache


@@ -61,24 +61,27 @@ poetry run pytest path/to/test.py --snapshot-update
```bash
# Install dependencies
-cd frontend && npm install
+cd frontend && pnpm i
# Start development server
-npm run dev
+pnpm dev
# Run E2E tests
-npm run test
+pnpm test
# Run Storybook for component development
-npm run storybook
+pnpm storybook
# Build production
-npm run build
+pnpm build
# Type checking
-npm run types
+pnpm types
```
We have a components library in autogpt_platform/frontend/src/components/atoms that should be used when adding new pages and components.
## Architecture Overview
### Backend Architecture
@@ -149,12 +152,21 @@ Key models (defined in `/backend/schema.prisma`):
**Adding a new block:**
Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block-sdk-guide.md) which covers:
- Provider configuration with `ProviderBuilder`
- Block schema definition
- Authentication (API keys, OAuth, webhooks)
- Testing and validation
- File organization
Quick steps:
1. Create new file in `/backend/backend/blocks/`
-2. Inherit from `Block` base class
-3. Define input/output schemas
-4. Implement `run` method
-5. Register in block registry
-6. Generate the block uuid using `uuid.uuid4()`
+2. Configure provider using `ProviderBuilder` in `_config.py`
+3. Inherit from `Block` base class
+4. Define input/output schemas using `BlockSchema`
+5. Implement async `run` method
+6. Generate unique block ID using `uuid.uuid4()`
+7. Test with `poetry run pytest backend/blocks/test/test_block.py`
Note: when making many new blocks, analyze the interfaces for each of these blocks and consider whether they would work well together in a graph-based editor or would struggle to connect productively.
ex: do the inputs and outputs tie together well?


@@ -1,35 +0,0 @@
import hashlib
import secrets
from typing import NamedTuple
class APIKeyContainer(NamedTuple):
"""Container for API key parts."""
raw: str
prefix: str
postfix: str
hash: str
class APIKeyManager:
PREFIX: str = "agpt_"
PREFIX_LENGTH: int = 8
POSTFIX_LENGTH: int = 8
def generate_api_key(self) -> APIKeyContainer:
"""Generate a new API key with all its parts."""
raw_key = f"{self.PREFIX}{secrets.token_urlsafe(32)}"
return APIKeyContainer(
raw=raw_key,
prefix=raw_key[: self.PREFIX_LENGTH],
postfix=raw_key[-self.POSTFIX_LENGTH :],
hash=hashlib.sha256(raw_key.encode()).hexdigest(),
)
def verify_api_key(self, provided_key: str, stored_hash: str) -> bool:
"""Verify if a provided API key matches the stored hash."""
if not provided_key.startswith(self.PREFIX):
return False
provided_hash = hashlib.sha256(provided_key.encode()).hexdigest()
return secrets.compare_digest(provided_hash, stored_hash)


@@ -0,0 +1,78 @@
import hashlib
import secrets
from typing import NamedTuple
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
class APIKeyContainer(NamedTuple):
"""Container for API key parts."""
key: str
head: str
tail: str
hash: str
salt: str
class APIKeySmith:
PREFIX: str = "agpt_"
HEAD_LENGTH: int = 8
TAIL_LENGTH: int = 8
def generate_key(self) -> APIKeyContainer:
"""Generate a new API key with secure hashing."""
raw_key = f"{self.PREFIX}{secrets.token_urlsafe(32)}"
hash, salt = self.hash_key(raw_key)
return APIKeyContainer(
key=raw_key,
head=raw_key[: self.HEAD_LENGTH],
tail=raw_key[-self.TAIL_LENGTH :],
hash=hash,
salt=salt,
)
def verify_key(
self, provided_key: str, known_hash: str, known_salt: str | None = None
) -> bool:
"""
Verify an API key against a known hash (+ salt).
Supports verifying both legacy SHA256 and secure Scrypt hashes.
"""
if not provided_key.startswith(self.PREFIX):
return False
# Handle legacy SHA256 hashes (migration support)
if known_salt is None:
legacy_hash = hashlib.sha256(provided_key.encode()).hexdigest()
return secrets.compare_digest(legacy_hash, known_hash)
try:
salt_bytes = bytes.fromhex(known_salt)
provided_hash = self._hash_key_with_salt(provided_key, salt_bytes)
return secrets.compare_digest(provided_hash, known_hash)
except (ValueError, TypeError):
return False
def hash_key(self, raw_key: str) -> tuple[str, str]:
"""Migrate a legacy hash to secure hash format."""
salt = self._generate_salt()
hash = self._hash_key_with_salt(raw_key, salt)
return hash, salt.hex()
def _generate_salt(self) -> bytes:
"""Generate a random salt for hashing."""
return secrets.token_bytes(32)
def _hash_key_with_salt(self, raw_key: str, salt: bytes) -> str:
"""Hash API key using Scrypt with salt."""
kdf = Scrypt(
length=32,
salt=salt,
n=2**14, # CPU/memory cost parameter
r=8, # Block size parameter
p=1, # Parallelization parameter
)
key_hash = kdf.derive(raw_key.encode())
return key_hash.hex()
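
A short usage sketch for the class above (the import path comes from the
test file that follows; the flow itself is an assumption about intended
use, not code from this PR):

```python
from autogpt_libs.api_key.keysmith import APIKeySmith

keysmith = APIKeySmith()

# Issue a key: persist hash + salt, show `key` to the user exactly once,
# keep head/tail for display (e.g. "agpt_abc...wxyz")
issued = keysmith.generate_key()
stored = {"hash": issued.hash, "salt": issued.salt}

# Verify a presented key against the stored hash + salt (Scrypt path)
assert keysmith.verify_key(issued.key, stored["hash"], stored["salt"])

# Legacy records with only a SHA256 hash verify with salt=None, and can
# be upgraded in place via hash_key() once the raw key is seen again
new_hash, new_salt = keysmith.hash_key(issued.key)
assert keysmith.verify_key(issued.key, new_hash, new_salt)
```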


@@ -0,0 +1,79 @@
import hashlib
from autogpt_libs.api_key.keysmith import APIKeySmith
def test_generate_api_key():
keysmith = APIKeySmith()
key = keysmith.generate_key()
assert key.key.startswith(keysmith.PREFIX)
assert key.head == key.key[: keysmith.HEAD_LENGTH]
assert key.tail == key.key[-keysmith.TAIL_LENGTH :]
assert len(key.hash) == 64 # 32 bytes hex encoded
assert len(key.salt) == 64 # 32 bytes hex encoded
def test_verify_new_secure_key():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Test correct key validates
assert keysmith.verify_key(key.key, key.hash, key.salt) is True
# Test wrong key fails
wrong_key = f"{keysmith.PREFIX}wrongkey123"
assert keysmith.verify_key(wrong_key, key.hash, key.salt) is False
def test_verify_legacy_key():
keysmith = APIKeySmith()
legacy_key = f"{keysmith.PREFIX}legacykey123"
legacy_hash = hashlib.sha256(legacy_key.encode()).hexdigest()
# Test legacy key validates without salt
assert keysmith.verify_key(legacy_key, legacy_hash) is True
# Test wrong legacy key fails
wrong_key = f"{keysmith.PREFIX}wronglegacy"
assert keysmith.verify_key(wrong_key, legacy_hash) is False
def test_rehash_existing_key():
keysmith = APIKeySmith()
legacy_key = f"{keysmith.PREFIX}migratekey123"
# Migrate the legacy key
new_hash, new_salt = keysmith.hash_key(legacy_key)
# Verify migrated key works
assert keysmith.verify_key(legacy_key, new_hash, new_salt) is True
# Verify different key fails with migrated hash
wrong_key = f"{keysmith.PREFIX}wrongkey"
assert keysmith.verify_key(wrong_key, new_hash, new_salt) is False
def test_invalid_key_prefix():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Test key without proper prefix fails
invalid_key = "invalid_prefix_key"
assert keysmith.verify_key(invalid_key, key.hash, key.salt) is False
def test_secure_hash_requires_salt():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Secure hash without salt should fail
assert keysmith.verify_key(key.key, key.hash) is False
def test_invalid_salt_format():
keysmith = APIKeySmith()
key = keysmith.generate_key()
# Invalid salt format should fail gracefully
assert keysmith.verify_key(key.key, key.hash, "invalid_hex") is False


@@ -1,13 +1,13 @@
-from .depends import requires_admin_user, requires_user
-from .jwt_utils import parse_jwt_token
-from .middleware import APIKeyValidator, auth_middleware
+from .config import verify_settings
+from .dependencies import get_user_id, requires_admin_user, requires_user
+from .helpers import add_auth_responses_to_openapi
+from .models import User
 __all__ = [
-    "parse_jwt_token",
-    "requires_user",
+    "verify_settings",
+    "get_user_id",
     "requires_admin_user",
-    "APIKeyValidator",
-    "auth_middleware",
+    "requires_user",
+    "add_auth_responses_to_openapi",
+    "User",
 ]
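
A quick sketch of the re-exported package surface after this change (the startup call is illustrative):

```python
from autogpt_libs.auth import (
    User,
    add_auth_responses_to_openapi,
    get_user_id,
    requires_admin_user,
    requires_user,
    verify_settings,
)

verify_settings()  # fail fast at startup if the JWT config is invalid
```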


@@ -1,11 +1,90 @@
import logging
import os
from jwt.algorithms import get_default_algorithms, has_crypto
logger = logging.getLogger(__name__)
class AuthConfigError(ValueError):
"""Raised when authentication configuration is invalid."""
pass
ALGO_RECOMMENDATION = (
"We highly recommend using an asymmetric algorithm such as ES256, "
"because when leaked, a shared secret would allow anyone to "
"forge valid tokens and impersonate users. "
"More info: https://supabase.com/docs/guides/auth/signing-keys#choosing-the-right-signing-algorithm" # noqa
)
class Settings:
def __init__(self):
-        self.JWT_SECRET_KEY: str = os.getenv("SUPABASE_JWT_SECRET", "")
-        self.ENABLE_AUTH: bool = os.getenv("ENABLE_AUTH", "false").lower() == "true"
-        self.JWT_ALGORITHM: str = "HS256"
+        self.JWT_VERIFY_KEY: str = os.getenv(
+            "JWT_VERIFY_KEY", os.getenv("SUPABASE_JWT_SECRET", "")
+        ).strip()
+        self.JWT_ALGORITHM: str = os.getenv("JWT_SIGN_ALGORITHM", "HS256").strip()
+        self.validate()
def validate(self):
if not self.JWT_VERIFY_KEY:
raise AuthConfigError(
"JWT_VERIFY_KEY must be set. "
"An empty JWT secret would allow anyone to forge valid tokens."
)
if len(self.JWT_VERIFY_KEY) < 32:
logger.warning(
"⚠️ JWT_VERIFY_KEY appears weak (less than 32 characters). "
"Consider using a longer, cryptographically secure secret."
)
supported_algorithms = get_default_algorithms().keys()
if not has_crypto:
logger.warning(
"⚠️ Asymmetric JWT verification is not available "
"because the 'cryptography' package is not installed. "
+ ALGO_RECOMMENDATION
)
if (
self.JWT_ALGORITHM not in supported_algorithms
or self.JWT_ALGORITHM == "none"
):
raise AuthConfigError(
f"Invalid JWT_SIGN_ALGORITHM: '{self.JWT_ALGORITHM}'. "
"Supported algorithms are listed on "
"https://pyjwt.readthedocs.io/en/stable/algorithms.html"
)
if self.JWT_ALGORITHM.startswith("HS"):
logger.warning(
f"⚠️ JWT_SIGN_ALGORITHM is set to '{self.JWT_ALGORITHM}', "
"a symmetric shared-key signature algorithm. " + ALGO_RECOMMENDATION
)
-settings = Settings()
+_settings: Settings = None  # type: ignore
def get_settings() -> Settings:
global _settings
if not _settings:
_settings = Settings()
return _settings
def verify_settings() -> None:
global _settings
if not _settings:
        _settings = Settings()  # Settings.__init__() runs validate()
return
_settings.validate()
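
A minimal sketch of configuring and reading these settings, assuming the environment variables above are set before the first `get_settings()` call (the values shown are illustrative):

```python
import os

# Illustrative values; JWT_SIGN_ALGORITHM defaults to HS256 if unset.
os.environ["JWT_VERIFY_KEY"] = "a-strong-secret-of-at-least-32-chars!!"
os.environ["JWT_SIGN_ALGORITHM"] = "HS256"

from autogpt_libs.auth.config import get_settings, verify_settings

verify_settings()              # raises AuthConfigError on invalid config
settings = get_settings()      # cached singleton after first construction
print(settings.JWT_ALGORITHM)  # -> "HS256"
```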


@@ -0,0 +1,306 @@
"""
Comprehensive tests for auth configuration to ensure 100% line and branch coverage.
These tests verify critical security checks preventing JWT token forgery.
"""
import logging
import os
import pytest
from pytest_mock import MockerFixture
from autogpt_libs.auth.config import AuthConfigError, Settings
def test_environment_variable_precedence(mocker: MockerFixture):
"""Test that environment variables take precedence over defaults."""
secret = "environment-secret-key-with-proper-length-123456"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == secret
def test_environment_variable_backwards_compatible(mocker: MockerFixture):
"""Test that SUPABASE_JWT_SECRET is read if JWT_VERIFY_KEY is not set."""
secret = "environment-secret-key-with-proper-length-123456"
mocker.patch.dict(os.environ, {"SUPABASE_JWT_SECRET": secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == secret
def test_auth_config_error_inheritance():
"""Test that AuthConfigError is properly defined as an Exception."""
assert issubclass(AuthConfigError, Exception)
error = AuthConfigError("test message")
assert str(error) == "test message"
def test_settings_static_after_creation(mocker: MockerFixture):
"""Test that settings maintain their values after creation."""
secret = "immutable-secret-key-with-proper-length-12345"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
settings = Settings()
original_secret = settings.JWT_VERIFY_KEY
# Changing environment after creation shouldn't affect settings
os.environ["JWT_VERIFY_KEY"] = "different-secret"
assert settings.JWT_VERIFY_KEY == original_secret
def test_settings_load_with_valid_secret(mocker: MockerFixture):
"""Test auth enabled with a valid JWT secret."""
valid_secret = "a" * 32 # 32 character secret
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": valid_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == valid_secret
def test_settings_load_with_strong_secret(mocker: MockerFixture):
"""Test auth enabled with a cryptographically strong secret."""
strong_secret = "super-secret-jwt-token-with-at-least-32-characters-long"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": strong_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == strong_secret
assert len(settings.JWT_VERIFY_KEY) >= 32
def test_secret_empty_raises_error(mocker: MockerFixture):
"""Test that auth enabled with empty secret raises AuthConfigError."""
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": ""}, clear=True)
with pytest.raises(Exception) as exc_info:
Settings()
assert "JWT_VERIFY_KEY" in str(exc_info.value)
def test_secret_missing_raises_error(mocker: MockerFixture):
"""Test that auth enabled without secret env var raises AuthConfigError."""
mocker.patch.dict(os.environ, {}, clear=True)
with pytest.raises(Exception) as exc_info:
Settings()
assert "JWT_VERIFY_KEY" in str(exc_info.value)
@pytest.mark.parametrize("secret", [" ", " ", "\t", "\n", " \t\n "])
def test_secret_only_whitespace_raises_error(mocker: MockerFixture, secret: str):
"""Test that auth enabled with whitespace-only secret raises error."""
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
with pytest.raises(ValueError):
Settings()
def test_secret_weak_logs_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture
):
"""Test that weak JWT secret triggers warning log."""
weak_secret = "short" # Less than 32 characters
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": weak_secret}, clear=True)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert settings.JWT_VERIFY_KEY == weak_secret
assert "key appears weak" in caplog.text.lower()
assert "less than 32 characters" in caplog.text
def test_secret_31_char_logs_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture
):
"""Test that 31-character secret triggers warning (boundary test)."""
secret_31 = "a" * 31 # Exactly 31 characters
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret_31}, clear=True)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert len(settings.JWT_VERIFY_KEY) == 31
assert "key appears weak" in caplog.text.lower()
def test_secret_32_char_no_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture
):
"""Test that 32-character secret does not trigger warning (boundary test)."""
secret_32 = "a" * 32 # Exactly 32 characters
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret_32}, clear=True)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert len(settings.JWT_VERIFY_KEY) == 32
assert "JWT secret appears weak" not in caplog.text
def test_secret_whitespace_stripped(mocker: MockerFixture):
"""Test that JWT secret whitespace is stripped."""
secret = "a" * 32
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": f" {secret} "}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == secret
def test_secret_with_special_characters(mocker: MockerFixture):
"""Test JWT secret with special characters."""
special_secret = "!@#$%^&*()_+-=[]{}|;:,.<>?`~" + "a" * 10 # 40 chars total
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": special_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == special_secret
def test_secret_with_unicode(mocker: MockerFixture):
"""Test JWT secret with unicode characters."""
unicode_secret = "秘密🔐キー" + "a" * 25 # Ensure >32 bytes
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": unicode_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == unicode_secret
def test_secret_very_long(mocker: MockerFixture):
"""Test JWT secret with excessive length."""
long_secret = "a" * 1000 # 1000 character secret
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": long_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == long_secret
assert len(settings.JWT_VERIFY_KEY) == 1000
def test_secret_with_newline(mocker: MockerFixture):
"""Test JWT secret containing newlines."""
multiline_secret = "secret\nwith\nnewlines" + "a" * 20
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": multiline_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == multiline_secret
def test_secret_base64_encoded(mocker: MockerFixture):
"""Test JWT secret that looks like base64."""
base64_secret = "dGhpc19pc19hX3NlY3JldF9rZXlfd2l0aF9wcm9wZXJfbGVuZ3Ro"
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": base64_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == base64_secret
def test_secret_numeric_only(mocker: MockerFixture):
"""Test JWT secret with only numbers."""
numeric_secret = "1234567890" * 4 # 40 character numeric secret
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": numeric_secret}, clear=True)
settings = Settings()
assert settings.JWT_VERIFY_KEY == numeric_secret
def test_algorithm_default_hs256(mocker: MockerFixture):
"""Test that JWT algorithm defaults to HS256."""
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": "a" * 32}, clear=True)
settings = Settings()
assert settings.JWT_ALGORITHM == "HS256"
def test_algorithm_whitespace_stripped(mocker: MockerFixture):
"""Test that JWT algorithm whitespace is stripped."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": " HS256 "},
clear=True,
)
settings = Settings()
assert settings.JWT_ALGORITHM == "HS256"
def test_no_crypto_warning(mocker: MockerFixture, caplog: pytest.LogCaptureFixture):
"""Test warning when crypto package is not available."""
secret = "a" * 32
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)
# Mock has_crypto to return False
mocker.patch("autogpt_libs.auth.config.has_crypto", False)
with caplog.at_level(logging.WARNING):
Settings()
assert "Asymmetric JWT verification is not available" in caplog.text
assert "cryptography" in caplog.text
def test_algorithm_invalid_raises_error(mocker: MockerFixture):
"""Test that invalid JWT algorithm raises AuthConfigError."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": "INVALID_ALG"},
clear=True,
)
with pytest.raises(AuthConfigError) as exc_info:
Settings()
assert "Invalid JWT_SIGN_ALGORITHM" in str(exc_info.value)
assert "INVALID_ALG" in str(exc_info.value)
def test_algorithm_none_raises_error(mocker: MockerFixture):
"""Test that 'none' algorithm raises AuthConfigError."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": "none"},
clear=True,
)
with pytest.raises(AuthConfigError) as exc_info:
Settings()
assert "Invalid JWT_SIGN_ALGORITHM" in str(exc_info.value)
@pytest.mark.parametrize("algorithm", ["HS256", "HS384", "HS512"])
def test_algorithm_symmetric_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture, algorithm: str
):
"""Test warning for symmetric algorithms (HS256, HS384, HS512)."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": algorithm},
clear=True,
)
with caplog.at_level(logging.WARNING):
settings = Settings()
assert algorithm in caplog.text
assert "symmetric shared-key signature algorithm" in caplog.text
assert settings.JWT_ALGORITHM == algorithm
@pytest.mark.parametrize(
"algorithm",
["ES256", "ES384", "ES512", "RS256", "RS384", "RS512", "PS256", "PS384", "PS512"],
)
def test_algorithm_asymmetric_no_warning(
mocker: MockerFixture, caplog: pytest.LogCaptureFixture, algorithm: str
):
"""Test that asymmetric algorithms do not trigger warning."""
secret = "a" * 32
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": secret, "JWT_SIGN_ALGORITHM": algorithm},
clear=True,
)
with caplog.at_level(logging.WARNING):
settings = Settings()
# Should not contain the symmetric algorithm warning
assert "symmetric shared-key signature algorithm" not in caplog.text
assert settings.JWT_ALGORITHM == algorithm


@@ -0,0 +1,45 @@
"""
FastAPI dependency functions for JWT-based authentication and authorization.
These are the high-level dependency functions used in route definitions.
"""
import fastapi
from .jwt_utils import get_jwt_payload, verify_user
from .models import User
def requires_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
"""
FastAPI dependency that requires a valid authenticated user.
Raises:
HTTPException: 401 for authentication failures
"""
return verify_user(jwt_payload, admin_only=False)
def requires_admin_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
"""
FastAPI dependency that requires a valid admin user.
Raises:
HTTPException: 401 for authentication failures, 403 for insufficient permissions
"""
return verify_user(jwt_payload, admin_only=True)
def get_user_id(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> str:
"""
FastAPI dependency that returns the ID of the authenticated user.
Raises:
HTTPException: 401 for authentication failures or missing user ID
"""
user_id = jwt_payload.get("sub")
if not user_id:
raise fastapi.HTTPException(
status_code=401, detail="User ID not found in token"
)
return user_id
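
A minimal sketch of attaching these dependencies to routes; this mirrors the integration tests further down (the app and paths are illustrative):

```python
from fastapi import FastAPI, Security

from autogpt_libs.auth.dependencies import get_user_id, requires_user
from autogpt_libs.auth.models import User

app = FastAPI()

@app.get("/me")
def read_me(user: User = Security(requires_user)):
    # A 401 is raised before this body runs if the JWT is missing or invalid.
    return {"user_id": user.user_id, "role": user.role}

@app.get("/me/id")
def read_my_id(user_id: str = Security(get_user_id)):
    return {"user_id": user_id}
```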


@@ -0,0 +1,335 @@
"""
Comprehensive integration tests for authentication dependencies.
Tests the full authentication flow from HTTP requests to user validation.
"""
import os
import pytest
from fastapi import FastAPI, HTTPException, Security
from fastapi.testclient import TestClient
from pytest_mock import MockerFixture
from autogpt_libs.auth.dependencies import (
get_user_id,
requires_admin_user,
requires_user,
)
from autogpt_libs.auth.models import User
class TestAuthDependencies:
"""Test suite for authentication dependency functions."""
@pytest.fixture
def app(self):
"""Create a test FastAPI application."""
app = FastAPI()
@app.get("/user")
def get_user_endpoint(user: User = Security(requires_user)):
return {"user_id": user.user_id, "role": user.role}
@app.get("/admin")
def get_admin_endpoint(user: User = Security(requires_admin_user)):
return {"user_id": user.user_id, "role": user.role}
@app.get("/user-id")
def get_user_id_endpoint(user_id: str = Security(get_user_id)):
return {"user_id": user_id}
return app
@pytest.fixture
def client(self, app):
"""Create a test client."""
return TestClient(app)
def test_requires_user_with_valid_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user with valid JWT payload."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
# Mock get_jwt_payload to return our test payload
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_user(jwt_payload)
assert isinstance(user, User)
assert user.user_id == "user-123"
assert user.role == "user"
def test_requires_user_with_admin_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user accepts admin users."""
jwt_payload = {
"sub": "admin-456",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_user(jwt_payload)
assert user.user_id == "admin-456"
assert user.role == "admin"
def test_requires_user_missing_sub(self):
"""Test requires_user with missing user ID."""
jwt_payload = {"role": "user", "email": "user@example.com"}
with pytest.raises(HTTPException) as exc_info:
requires_user(jwt_payload)
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
def test_requires_user_empty_sub(self):
"""Test requires_user with empty user ID."""
jwt_payload = {"sub": "", "role": "user"}
with pytest.raises(HTTPException) as exc_info:
requires_user(jwt_payload)
assert exc_info.value.status_code == 401
def test_requires_admin_user_with_admin(self, mocker: MockerFixture):
"""Test requires_admin_user with admin role."""
jwt_payload = {
"sub": "admin-789",
"role": "admin",
"email": "admin@example.com",
}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_admin_user(jwt_payload)
assert user.user_id == "admin-789"
assert user.role == "admin"
def test_requires_admin_user_with_regular_user(self):
"""Test requires_admin_user rejects regular users."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
with pytest.raises(HTTPException) as exc_info:
requires_admin_user(jwt_payload)
assert exc_info.value.status_code == 403
assert "Admin access required" in exc_info.value.detail
def test_requires_admin_user_missing_role(self):
"""Test requires_admin_user with missing role."""
jwt_payload = {"sub": "user-123", "email": "user@example.com"}
with pytest.raises(KeyError):
requires_admin_user(jwt_payload)
def test_get_user_id_with_valid_payload(self, mocker: MockerFixture):
"""Test get_user_id extracts user ID correctly."""
jwt_payload = {"sub": "user-id-xyz", "role": "user"}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = get_user_id(jwt_payload)
assert user_id == "user-id-xyz"
def test_get_user_id_missing_sub(self):
"""Test get_user_id with missing user ID."""
jwt_payload = {"role": "user"}
with pytest.raises(HTTPException) as exc_info:
get_user_id(jwt_payload)
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
def test_get_user_id_none_sub(self):
"""Test get_user_id with None user ID."""
jwt_payload = {"sub": None, "role": "user"}
with pytest.raises(HTTPException) as exc_info:
get_user_id(jwt_payload)
assert exc_info.value.status_code == 401
class TestAuthDependenciesIntegration:
"""Integration tests for auth dependencies with FastAPI."""
acceptable_jwt_secret = "test-secret-with-proper-length-123456"
@pytest.fixture
def create_token(self, mocker: MockerFixture):
"""Helper to create JWT tokens."""
import jwt
mocker.patch.dict(
os.environ,
{"JWT_VERIFY_KEY": self.acceptable_jwt_secret},
clear=True,
)
def _create_token(payload, secret=self.acceptable_jwt_secret):
return jwt.encode(payload, secret, algorithm="HS256")
return _create_token
def test_endpoint_auth_enabled_no_token(self):
"""Test endpoints require token when auth is enabled."""
app = FastAPI()
@app.get("/test")
def test_endpoint(user: User = Security(requires_user)):
return {"user_id": user.user_id}
client = TestClient(app)
# Should fail without auth header
response = client.get("/test")
assert response.status_code == 401
def test_endpoint_with_valid_token(self, create_token):
"""Test endpoint with valid JWT token."""
app = FastAPI()
@app.get("/test")
def test_endpoint(user: User = Security(requires_user)):
return {"user_id": user.user_id, "role": user.role}
client = TestClient(app)
token = create_token(
{"sub": "test-user", "role": "user", "aud": "authenticated"},
secret=self.acceptable_jwt_secret,
)
response = client.get("/test", headers={"Authorization": f"Bearer {token}"})
assert response.status_code == 200
assert response.json()["user_id"] == "test-user"
def test_admin_endpoint_requires_admin_role(self, create_token):
"""Test admin endpoint rejects non-admin users."""
app = FastAPI()
@app.get("/admin")
def admin_endpoint(user: User = Security(requires_admin_user)):
return {"user_id": user.user_id}
client = TestClient(app)
# Regular user token
user_token = create_token(
{"sub": "regular-user", "role": "user", "aud": "authenticated"},
secret=self.acceptable_jwt_secret,
)
response = client.get(
"/admin", headers={"Authorization": f"Bearer {user_token}"}
)
assert response.status_code == 403
# Admin token
admin_token = create_token(
{"sub": "admin-user", "role": "admin", "aud": "authenticated"},
secret=self.acceptable_jwt_secret,
)
response = client.get(
"/admin", headers={"Authorization": f"Bearer {admin_token}"}
)
assert response.status_code == 200
assert response.json()["user_id"] == "admin-user"
class TestAuthDependenciesEdgeCases:
"""Edge case tests for authentication dependencies."""
def test_dependency_with_complex_payload(self):
"""Test dependencies handle complex JWT payloads."""
complex_payload = {
"sub": "user-123",
"role": "admin",
"email": "test@example.com",
"app_metadata": {"provider": "email", "providers": ["email"]},
"user_metadata": {
"full_name": "Test User",
"avatar_url": "https://example.com/avatar.jpg",
},
"aud": "authenticated",
"iat": 1234567890,
"exp": 9999999999,
}
user = requires_user(complex_payload)
assert user.user_id == "user-123"
assert user.email == "test@example.com"
admin = requires_admin_user(complex_payload)
assert admin.role == "admin"
def test_dependency_with_unicode_in_payload(self):
"""Test dependencies handle unicode in JWT payloads."""
unicode_payload = {
"sub": "user-😀-123",
"role": "user",
"email": "测试@example.com",
"name": "日本語",
}
user = requires_user(unicode_payload)
assert "😀" in user.user_id
assert user.email == "测试@example.com"
def test_dependency_with_null_values(self):
"""Test dependencies handle null values in payload."""
null_payload = {
"sub": "user-123",
"role": "user",
"email": None,
"phone": None,
"metadata": None,
}
user = requires_user(null_payload)
assert user.user_id == "user-123"
assert user.email is None
def test_concurrent_requests_isolation(self):
"""Test that concurrent requests don't interfere with each other."""
payload1 = {"sub": "user-1", "role": "user"}
payload2 = {"sub": "user-2", "role": "admin"}
# Simulate concurrent processing
user1 = requires_user(payload1)
user2 = requires_admin_user(payload2)
assert user1.user_id == "user-1"
assert user2.user_id == "user-2"
assert user1.role == "user"
assert user2.role == "admin"
@pytest.mark.parametrize(
"payload,expected_error,admin_only",
[
(None, "Authorization header is missing", False),
({}, "User ID not found", False),
({"sub": ""}, "User ID not found", False),
({"role": "user"}, "User ID not found", False),
({"sub": "user", "role": "user"}, "Admin access required", True),
],
)
def test_dependency_error_cases(
self, payload, expected_error: str, admin_only: bool
):
"""Test that errors propagate correctly through dependencies."""
# Import verify_user to test it directly since dependencies use FastAPI Security
from autogpt_libs.auth.jwt_utils import verify_user
with pytest.raises(HTTPException) as exc_info:
verify_user(payload, admin_only=admin_only)
assert expected_error in exc_info.value.detail
def test_dependency_valid_user(self):
"""Test valid user case for dependency."""
# Import verify_user to test it directly since dependencies use FastAPI Security
from autogpt_libs.auth.jwt_utils import verify_user
# Valid case
user = verify_user({"sub": "user", "role": "user"}, admin_only=False)
assert user.user_id == "user"


@@ -1,46 +0,0 @@
import fastapi
from .config import settings
from .middleware import auth_middleware
from .models import DEFAULT_USER_ID, User
def requires_user(payload: dict = fastapi.Depends(auth_middleware)) -> User:
return verify_user(payload, admin_only=False)
def requires_admin_user(
payload: dict = fastapi.Depends(auth_middleware),
) -> User:
return verify_user(payload, admin_only=True)
def verify_user(payload: dict | None, admin_only: bool) -> User:
if not payload:
if settings.ENABLE_AUTH:
raise fastapi.HTTPException(
status_code=401, detail="Authorization header is missing"
)
# This handles the case when authentication is disabled
payload = {"sub": DEFAULT_USER_ID, "role": "admin"}
user_id = payload.get("sub")
if not user_id:
raise fastapi.HTTPException(
status_code=401, detail="User ID not found in token"
)
if admin_only and payload["role"] != "admin":
raise fastapi.HTTPException(status_code=403, detail="Admin access required")
return User.from_payload(payload)
def get_user_id(payload: dict = fastapi.Depends(auth_middleware)) -> str:
user_id = payload.get("sub")
if not user_id:
raise fastapi.HTTPException(
status_code=401, detail="User ID not found in token"
)
return user_id


@@ -1,68 +0,0 @@
import pytest
from .depends import requires_admin_user, requires_user, verify_user
def test_verify_user_no_payload():
user = verify_user(None, admin_only=False)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "admin"
def test_verify_user_no_user_id():
with pytest.raises(Exception):
verify_user({"role": "admin"}, admin_only=False)
def test_verify_user_not_admin():
with pytest.raises(Exception):
verify_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"},
admin_only=True,
)
def test_verify_user_with_admin_role():
user = verify_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "admin"},
admin_only=True,
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "admin"
def test_verify_user_with_user_role():
user = verify_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"},
admin_only=False,
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "user"
def test_requires_user():
user = requires_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"}
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "user"
def test_requires_user_no_user_id():
with pytest.raises(Exception):
requires_user({"role": "user"})
def test_requires_admin_user():
user = requires_admin_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "admin"}
)
assert user.user_id == "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
assert user.role == "admin"
def test_requires_admin_user_not_admin():
with pytest.raises(Exception):
requires_admin_user(
{"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "user"}
)


@@ -0,0 +1,68 @@
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi
from .jwt_utils import bearer_jwt_auth
def add_auth_responses_to_openapi(app: FastAPI) -> None:
"""
Set up custom OpenAPI schema generation that adds 401 responses
to all authenticated endpoints.
This is needed when using HTTPBearer with auto_error=False to get proper
401 responses instead of 403, but FastAPI only automatically adds security
responses when auto_error=True.
"""
def custom_openapi():
if app.openapi_schema:
return app.openapi_schema
openapi_schema = get_openapi(
title=app.title,
version=app.version,
description=app.description,
routes=app.routes,
)
# Add 401 response to all endpoints that have security requirements
for path, methods in openapi_schema["paths"].items():
for method, details in methods.items():
security_schemas = [
schema
for auth_option in details.get("security", [])
for schema in auth_option.keys()
]
if bearer_jwt_auth.scheme_name not in security_schemas:
continue
if "responses" not in details:
details["responses"] = {}
details["responses"]["401"] = {
"$ref": "#/components/responses/HTTP401NotAuthenticatedError"
}
# Ensure #/components/responses exists
if "components" not in openapi_schema:
openapi_schema["components"] = {}
if "responses" not in openapi_schema["components"]:
openapi_schema["components"]["responses"] = {}
# Define 401 response
openapi_schema["components"]["responses"]["HTTP401NotAuthenticatedError"] = {
"description": "Authentication required",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {"detail": {"type": "string"}},
}
}
},
}
app.openapi_schema = openapi_schema
return app.openapi_schema
app.openapi = custom_openapi
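
A minimal usage sketch, assuming the helper is applied once at app construction time:

```python
from fastapi import FastAPI

from autogpt_libs.auth.helpers import add_auth_responses_to_openapi

app = FastAPI(title="My Service")
add_auth_responses_to_openapi(app)

# Secured routes now document a 401 via
# #/components/responses/HTTP401NotAuthenticatedError in the OpenAPI schema.
```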


@@ -0,0 +1,435 @@
"""
Comprehensive tests for auth helpers module to achieve 100% coverage.
Tests OpenAPI schema generation and authentication response handling.
"""
from unittest import mock
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi
from autogpt_libs.auth.helpers import add_auth_responses_to_openapi
from autogpt_libs.auth.jwt_utils import bearer_jwt_auth
def test_add_auth_responses_to_openapi_basic():
"""Test adding 401 responses to OpenAPI schema."""
app = FastAPI(title="Test App", version="1.0.0")
# Add some test endpoints with authentication
from fastapi import Depends
from autogpt_libs.auth.dependencies import requires_user
@app.get("/protected", dependencies=[Depends(requires_user)])
def protected_endpoint():
return {"message": "Protected"}
@app.get("/public")
def public_endpoint():
return {"message": "Public"}
# Apply the OpenAPI customization
add_auth_responses_to_openapi(app)
# Get the OpenAPI schema
schema = app.openapi()
# Verify basic schema properties
assert schema["info"]["title"] == "Test App"
assert schema["info"]["version"] == "1.0.0"
# Verify 401 response component is added
assert "components" in schema
assert "responses" in schema["components"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
# Verify 401 response structure
error_response = schema["components"]["responses"]["HTTP401NotAuthenticatedError"]
assert error_response["description"] == "Authentication required"
assert "application/json" in error_response["content"]
assert "schema" in error_response["content"]["application/json"]
# Verify schema properties
response_schema = error_response["content"]["application/json"]["schema"]
assert response_schema["type"] == "object"
assert "detail" in response_schema["properties"]
assert response_schema["properties"]["detail"]["type"] == "string"
def test_add_auth_responses_to_openapi_with_security():
"""Test that 401 responses are added only to secured endpoints."""
app = FastAPI()
# Mock endpoint with security
from fastapi import Security
from autogpt_libs.auth.dependencies import get_user_id
@app.get("/secured")
def secured_endpoint(user_id: str = Security(get_user_id)):
return {"user_id": user_id}
@app.post("/also-secured")
def another_secured(user_id: str = Security(get_user_id)):
return {"status": "ok"}
@app.get("/unsecured")
def unsecured_endpoint():
return {"public": True}
# Apply OpenAPI customization
add_auth_responses_to_openapi(app)
# Get schema
schema = app.openapi()
# Check that secured endpoints have 401 responses
if "/secured" in schema["paths"]:
if "get" in schema["paths"]["/secured"]:
secured_get = schema["paths"]["/secured"]["get"]
if "responses" in secured_get:
assert "401" in secured_get["responses"]
assert (
secured_get["responses"]["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
if "/also-secured" in schema["paths"]:
if "post" in schema["paths"]["/also-secured"]:
secured_post = schema["paths"]["/also-secured"]["post"]
if "responses" in secured_post:
assert "401" in secured_post["responses"]
# Check that unsecured endpoint does not have 401 response
if "/unsecured" in schema["paths"]:
if "get" in schema["paths"]["/unsecured"]:
unsecured_get = schema["paths"]["/unsecured"]["get"]
if "responses" in unsecured_get:
assert "401" not in unsecured_get.get("responses", {})
def test_add_auth_responses_to_openapi_cached_schema():
"""Test that OpenAPI schema is cached after first generation."""
app = FastAPI()
# Apply customization
add_auth_responses_to_openapi(app)
# Get schema twice
schema1 = app.openapi()
schema2 = app.openapi()
# Should return the same cached object
assert schema1 is schema2
def test_add_auth_responses_to_openapi_existing_responses():
"""Test handling endpoints that already have responses defined."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.jwt_utils import get_jwt_payload
@app.get(
"/with-responses",
responses={
200: {"description": "Success"},
404: {"description": "Not found"},
},
)
def endpoint_with_responses(jwt: dict = Security(get_jwt_payload)):
return {"data": "test"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Check that existing responses are preserved and 401 is added
if "/with-responses" in schema["paths"]:
if "get" in schema["paths"]["/with-responses"]:
responses = schema["paths"]["/with-responses"]["get"].get("responses", {})
# Original responses should be preserved
if "200" in responses:
assert responses["200"]["description"] == "Success"
if "404" in responses:
assert responses["404"]["description"] == "Not found"
# 401 should be added
if "401" in responses:
assert (
responses["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
def test_add_auth_responses_to_openapi_no_security_endpoints():
"""Test with app that has no secured endpoints."""
app = FastAPI()
@app.get("/public1")
def public1():
return {"message": "public1"}
@app.post("/public2")
def public2():
return {"message": "public2"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Component should still be added for consistency
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
# But no endpoints should have 401 responses
for path in schema["paths"].values():
for method in path.values():
if isinstance(method, dict) and "responses" in method:
assert "401" not in method["responses"]
def test_add_auth_responses_to_openapi_multiple_security_schemes():
"""Test endpoints with multiple security requirements."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.dependencies import requires_admin_user, requires_user
from autogpt_libs.auth.models import User
@app.get("/multi-auth")
def multi_auth(
user: User = Security(requires_user),
admin: User = Security(requires_admin_user),
):
return {"status": "super secure"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Should have 401 response
if "/multi-auth" in schema["paths"]:
if "get" in schema["paths"]["/multi-auth"]:
responses = schema["paths"]["/multi-auth"]["get"].get("responses", {})
if "401" in responses:
assert (
responses["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
def test_add_auth_responses_to_openapi_empty_components():
"""Test when OpenAPI schema has no components section initially."""
app = FastAPI()
# Mock get_openapi to return schema without components
original_get_openapi = get_openapi
def mock_get_openapi(*args, **kwargs):
schema = original_get_openapi(*args, **kwargs)
# Remove components if it exists
if "components" in schema:
del schema["components"]
return schema
with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Components should be created
assert "components" in schema
assert "responses" in schema["components"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
def test_add_auth_responses_to_openapi_all_http_methods():
"""Test that all HTTP methods are handled correctly."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.jwt_utils import get_jwt_payload
@app.get("/resource")
def get_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "GET"}
@app.post("/resource")
def post_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "POST"}
@app.put("/resource")
def put_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "PUT"}
@app.patch("/resource")
def patch_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "PATCH"}
@app.delete("/resource")
def delete_resource(jwt: dict = Security(get_jwt_payload)):
return {"method": "DELETE"}
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# All methods should have 401 response
if "/resource" in schema["paths"]:
for method in ["get", "post", "put", "patch", "delete"]:
if method in schema["paths"]["/resource"]:
method_spec = schema["paths"]["/resource"][method]
if "responses" in method_spec:
assert "401" in method_spec["responses"]
def test_bearer_jwt_auth_scheme_config():
"""Test that bearer_jwt_auth is configured correctly."""
assert bearer_jwt_auth.scheme_name == "HTTPBearerJWT"
assert bearer_jwt_auth.auto_error is False
def test_add_auth_responses_with_no_routes():
"""Test OpenAPI generation with app that has no routes."""
app = FastAPI(title="Empty App")
# Apply customization to empty app
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Should still have basic structure
assert schema["info"]["title"] == "Empty App"
assert "components" in schema
assert "responses" in schema["components"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
def test_custom_openapi_function_replacement():
"""Test that the custom openapi function properly replaces the default."""
app = FastAPI()
# Store original function
original_openapi = app.openapi
# Apply customization
add_auth_responses_to_openapi(app)
# Function should be replaced
assert app.openapi != original_openapi
assert callable(app.openapi)
def test_endpoint_without_responses_section():
"""Test endpoint that has security but no responses section initially."""
app = FastAPI()
from fastapi import Security
from fastapi.openapi.utils import get_openapi as original_get_openapi
from autogpt_libs.auth.jwt_utils import get_jwt_payload
# Create endpoint
@app.get("/no-responses")
def endpoint_without_responses(jwt: dict = Security(get_jwt_payload)):
return {"data": "test"}
# Mock get_openapi to remove responses from the endpoint
def mock_get_openapi(*args, **kwargs):
schema = original_get_openapi(*args, **kwargs)
# Remove responses from our endpoint to trigger line 40
if "/no-responses" in schema.get("paths", {}):
if "get" in schema["paths"]["/no-responses"]:
# Delete responses to force the code to create it
if "responses" in schema["paths"]["/no-responses"]["get"]:
del schema["paths"]["/no-responses"]["get"]["responses"]
return schema
with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
# Apply customization
add_auth_responses_to_openapi(app)
# Get schema and verify 401 was added
schema = app.openapi()
# The endpoint should now have 401 response
if "/no-responses" in schema["paths"]:
if "get" in schema["paths"]["/no-responses"]:
responses = schema["paths"]["/no-responses"]["get"].get("responses", {})
assert "401" in responses
assert (
responses["401"]["$ref"]
== "#/components/responses/HTTP401NotAuthenticatedError"
)
def test_components_with_existing_responses():
"""Test when components already has a responses section."""
app = FastAPI()
# Mock get_openapi to return schema with existing components/responses
from fastapi.openapi.utils import get_openapi as original_get_openapi
def mock_get_openapi(*args, **kwargs):
schema = original_get_openapi(*args, **kwargs)
# Add existing components/responses
if "components" not in schema:
schema["components"] = {}
schema["components"]["responses"] = {
"ExistingResponse": {"description": "An existing response"}
}
return schema
with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
# Apply customization
add_auth_responses_to_openapi(app)
schema = app.openapi()
# Both responses should exist
assert "ExistingResponse" in schema["components"]["responses"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
# Verify our 401 response structure
error_response = schema["components"]["responses"][
"HTTP401NotAuthenticatedError"
]
assert error_response["description"] == "Authentication required"
def test_openapi_schema_persistence():
"""Test that modifications to OpenAPI schema persist correctly."""
app = FastAPI()
from fastapi import Security
from autogpt_libs.auth.jwt_utils import get_jwt_payload
@app.get("/test")
def test_endpoint(jwt: dict = Security(get_jwt_payload)):
return {"test": True}
# Apply customization
add_auth_responses_to_openapi(app)
# Get schema multiple times
schema1 = app.openapi()
# Modify the cached schema (shouldn't affect future calls)
schema1["info"]["title"] = "Modified Title"
# Clear cache and get again
app.openapi_schema = None
schema2 = app.openapi()
# Should regenerate with original title
assert schema2["info"]["title"] == app.title
assert schema2["info"]["title"] != "Modified Title"


@@ -1,11 +1,48 @@
-from typing import Any, Dict
+import logging
+from typing import Any
import jwt
from fastapi import HTTPException, Security
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
-from .config import settings
+from .config import get_settings
+from .models import User
logger = logging.getLogger(__name__)
# Bearer token authentication scheme
bearer_jwt_auth = HTTPBearer(
bearerFormat="jwt", scheme_name="HTTPBearerJWT", auto_error=False
)
-def parse_jwt_token(token: str) -> Dict[str, Any]:
def get_jwt_payload(
credentials: HTTPAuthorizationCredentials | None = Security(bearer_jwt_auth),
) -> dict[str, Any]:
"""
Extract and validate JWT payload from HTTP Authorization header.
This is the core authentication function that handles:
- Reading the `Authorization` header to obtain the JWT token
- Verifying the JWT token's signature
- Decoding the JWT token's payload
:param credentials: HTTP Authorization credentials from bearer token
:return: JWT payload dictionary
:raises HTTPException: 401 if authentication fails
"""
if not credentials:
raise HTTPException(status_code=401, detail="Authorization header is missing")
try:
payload = parse_jwt_token(credentials.credentials)
logger.debug("Token decoded successfully")
return payload
except ValueError as e:
raise HTTPException(status_code=401, detail=str(e))
def parse_jwt_token(token: str) -> dict[str, Any]:
"""
Parse and validate a JWT token.
@@ -13,10 +50,11 @@ def parse_jwt_token(token: str) -> Dict[str, Any]:
:return: The decoded payload
:raises ValueError: If the token is invalid or expired
"""
settings = get_settings()
try:
payload = jwt.decode(
token,
-            settings.JWT_SECRET_KEY,
+            settings.JWT_VERIFY_KEY,
algorithms=[settings.JWT_ALGORITHM],
audience="authenticated",
)
@@ -25,3 +63,18 @@ def parse_jwt_token(token: str) -> Dict[str, Any]:
raise ValueError("Token has expired")
except jwt.InvalidTokenError as e:
raise ValueError(f"Invalid token: {str(e)}")
def verify_user(jwt_payload: dict | None, admin_only: bool) -> User:
if jwt_payload is None:
raise HTTPException(status_code=401, detail="Authorization header is missing")
user_id = jwt_payload.get("sub")
if not user_id:
raise HTTPException(status_code=401, detail="User ID not found in token")
if admin_only and jwt_payload["role"] != "admin":
raise HTTPException(status_code=403, detail="Admin access required")
return User.from_payload(jwt_payload)
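
A minimal end-to-end sketch of the new flow (the HS256 secret is illustrative; this assumes settings have not yet been initialized in the process, so `JWT_VERIFY_KEY` is still read from the environment on first use):

```python
import os

import jwt

from autogpt_libs.auth.jwt_utils import parse_jwt_token, verify_user

secret = "test-secret-key-with-at-least-32-characters"
os.environ["JWT_VERIFY_KEY"] = secret  # must match the signing key for HS256

token = jwt.encode(
    {"sub": "user-1", "role": "user", "aud": "authenticated"},
    secret,
    algorithm="HS256",
)

payload = parse_jwt_token(token)  # raises ValueError on invalid/expired tokens
user = verify_user(payload, admin_only=False)
assert user.user_id == "user-1"
```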


@@ -0,0 +1,308 @@
"""
Comprehensive tests for JWT token parsing and validation.
Ensures 100% line and branch coverage for JWT security functions.
"""
import os
from datetime import datetime, timedelta, timezone
import jwt
import pytest
from fastapi import HTTPException
from fastapi.security import HTTPAuthorizationCredentials
from pytest_mock import MockerFixture
from autogpt_libs.auth import config, jwt_utils
from autogpt_libs.auth.config import Settings
from autogpt_libs.auth.models import User
MOCK_JWT_SECRET = "test-secret-key-with-at-least-32-characters"
TEST_USER_PAYLOAD = {
"sub": "test-user-id",
"role": "user",
"aud": "authenticated",
"email": "test@example.com",
}
TEST_ADMIN_PAYLOAD = {
"sub": "admin-user-id",
"role": "admin",
"aud": "authenticated",
"email": "admin@example.com",
}
@pytest.fixture(autouse=True)
def mock_config(mocker: MockerFixture):
mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": MOCK_JWT_SECRET}, clear=True)
mocker.patch.object(config, "_settings", Settings())
yield
def create_token(payload, secret=None, algorithm="HS256"):
"""Helper to create JWT tokens."""
if secret is None:
secret = MOCK_JWT_SECRET
return jwt.encode(payload, secret, algorithm=algorithm)
def test_parse_jwt_token_valid():
"""Test parsing a valid JWT token."""
token = create_token(TEST_USER_PAYLOAD)
result = jwt_utils.parse_jwt_token(token)
assert result["sub"] == "test-user-id"
assert result["role"] == "user"
assert result["aud"] == "authenticated"
def test_parse_jwt_token_expired():
"""Test parsing an expired JWT token."""
expired_payload = {
**TEST_USER_PAYLOAD,
"exp": datetime.now(timezone.utc) - timedelta(hours=1),
}
token = create_token(expired_payload)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Token has expired" in str(exc_info.value)
def test_parse_jwt_token_invalid_signature():
"""Test parsing a token with invalid signature."""
# Create token with different secret
token = create_token(TEST_USER_PAYLOAD, secret="wrong-secret")
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_parse_jwt_token_malformed():
"""Test parsing a malformed token."""
malformed_tokens = [
"not.a.token",
"invalid",
"",
# Header only
"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9",
# No signature
"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0In0",
]
for token in malformed_tokens:
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_parse_jwt_token_wrong_audience():
"""Test parsing a token with wrong audience."""
wrong_aud_payload = {**TEST_USER_PAYLOAD, "aud": "wrong-audience"}
token = create_token(wrong_aud_payload)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_parse_jwt_token_missing_audience():
"""Test parsing a token without audience claim."""
no_aud_payload = {k: v for k, v in TEST_USER_PAYLOAD.items() if k != "aud"}
token = create_token(no_aud_payload)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)
def test_get_jwt_payload_with_valid_token():
"""Test extracting JWT payload with valid bearer token."""
token = create_token(TEST_USER_PAYLOAD)
credentials = HTTPAuthorizationCredentials(scheme="Bearer", credentials=token)
result = jwt_utils.get_jwt_payload(credentials)
assert result["sub"] == "test-user-id"
assert result["role"] == "user"
def test_get_jwt_payload_no_credentials():
"""Test JWT payload when no credentials provided."""
with pytest.raises(HTTPException) as exc_info:
jwt_utils.get_jwt_payload(None)
assert exc_info.value.status_code == 401
assert "Authorization header is missing" in exc_info.value.detail
def test_get_jwt_payload_invalid_token():
"""Test JWT payload extraction with invalid token."""
credentials = HTTPAuthorizationCredentials(
scheme="Bearer", credentials="invalid.token.here"
)
with pytest.raises(HTTPException) as exc_info:
jwt_utils.get_jwt_payload(credentials)
assert exc_info.value.status_code == 401
assert "Invalid token" in exc_info.value.detail
def test_verify_user_with_valid_user():
"""Test verifying a valid user."""
user = jwt_utils.verify_user(TEST_USER_PAYLOAD, admin_only=False)
assert isinstance(user, User)
assert user.user_id == "test-user-id"
assert user.role == "user"
assert user.email == "test@example.com"
def test_verify_user_with_admin():
"""Test verifying an admin user."""
user = jwt_utils.verify_user(TEST_ADMIN_PAYLOAD, admin_only=True)
assert isinstance(user, User)
assert user.user_id == "admin-user-id"
assert user.role == "admin"
def test_verify_user_admin_only_with_regular_user():
"""Test verifying regular user when admin is required."""
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(TEST_USER_PAYLOAD, admin_only=True)
assert exc_info.value.status_code == 403
assert "Admin access required" in exc_info.value.detail
def test_verify_user_no_payload():
"""Test verifying user with no payload."""
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(None, admin_only=False)
assert exc_info.value.status_code == 401
assert "Authorization header is missing" in exc_info.value.detail
def test_verify_user_missing_sub():
"""Test verifying user with payload missing 'sub' field."""
invalid_payload = {"role": "user", "email": "test@example.com"}
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(invalid_payload, admin_only=False)
assert exc_info.value.status_code == 401
assert "User ID not found in token" in exc_info.value.detail
def test_verify_user_empty_sub():
"""Test verifying user with empty 'sub' field."""
invalid_payload = {"sub": "", "role": "user"}
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(invalid_payload, admin_only=False)
assert exc_info.value.status_code == 401
assert "User ID not found in token" in exc_info.value.detail
def test_verify_user_none_sub():
"""Test verifying user with None 'sub' field."""
invalid_payload = {"sub": None, "role": "user"}
with pytest.raises(HTTPException) as exc_info:
jwt_utils.verify_user(invalid_payload, admin_only=False)
assert exc_info.value.status_code == 401
assert "User ID not found in token" in exc_info.value.detail
def test_verify_user_missing_role_admin_check():
"""Test verifying admin when role field is missing."""
no_role_payload = {"sub": "user-id"}
with pytest.raises(KeyError):
# This will raise KeyError when checking payload["role"]
jwt_utils.verify_user(no_role_payload, admin_only=True)
# ======================== EDGE CASES ======================== #
def test_jwt_with_additional_claims():
"""Test JWT token with additional custom claims."""
extra_claims_payload = {
"sub": "user-id",
"role": "user",
"aud": "authenticated",
"custom_claim": "custom_value",
"permissions": ["read", "write"],
"metadata": {"key": "value"},
}
token = create_token(extra_claims_payload)
result = jwt_utils.parse_jwt_token(token)
assert result["sub"] == "user-id"
assert result["custom_claim"] == "custom_value"
assert result["permissions"] == ["read", "write"]
def test_jwt_with_numeric_sub():
"""Test JWT token with numeric user ID."""
payload = {
"sub": 12345, # Numeric ID
"role": "user",
"aud": "authenticated",
}
    # verify_user passes the numeric ID through unchanged
user = jwt_utils.verify_user(payload, admin_only=False)
assert user.user_id == 12345
def test_jwt_with_very_long_sub():
"""Test JWT token with very long user ID."""
long_id = "a" * 1000
payload = {
"sub": long_id,
"role": "user",
"aud": "authenticated",
}
user = jwt_utils.verify_user(payload, admin_only=False)
assert user.user_id == long_id
def test_jwt_with_special_characters_in_claims():
"""Test JWT token with special characters in claims."""
payload = {
"sub": "user@example.com/special-chars!@#$%",
"role": "admin",
"aud": "authenticated",
"email": "test+special@example.com",
}
user = jwt_utils.verify_user(payload, admin_only=True)
assert "special-chars!@#$%" in user.user_id
def test_jwt_with_future_iat():
"""Test JWT token with issued-at time in future."""
future_payload = {
"sub": "user-id",
"role": "user",
"aud": "authenticated",
"iat": datetime.now(timezone.utc) + timedelta(hours=1),
}
token = create_token(future_payload)
# PyJWT validates iat claim and should reject future tokens
with pytest.raises(ValueError, match="not yet valid"):
jwt_utils.parse_jwt_token(token)
def test_jwt_with_different_algorithms():
"""Test that only HS256 algorithm is accepted."""
payload = {
"sub": "user-id",
"role": "user",
"aud": "authenticated",
}
# Try different algorithms
algorithms = ["HS384", "HS512", "none"]
for algo in algorithms:
if algo == "none":
# Special case for 'none' algorithm (security vulnerability if accepted)
token = create_token(payload, "", algorithm="none")
else:
token = create_token(payload, algorithm=algo)
with pytest.raises(ValueError) as exc_info:
jwt_utils.parse_jwt_token(token)
assert "Invalid token" in str(exc_info.value)


@@ -1,139 +0,0 @@
import inspect
import logging
import secrets
from typing import Any, Callable, Optional

from fastapi import HTTPException, Request, Security
from fastapi.security import APIKeyHeader, HTTPBearer
from starlette.status import HTTP_401_UNAUTHORIZED

from .config import settings
from .jwt_utils import parse_jwt_token

logger = logging.getLogger(__name__)

bearer_auth = HTTPBearer(auto_error=False)


async def auth_middleware(request: Request):
    if not settings.ENABLE_AUTH:
        # If authentication is disabled, allow the request to proceed
        logger.warning("Auth disabled")
        return {}

    credentials = await bearer_auth(request)
    if not credentials:
        raise HTTPException(status_code=401, detail="Not authenticated")

    try:
        payload = parse_jwt_token(credentials.credentials)
        request.state.user = payload
        logger.debug("Token decoded successfully")
    except ValueError as e:
        raise HTTPException(status_code=401, detail=str(e))
    return payload


class APIKeyValidator:
    """
    Configurable API key validator that supports custom validation functions
    for FastAPI applications.

    This class provides a flexible way to implement API key authentication
    with optional custom validation logic. It can be used for simple token
    matching or more complex validation scenarios like database lookups.

    Examples:
        Simple token validation:
        ```python
        validator = APIKeyValidator(
            header_name="X-API-Key",
            expected_token="your-secret-token"
        )

        @app.get("/protected", dependencies=[Depends(validator.get_dependency())])
        def protected_endpoint():
            return {"message": "Access granted"}
        ```

        Custom validation with database lookup:
        ```python
        async def validate_with_db(api_key: str):
            api_key_obj = await db.get_api_key(api_key)
            return api_key_obj if api_key_obj and api_key_obj.is_active else None

        validator = APIKeyValidator(
            header_name="X-API-Key",
            validate_fn=validate_with_db
        )
        ```

    Args:
        header_name (str): The name of the header containing the API key
        expected_token (Optional[str]): The expected API key value for simple token matching
        validate_fn (Optional[Callable]): Custom validation function that takes an API key
            string and returns a boolean or object. Can be async.
        error_status (int): HTTP status code to use for validation errors
        error_message (str): Error message to return when validation fails
    """

    def __init__(
        self,
        header_name: str,
        expected_token: Optional[str] = None,
        validate_fn: Optional[Callable[[str], bool]] = None,
        error_status: int = HTTP_401_UNAUTHORIZED,
        error_message: str = "Invalid API key",
    ):
        # Create the APIKeyHeader as a class property
        self.security_scheme = APIKeyHeader(name=header_name)
        self.expected_token = expected_token
        self.custom_validate_fn = validate_fn
        self.error_status = error_status
        self.error_message = error_message

    async def default_validator(self, api_key: str) -> bool:
        if not self.expected_token:
            raise ValueError(
                "expected_token must be set when using APIKeyValidator's default validation"
            )
        return secrets.compare_digest(api_key, self.expected_token)

    async def __call__(
        self, request: Request, api_key: Optional[str] = None
    ) -> Any:
        # api_key is supplied by the wrapper returned from get_dependency(),
        # which binds this validator's APIKeyHeader security scheme.
        if api_key is None:
            raise HTTPException(status_code=self.error_status, detail="Missing API key")

        # Use custom validation if provided, otherwise use default equality check
        validator = self.custom_validate_fn or self.default_validator
        result = (
            await validator(api_key)
            if inspect.iscoroutinefunction(validator)
            else validator(api_key)
        )
        if not result:
            raise HTTPException(
                status_code=self.error_status, detail=self.error_message
            )

        # Store validation result in request state if it's not just a boolean
        if result is not True:
            request.state.api_key = result
        return result

    def get_dependency(self):
        """
        Returns a callable dependency that FastAPI will recognize as a security scheme
        """

        async def validate_api_key(
            request: Request, api_key: str = Security(self.security_scheme)
        ) -> Any:
            return await self(request, api_key)

        # This helps FastAPI recognize it as a security dependency
        validate_api_key.__name__ = f"validate_{self.security_scheme.model.name}"
        return validate_api_key
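
For orientation, a minimal sketch of wiring the validator into an application through `get_dependency()`; the route path and the literal token are illustrative assumptions, not part of this module:

```python
from fastapi import Depends, FastAPI

app = FastAPI()

# Simple shared-secret validation; in practice the token would come from settings.
internal_auth = APIKeyValidator(
    header_name="X-API-Key",
    expected_token="example-secret-token",  # assumed value, for illustration only
)


@app.get("/internal/health", dependencies=[Depends(internal_auth.get_dependency())])
def internal_health():
    return {"status": "ok"}
```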

View File

@@ -54,7 +54,7 @@ version = "1.2.0"
description = "Backport of asyncio.Runner, a context manager that controls event loop life cycle."
optional = false
python-versions = "<3.11,>=3.8"
groups = ["main"]
groups = ["dev"]
markers = "python_version < \"3.11\""
files = [
{file = "backports_asyncio_runner-1.2.0-py3-none-any.whl", hash = "sha256:0da0a936a8aeb554eccb426dc55af3ba63bcdc69fa1a600b5bb305413a4477b5"},
@@ -85,6 +85,87 @@ files = [
{file = "certifi-2025.7.14.tar.gz", hash = "sha256:8ea99dbdfaaf2ba2f9bac77b9249ef62ec5218e7c2b2e903378ed5fccf765995"},
]
[[package]]
name = "cffi"
version = "1.17.1"
description = "Foreign Function Interface for Python calling C code."
optional = false
python-versions = ">=3.8"
groups = ["main"]
markers = "platform_python_implementation != \"PyPy\""
files = [
{file = "cffi-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14"},
{file = "cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be"},
{file = "cffi-1.17.1-cp310-cp310-win32.whl", hash = "sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c"},
{file = "cffi-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15"},
{file = "cffi-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401"},
{file = "cffi-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b"},
{file = "cffi-1.17.1-cp311-cp311-win32.whl", hash = "sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655"},
{file = "cffi-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0"},
{file = "cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4"},
{file = "cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8"},
{file = "cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65"},
{file = "cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903"},
{file = "cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e"},
{file = "cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9"},
{file = "cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d"},
{file = "cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a"},
{file = "cffi-1.17.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1"},
{file = "cffi-1.17.1-cp38-cp38-win32.whl", hash = "sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8"},
{file = "cffi-1.17.1-cp38-cp38-win_amd64.whl", hash = "sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1"},
{file = "cffi-1.17.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16"},
{file = "cffi-1.17.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e"},
{file = "cffi-1.17.1-cp39-cp39-win32.whl", hash = "sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7"},
{file = "cffi-1.17.1-cp39-cp39-win_amd64.whl", hash = "sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662"},
{file = "cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824"},
]
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "3.4.2"
@@ -208,12 +289,176 @@ version = "0.4.6"
description = "Cross-platform colored terminal text."
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
groups = ["main"]
groups = ["main", "dev"]
files = [
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]
[[package]]
name = "coverage"
version = "7.10.5"
description = "Code coverage measurement for Python"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "coverage-7.10.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c6a5c3414bfc7451b879141ce772c546985163cf553f08e0f135f0699a911801"},
{file = "coverage-7.10.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:bc8e4d99ce82f1710cc3c125adc30fd1487d3cf6c2cd4994d78d68a47b16989a"},
{file = "coverage-7.10.5-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:02252dc1216e512a9311f596b3169fad54abcb13827a8d76d5630c798a50a754"},
{file = "coverage-7.10.5-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:73269df37883e02d460bee0cc16be90509faea1e3bd105d77360b512d5bb9c33"},
{file = "coverage-7.10.5-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1f8a81b0614642f91c9effd53eec284f965577591f51f547a1cbeb32035b4c2f"},
{file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:6a29f8e0adb7f8c2b95fa2d4566a1d6e6722e0a637634c6563cb1ab844427dd9"},
{file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:fcf6ab569436b4a647d4e91accba12509ad9f2554bc93d3aee23cc596e7f99c3"},
{file = "coverage-7.10.5-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:90dc3d6fb222b194a5de60af8d190bedeeddcbc7add317e4a3cd333ee6b7c879"},
{file = "coverage-7.10.5-cp310-cp310-win32.whl", hash = "sha256:414a568cd545f9dc75f0686a0049393de8098414b58ea071e03395505b73d7a8"},
{file = "coverage-7.10.5-cp310-cp310-win_amd64.whl", hash = "sha256:e551f9d03347196271935fd3c0c165f0e8c049220280c1120de0084d65e9c7ff"},
{file = "coverage-7.10.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c177e6ffe2ebc7c410785307758ee21258aa8e8092b44d09a2da767834f075f2"},
{file = "coverage-7.10.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:14d6071c51ad0f703d6440827eaa46386169b5fdced42631d5a5ac419616046f"},
{file = "coverage-7.10.5-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:61f78c7c3bc272a410c5ae3fde7792b4ffb4acc03d35a7df73ca8978826bb7ab"},
{file = "coverage-7.10.5-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:f39071caa126f69d63f99b324fb08c7b1da2ec28cbb1fe7b5b1799926492f65c"},
{file = "coverage-7.10.5-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:343a023193f04d46edc46b2616cdbee68c94dd10208ecd3adc56fcc54ef2baa1"},
{file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:585ffe93ae5894d1ebdee69fc0b0d4b7c75d8007983692fb300ac98eed146f78"},
{file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:b0ef4e66f006ed181df29b59921bd8fc7ed7cd6a9289295cd8b2824b49b570df"},
{file = "coverage-7.10.5-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:eb7b0bbf7cc1d0453b843eca7b5fa017874735bef9bfdfa4121373d2cc885ed6"},
{file = "coverage-7.10.5-cp311-cp311-win32.whl", hash = "sha256:1d043a8a06987cc0c98516e57c4d3fc2c1591364831e9deb59c9e1b4937e8caf"},
{file = "coverage-7.10.5-cp311-cp311-win_amd64.whl", hash = "sha256:fefafcca09c3ac56372ef64a40f5fe17c5592fab906e0fdffd09543f3012ba50"},
{file = "coverage-7.10.5-cp311-cp311-win_arm64.whl", hash = "sha256:7e78b767da8b5fc5b2faa69bb001edafcd6f3995b42a331c53ef9572c55ceb82"},
{file = "coverage-7.10.5-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c2d05c7e73c60a4cecc7d9b60dbfd603b4ebc0adafaef371445b47d0f805c8a9"},
{file = "coverage-7.10.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:32ddaa3b2c509778ed5373b177eb2bf5662405493baeff52278a0b4f9415188b"},
{file = "coverage-7.10.5-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:dd382410039fe062097aa0292ab6335a3f1e7af7bba2ef8d27dcda484918f20c"},
{file = "coverage-7.10.5-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7fa22800f3908df31cea6fb230f20ac49e343515d968cc3a42b30d5c3ebf9b5a"},
{file = "coverage-7.10.5-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f366a57ac81f5e12797136552f5b7502fa053c861a009b91b80ed51f2ce651c6"},
{file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:5f1dc8f1980a272ad4a6c84cba7981792344dad33bf5869361576b7aef42733a"},
{file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:2285c04ee8676f7938b02b4936d9b9b672064daab3187c20f73a55f3d70e6b4a"},
{file = "coverage-7.10.5-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:c2492e4dd9daab63f5f56286f8a04c51323d237631eb98505d87e4c4ff19ec34"},
{file = "coverage-7.10.5-cp312-cp312-win32.whl", hash = "sha256:38a9109c4ee8135d5df5505384fc2f20287a47ccbe0b3f04c53c9a1989c2bbaf"},
{file = "coverage-7.10.5-cp312-cp312-win_amd64.whl", hash = "sha256:6b87f1ad60b30bc3c43c66afa7db6b22a3109902e28c5094957626a0143a001f"},
{file = "coverage-7.10.5-cp312-cp312-win_arm64.whl", hash = "sha256:672a6c1da5aea6c629819a0e1461e89d244f78d7b60c424ecf4f1f2556c041d8"},
{file = "coverage-7.10.5-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ef3b83594d933020f54cf65ea1f4405d1f4e41a009c46df629dd964fcb6e907c"},
{file = "coverage-7.10.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2b96bfdf7c0ea9faebce088a3ecb2382819da4fbc05c7b80040dbc428df6af44"},
{file = "coverage-7.10.5-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:63df1fdaffa42d914d5c4d293e838937638bf75c794cf20bee12978fc8c4e3bc"},
{file = "coverage-7.10.5-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8002dc6a049aac0e81ecec97abfb08c01ef0c1fbf962d0c98da3950ace89b869"},
{file = "coverage-7.10.5-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:63d4bb2966d6f5f705a6b0c6784c8969c468dbc4bcf9d9ded8bff1c7e092451f"},
{file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1f672efc0731a6846b157389b6e6d5d5e9e59d1d1a23a5c66a99fd58339914d5"},
{file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:3f39cef43d08049e8afc1fde4a5da8510fc6be843f8dea350ee46e2a26b2f54c"},
{file = "coverage-7.10.5-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:2968647e3ed5a6c019a419264386b013979ff1fb67dd11f5c9886c43d6a31fc2"},
{file = "coverage-7.10.5-cp313-cp313-win32.whl", hash = "sha256:0d511dda38595b2b6934c2b730a1fd57a3635c6aa2a04cb74714cdfdd53846f4"},
{file = "coverage-7.10.5-cp313-cp313-win_amd64.whl", hash = "sha256:9a86281794a393513cf117177fd39c796b3f8e3759bb2764259a2abba5cce54b"},
{file = "coverage-7.10.5-cp313-cp313-win_arm64.whl", hash = "sha256:cebd8e906eb98bb09c10d1feed16096700b1198d482267f8bf0474e63a7b8d84"},
{file = "coverage-7.10.5-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0520dff502da5e09d0d20781df74d8189ab334a1e40d5bafe2efaa4158e2d9e7"},
{file = "coverage-7.10.5-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d9cd64aca68f503ed3f1f18c7c9174cbb797baba02ca8ab5112f9d1c0328cd4b"},
{file = "coverage-7.10.5-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:0913dd1613a33b13c4f84aa6e3f4198c1a21ee28ccb4f674985c1f22109f0aae"},
{file = "coverage-7.10.5-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:1b7181c0feeb06ed8a02da02792f42f829a7b29990fef52eff257fef0885d760"},
{file = "coverage-7.10.5-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:36d42b7396b605f774d4372dd9c49bed71cbabce4ae1ccd074d155709dd8f235"},
{file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:b4fdc777e05c4940b297bf47bf7eedd56a39a61dc23ba798e4b830d585486ca5"},
{file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:42144e8e346de44a6f1dbd0a56575dd8ab8dfa7e9007da02ea5b1c30ab33a7db"},
{file = "coverage-7.10.5-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:66c644cbd7aed8fe266d5917e2c9f65458a51cfe5eeff9c05f15b335f697066e"},
{file = "coverage-7.10.5-cp313-cp313t-win32.whl", hash = "sha256:2d1b73023854068c44b0c554578a4e1ef1b050ed07cf8b431549e624a29a66ee"},
{file = "coverage-7.10.5-cp313-cp313t-win_amd64.whl", hash = "sha256:54a1532c8a642d8cc0bd5a9a51f5a9dcc440294fd06e9dda55e743c5ec1a8f14"},
{file = "coverage-7.10.5-cp313-cp313t-win_arm64.whl", hash = "sha256:74d5b63fe3f5f5d372253a4ef92492c11a4305f3550631beaa432fc9df16fcff"},
{file = "coverage-7.10.5-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:68c5e0bc5f44f68053369fa0d94459c84548a77660a5f2561c5e5f1e3bed7031"},
{file = "coverage-7.10.5-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:cf33134ffae93865e32e1e37df043bef15a5e857d8caebc0099d225c579b0fa3"},
{file = "coverage-7.10.5-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:ad8fa9d5193bafcf668231294241302b5e683a0518bf1e33a9a0dfb142ec3031"},
{file = "coverage-7.10.5-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:146fa1531973d38ab4b689bc764592fe6c2f913e7e80a39e7eeafd11f0ef6db2"},
{file = "coverage-7.10.5-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6013a37b8a4854c478d3219ee8bc2392dea51602dd0803a12d6f6182a0061762"},
{file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:eb90fe20db9c3d930fa2ad7a308207ab5b86bf6a76f54ab6a40be4012d88fcae"},
{file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:384b34482272e960c438703cafe63316dfbea124ac62006a455c8410bf2a2262"},
{file = "coverage-7.10.5-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:467dc74bd0a1a7de2bedf8deaf6811f43602cb532bd34d81ffd6038d6d8abe99"},
{file = "coverage-7.10.5-cp314-cp314-win32.whl", hash = "sha256:556d23d4e6393ca898b2e63a5bca91e9ac2d5fb13299ec286cd69a09a7187fde"},
{file = "coverage-7.10.5-cp314-cp314-win_amd64.whl", hash = "sha256:f4446a9547681533c8fa3e3c6cf62121eeee616e6a92bd9201c6edd91beffe13"},
{file = "coverage-7.10.5-cp314-cp314-win_arm64.whl", hash = "sha256:5e78bd9cf65da4c303bf663de0d73bf69f81e878bf72a94e9af67137c69b9fe9"},
{file = "coverage-7.10.5-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:5661bf987d91ec756a47c7e5df4fbcb949f39e32f9334ccd3f43233bbb65e508"},
{file = "coverage-7.10.5-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a46473129244db42a720439a26984f8c6f834762fc4573616c1f37f13994b357"},
{file = "coverage-7.10.5-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:1f64b8d3415d60f24b058b58d859e9512624bdfa57a2d1f8aff93c1ec45c429b"},
{file = "coverage-7.10.5-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:44d43de99a9d90b20e0163f9770542357f58860a26e24dc1d924643bd6aa7cb4"},
{file = "coverage-7.10.5-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a931a87e5ddb6b6404e65443b742cb1c14959622777f2a4efd81fba84f5d91ba"},
{file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:f9559b906a100029274448f4c8b8b0a127daa4dade5661dfd821b8c188058842"},
{file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:b08801e25e3b4526ef9ced1aa29344131a8f5213c60c03c18fe4c6170ffa2874"},
{file = "coverage-7.10.5-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:ed9749bb8eda35f8b636fb7632f1c62f735a236a5d4edadd8bbcc5ea0542e732"},
{file = "coverage-7.10.5-cp314-cp314t-win32.whl", hash = "sha256:609b60d123fc2cc63ccee6d17e4676699075db72d14ac3c107cc4976d516f2df"},
{file = "coverage-7.10.5-cp314-cp314t-win_amd64.whl", hash = "sha256:0666cf3d2c1626b5a3463fd5b05f5e21f99e6aec40a3192eee4d07a15970b07f"},
{file = "coverage-7.10.5-cp314-cp314t-win_arm64.whl", hash = "sha256:bc85eb2d35e760120540afddd3044a5bf69118a91a296a8b3940dfc4fdcfe1e2"},
{file = "coverage-7.10.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:62835c1b00c4a4ace24c1a88561a5a59b612fbb83a525d1c70ff5720c97c0610"},
{file = "coverage-7.10.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5255b3bbcc1d32a4069d6403820ac8e6dbcc1d68cb28a60a1ebf17e47028e898"},
{file = "coverage-7.10.5-cp39-cp39-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3876385722e335d6e991c430302c24251ef9c2a9701b2b390f5473199b1b8ebf"},
{file = "coverage-7.10.5-cp39-cp39-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8048ce4b149c93447a55d279078c8ae98b08a6951a3c4d2d7e87f4efc7bfe100"},
{file = "coverage-7.10.5-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4028e7558e268dd8bcf4d9484aad393cafa654c24b4885f6f9474bf53183a82a"},
{file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:03f47dc870eec0367fcdd603ca6a01517d2504e83dc18dbfafae37faec66129a"},
{file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:2d488d7d42b6ded7ea0704884f89dcabd2619505457de8fc9a6011c62106f6e5"},
{file = "coverage-7.10.5-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:b3dcf2ead47fa8be14224ee817dfc1df98043af568fe120a22f81c0eb3c34ad2"},
{file = "coverage-7.10.5-cp39-cp39-win32.whl", hash = "sha256:02650a11324b80057b8c9c29487020073d5e98a498f1857f37e3f9b6ea1b2426"},
{file = "coverage-7.10.5-cp39-cp39-win_amd64.whl", hash = "sha256:b45264dd450a10f9e03237b41a9a24e85cbb1e278e5a32adb1a303f58f0017f3"},
{file = "coverage-7.10.5-py3-none-any.whl", hash = "sha256:0be24d35e4db1d23d0db5c0f6a74a962e2ec83c426b5cac09f4234aadef38e4a"},
{file = "coverage-7.10.5.tar.gz", hash = "sha256:f2e57716a78bc3ae80b2207be0709a3b2b63b9f2dcf9740ee6ac03588a2015b6"},
]
[package.dependencies]
tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
[package.extras]
toml = ["tomli ; python_full_version <= \"3.11.0a6\""]
[[package]]
name = "cryptography"
version = "45.0.6"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = "!=3.9.0,!=3.9.1,>=3.7"
groups = ["main"]
files = [
{file = "cryptography-45.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:048e7ad9e08cf4c0ab07ff7f36cc3115924e22e2266e034450a890d9e312dd74"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:44647c5d796f5fc042bbc6d61307d04bf29bccb74d188f18051b635f20a9c75f"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e40b80ecf35ec265c452eea0ba94c9587ca763e739b8e559c128d23bff7ebbbf"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:00e8724bdad672d75e6f069b27970883179bd472cd24a63f6e620ca7e41cc0c5"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7a3085d1b319d35296176af31c90338eeb2ddac8104661df79f80e1d9787b8b2"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1b7fa6a1c1188c7ee32e47590d16a5a0646270921f8020efc9a511648e1b2e08"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:275ba5cc0d9e320cd70f8e7b96d9e59903c815ca579ab96c1e37278d231fc402"},
{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:f4028f29a9f38a2025abedb2e409973709c660d44319c61762202206ed577c42"},
{file = "cryptography-45.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:ee411a1b977f40bd075392c80c10b58025ee5c6b47a822a33c1198598a7a5f05"},
{file = "cryptography-45.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:e2a21a8eda2d86bb604934b6b37691585bd095c1f788530c1fcefc53a82b3453"},
{file = "cryptography-45.0.6-cp311-abi3-win32.whl", hash = "sha256:d063341378d7ee9c91f9d23b431a3502fc8bfacd54ef0a27baa72a0843b29159"},
{file = "cryptography-45.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:833dc32dfc1e39b7376a87b9a6a4288a10aae234631268486558920029b086ec"},
{file = "cryptography-45.0.6-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:3436128a60a5e5490603ab2adbabc8763613f638513ffa7d311c900a8349a2a0"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0d9ef57b6768d9fa58e92f4947cea96ade1233c0e236db22ba44748ffedca394"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ea3c42f2016a5bbf71825537c2ad753f2870191134933196bee408aac397b3d9"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:20ae4906a13716139d6d762ceb3e0e7e110f7955f3bc3876e3a07f5daadec5f3"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:2dac5ec199038b8e131365e2324c03d20e97fe214af051d20c49db129844e8b3"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:18f878a34b90d688982e43f4b700408b478102dd58b3e39de21b5ebf6509c301"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:5bd6020c80c5b2b2242d6c48487d7b85700f5e0038e67b29d706f98440d66eb5"},
{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:eccddbd986e43014263eda489abbddfbc287af5cddfd690477993dbb31e31016"},
{file = "cryptography-45.0.6-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:550ae02148206beb722cfe4ef0933f9352bab26b087af00e48fdfb9ade35c5b3"},
{file = "cryptography-45.0.6-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5b64e668fc3528e77efa51ca70fadcd6610e8ab231e3e06ae2bab3b31c2b8ed9"},
{file = "cryptography-45.0.6-cp37-abi3-win32.whl", hash = "sha256:780c40fb751c7d2b0c6786ceee6b6f871e86e8718a8ff4bc35073ac353c7cd02"},
{file = "cryptography-45.0.6-cp37-abi3-win_amd64.whl", hash = "sha256:20d15aed3ee522faac1a39fbfdfee25d17b1284bafd808e1640a74846d7c4d1b"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:705bb7c7ecc3d79a50f236adda12ca331c8e7ecfbea51edd931ce5a7a7c4f012"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:826b46dae41a1155a0c0e66fafba43d0ede1dc16570b95e40c4d83bfcf0a451d"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:cc4d66f5dc4dc37b89cfef1bd5044387f7a1f6f0abb490815628501909332d5d"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:f68f833a9d445cc49f01097d95c83a850795921b3f7cc6488731e69bde3288da"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:3b5bf5267e98661b9b888a9250d05b063220dfa917a8203744454573c7eb79db"},
{file = "cryptography-45.0.6-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:2384f2ab18d9be88a6e4f8972923405e2dbb8d3e16c6b43f15ca491d7831bd18"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-macosx_10_9_x86_64.whl", hash = "sha256:fc022c1fa5acff6def2fc6d7819bbbd31ccddfe67d075331a65d9cfb28a20983"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:3de77e4df42ac8d4e4d6cdb342d989803ad37707cf8f3fbf7b088c9cbdd46427"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:599c8d7df950aa68baa7e98f7b73f4f414c9f02d0e8104a30c0182a07732638b"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:31a2b9a10530a1cb04ffd6aa1cd4d3be9ed49f7d77a4dafe198f3b382f41545c"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:e5b3dda1b00fb41da3af4c5ef3f922a200e33ee5ba0f0bc9ecf0b0c173958385"},
{file = "cryptography-45.0.6-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:629127cfdcdc6806dfe234734d7cb8ac54edaf572148274fa377a7d3405b0043"},
{file = "cryptography-45.0.6.tar.gz", hash = "sha256:5c966c732cf6e4a276ce83b6e4c729edda2df6929083a952cc7da973c539c719"},
]
[package.dependencies]
cffi = {version = ">=1.14", markers = "platform_python_implementation != \"PyPy\""}
[package.extras]
docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs ; python_full_version >= \"3.8.0\"", "sphinx-rtd-theme (>=3.0.0) ; python_full_version >= \"3.8.0\""]
docstest = ["pyenchant (>=3)", "readme-renderer (>=30.0)", "sphinxcontrib-spelling (>=7.3.1)"]
nox = ["nox (>=2024.4.15)", "nox[uv] (>=2024.3.2) ; python_full_version >= \"3.8.0\""]
pep8test = ["check-sdist ; python_full_version >= \"3.8.0\"", "click (>=8.0.1)", "mypy (>=1.4)", "ruff (>=0.3.6)"]
sdist = ["build (>=1.0.0)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==45.0.6)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
[[package]]
name = "deprecation"
version = "2.1.0"
@@ -235,7 +480,7 @@ version = "1.3.0"
description = "Backport of PEP 654 (exception groups)"
optional = false
python-versions = ">=3.7"
groups = ["main"]
groups = ["main", "dev"]
markers = "python_version < \"3.11\""
files = [
{file = "exceptiongroup-1.3.0-py3-none-any.whl", hash = "sha256:4d111e6e0c13d0644cad6ddaa7ed0261a0b36971f6d23e7ec9b4b9097da78a10"},
@@ -710,7 +955,7 @@ version = "2.1.0"
description = "brain-dead simple config-ini parsing"
optional = false
python-versions = ">=3.8"
groups = ["main"]
groups = ["dev"]
files = [
{file = "iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760"},
{file = "iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7"},
@@ -757,6 +1002,18 @@ dynamodb = ["boto3 (>=1.9.71)"]
redis = ["redis (>=2.10.5)"]
test-filesource = ["pyyaml (>=5.3.1)", "watchdog (>=3.0.0)"]
[[package]]
name = "nodeenv"
version = "1.9.1"
description = "Node.js virtual environment builder"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
groups = ["dev"]
files = [
{file = "nodeenv-1.9.1-py2.py3-none-any.whl", hash = "sha256:ba11c9782d29c27c70ffbdda2d7415098754709be8a7056d79a737cd901155c9"},
{file = "nodeenv-1.9.1.tar.gz", hash = "sha256:6ec12890a2dab7946721edbfbcd91f3319c6ccc9aec47be7c7e6b7011ee6645f"},
]
[[package]]
name = "opentelemetry-api"
version = "1.35.0"
@@ -779,7 +1036,7 @@ version = "25.0"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
groups = ["main"]
groups = ["main", "dev"]
files = [
{file = "packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484"},
{file = "packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"},
@@ -791,7 +1048,7 @@ version = "1.6.0"
description = "plugin and hook calling mechanisms for python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
groups = ["dev"]
files = [
{file = "pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746"},
{file = "pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3"},
@@ -883,6 +1140,19 @@ files = [
[package.dependencies]
pyasn1 = ">=0.6.1,<0.7.0"
[[package]]
name = "pycparser"
version = "2.22"
description = "C parser in Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
markers = "platform_python_implementation != \"PyPy\""
files = [
{file = "pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc"},
{file = "pycparser-2.22.tar.gz", hash = "sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6"},
]
[[package]]
name = "pydantic"
version = "2.11.7"
@@ -1047,7 +1317,7 @@ version = "2.19.2"
description = "Pygments is a syntax highlighting package written in Python."
optional = false
python-versions = ">=3.8"
groups = ["main"]
groups = ["dev"]
files = [
{file = "pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"},
{file = "pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887"},
@@ -1068,6 +1338,9 @@ files = [
{file = "pyjwt-2.10.1.tar.gz", hash = "sha256:3cc5772eb20009233caf06e9d8a0577824723b44e6648ee0a2aedb6cf9381953"},
]
[package.dependencies]
cryptography = {version = ">=3.4.0", optional = true, markers = "extra == \"crypto\""}
[package.extras]
crypto = ["cryptography (>=3.4.0)"]
dev = ["coverage[toml] (==5.0.4)", "cryptography (>=3.4.0)", "pre-commit", "pytest (>=6.0.0,<7.0.0)", "sphinx", "sphinx-rtd-theme", "zope.interface"]
@@ -1086,13 +1359,34 @@ files = [
{file = "pyrfc3339-2.0.1.tar.gz", hash = "sha256:e47843379ea35c1296c3b6c67a948a1a490ae0584edfcbdea0eaffb5dd29960b"},
]
[[package]]
name = "pyright"
version = "1.1.404"
description = "Command line wrapper for pyright"
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "pyright-1.1.404-py3-none-any.whl", hash = "sha256:c7b7ff1fdb7219c643079e4c3e7d4125f0dafcc19d253b47e898d130ea426419"},
{file = "pyright-1.1.404.tar.gz", hash = "sha256:455e881a558ca6be9ecca0b30ce08aa78343ecc031d37a198ffa9a7a1abeb63e"},
]
[package.dependencies]
nodeenv = ">=1.6.0"
typing-extensions = ">=4.1"
[package.extras]
all = ["nodejs-wheel-binaries", "twine (>=3.4.1)"]
dev = ["twine (>=3.4.1)"]
nodejs = ["nodejs-wheel-binaries"]
[[package]]
name = "pytest"
version = "8.4.1"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
groups = ["dev"]
files = [
{file = "pytest-8.4.1-py3-none-any.whl", hash = "sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7"},
{file = "pytest-8.4.1.tar.gz", hash = "sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c"},
@@ -1116,7 +1410,7 @@ version = "1.1.0"
description = "Pytest support for asyncio"
optional = false
python-versions = ">=3.9"
groups = ["main"]
groups = ["dev"]
files = [
{file = "pytest_asyncio-1.1.0-py3-none-any.whl", hash = "sha256:5fe2d69607b0bd75c656d1211f969cadba035030156745ee09e7d71740e58ecf"},
{file = "pytest_asyncio-1.1.0.tar.gz", hash = "sha256:796aa822981e01b68c12e4827b8697108f7205020f24b5793b3c41555dab68ea"},
@@ -1130,13 +1424,33 @@ pytest = ">=8.2,<9"
docs = ["sphinx (>=5.3)", "sphinx-rtd-theme (>=1)"]
testing = ["coverage (>=6.2)", "hypothesis (>=5.7.1)"]
[[package]]
name = "pytest-cov"
version = "6.2.1"
description = "Pytest plugin for measuring coverage."
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "pytest_cov-6.2.1-py3-none-any.whl", hash = "sha256:f5bc4c23f42f1cdd23c70b1dab1bbaef4fc505ba950d53e0081d0730dd7e86d5"},
{file = "pytest_cov-6.2.1.tar.gz", hash = "sha256:25cc6cc0a5358204b8108ecedc51a9b57b34cc6b8c967cc2c01a4e00d8a67da2"},
]
[package.dependencies]
coverage = {version = ">=7.5", extras = ["toml"]}
pluggy = ">=1.2"
pytest = ">=6.2.5"
[package.extras]
testing = ["fields", "hunter", "process-tests", "pytest-xdist", "virtualenv"]
[[package]]
name = "pytest-mock"
version = "3.14.1"
description = "Thin-wrapper around the mock package for easier use with pytest"
optional = false
python-versions = ">=3.8"
groups = ["main"]
groups = ["dev"]
files = [
{file = "pytest_mock-3.14.1-py3-none-any.whl", hash = "sha256:178aefcd11307d874b4cd3100344e7e2d888d9791a6a1d9bfe90fbc1b74fd1d0"},
{file = "pytest_mock-3.14.1.tar.gz", hash = "sha256:159e9edac4c451ce77a5cdb9fc5d1100708d2dd4ba3c3df572f14097351af80e"},
@@ -1253,31 +1567,31 @@ pyasn1 = ">=0.1.3"
[[package]]
name = "ruff"
version = "0.12.9"
version = "0.12.11"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.12.9-py3-none-linux_armv6l.whl", hash = "sha256:fcebc6c79fcae3f220d05585229463621f5dbf24d79fdc4936d9302e177cfa3e"},
{file = "ruff-0.12.9-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:aed9d15f8c5755c0e74467731a007fcad41f19bcce41cd75f768bbd687f8535f"},
{file = "ruff-0.12.9-py3-none-macosx_11_0_arm64.whl", hash = "sha256:5b15ea354c6ff0d7423814ba6d44be2807644d0c05e9ed60caca87e963e93f70"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d596c2d0393c2502eaabfef723bd74ca35348a8dac4267d18a94910087807c53"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1b15599931a1a7a03c388b9c5df1bfa62be7ede6eb7ef753b272381f39c3d0ff"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3d02faa2977fb6f3f32ddb7828e212b7dd499c59eb896ae6c03ea5c303575756"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:17d5b6b0b3a25259b69ebcba87908496e6830e03acfb929ef9fd4c58675fa2ea"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:72db7521860e246adbb43f6ef464dd2a532ef2ef1f5dd0d470455b8d9f1773e0"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a03242c1522b4e0885af63320ad754d53983c9599157ee33e77d748363c561ce"},
{file = "ruff-0.12.9-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fc83e4e9751e6c13b5046d7162f205d0a7bac5840183c5beebf824b08a27340"},
{file = "ruff-0.12.9-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:881465ed56ba4dd26a691954650de6ad389a2d1fdb130fe51ff18a25639fe4bb"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:43f07a3ccfc62cdb4d3a3348bf0588358a66da756aa113e071b8ca8c3b9826af"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:07adb221c54b6bba24387911e5734357f042e5669fa5718920ee728aba3cbadc"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_i686.whl", hash = "sha256:f5cd34fabfdea3933ab85d72359f118035882a01bff15bd1d2b15261d85d5f66"},
{file = "ruff-0.12.9-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:f6be1d2ca0686c54564da8e7ee9e25f93bdd6868263805f8c0b8fc6a449db6d7"},
{file = "ruff-0.12.9-py3-none-win32.whl", hash = "sha256:cc7a37bd2509974379d0115cc5608a1a4a6c4bff1b452ea69db83c8855d53f93"},
{file = "ruff-0.12.9-py3-none-win_amd64.whl", hash = "sha256:6fb15b1977309741d7d098c8a3cb7a30bc112760a00fb6efb7abc85f00ba5908"},
{file = "ruff-0.12.9-py3-none-win_arm64.whl", hash = "sha256:63c8c819739d86b96d500cce885956a1a48ab056bbcbc61b747ad494b2485089"},
{file = "ruff-0.12.9.tar.gz", hash = "sha256:fbd94b2e3c623f659962934e52c2bea6fc6da11f667a427a368adaf3af2c866a"},
{file = "ruff-0.12.11-py3-none-linux_armv6l.whl", hash = "sha256:93fce71e1cac3a8bf9200e63a38ac5c078f3b6baebffb74ba5274fb2ab276065"},
{file = "ruff-0.12.11-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b8e33ac7b28c772440afa80cebb972ffd823621ded90404f29e5ab6d1e2d4b93"},
{file = "ruff-0.12.11-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d69fb9d4937aa19adb2e9f058bc4fbfe986c2040acb1a4a9747734834eaa0bfd"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:411954eca8464595077a93e580e2918d0a01a19317af0a72132283e28ae21bee"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6a2c0a2e1a450f387bf2c6237c727dd22191ae8c00e448e0672d624b2bbd7fb0"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ca4c3a7f937725fd2413c0e884b5248a19369ab9bdd850b5781348ba283f644"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:4d1df0098124006f6a66ecf3581a7f7e754c4df7644b2e6704cd7ca80ff95211"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5a8dd5f230efc99a24ace3b77e3555d3fbc0343aeed3fc84c8d89e75ab2ff793"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4dc75533039d0ed04cd33fb8ca9ac9620b99672fe7ff1533b6402206901c34ee"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4fc58f9266d62c6eccc75261a665f26b4ef64840887fc6cbc552ce5b29f96cc8"},
{file = "ruff-0.12.11-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5a0113bd6eafd545146440225fe60b4e9489f59eb5f5f107acd715ba5f0b3d2f"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:0d737b4059d66295c3ea5720e6efc152623bb83fde5444209b69cd33a53e2000"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:916fc5defee32dbc1fc1650b576a8fed68f5e8256e2180d4d9855aea43d6aab2"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_i686.whl", hash = "sha256:c984f07d7adb42d3ded5be894fb4007f30f82c87559438b4879fe7aa08c62b39"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e07fbb89f2e9249f219d88331c833860489b49cdf4b032b8e4432e9b13e8a4b9"},
{file = "ruff-0.12.11-py3-none-win32.whl", hash = "sha256:c792e8f597c9c756e9bcd4d87cf407a00b60af77078c96f7b6366ea2ce9ba9d3"},
{file = "ruff-0.12.11-py3-none-win_amd64.whl", hash = "sha256:a3283325960307915b6deb3576b96919ee89432ebd9c48771ca12ee8afe4a0fd"},
{file = "ruff-0.12.11-py3-none-win_arm64.whl", hash = "sha256:bae4d6e6a2676f8fb0f98b74594a048bae1b944aab17e9f5d504062303c6dbea"},
{file = "ruff-0.12.11.tar.gz", hash = "sha256:c6b09ae8426a65bbee5425b9d0b82796dbb07cb1af045743c79bfb163001165d"},
]
[[package]]
@@ -1411,7 +1725,7 @@ version = "2.2.1"
description = "A lil' TOML parser"
optional = false
python-versions = ">=3.8"
groups = ["main"]
groups = ["dev"]
markers = "python_version < \"3.11\""
files = [
{file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
@@ -1454,7 +1768,7 @@ version = "4.14.1"
description = "Backported and Experimental Type Hints for Python 3.9+"
optional = false
python-versions = ">=3.9"
groups = ["main"]
groups = ["main", "dev"]
files = [
{file = "typing_extensions-4.14.1-py3-none-any.whl", hash = "sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76"},
{file = "typing_extensions-4.14.1.tar.gz", hash = "sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36"},
@@ -1615,4 +1929,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<4.0"
content-hash = "4cc687aabe5865665fb8c4ccc0ea7e0af80b41e401ca37919f57efa6e0b5be00"
content-hash = "0c40b63c3c921846cf05ccfb4e685d4959854b29c2c302245f9832e20aac6954"

View File

@@ -9,21 +9,25 @@ packages = [{ include = "autogpt_libs" }]
[tool.poetry.dependencies]
python = ">=3.10,<4.0"
colorama = "^0.4.6"
cryptography = "^45.0"
expiringdict = "^1.2.2"
fastapi = "^0.116.1"
google-cloud-logging = "^3.12.1"
launchdarkly-server-sdk = "^9.12.0"
pydantic = "^2.11.7"
pydantic-settings = "^2.10.1"
pyjwt = "^2.10.1"
pytest-asyncio = "^1.1.0"
pytest-mock = "^3.14.1"
pyjwt = { version = "^2.10.1", extras = ["crypto"] }
redis = "^6.2.0"
supabase = "^2.16.0"
uvicorn = "^0.35.0"
[tool.poetry.group.dev.dependencies]
ruff = "^0.12.9"
pyright = "^1.1.404"
pytest = "^8.4.1"
pytest-asyncio = "^1.1.0"
pytest-mock = "^3.14.1"
pytest-cov = "^6.2.1"
ruff = "^0.12.11"
[build-system]
requires = ["poetry-core"]

View File

@@ -16,7 +16,6 @@ DB_SCHEMA=platform
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
DIRECT_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
PRISMA_SCHEMA="postgres/schema.prisma"
-ENABLE_AUTH=true
## ===== REQUIRED SERVICE CREDENTIALS ===== ##
# Redis Configuration
@@ -31,7 +30,7 @@ RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
# Supabase Authentication
SUPABASE_URL=http://localhost:8000
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
-SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
+JWT_VERIFY_KEY=your-super-secret-jwt-token-with-at-least-32-characters-long
## ===== REQUIRED SECURITY KEYS ===== ##
# Generate using: from cryptography.fernet import Fernet;Fernet.generate_key().decode()
@@ -175,4 +174,4 @@ SMARTLEAD_API_KEY=
ZEROBOUNCE_API_KEY=
# Other Services
AUTOMOD_API_KEY=

View File

@@ -132,17 +132,58 @@ def test_endpoint_success(snapshot: Snapshot):
### Testing with Authentication
For the main API routes that use JWT authentication, auth is provided by the `autogpt_libs.auth` module. If the test actually uses the `user_id`, the recommended approach for testing is to mock the `get_jwt_payload` function, which underpins all higher-level auth functions used in the API (`requires_user`, `requires_admin_user`, `get_user_id`).
If the test doesn't need the `user_id` specifically, mocking is unnecessary, since auth is disabled during tests anyway (see `conftest.py`).
#### Using Global Auth Fixtures
Two global auth fixtures are provided by `backend/server/conftest.py`:
- `mock_jwt_user` - Regular user with `test_user_id` ("test-user-id")
- `mock_jwt_admin` - Admin user with `admin_user_id` ("admin-user-id")
These provide the easiest way to set up authentication mocking in test modules:
```python
-def override_auth_middleware():
-    return {"sub": "test-user-id"}
+import fastapi
+import fastapi.testclient
+import pytest
+from backend.server.v2.myroute import router
-def override_get_user_id():
-    return "test-user-id"
+app = fastapi.FastAPI()
+app.include_router(router)
+client = fastapi.testclient.TestClient(app)
-app.dependency_overrides[auth_middleware] = override_auth_middleware
-app.dependency_overrides[get_user_id] = override_get_user_id
+@pytest.fixture(autouse=True)
+def setup_app_auth(mock_jwt_user):
+    """Setup auth overrides for all tests in this module"""
+    from autogpt_libs.auth.jwt_utils import get_jwt_payload
+    app.dependency_overrides[get_jwt_payload] = mock_jwt_user['get_jwt_payload']
+    yield
+    app.dependency_overrides.clear()
```
For admin-only endpoints, use `mock_jwt_admin` instead:
```python
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_admin):
    """Setup auth overrides for admin tests"""
    from autogpt_libs.auth.jwt_utils import get_jwt_payload

    app.dependency_overrides[get_jwt_payload] = mock_jwt_admin['get_jwt_payload']
    yield
    app.dependency_overrides.clear()
```
The IDs are also available separately as fixtures (used in the sketch below):
- `test_user_id`
- `admin_user_id`
- `target_user_id` (for admin <-> user operations)
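
These ID fixtures are handy when a test asserts on ownership of the response. A small sketch, assuming a hypothetical `/api/profile` route that echoes the authenticated user:

```python
def test_profile_returns_authenticated_user(test_user_id: str):
    # setup_app_auth (autouse) has already installed the JWT override,
    # so the client is authenticated as the regular test user.
    response = client.get("/api/profile")  # hypothetical route
    assert response.status_code == 200
    assert response.json()["user_id"] == test_user_id  # assumed response shape
```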
### Mocking External Services
```python
@@ -153,10 +194,10 @@ def test_external_api_call(mocker, snapshot):
"backend.services.external_api.call",
return_value=mock_response
)
response = client.post("/api/process")
assert response.status_code == 200
snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(
json.dumps(response.json(), indent=2, sort_keys=True),
@@ -187,6 +228,17 @@ def test_external_api_call(mocker, snapshot):
- Use `async def` with `@pytest.mark.asyncio` for testing async functions directly
### 5. Fixtures
#### Global Fixtures (conftest.py)
Authentication fixtures are available globally from `conftest.py`:
- `mock_jwt_user` - Standard user authentication
- `mock_jwt_admin` - Admin user authentication
- `configured_snapshot` - Pre-configured snapshot fixture
#### Custom Fixtures
Create reusable fixtures for common test data:
```python
@@ -202,9 +254,18 @@ def test_create_user(sample_user, snapshot):
    # ... test implementation
```
#### Test Isolation
All tests must use fixtures that ensure proper isolation:
- Authentication overrides are automatically cleaned up after each test
- Database connections are properly managed with cleanup (see the sketch below)
- Mock objects are reset between tests
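
A typical shape for such a self-cleaning fixture; `get_test_db` is an assumed helper name, not a real API:

```python
import pytest


@pytest.fixture
def db_connection():
    conn = get_test_db()  # assumed helper that opens an isolated test database
    try:
        yield conn  # the test runs here
    finally:
        conn.close()  # cleanup always runs, keeping tests isolated
```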
## CI/CD Integration
The GitHub Actions workflow automatically runs tests on:
- Pull requests
- Pushes to main branch
@@ -216,16 +277,19 @@ Snapshot tests work in CI by:
## Troubleshooting
### Snapshot Mismatches
- Review the diff carefully
- If changes are expected: `poetry run pytest --snapshot-update`
- If changes are unexpected: Fix the code causing the difference
### Async Test Issues
- Ensure async functions use `@pytest.mark.asyncio`
- Use `AsyncMock` for mocking async functions (see the example below)
- FastAPI TestClient handles async automatically
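
A minimal, self-contained illustration of the first two points:

```python
from unittest.mock import AsyncMock

import pytest


@pytest.mark.asyncio
async def test_awaits_mocked_coroutine():
    # AsyncMock returns an awaitable, so it can stand in for any async function.
    fetch_data = AsyncMock(return_value={"ok": True})
    assert await fetch_data("users") == {"ok": True}
    fetch_data.assert_awaited_once_with("users")
```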
### Import Errors
- Check that all dependencies are in `pyproject.toml`
- Run `poetry install` to ensure dependencies are installed
- Verify import paths are correct
@@ -234,4 +298,4 @@ Snapshot tests work in CI by:
Snapshot testing provides a powerful way to ensure API responses remain consistent. Combined with traditional assertions, it creates a robust test suite that catches regressions while remaining maintainable.
Remember: Good tests are as important as good code!

View File

@@ -1,8 +1,6 @@
import logging
from typing import Any, Optional
-from pydantic import JsonValue
from backend.data.block import (
    Block,
    BlockCategory,
@@ -12,7 +10,7 @@ from backend.data.block import (
    BlockType,
    get_block,
)
-from backend.data.execution import ExecutionStatus
+from backend.data.execution import ExecutionStatus, NodesInputMasks
from backend.data.model import NodeExecutionStats, SchemaField
from backend.util.json import validate_with_jsonschema
from backend.util.retry import func_retry
@@ -33,7 +31,7 @@ class AgentExecutorBlock(Block):
        input_schema: dict = SchemaField(description="Input schema for the graph")
        output_schema: dict = SchemaField(description="Output schema for the graph")
-        nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = SchemaField(
+        nodes_input_masks: Optional[NodesInputMasks] = SchemaField(
            default=None, hidden=True
        )

View File

@@ -0,0 +1,154 @@
from enum import Enum
from typing import Literal

from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput

from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import (
    APIKeyCredentials,
    CredentialsField,
    CredentialsMetaInput,
    SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.file import MediaFileType


class GeminiImageModel(str, Enum):
    NANO_BANANA = "google/nano-banana"


class OutputFormat(str, Enum):
    JPG = "jpg"
    PNG = "png"


TEST_CREDENTIALS = APIKeyCredentials(
    id="01234567-89ab-cdef-0123-456789abcdef",
    provider="replicate",
    api_key=SecretStr("mock-replicate-api-key"),
    title="Mock Replicate API key",
    expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
    "provider": TEST_CREDENTIALS.provider,
    "id": TEST_CREDENTIALS.id,
    "type": TEST_CREDENTIALS.type,
    "title": TEST_CREDENTIALS.title,
}


class AIImageCustomizerBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput[
            Literal[ProviderName.REPLICATE], Literal["api_key"]
        ] = CredentialsField(
            description="Replicate API key with permissions for Google Gemini image models",
        )
        prompt: str = SchemaField(
            description="A text description of the image you want to generate",
            title="Prompt",
        )
        model: GeminiImageModel = SchemaField(
            description="The AI model to use for image generation and editing",
            default=GeminiImageModel.NANO_BANANA,
            title="Model",
        )
        images: list[MediaFileType] = SchemaField(
            description="Optional list of input images to reference or modify",
            default=[],
            title="Input Images",
        )
        output_format: OutputFormat = SchemaField(
            description="Format of the output image",
            default=OutputFormat.PNG,
            title="Output Format",
        )

    class Output(BlockSchema):
        image_url: MediaFileType = SchemaField(description="URL of the generated image")
        error: str = SchemaField(description="Error message if generation failed")

    def __init__(self):
        super().__init__(
            id="d76bbe4c-930e-4894-8469-b66775511f71",
            description=(
                "Generate and edit custom images using Google's Nano-Banana model from Gemini 2.5. "
                "Provide a prompt and optional reference images to create or modify images."
            ),
            categories={BlockCategory.AI, BlockCategory.MULTIMEDIA},
            input_schema=AIImageCustomizerBlock.Input,
            output_schema=AIImageCustomizerBlock.Output,
            test_input={
                "prompt": "Make the scene more vibrant and colorful",
                "model": GeminiImageModel.NANO_BANANA,
                "images": [],
                "output_format": OutputFormat.JPG,
                "credentials": TEST_CREDENTIALS_INPUT,
            },
            test_output=[
                ("image_url", "https://replicate.delivery/generated-image.jpg"),
            ],
            test_mock={
                "run_model": lambda *args, **kwargs: MediaFileType(
                    "https://replicate.delivery/generated-image.jpg"
                ),
            },
            test_credentials=TEST_CREDENTIALS,
        )

    async def run(
        self,
        input_data: Input,
        *,
        credentials: APIKeyCredentials,
        graph_exec_id: str,
        user_id: str,
        **kwargs,
    ) -> BlockOutput:
        try:
            result = await self.run_model(
                api_key=credentials.api_key,
                model_name=input_data.model.value,
                prompt=input_data.prompt,
                images=input_data.images,
                output_format=input_data.output_format.value,
            )
            yield "image_url", result
        except Exception as e:
            yield "error", str(e)

    async def run_model(
        self,
        api_key: SecretStr,
        model_name: str,
        prompt: str,
        images: list[MediaFileType],
        output_format: str,
    ) -> MediaFileType:
        client = ReplicateClient(api_token=api_key.get_secret_value())
        input_params: dict = {
            "prompt": prompt,
            "output_format": output_format,
        }
        # Add images to input if provided (API expects "image_input" parameter)
        if images:
            input_params["image_input"] = [str(img) for img in images]
        output: FileOutput | str = await client.async_run(  # type: ignore
            model_name,
            input=input_params,
            wait=False,
        )
        if isinstance(output, FileOutput):
            return MediaFileType(output.url)
        if isinstance(output, str):
            return MediaFileType(output)
        raise ValueError("No output received from the model")

View File
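For reference, `run_model` above has to normalize two shapes the Replicate client can return: a `FileOutput` object or a bare URL string. A minimal offline sketch of that branching; `FakeFileOutput` is a stand-in for `replicate.helpers.FileOutput`, not part of the SDK:

```python
# Sketch of the output normalization in run_model above.
# FakeFileOutput is illustrative; the real type is replicate.helpers.FileOutput.
class FakeFileOutput:
    def __init__(self, url: str) -> None:
        self.url = url

def normalize_output(output) -> str:
    if hasattr(output, "url"):      # FileOutput-like: URL lives on .url
        return output.url
    if isinstance(output, str):     # some models return the URL directly
        return output
    raise ValueError("No output received from the model")

assert normalize_output(FakeFileOutput("https://example.com/a.jpg")) == "https://example.com/a.jpg"
assert normalize_output("https://example.com/b.png") == "https://example.com/b.png"
```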

@@ -661,6 +661,167 @@ async def update_field(
#################################################################
async def get_table_schema(
credentials: Credentials,
base_id: str,
table_id_or_name: str,
) -> dict:
"""
Get the schema for a specific table, including all field definitions.
Args:
credentials: Airtable API credentials
base_id: The base ID
table_id_or_name: The table ID or name
Returns:
Dict containing table schema with fields information
"""
# First get all tables to find the right one
response = await Requests().get(
f"https://api.airtable.com/v0/meta/bases/{base_id}/tables",
headers={"Authorization": credentials.auth_header()},
)
data = response.json()
tables = data.get("tables", [])
# Find the matching table
for table in tables:
if table.get("id") == table_id_or_name or table.get("name") == table_id_or_name:
return table
raise ValueError(f"Table '{table_id_or_name}' not found in base '{base_id}'")
def get_empty_value_for_field(field_type: str) -> Any:
"""
Return the appropriate empty value for a given Airtable field type.
Args:
field_type: The Airtable field type
Returns:
The appropriate empty value for that field type
"""
# Fields that should be false when empty
if field_type == "checkbox":
return False
# Fields that should be empty arrays
if field_type in [
"multipleSelects",
"multipleRecordLinks",
"multipleAttachments",
"multipleLookupValues",
"multipleCollaborators",
]:
return []
# Fields that should be 0 when empty (numeric types)
if field_type in [
"number",
"percent",
"currency",
"rating",
"duration",
"count",
"autoNumber",
]:
return 0
# Fields that should be empty strings
if field_type in [
"singleLineText",
"multilineText",
"email",
"url",
"phoneNumber",
"richText",
"barcode",
]:
return ""
# Everything else gets null (dates, single selects, formulas, etc.)
return None
async def normalize_records(
records: list[dict],
table_schema: dict,
include_field_metadata: bool = False,
) -> dict:
"""
Normalize Airtable records to include all fields with proper empty values.
Args:
records: List of record objects from Airtable API
table_schema: Table schema containing field definitions
include_field_metadata: Whether to include field metadata in response
Returns:
Dict with normalized records and optionally field metadata
"""
fields = table_schema.get("fields", [])
# Normalize each record
normalized_records = []
for record in records:
normalized = {
"id": record.get("id"),
"createdTime": record.get("createdTime"),
"fields": {},
}
# Add existing fields
existing_fields = record.get("fields", {})
# Add all fields from schema, using empty values for missing ones
for field in fields:
field_name = field["name"]
field_type = field["type"]
if field_name in existing_fields:
# Field exists, use its value
normalized["fields"][field_name] = existing_fields[field_name]
else:
# Field is missing, add appropriate empty value
normalized["fields"][field_name] = get_empty_value_for_field(field_type)
normalized_records.append(normalized)
# Build result dictionary
if include_field_metadata:
field_metadata = {}
for field in fields:
metadata = {"type": field["type"], "id": field["id"]}
# Add type-specific metadata
options = field.get("options", {})
if field["type"] == "currency" and "symbol" in options:
metadata["symbol"] = options["symbol"]
metadata["precision"] = options.get("precision", 2)
elif field["type"] == "duration" and "durationFormat" in options:
metadata["format"] = options["durationFormat"]
elif field["type"] == "percent" and "precision" in options:
metadata["precision"] = options["precision"]
elif (
field["type"] in ["singleSelect", "multipleSelects"]
and "choices" in options
):
metadata["choices"] = [choice["name"] for choice in options["choices"]]
elif field["type"] == "rating" and "max" in options:
metadata["max"] = options["max"]
metadata["icon"] = options.get("icon", "star")
metadata["color"] = options.get("color", "yellowBright")
field_metadata[field["name"]] = metadata
return {"records": normalized_records, "field_metadata": field_metadata}
else:
return {"records": normalized_records}
async def list_records(
credentials: Credentials,
base_id: str,
@@ -1249,3 +1410,26 @@ async def list_bases(
)
return response.json()
async def get_base_tables(
credentials: Credentials,
base_id: str,
) -> list[dict]:
"""
Get all tables for a specific base.
Args:
credentials: Airtable API credentials
base_id: The ID of the base
Returns:
list[dict]: List of table objects with their schemas
"""
response = await Requests().get(
f"https://api.airtable.com/v0/meta/bases/{base_id}/tables",
headers={"Authorization": credentials.auth_header()},
)
data = response.json()
return data.get("tables", [])

View File
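To make the normalization contract above concrete: every field in the table schema appears in the output, and missing values are filled with a type-appropriate empty value rather than omitted. A self-contained sketch with an invented toy schema:

```python
# Toy illustration of normalize_records + get_empty_value_for_field above.
# Schema and field names are invented; empty values follow the same rules
# (checkbox -> False, multipleSelects -> [], singleLineText -> "").
EMPTY = {"checkbox": False, "multipleSelects": [], "singleLineText": ""}

schema_fields = [
    {"name": "Name", "type": "singleLineText"},
    {"name": "Done", "type": "checkbox"},
    {"name": "Tags", "type": "multipleSelects"},
]
record_fields = {"Name": "Write docs"}  # "Done" and "Tags" missing

normalized = {
    f["name"]: record_fields.get(f["name"], EMPTY[f["type"]])
    for f in schema_fields
}
assert normalized == {"Name": "Write docs", "Done": False, "Tags": []}
```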

@@ -14,13 +14,13 @@ from backend.sdk import (
SchemaField,
)
from ._api import create_base, list_bases
from ._api import create_base, get_base_tables, list_bases
from ._config import airtable
class AirtableCreateBaseBlock(Block):
"""
Creates a new base in an Airtable workspace.
Creates a new base in an Airtable workspace, or returns the existing base if one with the same name exists.
"""
class Input(BlockSchema):
@@ -31,6 +31,10 @@ class AirtableCreateBaseBlock(Block):
description="The workspace ID where the base will be created"
)
name: str = SchemaField(description="The name of the new base")
find_existing: bool = SchemaField(
description="If true, return existing base with same name instead of creating duplicate",
default=True,
)
tables: list[dict] = SchemaField(
description="At least one table and field must be specified. Array of table objects to create in the base. Each table should have 'name' and 'fields' properties",
default=[
@@ -50,14 +54,18 @@ class AirtableCreateBaseBlock(Block):
)
class Output(BlockSchema):
base_id: str = SchemaField(description="The ID of the created base")
base_id: str = SchemaField(description="The ID of the created or found base")
tables: list[dict] = SchemaField(description="Array of table objects")
table: dict = SchemaField(description="A single table object")
was_created: bool = SchemaField(
description="True if a new base was created, False if existing was found",
default=True,
)
def __init__(self):
super().__init__(
id="f59b88a8-54ce-4676-a508-fd614b4e8dce",
description="Create a new base in Airtable",
description="Create or find a base in Airtable",
categories={BlockCategory.DATA},
input_schema=self.Input,
output_schema=self.Output,
@@ -66,6 +74,31 @@ class AirtableCreateBaseBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# If find_existing is true, check if a base with this name already exists
if input_data.find_existing:
# List all bases to check for existing one with same name
# Note: Airtable API doesn't have a direct search, so we need to list and filter
existing_bases = await list_bases(credentials)
for base in existing_bases.get("bases", []):
if base.get("name") == input_data.name:
# Base already exists, return it
base_id = base.get("id")
yield "base_id", base_id
yield "was_created", False
# Get the tables for this base
try:
tables = await get_base_tables(credentials, base_id)
yield "tables", tables
for table in tables:
yield "table", table
except Exception:
# If we can't get tables, return empty list
yield "tables", []
return
# No existing base found or find_existing is false, create new one
data = await create_base(
credentials,
input_data.workspace_id,
@@ -74,6 +107,7 @@ class AirtableCreateBaseBlock(Block):
)
yield "base_id", data.get("id", None)
yield "was_created", True
yield "tables", data.get("tables", [])
for table in data.get("tables", []):
yield "table", table

View File
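The `find_existing` flow above is a list-and-filter idempotency pattern: since the Airtable meta API has no search-by-name endpoint, the block lists all bases and matches on name before creating. The decision in miniature, with stubs standing in for the real `list_bases` and `create_base` calls:

```python
# Stubbed find-or-create decision mirroring AirtableCreateBaseBlock.run above.
existing_bases = {"bases": [{"id": "appXYZ", "name": "CRM"}]}  # stub list_bases() result

def find_or_create(name: str, find_existing: bool = True) -> tuple[str, bool]:
    """Return (base_id, was_created)."""
    if find_existing:
        for base in existing_bases.get("bases", []):
            if base.get("name") == name:
                return base["id"], False   # reuse the existing base
    return "appNEW", True                  # stub for create_base(...)

assert find_or_create("CRM") == ("appXYZ", False)
assert find_or_create("Leads") == ("appNEW", True)
```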

@@ -2,7 +2,7 @@
Airtable record operation blocks.
"""
from typing import Optional
from typing import Optional, cast
from backend.sdk import (
APIKeyCredentials,
@@ -18,7 +18,9 @@ from ._api import (
create_record,
delete_multiple_records,
get_record,
get_table_schema,
list_records,
normalize_records,
update_multiple_records,
)
from ._config import airtable
@@ -54,12 +56,24 @@ class AirtableListRecordsBlock(Block):
return_fields: list[str] = SchemaField(
description="Specific fields to return (comma-separated)", default=[]
)
normalize_output: bool = SchemaField(
description="Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response)",
default=True,
)
include_field_metadata: bool = SchemaField(
description="Include field type and configuration metadata (requires normalize_output=true)",
default=False,
)
class Output(BlockSchema):
records: list[dict] = SchemaField(description="Array of record objects")
offset: Optional[str] = SchemaField(
description="Offset for next page (null if no more records)", default=None
)
field_metadata: Optional[dict] = SchemaField(
description="Field type and configuration metadata (only when include_field_metadata=true)",
default=None,
)
def __init__(self):
super().__init__(
@@ -73,6 +87,7 @@ class AirtableListRecordsBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
data = await list_records(
credentials,
input_data.base_id,
@@ -88,8 +103,33 @@ class AirtableListRecordsBlock(Block):
fields=input_data.return_fields if input_data.return_fields else None,
)
yield "records", data.get("records", [])
yield "offset", data.get("offset", None)
records = data.get("records", [])
# Normalize output if requested
if input_data.normalize_output:
# Fetch table schema
table_schema = await get_table_schema(
credentials, input_data.base_id, input_data.table_id_or_name
)
# Normalize the records
normalized_data = await normalize_records(
records,
table_schema,
include_field_metadata=input_data.include_field_metadata,
)
yield "records", normalized_data["records"]
yield "offset", data.get("offset", None)
if (
input_data.include_field_metadata
and "field_metadata" in normalized_data
):
yield "field_metadata", normalized_data["field_metadata"]
else:
yield "records", records
yield "offset", data.get("offset", None)
class AirtableGetRecordBlock(Block):
@@ -104,11 +144,23 @@ class AirtableGetRecordBlock(Block):
base_id: str = SchemaField(description="The Airtable base ID")
table_id_or_name: str = SchemaField(description="Table ID or name")
record_id: str = SchemaField(description="The record ID to retrieve")
normalize_output: bool = SchemaField(
description="Normalize output to include all fields with proper empty values (disable to skip schema fetch and get raw Airtable response)",
default=True,
)
include_field_metadata: bool = SchemaField(
description="Include field type and configuration metadata (requires normalize_output=true)",
default=False,
)
class Output(BlockSchema):
id: str = SchemaField(description="The record ID")
fields: dict = SchemaField(description="The record fields")
created_time: str = SchemaField(description="The record created time")
field_metadata: Optional[dict] = SchemaField(
description="Field type and configuration metadata (only when include_field_metadata=true)",
default=None,
)
def __init__(self):
super().__init__(
@@ -122,6 +174,7 @@ class AirtableGetRecordBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
record = await get_record(
credentials,
input_data.base_id,
@@ -129,9 +182,34 @@ class AirtableGetRecordBlock(Block):
input_data.record_id,
)
yield "id", record.get("id", None)
yield "fields", record.get("fields", None)
yield "created_time", record.get("createdTime", None)
# Normalize output if requested
if input_data.normalize_output:
# Fetch table schema
table_schema = await get_table_schema(
credentials, input_data.base_id, input_data.table_id_or_name
)
# Normalize the single record (wrap in list and unwrap result)
normalized_data = await normalize_records(
[record],
table_schema,
include_field_metadata=input_data.include_field_metadata,
)
normalized_record = normalized_data["records"][0]
yield "id", normalized_record.get("id", None)
yield "fields", normalized_record.get("fields", None)
yield "created_time", normalized_record.get("createdTime", None)
if (
input_data.include_field_metadata
and "field_metadata" in normalized_data
):
yield "field_metadata", normalized_data["field_metadata"]
else:
yield "id", record.get("id", None)
yield "fields", record.get("fields", None)
yield "created_time", record.get("createdTime", None)
class AirtableCreateRecordsBlock(Block):
@@ -148,6 +226,10 @@ class AirtableCreateRecordsBlock(Block):
records: list[dict] = SchemaField(
description="Array of records to create (each with 'fields' object)"
)
skip_normalization: bool = SchemaField(
description="Skip output normalization to get raw Airtable response (faster but may have missing fields)",
default=False,
)
typecast: bool = SchemaField(
description="Automatically convert string values to appropriate types",
default=False,
@@ -159,7 +241,6 @@ class AirtableCreateRecordsBlock(Block):
class Output(BlockSchema):
records: list[dict] = SchemaField(description="Array of created record objects")
details: dict = SchemaField(description="Details of the created records")
def __init__(self):
super().__init__(
@@ -173,7 +254,7 @@ class AirtableCreateRecordsBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# The create_record API expects records in a specific format
data = await create_record(
credentials,
input_data.base_id,
@@ -182,11 +263,22 @@ class AirtableCreateRecordsBlock(Block):
typecast=input_data.typecast if input_data.typecast else None,
return_fields_by_field_id=input_data.return_fields_by_field_id,
)
result_records = cast(list[dict], data.get("records", []))
yield "records", data.get("records", [])
details = data.get("details", None)
if details:
yield "details", details
# Normalize output unless explicitly disabled
if not input_data.skip_normalization and result_records:
# Fetch table schema
table_schema = await get_table_schema(
credentials, input_data.base_id, input_data.table_id_or_name
)
# Normalize the records
normalized_data = await normalize_records(
result_records, table_schema, include_field_metadata=False
)
result_records = normalized_data["records"]
yield "records", result_records
class AirtableUpdateRecordsBlock(Block):

View File
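One detail worth noting in `AirtableGetRecordBlock` above: the single-record path reuses the list-based `normalize_records` by wrapping the record in a list and unwrapping the first result, keeping one normalization code path. The shape of that pattern, with a stand-in normalizer:

```python
# Wrap/unwrap pattern from AirtableGetRecordBlock above; normalize_many
# is a stand-in for the real list-based normalize_records helper.
def normalize_many(records: list[dict]) -> list[dict]:
    return [{**r, "normalized": True} for r in records]

record = {"id": "rec1", "fields": {"Name": "Write docs"}}
normalized_record = normalize_many([record])[0]
assert normalized_record["normalized"] is True
```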

@@ -0,0 +1,3 @@
from .text_overlay import BannerbearTextOverlayBlock
__all__ = ["BannerbearTextOverlayBlock"]

View File

@@ -0,0 +1,8 @@
from backend.sdk import BlockCostType, ProviderBuilder
bannerbear = (
ProviderBuilder("bannerbear")
.with_api_key("BANNERBEAR_API_KEY", "Bannerbear API Key")
.with_base_cost(1, BlockCostType.RUN)
.build()
)

View File

@@ -0,0 +1,239 @@
import uuid
from typing import Any, Dict, List
from pydantic import SecretStr
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
Requests,
SchemaField,
)
from ._config import bannerbear
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="bannerbear",
api_key=SecretStr("mock-bannerbear-api-key"),
title="Mock Bannerbear API Key",
)
class TextModification(BlockSchema):
name: str = SchemaField(
description="The name of the layer to modify in the template"
)
text: str = SchemaField(description="The text content to add to this layer")
color: str = SchemaField(
description="Hex color code for the text (e.g., '#FF0000')",
default="",
advanced=True,
)
font_family: str = SchemaField(
description="Font family to use for the text",
default="",
advanced=True,
)
font_size: int = SchemaField(
description="Font size in pixels",
default=0,
advanced=True,
)
font_weight: str = SchemaField(
description="Font weight (e.g., 'bold', 'normal')",
default="",
advanced=True,
)
text_align: str = SchemaField(
description="Text alignment (left, center, right)",
default="",
advanced=True,
)
class BannerbearTextOverlayBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput = bannerbear.credentials_field(
description="API credentials for Bannerbear"
)
template_id: str = SchemaField(
description="The unique ID of your Bannerbear template"
)
project_id: str = SchemaField(
description="Optional: Project ID (required when using Master API Key)",
default="",
advanced=True,
)
text_modifications: List[TextModification] = SchemaField(
description="List of text layers to modify in the template"
)
image_url: str = SchemaField(
description="Optional: URL of an image to use in the template",
default="",
advanced=True,
)
image_layer_name: str = SchemaField(
description="Optional: Name of the image layer in the template",
default="photo",
advanced=True,
)
webhook_url: str = SchemaField(
description="Optional: URL to receive webhook notification when image is ready",
default="",
advanced=True,
)
metadata: str = SchemaField(
description="Optional: Custom metadata to attach to the image",
default="",
advanced=True,
)
class Output(BlockSchema):
success: bool = SchemaField(
description="Whether the image generation was successfully initiated"
)
image_url: str = SchemaField(
description="URL of the generated image (if synchronous) or placeholder"
)
uid: str = SchemaField(description="Unique identifier for the generated image")
status: str = SchemaField(description="Status of the image generation")
error: str = SchemaField(description="Error message if the operation failed")
def __init__(self):
super().__init__(
id="c7d3a5c2-05fc-450e-8dce-3b0e04626009",
description="Add text overlay to images using Bannerbear templates. Perfect for creating social media graphics, marketing materials, and dynamic image content.",
categories={BlockCategory.PRODUCTIVITY, BlockCategory.AI},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"template_id": "jJWBKNELpQPvbX5R93Gk",
"text_modifications": [
{
"name": "headline",
"text": "Amazing Product Launch!",
"color": "#FF0000",
},
{
"name": "subtitle",
"text": "50% OFF Today Only",
},
],
"credentials": {
"provider": "bannerbear",
"id": str(uuid.uuid4()),
"type": "api_key",
},
},
test_output=[
("success", True),
("image_url", "https://cdn.bannerbear.com/test-image.jpg"),
("uid", "test-uid-123"),
("status", "completed"),
],
test_mock={
"_make_api_request": lambda *args, **kwargs: {
"uid": "test-uid-123",
"status": "completed",
"image_url": "https://cdn.bannerbear.com/test-image.jpg",
}
},
test_credentials=TEST_CREDENTIALS,
)
async def _make_api_request(self, payload: dict, api_key: str) -> dict:
"""Make the actual API request to Bannerbear. This is separated for easy mocking in tests."""
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
}
response = await Requests().post(
"https://sync.api.bannerbear.com/v2/images",
headers=headers,
json=payload,
)
if response.status in [200, 201, 202]:
return response.json()
else:
error_msg = f"API request failed with status {response.status}"
if response.text:
try:
error_data = response.json()
error_msg = (
f"{error_msg}: {error_data.get('message', response.text)}"
)
except Exception:
error_msg = f"{error_msg}: {response.text}"
raise Exception(error_msg)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Build the modifications array
modifications = []
# Add text modifications
for text_mod in input_data.text_modifications:
mod_data: Dict[str, Any] = {
"name": text_mod.name,
"text": text_mod.text,
}
# Add optional text styling parameters only if they have values
if text_mod.color and text_mod.color.strip():
mod_data["color"] = text_mod.color
if text_mod.font_family and text_mod.font_family.strip():
mod_data["font_family"] = text_mod.font_family
if text_mod.font_size and text_mod.font_size > 0:
mod_data["font_size"] = text_mod.font_size
if text_mod.font_weight and text_mod.font_weight.strip():
mod_data["font_weight"] = text_mod.font_weight
if text_mod.text_align and text_mod.text_align.strip():
mod_data["text_align"] = text_mod.text_align
modifications.append(mod_data)
# Add image modification if provided and not empty
if input_data.image_url and input_data.image_url.strip():
modifications.append(
{
"name": input_data.image_layer_name,
"image_url": input_data.image_url,
}
)
# Build the request payload - only include non-empty optional fields
payload = {
"template": input_data.template_id,
"modifications": modifications,
}
# Add project_id if provided (required for Master API keys)
if input_data.project_id and input_data.project_id.strip():
payload["project_id"] = input_data.project_id
if input_data.webhook_url and input_data.webhook_url.strip():
payload["webhook_url"] = input_data.webhook_url
if input_data.metadata and input_data.metadata.strip():
payload["metadata"] = input_data.metadata
# Make the API request using the private method
data = await self._make_api_request(
payload, credentials.api_key.get_secret_value()
)
# Synchronous request - image should be ready
yield "success", True
yield "image_url", data.get("image_url", "")
yield "uid", data.get("uid", "")
yield "status", data.get("status", "completed")

View File
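The payload assembly above forwards a styling key only when it carries a real value, so unset options fall back to the Bannerbear template's own defaults. A reduced sketch of that filtering; layer names and the template ID are illustrative:

```python
# Reduced sketch of the optional-field filtering in run() above.
# Layer names and template ID are illustrative.
def build_text_mod(name: str, text: str, color: str = "", font_size: int = 0) -> dict:
    mod: dict = {"name": name, "text": text}
    if color.strip():
        mod["color"] = color            # only sent when non-empty
    if font_size > 0:
        mod["font_size"] = font_size    # only sent when positive
    return mod

payload = {
    "template": "jJWBKNELpQPvbX5R93Gk",
    "modifications": [
        build_text_mod("headline", "Launch!", color="#FF0000"),
        build_text_mod("subtitle", "50% OFF"),  # no styling -> template defaults
    ],
}
assert "color" not in payload["modifications"][1]
```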

@@ -93,11 +93,11 @@ class Webset(BaseModel):
"""
Set of key-value pairs you want to associate with this object.
"""
created_at: Annotated[datetime, Field(alias="createdAt")] | None = None
created_at: Annotated[datetime | None, Field(alias="createdAt")] = None
"""
The date and time the webset was created
"""
updated_at: Annotated[datetime, Field(alias="updatedAt")] | None = None
updated_at: Annotated[datetime | None, Field(alias="updatedAt")] = None
"""
The date and time the webset was last updated
"""

View File
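The one-line change above matters in Pydantic v2: `Field` metadata buried inside one arm of a union (`Annotated[datetime, Field(alias=...)] | None`) may not be applied to the field, so the `createdAt`/`updatedAt` aliases could be silently ignored; moving `| None` inside `Annotated` keeps the alias active while the field stays optional. A minimal sketch of the fixed form:

```python
# Minimal Pydantic v2 sketch of the corrected annotation above.
from datetime import datetime
from typing import Annotated
from pydantic import BaseModel, Field

class Webset(BaseModel):
    # The union with None sits inside Annotated, so the alias applies.
    created_at: Annotated[datetime | None, Field(alias="createdAt")] = None

w = Webset.model_validate({"createdAt": "2025-09-10T12:00:00Z"})
assert w.created_at is not None
```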

@@ -1094,6 +1094,117 @@ class GmailGetThreadBlock(GmailBase):
return thread
async def _build_reply_message(
service, input_data, graph_exec_id: str, user_id: str
) -> tuple[str, str]:
"""
Builds a reply MIME message for Gmail threads.
Returns:
tuple: (base64-encoded raw message, threadId)
"""
# Get parent message for reply context
parent = await asyncio.to_thread(
lambda: service.users()
.messages()
.get(
userId="me",
id=input_data.parentMessageId,
format="metadata",
metadataHeaders=[
"Subject",
"References",
"Message-ID",
"From",
"To",
"Cc",
"Reply-To",
],
)
.execute()
)
# Build headers dictionary, preserving all values for duplicate headers
headers = {}
for h in parent.get("payload", {}).get("headers", []):
name = h["name"].lower()
value = h["value"]
if name in headers:
# For duplicate headers, keep the first occurrence (most relevant for reply context)
continue
headers[name] = value
# Determine recipients if not specified
if not (input_data.to or input_data.cc or input_data.bcc):
if input_data.replyAll:
recipients = [parseaddr(headers.get("from", ""))[1]]
recipients += [addr for _, addr in getaddresses([headers.get("to", "")])]
recipients += [addr for _, addr in getaddresses([headers.get("cc", "")])]
# Use dict.fromkeys() for O(n) deduplication while preserving order
input_data.to = list(dict.fromkeys(filter(None, recipients)))
else:
# Check Reply-To header first, fall back to From header
reply_to = headers.get("reply-to", "")
from_addr = headers.get("from", "")
sender = parseaddr(reply_to if reply_to else from_addr)[1]
input_data.to = [sender] if sender else []
# Set subject with Re: prefix if not already present
if input_data.subject:
subject = input_data.subject
else:
parent_subject = headers.get("subject", "").strip()
# Only add "Re:" if not already present (case-insensitive check)
if parent_subject.lower().startswith("re:"):
subject = parent_subject
else:
subject = f"Re: {parent_subject}" if parent_subject else "Re:"
# Build references header for proper threading
references = headers.get("references", "").split()
if headers.get("message-id"):
references.append(headers["message-id"])
# Create MIME message
msg = MIMEMultipart()
if input_data.to:
msg["To"] = ", ".join(input_data.to)
if input_data.cc:
msg["Cc"] = ", ".join(input_data.cc)
if input_data.bcc:
msg["Bcc"] = ", ".join(input_data.bcc)
msg["Subject"] = subject
if headers.get("message-id"):
msg["In-Reply-To"] = headers["message-id"]
if references:
msg["References"] = " ".join(references)
# Use the helper function for consistent content type handling
msg.attach(_make_mime_text(input_data.body, input_data.content_type))
# Handle attachments
for attach in input_data.attachments:
local_path = await store_media_file(
user_id=user_id,
graph_exec_id=graph_exec_id,
file=attach,
return_content=False,
)
abs_path = get_exec_file_path(graph_exec_id, local_path)
part = MIMEBase("application", "octet-stream")
with open(abs_path, "rb") as f:
part.set_payload(f.read())
encoders.encode_base64(part)
part.add_header(
"Content-Disposition", f"attachment; filename={Path(abs_path).name}"
)
msg.attach(part)
# Encode message
raw = base64.urlsafe_b64encode(msg.as_bytes()).decode("utf-8")
return raw, input_data.threadId
class GmailReplyBlock(GmailBase):
"""
Replies to Gmail threads with intelligent content type detection.
@@ -1230,93 +1341,146 @@ class GmailReplyBlock(GmailBase):
async def _reply(
self, service, input_data: Input, graph_exec_id: str, user_id: str
) -> dict:
parent = await asyncio.to_thread(
lambda: service.users()
.messages()
.get(
userId="me",
id=input_data.parentMessageId,
format="metadata",
metadataHeaders=[
"Subject",
"References",
"Message-ID",
"From",
"To",
"Cc",
"Reply-To",
],
)
.execute()
# Build the reply message using the shared helper
raw, thread_id = await _build_reply_message(
service, input_data, graph_exec_id, user_id
)
headers = {
h["name"].lower(): h["value"]
for h in parent.get("payload", {}).get("headers", [])
}
if not (input_data.to or input_data.cc or input_data.bcc):
if input_data.replyAll:
recipients = [parseaddr(headers.get("from", ""))[1]]
recipients += [
addr for _, addr in getaddresses([headers.get("to", "")])
]
recipients += [
addr for _, addr in getaddresses([headers.get("cc", "")])
]
dedup: list[str] = []
for r in recipients:
if r and r not in dedup:
dedup.append(r)
input_data.to = dedup
else:
sender = parseaddr(headers.get("reply-to", headers.get("from", "")))[1]
input_data.to = [sender] if sender else []
subject = input_data.subject or (f"Re: {headers.get('subject', '')}".strip())
references = headers.get("references", "").split()
if headers.get("message-id"):
references.append(headers["message-id"])
msg = MIMEMultipart()
if input_data.to:
msg["To"] = ", ".join(input_data.to)
if input_data.cc:
msg["Cc"] = ", ".join(input_data.cc)
if input_data.bcc:
msg["Bcc"] = ", ".join(input_data.bcc)
msg["Subject"] = subject
if headers.get("message-id"):
msg["In-Reply-To"] = headers["message-id"]
if references:
msg["References"] = " ".join(references)
# Use the new helper function for consistent content type handling
msg.attach(_make_mime_text(input_data.body, input_data.content_type))
for attach in input_data.attachments:
local_path = await store_media_file(
user_id=user_id,
graph_exec_id=graph_exec_id,
file=attach,
return_content=False,
)
abs_path = get_exec_file_path(graph_exec_id, local_path)
part = MIMEBase("application", "octet-stream")
with open(abs_path, "rb") as f:
part.set_payload(f.read())
encoders.encode_base64(part)
part.add_header(
"Content-Disposition", f"attachment; filename={Path(abs_path).name}"
)
msg.attach(part)
raw = base64.urlsafe_b64encode(msg.as_bytes()).decode("utf-8")
# Send the message
return await asyncio.to_thread(
lambda: service.users()
.messages()
.send(userId="me", body={"threadId": input_data.threadId, "raw": raw})
.send(userId="me", body={"threadId": thread_id, "raw": raw})
.execute()
)
class GmailDraftReplyBlock(GmailBase):
"""
Creates draft replies to Gmail threads with intelligent content type detection.
Features:
- Automatic HTML detection: Draft replies containing HTML tags are formatted as text/html
- No hard-wrap for plain text: Plain text draft replies preserve natural line flow
- Manual content type override: Use content_type parameter to force specific format
- Reply-all functionality: Option to reply to all original recipients
- Thread preservation: Maintains proper email threading with headers
- Full Unicode/emoji support with UTF-8 encoding
"""
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
[
"https://www.googleapis.com/auth/gmail.modify",
"https://www.googleapis.com/auth/gmail.readonly",
]
)
threadId: str = SchemaField(description="Thread ID to reply in")
parentMessageId: str = SchemaField(
description="ID of the message being replied to"
)
to: list[str] = SchemaField(description="To recipients", default_factory=list)
cc: list[str] = SchemaField(description="CC recipients", default_factory=list)
bcc: list[str] = SchemaField(description="BCC recipients", default_factory=list)
replyAll: bool = SchemaField(
description="Reply to all original recipients", default=False
)
subject: str = SchemaField(description="Email subject", default="")
body: str = SchemaField(description="Email body (plain text or HTML)")
content_type: Optional[Literal["auto", "plain", "html"]] = SchemaField(
description="Content type: 'auto' (default - detects HTML), 'plain', or 'html'",
default=None,
advanced=True,
)
attachments: list[MediaFileType] = SchemaField(
description="Files to attach", default_factory=list, advanced=True
)
class Output(BlockSchema):
draftId: str = SchemaField(description="Created draft ID")
messageId: str = SchemaField(description="Draft message ID")
threadId: str = SchemaField(description="Thread ID")
status: str = SchemaField(description="Draft creation status")
error: str = SchemaField(description="Error message if any")
def __init__(self):
super().__init__(
id="d7a9f3e2-8b4c-4d6f-9e1a-3c5b7f8d2a6e",
description="Create draft replies to Gmail threads with automatic HTML detection and proper text formatting. Plain text draft replies maintain natural paragraph flow without 78-character line wrapping. HTML content is automatically detected and formatted correctly.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailDraftReplyBlock.Input,
output_schema=GmailDraftReplyBlock.Output,
disabled=not GOOGLE_OAUTH_IS_CONFIGURED,
test_input={
"threadId": "t1",
"parentMessageId": "m1",
"body": "Thanks for your message. I'll review and get back to you.",
"replyAll": False,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("draftId", "draft1"),
("messageId", "m2"),
("threadId", "t1"),
("status", "draft_created"),
],
test_mock={
"_create_draft_reply": lambda *args, **kwargs: {
"id": "draft1",
"message": {"id": "m2", "threadId": "t1"},
}
},
)
async def run(
self,
input_data: Input,
*,
credentials: GoogleCredentials,
graph_exec_id: str,
user_id: str,
**kwargs,
) -> BlockOutput:
service = self._build_service(credentials, **kwargs)
draft = await self._create_draft_reply(
service,
input_data,
graph_exec_id,
user_id,
)
yield "draftId", draft["id"]
yield "messageId", draft["message"]["id"]
yield "threadId", draft["message"].get("threadId", input_data.threadId)
yield "status", "draft_created"
async def _create_draft_reply(
self, service, input_data: Input, graph_exec_id: str, user_id: str
) -> dict:
# Build the reply message using the shared helper
raw, thread_id = await _build_reply_message(
service, input_data, graph_exec_id, user_id
)
# Create draft with proper thread association
draft = await asyncio.to_thread(
lambda: service.users()
.drafts()
.create(
userId="me",
body={
"message": {
"threadId": thread_id,
"raw": raw,
}
},
)
.execute()
)
return draft
class GmailGetProfileBlock(GmailBase):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(

View File
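The reply-all branch in `_build_reply_message` above leans on two stdlib idioms: `email.utils.getaddresses` to parse multi-address headers and `dict.fromkeys` for order-preserving deduplication. A standalone sketch with made-up addresses:

```python
# Order-preserving recipient dedup, as in _build_reply_message above.
# Addresses are made up.
from email.utils import getaddresses, parseaddr

headers = {
    "from": "Alice <alice@example.com>",
    "to": "bob@example.com, Alice <alice@example.com>",
    "cc": "carol@example.com",
}

recipients = [parseaddr(headers["from"])[1]]
recipients += [addr for _, addr in getaddresses([headers["to"]])]
recipients += [addr for _, addr in getaddresses([headers["cc"]])]

to = list(dict.fromkeys(filter(None, recipients)))
assert to == ["alice@example.com", "bob@example.com", "carol@example.com"]
```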

@@ -30,6 +30,7 @@ TEST_CREDENTIALS_INPUT = {
class IdeogramModelName(str, Enum):
V3 = "V_3"
V2 = "V_2"
V1 = "V_1"
V1_TURBO = "V_1_TURBO"
@@ -95,8 +96,8 @@ class IdeogramModelBlock(Block):
title="Prompt",
)
ideogram_model_name: IdeogramModelName = SchemaField(
description="The name of the Image Generation Model, e.g., V_2",
default=IdeogramModelName.V2,
description="The name of the Image Generation Model, e.g., V_3",
default=IdeogramModelName.V3,
title="Image Generation Model",
advanced=False,
)
@@ -236,6 +237,111 @@ class IdeogramModelBlock(Block):
negative_prompt: Optional[str],
color_palette_name: str,
custom_colors: Optional[list[str]],
):
# Use V3 endpoint for V3 model, legacy endpoint for others
if model_name == "V_3":
return await self._run_model_v3(
api_key,
prompt,
seed,
aspect_ratio,
magic_prompt_option,
style_type,
negative_prompt,
color_palette_name,
custom_colors,
)
else:
return await self._run_model_legacy(
api_key,
model_name,
prompt,
seed,
aspect_ratio,
magic_prompt_option,
style_type,
negative_prompt,
color_palette_name,
custom_colors,
)
async def _run_model_v3(
self,
api_key: SecretStr,
prompt: str,
seed: Optional[int],
aspect_ratio: str,
magic_prompt_option: str,
style_type: str,
negative_prompt: Optional[str],
color_palette_name: str,
custom_colors: Optional[list[str]],
):
url = "https://api.ideogram.ai/v1/ideogram-v3/generate"
headers = {
"Api-Key": api_key.get_secret_value(),
"Content-Type": "application/json",
}
# Map legacy aspect ratio values to V3 format
aspect_ratio_map = {
"ASPECT_10_16": "10x16",
"ASPECT_16_10": "16x10",
"ASPECT_9_16": "9x16",
"ASPECT_16_9": "16x9",
"ASPECT_3_2": "3x2",
"ASPECT_2_3": "2x3",
"ASPECT_4_3": "4x3",
"ASPECT_3_4": "3x4",
"ASPECT_1_1": "1x1",
"ASPECT_1_3": "1x3",
"ASPECT_3_1": "3x1",
# Additional V3 supported ratios
"ASPECT_1_2": "1x2",
"ASPECT_2_1": "2x1",
"ASPECT_4_5": "4x5",
"ASPECT_5_4": "5x4",
}
v3_aspect_ratio = aspect_ratio_map.get(
aspect_ratio, "1x1"
) # Default to 1x1 if not found
# Use JSON for V3 endpoint (simpler than multipart/form-data)
data: Dict[str, Any] = {
"prompt": prompt,
"aspect_ratio": v3_aspect_ratio,
"magic_prompt": magic_prompt_option,
"style_type": style_type,
}
if seed is not None:
data["seed"] = seed
if negative_prompt:
data["negative_prompt"] = negative_prompt
# Note: V3 endpoint may have different color palette support
# For now, we'll omit color palettes for V3 to avoid errors
try:
response = await Requests().post(url, headers=headers, json=data)
return response.json()["data"][0]["url"]
except RequestException as e:
raise Exception(f"Failed to fetch image with V3 endpoint: {str(e)}")
async def _run_model_legacy(
self,
api_key: SecretStr,
model_name: str,
prompt: str,
seed: Optional[int],
aspect_ratio: str,
magic_prompt_option: str,
style_type: str,
negative_prompt: Optional[str],
color_palette_name: str,
custom_colors: Optional[list[str]],
):
url = "https://api.ideogram.ai/generate"
headers = {
@@ -249,28 +355,33 @@ class IdeogramModelBlock(Block):
"model": model_name,
"aspect_ratio": aspect_ratio,
"magic_prompt_option": magic_prompt_option,
"style_type": style_type,
}
}
# Only add style_type for V2, V2_TURBO, and V3 models (V1 models don't support it)
if model_name in ["V_2", "V_2_TURBO", "V_3"]:
data["image_request"]["style_type"] = style_type
if seed is not None:
data["image_request"]["seed"] = seed
if negative_prompt:
data["image_request"]["negative_prompt"] = negative_prompt
if color_palette_name != "NONE":
data["color_palette"] = {"name": color_palette_name}
elif custom_colors:
data["color_palette"] = {
"members": [{"color_hex": color} for color in custom_colors]
}
# Only add color palette for V2 and V2_TURBO models (V1 models don't support it)
if model_name in ["V_2", "V_2_TURBO"]:
if color_palette_name != "NONE":
data["color_palette"] = {"name": color_palette_name}
elif custom_colors:
data["color_palette"] = {
"members": [{"color_hex": color} for color in custom_colors]
}
try:
response = await Requests().post(url, headers=headers, json=data)
return response.json()["data"][0]["url"]
except RequestException as e:
raise Exception(f"Failed to fetch image: {str(e)}")
raise Exception(f"Failed to fetch image with legacy endpoint: {str(e)}")
async def upscale_image(self, api_key: SecretStr, image_url: str):
url = "https://api.ideogram.ai/upscale"

View File
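The split above exists because the V3 endpoint takes `16x9`-style aspect ratios while the legacy endpoint uses `ASPECT_16_9` enum values. The translation plus its square fallback, trimmed down:

```python
# Trimmed legacy -> V3 aspect-ratio translation from _run_model_v3 above.
ASPECT_RATIO_MAP = {
    "ASPECT_16_9": "16x9",
    "ASPECT_9_16": "9x16",
    "ASPECT_1_1": "1x1",
}

def to_v3_ratio(legacy: str) -> str:
    return ASPECT_RATIO_MAP.get(legacy, "1x1")  # default to square when unknown

assert to_v3_ratio("ASPECT_16_9") == "16x9"
assert to_v3_ratio("ASPECT_UNKNOWN") == "1x1"
```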

@@ -0,0 +1,8 @@
from backend.sdk import BlockCostType, ProviderBuilder
stagehand = (
ProviderBuilder("stagehand")
.with_api_key("STAGEHAND_API_KEY", "Stagehand API Key")
.with_base_cost(1, BlockCostType.RUN)
.build()
)

View File

@@ -0,0 +1,393 @@
import logging
import signal
import threading
from contextlib import contextmanager
from enum import Enum
# Monkey patch Stagehand to prevent signal handling in worker threads
import stagehand.main
from stagehand import Stagehand
from backend.blocks.llm import (
MODEL_METADATA,
AICredentials,
AICredentialsField,
LlmModel,
ModelMetadata,
)
from backend.blocks.stagehand._config import stagehand as stagehand_provider
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
CredentialsMetaInput,
SchemaField,
)
# Store the original method
original_register_signal_handlers = stagehand.main.Stagehand._register_signal_handlers
def safe_register_signal_handlers(self):
"""Only register signal handlers in the main thread"""
if threading.current_thread() is threading.main_thread():
original_register_signal_handlers(self)
else:
# Skip signal handling in worker threads
pass
# Replace the method
stagehand.main.Stagehand._register_signal_handlers = safe_register_signal_handlers
@contextmanager
def disable_signal_handling():
"""Context manager to temporarily disable signal handling"""
if threading.current_thread() is not threading.main_thread():
# In worker threads, temporarily replace signal.signal with a no-op
original_signal = signal.signal
def noop_signal(*args, **kwargs):
pass
signal.signal = noop_signal
try:
yield
finally:
signal.signal = original_signal
else:
# In main thread, don't modify anything
yield
logger = logging.getLogger(__name__)
class StagehandRecommendedLlmModel(str, Enum):
"""
This is a subset of LlmModel from autogpt_platform/backend/backend/blocks/llm.py.
It contains only the models recommended by Stagehand.
"""
# OpenAI
GPT41 = "gpt-4.1-2025-04-14"
GPT41_MINI = "gpt-4.1-mini-2025-04-14"
# Anthropic
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
@property
def provider_name(self) -> str:
"""
Returns the provider name for the model in the required format for Stagehand:
provider/model_name
"""
model_metadata = MODEL_METADATA[LlmModel(self.value)]
model_name = self.value
if len(model_name.split("/")) == 1 and not self.value.startswith(
model_metadata.provider
):
assert (
model_metadata.provider != "open_router"
), "Logic failed and open_router provider attempted to be prepended to model name! in stagehand/_config.py"
model_name = f"{model_metadata.provider}/{model_name}"
logger.error(f"Model name: {model_name}")
return model_name
@property
def provider(self) -> str:
return MODEL_METADATA[LlmModel(self.value)].provider
@property
def metadata(self) -> ModelMetadata:
return MODEL_METADATA[LlmModel(self.value)]
@property
def context_window(self) -> int:
return MODEL_METADATA[LlmModel(self.value)].context_window
@property
def max_output_tokens(self) -> int | None:
return MODEL_METADATA[LlmModel(self.value)].max_output_tokens
class StagehandObserveBlock(Block):
class Input(BlockSchema):
# Browserbase credentials (Stagehand provider) or raw API key
stagehand_credentials: CredentialsMetaInput = (
stagehand_provider.credentials_field(
description="Stagehand/Browserbase API key"
)
)
browserbase_project_id: str = SchemaField(
description="Browserbase project ID (required if using Browserbase)",
)
# Model selection and credentials (provider-discriminated like llm.py)
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
url: str = SchemaField(
description="URL to navigate to.",
)
instruction: str = SchemaField(
description="Natural language description of elements or actions to discover.",
)
iframes: bool = SchemaField(
description="Whether to search within iframes. If True, Stagehand will search for actions within iframes.",
default=True,
)
domSettleTimeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM settlement.Wait longer for dynamic content",
default=45000,
)
class Output(BlockSchema):
selector: str = SchemaField(description="XPath selector to locate element.")
description: str = SchemaField(description="Human-readable description")
method: str | None = SchemaField(description="Suggested action method")
arguments: list[str] | None = SchemaField(
description="Additional action parameters"
)
def __init__(self):
super().__init__(
id="d3863944-0eaf-45c4-a0c9-63e0fe1ee8b9",
description="Find suggested actions for your workflows",
categories={BlockCategory.AI, BlockCategory.DEVELOPER_TOOLS},
input_schema=StagehandObserveBlock.Input,
output_schema=StagehandObserveBlock.Output,
)
async def run(
self,
input_data: Input,
*,
stagehand_credentials: APIKeyCredentials,
model_credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
logger.info(f"OBSERVE: Stagehand credentials: {stagehand_credentials}")
logger.info(
f"OBSERVE: Model credentials: {model_credentials} for provider {model_credentials.provider} secret: {model_credentials.api_key.get_secret_value()}"
)
with disable_signal_handling():
stagehand = Stagehand(
api_key=stagehand_credentials.api_key.get_secret_value(),
project_id=input_data.browserbase_project_id,
model_name=input_data.model.provider_name,
model_api_key=model_credentials.api_key.get_secret_value(),
)
await stagehand.init()
page = stagehand.page
assert page is not None, "Stagehand page is not initialized"
await page.goto(input_data.url)
observe_results = await page.observe(
input_data.instruction,
iframes=input_data.iframes,
domSettleTimeoutMs=input_data.domSettleTimeoutMs,
)
for result in observe_results:
yield "selector", result.selector
yield "description", result.description
yield "method", result.method
yield "arguments", result.arguments
class StagehandActBlock(Block):
class Input(BlockSchema):
# Browserbase credentials (Stagehand provider) or raw API key
stagehand_credentials: CredentialsMetaInput = (
stagehand_provider.credentials_field(
description="Stagehand/Browserbase API key"
)
)
browserbase_project_id: str = SchemaField(
description="Browserbase project ID (required if using Browserbase)",
)
# Model selection and credentials (provider-discriminated like llm.py)
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
url: str = SchemaField(
description="URL to navigate to.",
)
action: list[str] = SchemaField(
description="Action to perform. Suggested actions are: click, fill, type, press, scroll, select from dropdown. For multi-step actions, add an entry for each step.",
)
variables: dict[str, str] = SchemaField(
description="Variables to use in the action. Variables contains data you want the action to use.",
default_factory=dict,
)
iframes: bool = SchemaField(
description="Whether to search within iframes. If True, Stagehand will search for actions within iframes.",
default=True,
)
domSettleTimeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM settlement.Wait longer for dynamic content",
default=45000,
)
timeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM ready. Extended timeout for slow-loading forms",
default=60000,
)
class Output(BlockSchema):
success: bool = SchemaField(
description="Whether the action was completed successfully"
)
message: str = SchemaField(description="Details about the actions execution.")
action: str = SchemaField(description="Action performed")
def __init__(self):
super().__init__(
id="86eba68b-9549-4c0b-a0db-47d85a56cc27",
description="Interact with a web page by performing actions on a web page. Use it to build self-healing and deterministic automations that adapt to website chang.",
categories={BlockCategory.AI, BlockCategory.DEVELOPER_TOOLS},
input_schema=StagehandActBlock.Input,
output_schema=StagehandActBlock.Output,
)
async def run(
self,
input_data: Input,
*,
stagehand_credentials: APIKeyCredentials,
model_credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
logger.info(f"ACT: Stagehand credentials: {stagehand_credentials}")
logger.info(
f"ACT: Model credentials: {model_credentials} for provider {model_credentials.provider} secret: {model_credentials.api_key.get_secret_value()}"
)
with disable_signal_handling():
stagehand = Stagehand(
api_key=stagehand_credentials.api_key.get_secret_value(),
project_id=input_data.browserbase_project_id,
model_name=input_data.model.provider_name,
model_api_key=model_credentials.api_key.get_secret_value(),
)
await stagehand.init()
page = stagehand.page
assert page is not None, "Stagehand page is not initialized"
await page.goto(input_data.url)
for action in input_data.action:
action_results = await page.act(
action,
variables=input_data.variables,
iframes=input_data.iframes,
domSettleTimeoutMs=input_data.domSettleTimeoutMs,
timeoutMs=input_data.timeoutMs,
)
yield "success", action_results.success
yield "message", action_results.message
yield "action", action_results.action
class StagehandExtractBlock(Block):
class Input(BlockSchema):
# Browserbase credentials (Stagehand provider) or raw API key
stagehand_credentials: CredentialsMetaInput = (
stagehand_provider.credentials_field(
description="Stagehand/Browserbase API key"
)
)
browserbase_project_id: str = SchemaField(
description="Browserbase project ID (required if using Browserbase)",
)
# Model selection and credentials (provider-discriminated like llm.py)
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
url: str = SchemaField(
description="URL to navigate to.",
)
instruction: str = SchemaField(
description="Natural language description of elements or actions to discover.",
)
iframes: bool = SchemaField(
description="Whether to search within iframes. If True, Stagehand will search for actions within iframes.",
default=True,
)
domSettleTimeoutMs: int = SchemaField(
description="Timeout in milliseconds for DOM settlement.Wait longer for dynamic content",
default=45000,
)
class Output(BlockSchema):
extraction: str = SchemaField(description="Extracted data from the page.")
def __init__(self):
super().__init__(
id="fd3c0b18-2ba6-46ae-9339-fcb40537ad98",
description="Extract structured data from a webpage.",
categories={BlockCategory.AI, BlockCategory.DEVELOPER_TOOLS},
input_schema=StagehandExtractBlock.Input,
output_schema=StagehandExtractBlock.Output,
)
async def run(
self,
input_data: Input,
*,
stagehand_credentials: APIKeyCredentials,
model_credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
logger.info(f"EXTRACT: Stagehand credentials: {stagehand_credentials}")
logger.info(
f"EXTRACT: Model credentials: {model_credentials} for provider {model_credentials.provider} secret: {model_credentials.api_key.get_secret_value()}"
)
with disable_signal_handling():
stagehand = Stagehand(
api_key=stagehand_credentials.api_key.get_secret_value(),
project_id=input_data.browserbase_project_id,
model_name=input_data.model.provider_name,
model_api_key=model_credentials.api_key.get_secret_value(),
)
await stagehand.init()
page = stagehand.page
assert page is not None, "Stagehand page is not initialized"
await page.goto(input_data.url)
extraction = await page.extract(
input_data.instruction,
iframes=input_data.iframes,
domSettleTimeoutMs=input_data.domSettleTimeoutMs,
)
yield "extraction", str(extraction.model_dump()["extraction"])

View File
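The monkey patch and context manager above exist because CPython's `signal.signal` raises `ValueError` when called from any thread other than the main one, and Stagehand tries to register handlers during init. A self-contained demonstration of that constraint:

```python
# Demonstrates the constraint behind the no-op signal patch above:
# signal.signal raises ValueError outside the main thread.
import signal
import threading

def try_register() -> None:
    try:
        signal.signal(signal.SIGINT, signal.default_int_handler)
        print(f"registered in {threading.current_thread().name}")
    except ValueError as e:
        print(f"failed in {threading.current_thread().name}: {e}")

try_register()                                        # main thread: succeeds
t = threading.Thread(target=try_register, name="worker")
t.start()
t.join()                                              # worker thread: ValueError
```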

@@ -35,20 +35,19 @@ async def execute_graph(
logger.info("Input data: %s", input_data)
# --- Test adding new executions --- #
response = await agent_server.test_execute_graph(
graph_exec = await agent_server.test_execute_graph(
user_id=test_user.id,
graph_id=test_graph.id,
graph_version=test_graph.version,
node_input=input_data,
)
graph_exec_id = response.graph_exec_id
logger.info("Created execution with ID: %s", graph_exec_id)
logger.info("Created execution with ID: %s", graph_exec.id)
# Execution queue should be empty
logger.info("Waiting for execution to complete...")
result = await wait_execution(test_user.id, graph_exec_id, 30)
result = await wait_execution(test_user.id, graph_exec.id, 30)
logger.info("Execution completed with %d results", len(result))
return graph_exec_id
return graph_exec.id
@pytest.mark.asyncio(loop_scope="session")

View File

@@ -4,6 +4,8 @@ from logging import getLogger
from typing import Any, Dict, List, Union
from urllib.parse import urlencode
from pydantic import field_serializer
from backend.sdk import BaseModel, Credentials, Requests
logger = getLogger(__name__)
@@ -382,8 +384,9 @@ class CreatePostRequest(BaseModel):
# Advanced
metadata: List[Dict[str, Any]] | None = None
class Config:
json_encoders = {datetime: lambda v: v.isoformat()}
@field_serializer("date")
def serialize_date(self, value: datetime | None) -> str | None:
return value.isoformat() if value else None
class PostAuthor(BaseModel):

View File
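For context on the change above: the v1-style `Config.json_encoders` is deprecated in Pydantic v2, and `@field_serializer` is the per-field replacement. A minimal working sketch:

```python
# Minimal Pydantic v2 field_serializer sketch mirroring the diff above.
from datetime import datetime
from pydantic import BaseModel, field_serializer

class Post(BaseModel):
    date: datetime | None = None

    @field_serializer("date")
    def serialize_date(self, value: datetime | None) -> str | None:
        return value.isoformat() if value else None

p = Post(date=datetime(2025, 9, 11, 9, 59))
assert p.model_dump()["date"] == "2025-09-11T09:59:00"
```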

@@ -6,8 +6,6 @@ from dotenv import load_dotenv
from backend.util.logging import configure_logging
os.environ["ENABLE_AUTH"] = "false"
load_dotenv()
# NOTE: You can run tests with --log-cli-level=INFO to see the logs

View File

@@ -1,5 +1,8 @@
from backend.server.v2.library.model import LibraryAgentPreset
from .graph import NodeModel
from .integrations import Webhook # noqa: F401
# Resolve Webhook <- NodeModel forward reference
# Resolve Webhook forward references
NodeModel.model_rebuild()
LibraryAgentPreset.model_rebuild()

View File
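`model_rebuild()` is Pydantic v2's hook for resolving forward references once the referenced models are importable, which is why both rebuilds run at import time above. The pattern in miniature (class names reused from the snippet, bodies invented):

```python
# Forward-reference resolution in miniature.
from __future__ import annotations
from pydantic import BaseModel

class NodeModel(BaseModel):
    webhook: Webhook | None = None  # "Webhook" is not defined yet

class Webhook(BaseModel):
    url: str

NodeModel.model_rebuild()  # resolves the reference now that Webhook exists
assert NodeModel(webhook={"url": "https://example.com"}).webhook is not None
```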

@@ -1,57 +1,31 @@
import logging
import uuid
from datetime import datetime, timezone
from typing import List, Optional
from typing import Optional
from autogpt_libs.api_key.key_manager import APIKeyManager
from autogpt_libs.api_key.keysmith import APIKeySmith
from prisma.enums import APIKeyPermission, APIKeyStatus
from prisma.errors import PrismaError
from prisma.models import APIKey as PrismaAPIKey
from prisma.types import (
APIKeyCreateInput,
APIKeyUpdateInput,
APIKeyWhereInput,
APIKeyWhereUniqueInput,
)
from pydantic import BaseModel
from prisma.types import APIKeyWhereUniqueInput
from pydantic import BaseModel, Field
from backend.data.db import BaseDbModel
from backend.util.exceptions import NotAuthorizedError, NotFoundError
logger = logging.getLogger(__name__)
keysmith = APIKeySmith()
# Some basic exceptions
class APIKeyError(Exception):
"""Base exception for API key operations"""
pass
class APIKeyNotFoundError(APIKeyError):
"""Raised when an API key is not found"""
pass
class APIKeyPermissionError(APIKeyError):
"""Raised when there are permission issues with API key operations"""
pass
class APIKeyValidationError(APIKeyError):
"""Raised when API key validation fails"""
pass
class APIKey(BaseDbModel):
class APIKeyInfo(BaseModel):
id: str
name: str
prefix: str
key: str
status: APIKeyStatus = APIKeyStatus.ACTIVE
permissions: List[APIKeyPermission]
postfix: str
head: str = Field(
description=f"The first {APIKeySmith.HEAD_LENGTH} characters of the key"
)
tail: str = Field(
description=f"The last {APIKeySmith.TAIL_LENGTH} characters of the key"
)
status: APIKeyStatus
permissions: list[APIKeyPermission]
created_at: datetime
last_used_at: Optional[datetime] = None
revoked_at: Optional[datetime] = None
@@ -60,266 +34,211 @@ class APIKey(BaseDbModel):
@staticmethod
def from_db(api_key: PrismaAPIKey):
try:
return APIKey(
id=api_key.id,
name=api_key.name,
prefix=api_key.prefix,
postfix=api_key.postfix,
key=api_key.key,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
except Exception as e:
logger.error(f"Error creating APIKey from db: {str(e)}")
raise APIKeyError(f"Failed to create API key object: {str(e)}")
return APIKeyInfo(
id=api_key.id,
name=api_key.name,
head=api_key.head,
tail=api_key.tail,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
class APIKeyWithoutHash(BaseModel):
id: str
name: str
prefix: str
postfix: str
status: APIKeyStatus
permissions: List[APIKeyPermission]
created_at: datetime
last_used_at: Optional[datetime]
revoked_at: Optional[datetime]
description: Optional[str]
user_id: str
class APIKeyInfoWithHash(APIKeyInfo):
hash: str
salt: str | None = None # None for legacy keys
def match(self, plaintext_key: str) -> bool:
"""Returns whether the given key matches this API key object."""
return keysmith.verify_key(plaintext_key, self.hash, self.salt)
@staticmethod
def from_db(api_key: PrismaAPIKey):
try:
return APIKeyWithoutHash(
id=api_key.id,
name=api_key.name,
prefix=api_key.prefix,
postfix=api_key.postfix,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
except Exception as e:
logger.error(f"Error creating APIKeyWithoutHash from db: {str(e)}")
raise APIKeyError(f"Failed to create API key object: {str(e)}")
return APIKeyInfoWithHash(
**APIKeyInfo.from_db(api_key).model_dump(),
hash=api_key.hash,
salt=api_key.salt,
)
def without_hash(self) -> APIKeyInfo:
return APIKeyInfo(**self.model_dump(exclude={"hash", "salt"}))
async def generate_api_key(
async def create_api_key(
name: str,
user_id: str,
permissions: List[APIKeyPermission],
permissions: list[APIKeyPermission],
description: Optional[str] = None,
) -> tuple[APIKeyWithoutHash, str]:
) -> tuple[APIKeyInfo, str]:
"""
Generate a new API key and store it in the database.
Returns the API key object (without hash) and the plain text key.
"""
try:
api_manager = APIKeyManager()
key = api_manager.generate_api_key()
generated_key = keysmith.generate_key()
api_key = await PrismaAPIKey.prisma().create(
data=APIKeyCreateInput(
id=str(uuid.uuid4()),
name=name,
prefix=key.prefix,
postfix=key.postfix,
key=key.hash,
permissions=[p for p in permissions],
description=description,
userId=user_id,
)
)
saved_key_obj = await PrismaAPIKey.prisma().create(
data={
"id": str(uuid.uuid4()),
"name": name,
"head": generated_key.head,
"tail": generated_key.tail,
"hash": generated_key.hash,
"salt": generated_key.salt,
"permissions": [p for p in permissions],
"description": description,
"userId": user_id,
}
)
api_key_without_hash = APIKeyWithoutHash.from_db(api_key)
return api_key_without_hash, key.raw
except PrismaError as e:
logger.error(f"Database error while generating API key: {str(e)}")
raise APIKeyError(f"Failed to generate API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while generating API key: {str(e)}")
raise APIKeyError(f"Failed to generate API key: {str(e)}")
return APIKeyInfo.from_db(saved_key_obj), generated_key.key
async def get_active_api_keys_by_head(head: str) -> list[APIKeyInfoWithHash]:
    results = await PrismaAPIKey.prisma().find_many(
        where={"head": head, "status": APIKeyStatus.ACTIVE}
    )
    return [APIKeyInfoWithHash.from_db(key) for key in results]

async def validate_api_key(plaintext_key: str) -> Optional[APIKeyInfo]:
    """
    Validate an API key and return the API key object if valid and active.
    """
    try:
        if not plaintext_key.startswith(APIKeySmith.PREFIX):
            logger.warning("Invalid API key format")
            return None

        head = plaintext_key[: APIKeySmith.HEAD_LENGTH]
        potential_matches = await get_active_api_keys_by_head(head)
        matched_api_key = next(
            (pm for pm in potential_matches if pm.match(plaintext_key)),
            None,
        )
        if not matched_api_key:
            # API key not found or invalid
            return None

        # Migrate legacy keys to secure format on successful validation
        if matched_api_key.salt is None:
            matched_api_key = await _migrate_key_to_secure_hash(
                plaintext_key, matched_api_key
            )

        return matched_api_key.without_hash()
    except Exception as e:
        logger.error(f"Error while validating API key: {e}")
        raise RuntimeError("Failed to validate API key") from e
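At the call site, validation is a single async lookup (variable names illustrative): the key's head narrows the candidate set with one indexed query before any full hash verification runs.

# Illustrative only: `incoming_key` would be the raw X-API-Key header value.
info = await validate_api_key(incoming_key)
if info is None:
    ...  # reject the request with a 401
else:
    print(info.user_id, info.permissions)  # APIKeyInfo carries no hash material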
async def _migrate_key_to_secure_hash(
    plaintext_key: str, key_obj: APIKeyInfoWithHash
) -> APIKeyInfoWithHash:
    """Replace the SHA256 hash of a legacy API key with a salted Scrypt hash."""
    try:
        new_hash, new_salt = keysmith.hash_key(plaintext_key)
        await PrismaAPIKey.prisma().update(
            where={"id": key_obj.id}, data={"hash": new_hash, "salt": new_salt}
        )
        logger.info(f"Migrated legacy API key #{key_obj.id} to secure format")
        # Update the API key object with new values for return
        key_obj.hash = new_hash
        key_obj.salt = new_salt
    except Exception as e:
        logger.error(f"Failed to migrate legacy API key #{key_obj.id}: {e}")
    return key_obj
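For orientation, a minimal sketch of what a keysmith-style hash/verify pair could look like, assuming hashlib.scrypt for the salted hash and SHA256 for legacy keys; the function names and cost parameters are illustrative, not the actual APIKeySmith implementation.

import hashlib
import os
import secrets

def hash_key(plaintext_key: str) -> tuple[str, str]:
    # Hypothetical analogue of keysmith.hash_key: returns (hash, salt) as hex.
    salt = os.urandom(16)
    digest = hashlib.scrypt(plaintext_key.encode(), salt=salt, n=2**14, r=8, p=1)
    return digest.hex(), salt.hex()

def verify_key(plaintext_key: str, stored_hash: str, salt: str | None) -> bool:
    # Legacy keys (salt is None) were stored as unsalted SHA256 digests.
    if salt is None:
        legacy = hashlib.sha256(plaintext_key.encode()).hexdigest()
        return secrets.compare_digest(legacy, stored_hash)
    digest = hashlib.scrypt(
        plaintext_key.encode(), salt=bytes.fromhex(salt), n=2**14, r=8, p=1
    )
    return secrets.compare_digest(digest.hex(), stored_hash)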
async def revoke_api_key(key_id: str, user_id: str) -> APIKeyInfo:
    api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
    if not api_key:
        raise NotFoundError(f"API key with id {key_id} not found")
    if api_key.userId != user_id:
        raise NotAuthorizedError("You do not have permission to revoke this API key.")

    updated_api_key = await PrismaAPIKey.prisma().update(
        where={"id": key_id},
        data={
            "status": APIKeyStatus.REVOKED,
            "revokedAt": datetime.now(timezone.utc),
        },
    )
    if not updated_api_key:
        raise NotFoundError(f"API key #{key_id} vanished while trying to revoke.")
    return APIKeyInfo.from_db(updated_api_key)

async def list_user_api_keys(user_id: str) -> list[APIKeyInfo]:
    api_keys = await PrismaAPIKey.prisma().find_many(
        where={"userId": user_id}, order={"createdAt": "desc"}
    )
    return [APIKeyInfo.from_db(key) for key in api_keys]

async def suspend_api_key(key_id: str, user_id: str) -> APIKeyInfo:
    selector: APIKeyWhereUniqueInput = {"id": key_id}
    api_key = await PrismaAPIKey.prisma().find_unique(where=selector)
    if not api_key:
        raise NotFoundError(f"API key with id {key_id} not found")
    if api_key.userId != user_id:
        raise NotAuthorizedError("You do not have permission to suspend this API key.")

    updated_api_key = await PrismaAPIKey.prisma().update(
        where=selector, data={"status": APIKeyStatus.SUSPENDED}
    )
    if not updated_api_key:
        raise NotFoundError(f"API key #{key_id} vanished while trying to suspend.")
    return APIKeyInfo.from_db(updated_api_key)

def has_permission(api_key: APIKeyInfo, required_permission: APIKeyPermission) -> bool:
    return required_permission in api_key.permissions

async def get_api_key_by_id(key_id: str, user_id: str) -> Optional[APIKeyInfo]:
    api_key = await PrismaAPIKey.prisma().find_first(
        where={"id": key_id, "userId": user_id}
    )
    if not api_key:
        return None
    return APIKeyInfo.from_db(api_key)
async def update_api_key_permissions(
    key_id: str, user_id: str, permissions: list[APIKeyPermission]
) -> APIKeyInfo:
    """
    Update the permissions of an API key.
    """
    api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
    if api_key is None:
        raise NotFoundError("No such API key found.")
    if api_key.userId != user_id:
        raise NotAuthorizedError("You do not have permission to update this API key.")

    updated_api_key = await PrismaAPIKey.prisma().update(
        where={"id": key_id},
        data={"permissions": permissions},
    )
    if not updated_api_key:
        raise NotFoundError(f"API key #{key_id} vanished while trying to update.")
    return APIKeyInfo.from_db(updated_api_key)

View File

@@ -8,6 +8,7 @@ from enum import Enum
from typing import (
TYPE_CHECKING,
Any,
Callable,
ClassVar,
Generic,
Optional,
@@ -44,9 +45,10 @@ if TYPE_CHECKING:
app_config = Config()
BlockInput = dict[str, Any] # Input: 1 input pin consumes 1 data.
BlockOutputEntry = tuple[str, Any] # Output data should be a tuple of (name, value).
BlockOutput = AsyncGen[BlockOutputEntry, None] # Output: 1 output pin produces n data.
BlockTestOutput = BlockOutputEntry | tuple[str, Callable[[Any], bool]]
CompletedBlockOutput = dict[str, list[Any]] # Completed stream, collected as a dict.
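With Callable now imported above, a block's test_output can mix exact entries with predicate entries; a small self-contained illustration (field names and values hypothetical):

from typing import Any, Callable

BlockOutputEntry = tuple[str, Any]
BlockTestOutput = BlockOutputEntry | tuple[str, Callable[[Any], bool]]

test_output: list[BlockTestOutput] = [
    ("status", "ok"),                          # exact-match entry
    ("result", lambda v: isinstance(v, str)),  # predicate for dynamic values
]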
@@ -89,6 +91,45 @@ class BlockCategory(Enum):
return {"category": self.name, "description": self.value}
class BlockCostType(str, Enum):
    RUN = "run"  # cost X credits per run
    BYTE = "byte"  # cost X credits per byte
    SECOND = "second"  # cost X credits per second

class BlockCost(BaseModel):
    cost_amount: int
    cost_filter: BlockInput
    cost_type: BlockCostType

    def __init__(
        self,
        cost_amount: int,
        cost_type: BlockCostType = BlockCostType.RUN,
        cost_filter: Optional[BlockInput] = None,
        **data: Any,
    ) -> None:
        super().__init__(
            cost_amount=cost_amount,
            cost_filter=cost_filter or {},
            cost_type=cost_type,
            **data,
        )
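For example, a conditional per-run cost is expressed through cost_filter; this sketch mirrors the Ideogram V_3 entry added further down in this diff, minus its credentials filter:

# Charge 18 credits per run, but only when ideogram_model_name == "V_3".
v3_cost = BlockCost(
    cost_amount=18,
    cost_type=BlockCostType.RUN,
    cost_filter={"ideogram_model_name": "V_3"},
)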
class BlockInfo(BaseModel):
    id: str
    name: str
    inputSchema: dict[str, Any]
    outputSchema: dict[str, Any]
    costs: list[BlockCost]
    description: str
    categories: list[dict[str, str]]
    contributors: list[dict[str, Any]]
    staticOutput: bool
    uiType: str
class BlockSchema(BaseModel):
cached_jsonschema: ClassVar[dict[str, Any]]
@@ -306,7 +347,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
input_schema: Type[BlockSchemaInputType] = EmptySchema,
output_schema: Type[BlockSchemaOutputType] = EmptySchema,
test_input: BlockInput | list[BlockInput] | None = None,
test_output: BlockTestOutput | list[BlockTestOutput] | None = None,
test_mock: dict[str, Any] | None = None,
test_credentials: Optional[Credentials | dict[str, Credentials]] = None,
disabled: bool = False,
@@ -452,6 +493,24 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
"uiType": self.block_type.value,
}
def get_info(self) -> BlockInfo:
    from backend.data.credit import get_block_cost

    return BlockInfo(
        id=self.id,
        name=self.name,
        inputSchema=self.input_schema.jsonschema(),
        outputSchema=self.output_schema.jsonschema(),
        costs=get_block_cost(self),
        description=self.description,
        categories=[category.dict() for category in self.categories],
        contributors=[contributor.model_dump() for contributor in self.contributors],
        staticOutput=self.static_output,
        uiType=self.block_type.value,
    )
async def execute(self, input_data: BlockInput, **kwargs) -> BlockOutput:
if error := self.input_schema.validate_data(input_data):
raise ValueError(

View File

@@ -29,8 +29,7 @@ from backend.blocks.replicate.replicate_block import ReplicateModelBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.blocks.talking_head import CreateTalkingAvatarVideoBlock
from backend.blocks.text_to_speech_block import UnrealTextToSpeechBlock
from backend.data.block import Block, BlockCost, BlockCostType
from backend.integrations.credentials_store import (
aiml_api_credentials,
anthropic_credentials,
@@ -307,7 +306,18 @@ BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = {
"type": ideogram_credentials.type,
}
},
),
BlockCost(
cost_amount=18,
cost_filter={
"ideogram_model_name": "V_3",
"credentials": {
"id": ideogram_credentials.id,
"provider": ideogram_credentials.provider,
"type": ideogram_credentials.type,
},
},
),
],
AIShortformVideoCreatorBlock: [
BlockCost(

View File

@@ -1,32 +0,0 @@
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel
from backend.data.block import BlockInput
class BlockCostType(str, Enum):
RUN = "run" # cost X credits per run
BYTE = "byte" # cost X credits per byte
SECOND = "second" # cost X credits per second
class BlockCost(BaseModel):
cost_amount: int
cost_filter: BlockInput
cost_type: BlockCostType
def __init__(
self,
cost_amount: int,
cost_type: BlockCostType = BlockCostType.RUN,
cost_filter: Optional[BlockInput] = None,
**data: Any,
) -> None:
super().__init__(
cost_amount=cost_amount,
cost_filter=cost_filter or {},
cost_type=cost_type,
**data,
)

View File

@@ -2,7 +2,7 @@ import logging
from abc import ABC, abstractmethod
from collections import defaultdict
from datetime import datetime, timezone
from typing import TYPE_CHECKING, Any, cast
import stripe
from prisma import Json
@@ -23,7 +23,6 @@ from pydantic import BaseModel
from backend.data import db
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.model import (
AutoTopUpConfig,
RefundRequest,
@@ -41,6 +40,9 @@ from backend.util.models import Pagination
from backend.util.retry import func_retry
from backend.util.settings import Settings
if TYPE_CHECKING:
from backend.data.block import Block, BlockCost
settings = Settings()
stripe.api_key = settings.secrets.stripe_api_key
logger = logging.getLogger(__name__)
@@ -997,10 +999,14 @@ def get_user_credit_model() -> UserCreditBase:
return UserCredit()
def get_block_costs() -> dict[str, list["BlockCost"]]:
return {block().id: costs for block, costs in BLOCK_COSTS.items()}
def get_block_cost(block: "Block") -> list["BlockCost"]:
return BLOCK_COSTS.get(block.__class__, [])
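Since BLOCK_COSTS is keyed on the block class, any block instance resolves its own entries; a small hedged example (the block class chosen here is arbitrary):

# Illustrative: sum the flat per-run cost entries for one block type.
costs = get_block_cost(SmartDecisionMakerBlock())
per_run_total = sum(c.cost_amount for c in costs if c.cost_type == BlockCostType.RUN)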
async def get_stripe_customer_id(user_id: str) -> str:
user = await get_user_by_id(user_id)

View File

@@ -11,11 +11,14 @@ from typing import (
Generator,
Generic,
Literal,
Mapping,
Optional,
TypeVar,
cast,
overload,
)
from prisma import Json
from prisma.enums import AgentExecutionStatus
from prisma.models import (
AgentGraphExecution,
@@ -24,7 +27,6 @@ from prisma.models import (
AgentNodeExecutionKeyValueData,
)
from prisma.types import (
AgentGraphExecutionUpdateManyMutationInput,
AgentGraphExecutionWhereInput,
AgentNodeExecutionCreateInput,
@@ -60,7 +62,7 @@ from .includes import (
GRAPH_EXECUTION_INCLUDE_WITH_NODES,
graph_execution_include,
)
from .model import CredentialsMetaInput, GraphExecutionStats, NodeExecutionStats
T = TypeVar("T")
@@ -87,6 +89,8 @@ class BlockErrorStats(BaseModel):
ExecutionStatus = AgentExecutionStatus
NodeInputMask = Mapping[str, JsonValue]
NodesInputMasks = Mapping[str, NodeInputMask]
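These aliases pin down the mask shape used throughout the executor: node ID mapped to {input name: JSON value}. A minimal example (IDs and values illustrative):

nodes_input_masks: NodesInputMasks = {
    "node-1": {"input_1": "masked value"},
    # Credentials are carried the same way, as JSON-serializable metadata:
    "node-2": {"credentials": {"id": "cred-id", "provider": "test_provider", "type": "oauth2"}},
}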
class GraphExecutionMeta(BaseDbModel):
@@ -94,7 +98,10 @@ class GraphExecutionMeta(BaseDbModel):
user_id: str
graph_id: str
graph_version: int
inputs: Optional[BlockInput] # no default -> required in the OpenAPI spec
credential_inputs: Optional[dict[str, CredentialsMetaInput]]
nodes_input_masks: Optional[dict[str, BlockInput]]
preset_id: Optional[str]
status: ExecutionStatus
started_at: datetime
ended_at: datetime
@@ -179,6 +186,18 @@ class GraphExecutionMeta(BaseDbModel):
user_id=_graph_exec.userId,
graph_id=_graph_exec.agentGraphId,
graph_version=_graph_exec.agentGraphVersion,
inputs=cast(BlockInput | None, _graph_exec.inputs),
credential_inputs=(
{
name: CredentialsMetaInput.model_validate(cmi)
for name, cmi in cast(dict, _graph_exec.credentialInputs).items()
}
if _graph_exec.credentialInputs
else None
),
nodes_input_masks=cast(
dict[str, BlockInput] | None, _graph_exec.nodesInputMasks
),
preset_id=_graph_exec.agentPresetId,
status=ExecutionStatus(_graph_exec.executionStatus),
started_at=start_time,
@@ -206,7 +225,7 @@ class GraphExecutionMeta(BaseDbModel):
class GraphExecution(GraphExecutionMeta):
inputs: BlockInput # type: ignore - incompatible override is intentional
outputs: CompletedBlockOutput
@staticmethod
@@ -226,15 +245,18 @@ class GraphExecution(GraphExecutionMeta):
)
inputs = {
**(
graph_exec.inputs
or {
# fallback: extract inputs from Agent Input Blocks
exec.input_data["name"]: exec.input_data.get("value")
for exec in complete_node_executions
if (
(block := get_block(exec.block_id))
and block.block_type == BlockType.INPUT
)
}
),
**{
# input from webhook-triggered block
"payload": exec.input_data["payload"]
@@ -252,14 +274,13 @@ class GraphExecution(GraphExecutionMeta):
if (
block := get_block(exec.block_id)
) and block.block_type == BlockType.OUTPUT:
outputs[exec.input_data["name"]].append(exec.input_data.get("value"))
return GraphExecution(
**{
field_name: getattr(graph_exec, field_name)
for field_name in GraphExecutionMeta.model_fields
if field_name != "inputs"
},
inputs=inputs,
outputs=outputs,
@@ -292,13 +313,17 @@ class GraphExecutionWithNodes(GraphExecution):
node_executions=node_executions,
)
def to_graph_execution_entry(
self,
user_context: "UserContext",
compiled_nodes_input_masks: Optional[NodesInputMasks] = None,
):
return GraphExecutionEntry(
user_id=self.user_id,
graph_id=self.graph_id,
graph_version=self.graph_version or 0,
graph_exec_id=self.id,
nodes_input_masks=compiled_nodes_input_masks,
user_context=user_context,
)
@@ -335,7 +360,7 @@ class NodeExecutionResult(BaseModel):
else:
input_data: BlockInput = defaultdict()
for data in _node_exec.Input or []:
input_data[data.name] = type_utils.convert(data.data, JsonValue)
output_data: CompletedBlockOutput = defaultdict(list)
@@ -344,7 +369,7 @@ class NodeExecutionResult(BaseModel):
output_data[name].extend(messages)
else:
for data in _node_exec.Output or []:
output_data[data.name].append(type_utils.convert(data.data, JsonValue))
graph_execution: AgentGraphExecution | None = _node_exec.GraphExecution
if graph_execution:
@@ -539,9 +564,12 @@ async def get_graph_execution(
async def create_graph_execution(
graph_id: str,
graph_version: int,
starting_nodes_input: list[tuple[str, BlockInput]], # list[(node_id, BlockInput)]
inputs: Mapping[str, JsonValue],
user_id: str,
preset_id: Optional[str] = None,
credential_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> GraphExecutionWithNodes:
"""
Create a new AgentGraphExecution record.
@@ -549,11 +577,18 @@ async def create_graph_execution(
Returns the id of the AgentGraphExecution and the list of ExecutionResult for each node.
"""
result = await AgentGraphExecution.prisma().create(
data={
"agentGraphId": graph_id,
"agentGraphVersion": graph_version,
"executionStatus": ExecutionStatus.QUEUED,
"inputs": SafeJson(inputs),
"credentialInputs": (
SafeJson(credential_inputs) if credential_inputs else Json({})
),
"nodesInputMasks": (
SafeJson(nodes_input_masks) if nodes_input_masks else Json({})
),
"NodeExecutions": {
"create": [
AgentNodeExecutionCreateInput(
agentNodeId=node_id,
@@ -569,9 +604,9 @@ async def create_graph_execution(
for node_id, node_input in starting_nodes_input
]
},
"userId": user_id,
"agentPresetId": preset_id,
},
include=GRAPH_EXECUTION_INCLUDE_WITH_NODES,
)
@@ -582,7 +617,7 @@ async def upsert_execution_input(
node_id: str,
graph_exec_id: str,
input_name: str,
input_data: JsonValue,
node_exec_id: str | None = None,
) -> tuple[str, BlockInput]:
"""
@@ -631,7 +666,7 @@ async def upsert_execution_input(
)
return existing_execution.id, {
**{
input_data.name: type_utils.convert(input_data.data, JsonValue)
for input_data in existing_execution.Input or []
},
input_name: input_data,
@@ -888,7 +923,7 @@ class GraphExecutionEntry(BaseModel):
graph_exec_id: str
graph_id: str
graph_version: int
nodes_input_masks: Optional[NodesInputMasks] = None
user_context: UserContext

View File

@@ -12,7 +12,7 @@ from prisma.types import (
AgentNodeLinkCreateInput,
StoreListingVersionWhereInput,
)
from pydantic import BaseModel, Field, create_model
from pydantic.fields import computed_field
from backend.blocks.agent import AgentExecutorBlock
@@ -34,6 +34,7 @@ from .db import BaseDbModel, query_raw_with_schema, transaction
from .includes import AGENT_GRAPH_INCLUDE, AGENT_NODE_INCLUDE
if TYPE_CHECKING:
from .execution import NodesInputMasks
from .integrations import Webhook
logger = logging.getLogger(__name__)
@@ -159,6 +160,7 @@ class BaseGraph(BaseDbModel):
is_active: bool = True
name: str
description: str
recommended_schedule_cron: str | None = None
nodes: list[Node] = []
links: list[Link] = []
forked_from_id: str | None = None
@@ -205,6 +207,35 @@ class BaseGraph(BaseDbModel):
None,
)
@computed_field
@property
def trigger_setup_info(self) -> "GraphTriggerInfo | None":
    if not (
        self.webhook_input_node
        and (trigger_block := self.webhook_input_node.block).webhook_config
    ):
        return None
    return GraphTriggerInfo(
        provider=trigger_block.webhook_config.provider,
        config_schema={
            **(json_schema := trigger_block.input_schema.jsonschema()),
            "properties": {
                pn: sub_schema
                for pn, sub_schema in json_schema["properties"].items()
                if not is_credentials_field_name(pn)
            },
            "required": [
                pn
                for pn in json_schema.get("required", [])
                if not is_credentials_field_name(pn)
            ],
        },
        credentials_input_name=next(
            iter(trigger_block.input_schema.get_credentials_fields()), None
        ),
    )
@staticmethod
def _generate_schema(
*props: tuple[type[AgentInputBlock.Input] | type[AgentOutputBlock.Input], dict],
@@ -238,6 +269,14 @@ class BaseGraph(BaseDbModel):
}
class GraphTriggerInfo(BaseModel):
    provider: ProviderName
    config_schema: dict[str, Any] = Field(
        description="Input schema for the trigger block"
    )
    credentials_input_name: Optional[str]
class Graph(BaseGraph):
sub_graphs: list[BaseGraph] = [] # Flattened sub-graphs
@@ -414,7 +453,7 @@ class GraphModel(Graph):
def validate_graph(
self,
for_run: bool = False,
nodes_input_masks: Optional["NodesInputMasks"] = None,
):
"""
Validate graph structure and raise `ValueError` on issues.
@@ -428,7 +467,7 @@ class GraphModel(Graph):
def _validate_graph(
graph: BaseGraph,
for_run: bool = False,
nodes_input_masks: Optional["NodesInputMasks"] = None,
) -> None:
errors = GraphModel._validate_graph_get_errors(
graph, for_run, nodes_input_masks
@@ -442,7 +481,7 @@ class GraphModel(Graph):
def validate_graph_get_errors(
self,
for_run: bool = False,
nodes_input_masks: Optional["NodesInputMasks"] = None,
) -> dict[str, dict[str, str]]:
"""
Validate graph and return structured errors per node.
@@ -464,7 +503,7 @@ class GraphModel(Graph):
def _validate_graph_get_errors(
graph: BaseGraph,
for_run: bool = False,
nodes_input_masks: Optional["NodesInputMasks"] = None,
) -> dict[str, dict[str, str]]:
"""
Validate graph and return structured errors per node.
@@ -658,6 +697,7 @@ class GraphModel(Graph):
is_active=graph.isActive,
name=graph.name or "",
description=graph.description or "",
recommended_schedule_cron=graph.recommendedScheduleCron,
nodes=[NodeModel.from_db(node, for_export) for node in graph.Nodes or []],
links=list(
{
@@ -1045,6 +1085,7 @@ async def __create_graph(tx, graph: Graph, user_id: str):
version=graph.version,
name=graph.name,
description=graph.description,
recommendedScheduleCron=graph.recommended_schedule_cron,
isActive=graph.is_active,
userId=user_id,
forkedFromId=graph.forked_from_id,

View File

@@ -2,7 +2,6 @@ import json
from typing import Any
from uuid import UUID
import fastapi.exceptions
import pytest
from pytest_snapshot.plugin import Snapshot
@@ -317,12 +316,7 @@ async def test_access_store_listing_graph(server: SpinTestServer):
is_approved=True,
comments="Test comments",
),
user_id=admin_user.id,
)
# Now we check the graph can be accessed by a user that does not own the graph

View File

@@ -59,9 +59,15 @@ def graph_execution_include(
}
AGENT_PRESET_INCLUDE: prisma.types.AgentPresetInclude = {
"InputPresets": True,
"Webhook": True,
}
INTEGRATION_WEBHOOK_INCLUDE: prisma.types.IntegrationWebhookInclude = {
    "AgentNodes": {"include": AGENT_NODE_INCLUDE},
    "AgentPresets": {"include": AGENT_PRESET_INCLUDE},
}

View File

@@ -10,7 +10,6 @@ from typing import TYPE_CHECKING, Any, Optional, TypeVar, cast
from pika.adapters.blocking_connection import BlockingChannel
from pika.spec import Basic, BasicProperties
from pydantic import JsonValue
from redis.asyncio.lock import Lock as RedisLock
from backend.blocks.io import AgentOutputBlock
@@ -38,9 +37,9 @@ from prometheus_client import Gauge, start_http_server
from backend.blocks.agent import AgentExecutorBlock
from backend.data import redis_client as redis
from backend.data.block import (
BlockInput,
BlockOutput,
BlockOutputEntry,
BlockSchema,
get_block,
)
@@ -52,6 +51,7 @@ from backend.data.execution import (
GraphExecutionEntry,
NodeExecutionEntry,
NodeExecutionResult,
NodesInputMasks,
UserContext,
)
from backend.data.graph import Link, Node
@@ -131,7 +131,7 @@ async def execute_node(
creds_manager: IntegrationCredentialsManager,
data: NodeExecutionEntry,
execution_stats: NodeExecutionStats | None = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> BlockOutput:
"""
Execute a node in the graph. This will trigger a block execution on a node,
@@ -237,12 +237,12 @@ async def execute_node(
async def _enqueue_next_nodes(
db_client: "DatabaseManagerAsyncClient",
node: Node,
output: BlockOutputEntry,
user_id: str,
graph_exec_id: str,
graph_id: str,
log_metadata: LogMetadata,
nodes_input_masks: Optional[NodesInputMasks],
user_context: UserContext,
) -> list[NodeExecutionEntry]:
async def add_enqueued_execution(
@@ -419,7 +419,7 @@ class ExecutionProcessor:
self,
node_exec: NodeExecutionEntry,
node_exec_progress: NodeExecutionProgress,
nodes_input_masks: Optional[NodesInputMasks],
graph_stats_pair: tuple[GraphExecutionStats, threading.Lock],
) -> NodeExecutionStats:
log_metadata = LogMetadata(
@@ -487,7 +487,7 @@ class ExecutionProcessor:
stats: NodeExecutionStats,
db_client: "DatabaseManagerAsyncClient",
log_metadata: LogMetadata,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> ExecutionStatus:
status = ExecutionStatus.RUNNING
@@ -1053,7 +1053,7 @@ class ExecutionProcessor:
node_id: str,
graph_exec: GraphExecutionEntry,
log_metadata: LogMetadata,
nodes_input_masks: Optional[NodesInputMasks],
execution_queue: ExecutionQueue[NodeExecutionEntry],
) -> None:
"""Process a node's output, update its status, and enqueue next nodes.
@@ -1394,14 +1394,14 @@ class ExecutionManager(AppProcess):
delivery_tag = method.delivery_tag
@func_retry
def _ack_message(reject: bool, requeue: bool):
"""Acknowledge or reject the message based on execution status."""
# Connection can be lost, so always get a fresh channel
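# requeue=True puts the message back on the queue for a later retry
# (transient conditions: shutdown in progress, worker pool full, crashed run);
# requeue=False drops the message for good (permanent conditions: unparseable
# or duplicate run message), which prevents poison-message redelivery loops.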
channel = self.run_client.get_channel()
if reject:
channel.connection.add_callback_threadsafe(
lambda: channel.basic_nack(delivery_tag, requeue=requeue)
)
else:
channel.connection.add_callback_threadsafe(
@@ -1413,13 +1413,13 @@ class ExecutionManager(AppProcess):
logger.info(
f"[{self.service_name}] Rejecting new execution during shutdown"
)
_ack_message(reject=True, requeue=True)
return
# Check if we can accept more runs
self._cleanup_completed_runs()
if len(self.active_graph_runs) >= self.pool_size:
_ack_message(reject=True, requeue=True)
return
try:
@@ -1428,7 +1428,7 @@ class ExecutionManager(AppProcess):
logger.error(
f"[{self.service_name}] Could not parse run message: {e}, body={body}"
)
_ack_message(reject=True, requeue=False)
return
graph_exec_id = graph_exec_entry.graph_exec_id
@@ -1440,7 +1440,7 @@ class ExecutionManager(AppProcess):
logger.error(
f"[{self.service_name}] Graph {graph_exec_id} already running; rejecting duplicate run."
)
_ack_message(reject=True, requeue=False)
return
cancel_event = threading.Event()
@@ -1456,9 +1456,9 @@ class ExecutionManager(AppProcess):
logger.error(
f"[{self.service_name}] Execution for {graph_exec_id} failed: {type(exec_error)} {exec_error}"
)
_ack_message(reject=True, requeue=True)
else:
_ack_message(reject=False, requeue=False)
except BaseException as e:
logger.exception(
f"[{self.service_name}] Error in run completion callback: {e}"

View File

@@ -1,6 +1,5 @@
import logging
import fastapi.responses
import pytest
@@ -36,21 +35,20 @@ async def execute_graph(
logger.info(f"Input data: {input_data}")
# --- Test adding new executions --- #
graph_exec = await agent_server.test_execute_graph(
user_id=test_user.id,
graph_id=test_graph.id,
graph_version=test_graph.version,
node_input=input_data,
)
logger.info(f"Created execution with ID: {graph_exec.id}")
# Execution queue should be empty
logger.info("Waiting for execution to complete...")
result = await wait_execution(test_user.id, graph_exec.id, 30)
logger.info(f"Execution completed with {len(result)} results")
assert len(result) == num_execs
return graph_exec.id
async def assert_sample_graph_executions(
@@ -380,7 +378,7 @@ async def test_execute_preset(server: SpinTestServer):
# Verify execution
assert result is not None
graph_exec_id = result.id
# Wait for execution to complete
executions = await wait_execution(test_user.id, graph_exec_id)
@@ -469,7 +467,7 @@ async def test_execute_preset_with_clash(server: SpinTestServer):
# Verify execution
assert result is not None, "Result must not be None"
graph_exec_id = result.id
# Wait for execution to complete
executions = await wait_execution(test_user.id, graph_exec_id)
@@ -521,12 +519,7 @@ async def test_store_listing_graph(server: SpinTestServer):
is_approved=True,
comments="Test comments",
),
user_id=admin_user.id,
)
alt_test_user = admin_user

View File

@@ -4,20 +4,27 @@ import threading
import time
from collections import defaultdict
from concurrent.futures import Future
from typing import Any, Mapping, Optional, cast
from pydantic import BaseModel, JsonValue, ValidationError
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.block import (
Block,
BlockCostType,
BlockInput,
BlockOutputEntry,
BlockType,
get_block,
)
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.db import prisma
from backend.data.execution import (
ExecutionStatus,
GraphExecutionStats,
GraphExecutionWithNodes,
NodesInputMasks,
UserContext,
)
from backend.data.graph import GraphModel, Node
@@ -239,7 +246,7 @@ def _tokenise(path: str) -> list[tuple[str, str]] | None:
# --------------------------------------------------------------------------- #
def parse_execution_output(output: BlockOutputEntry, name: str) -> JsonValue | None:
"""
Retrieve a nested value out of `output` using the flattened *name*.
@@ -263,7 +270,7 @@ def parse_execution_output(output: BlockData, name: str) -> Any | None:
if tokens is None:
return None
cur: JsonValue = data
for delim, ident in tokens:
if delim == LIST_SPLIT:
# list[index]
@@ -428,7 +435,7 @@ def validate_exec(
async def _validate_node_input_credentials(
graph: GraphModel,
user_id: str,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> dict[str, dict[str, str]]:
"""
Checks all credentials for all nodes of the graph and returns structured errors.
@@ -508,8 +515,8 @@ async def _validate_node_input_credentials(
def make_node_credentials_input_map(
graph: GraphModel,
graph_credentials_input: Mapping[str, CredentialsMetaInput],
) -> NodesInputMasks:
"""
Maps credentials for an execution to the correct nodes.
@@ -544,8 +551,8 @@ def make_node_credentials_input_map(
async def validate_graph_with_credentials(
graph: GraphModel,
user_id: str,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> Mapping[str, Mapping[str, str]]:
"""
Validate graph including credentials and return structured errors per node.
@@ -575,7 +582,7 @@ async def _construct_starting_node_execution_input(
graph: GraphModel,
user_id: str,
graph_inputs: BlockInput,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> list[tuple[str, BlockInput]]:
"""
Validates and prepares the input data for executing a graph.
@@ -616,7 +623,7 @@ async def _construct_starting_node_execution_input(
# Extract request input data, and assign it to the input pin.
if block.block_type == BlockType.INPUT:
input_name = cast(str | None, node.input_default.get("name"))
if input_name and input_name in graph_inputs:
input_data = {"value": graph_inputs[input_name]}
@@ -643,9 +650,9 @@ async def validate_and_construct_node_execution_input(
user_id: str,
graph_inputs: BlockInput,
graph_version: Optional[int] = None,
graph_credentials_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> tuple[GraphModel, list[tuple[str, BlockInput]], NodesInputMasks]:
"""
Public wrapper that handles graph fetching, credential mapping, and validation+construction.
This centralizes the logic used by both scheduler validation and actual execution.
@@ -659,7 +666,9 @@ async def validate_and_construct_node_execution_input(
nodes_input_masks: Node inputs to use.
Returns:
GraphModel: Full graph object for the given `graph_id`.
list[tuple[node_id, BlockInput]]: Starting node IDs with corresponding inputs.
dict[str, BlockInput]: Node input masks including all passed-in credentials.
Raises:
NotFoundError: If the graph is not found.
@@ -700,11 +709,11 @@ async def validate_and_construct_node_execution_input(
def _merge_nodes_input_masks(
overrides_map_1: NodesInputMasks,
overrides_map_2: NodesInputMasks,
) -> NodesInputMasks:
"""Perform a per-node merge of input overrides"""
result = dict(overrides_map_1).copy()
for node_id, overrides2 in overrides_map_2.items():
if node_id in result:
result[node_id] = {**result[node_id], **overrides2}
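The merge is shallow per node: entries from the second map win per input name, while other inputs carry over (and, assuming the context below the shown hunk copies them, so do nodes present only in the second map). A quick illustration with arbitrary values:

merged = _merge_nodes_input_masks(
    {"node-1": {"a": 1, "b": 2}},
    {"node-1": {"b": 3}, "node-2": {"c": 4}},
)
assert merged == {"node-1": {"a": 1, "b": 3}, "node-2": {"c": 4}}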
@@ -854,8 +863,8 @@ async def add_graph_execution(
inputs: Optional[BlockInput] = None,
preset_id: Optional[str] = None,
graph_version: Optional[int] = None,
graph_credentials_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> GraphExecutionWithNodes:
"""
Adds a graph execution to the queue and returns the execution entry.
@@ -879,7 +888,7 @@ async def add_graph_execution(
else:
edb = get_database_manager_async_client()
graph, starting_nodes_input, compiled_nodes_input_masks = (
await validate_and_construct_node_execution_input(
graph_id=graph_id,
user_id=user_id,
@@ -892,10 +901,15 @@ async def add_graph_execution(
graph_exec = None
try:
# Sanity check: running add_graph_execution with the properties of
# the graph_exec created here should create the same execution again.
graph_exec = await edb.create_graph_execution(
user_id=user_id,
graph_id=graph_id,
graph_version=graph.version,
inputs=inputs or {},
credential_inputs=graph_credentials_inputs,
nodes_input_masks=nodes_input_masks,
starting_nodes_input=starting_nodes_input,
preset_id=preset_id,
)
@@ -904,9 +918,9 @@ async def add_graph_execution(
user_context = await get_user_context(user_id)
queue = await get_async_execution_queue()
graph_exec_entry = graph_exec.to_graph_execution_entry(
user_context, compiled_nodes_input_masks
)
logger.info(
f"Created graph execution #{graph_exec.id} for graph "
@@ -952,7 +966,7 @@ async def add_graph_execution(
class ExecutionOutputEntry(BaseModel):
node: Node
node_exec_id: str
data: BlockOutputEntry
class NodeExecutionProgress:

View File

@@ -1,6 +1,7 @@
from typing import cast
import pytest
from pytest_mock import MockerFixture
from backend.executor.utils import merge_execution_input, parse_execution_output
from backend.util.mock import MockObject
@@ -276,3 +277,142 @@ def test_merge_execution_input():
result = merge_execution_input(data)
assert "mixed" in result
assert result["mixed"].attr[0]["key"] == "value3"
@pytest.mark.asyncio
async def test_add_graph_execution_is_repeatable(mocker: MockerFixture):
"""
Verify that calling the function with its own output creates the same execution again.
"""
from backend.data.execution import GraphExecutionWithNodes
from backend.data.model import CredentialsMetaInput
from backend.executor.utils import add_graph_execution
from backend.integrations.providers import ProviderName
# Mock data
graph_id = "test-graph-id"
user_id = "test-user-id"
inputs = {"test_input": "test_value"}
preset_id = "test-preset-id"
graph_version = 1
graph_credentials_inputs = {
"cred_key": CredentialsMetaInput(
id="cred-id", provider=ProviderName("test_provider"), type="oauth2"
)
}
nodes_input_masks = {"node1": {"input1": "masked_value"}}
# Mock the graph object returned by validate_and_construct_node_execution_input
mock_graph = mocker.MagicMock()
mock_graph.version = graph_version
# Mock the starting nodes input and compiled nodes input masks
starting_nodes_input = [
("node1", {"input1": "value1"}),
("node2", {"input1": "value2"}),
]
compiled_nodes_input_masks = {"node1": {"input1": "compiled_mask"}}
# Mock the graph execution object
mock_graph_exec = mocker.MagicMock(spec=GraphExecutionWithNodes)
mock_graph_exec.id = "execution-id-123"
mock_graph_exec.to_graph_execution_entry.return_value = mocker.MagicMock()
# Mock user context
mock_user_context = {"user_id": user_id, "context": "test_context"}
# Mock the queue and event bus
mock_queue = mocker.AsyncMock()
mock_event_bus = mocker.MagicMock()
mock_event_bus.publish = mocker.AsyncMock()
# Setup mocks
mock_validate = mocker.patch(
"backend.executor.utils.validate_and_construct_node_execution_input"
)
mock_edb = mocker.patch("backend.executor.utils.execution_db")
mock_prisma = mocker.patch("backend.executor.utils.prisma")
mock_get_user_context = mocker.patch("backend.executor.utils.get_user_context")
mock_get_queue = mocker.patch("backend.executor.utils.get_async_execution_queue")
mock_get_event_bus = mocker.patch(
"backend.executor.utils.get_async_execution_event_bus"
)
# Setup mock returns
mock_validate.return_value = (
mock_graph,
starting_nodes_input,
compiled_nodes_input_masks,
)
mock_prisma.is_connected.return_value = True
mock_edb.create_graph_execution = mocker.AsyncMock(return_value=mock_graph_exec)
mock_get_user_context.return_value = mock_user_context
mock_get_queue.return_value = mock_queue
mock_get_event_bus.return_value = mock_event_bus
# Call the function - first execution
result1 = await add_graph_execution(
graph_id=graph_id,
user_id=user_id,
inputs=inputs,
preset_id=preset_id,
graph_version=graph_version,
graph_credentials_inputs=graph_credentials_inputs,
nodes_input_masks=nodes_input_masks,
)
# Store the parameters used in the first call to create_graph_execution
first_call_kwargs = mock_edb.create_graph_execution.call_args[1]
# Verify the create_graph_execution was called with correct parameters
mock_edb.create_graph_execution.assert_called_once_with(
user_id=user_id,
graph_id=graph_id,
graph_version=mock_graph.version,
inputs=inputs,
credential_inputs=graph_credentials_inputs,
nodes_input_masks=nodes_input_masks,
starting_nodes_input=starting_nodes_input,
preset_id=preset_id,
)
# Set up the graph execution mock to have properties we can extract
mock_graph_exec.graph_id = graph_id
mock_graph_exec.user_id = user_id
mock_graph_exec.graph_version = graph_version
mock_graph_exec.inputs = inputs
mock_graph_exec.credential_inputs = graph_credentials_inputs
mock_graph_exec.nodes_input_masks = nodes_input_masks
mock_graph_exec.preset_id = preset_id
# Create a second mock execution for the sanity check
mock_graph_exec_2 = mocker.MagicMock(spec=GraphExecutionWithNodes)
mock_graph_exec_2.id = "execution-id-456"
mock_graph_exec_2.to_graph_execution_entry.return_value = mocker.MagicMock()
# Reset mocks and set up for second call
mock_edb.create_graph_execution.reset_mock()
mock_edb.create_graph_execution.return_value = mock_graph_exec_2
mock_validate.reset_mock()
# Sanity check: call add_graph_execution with properties from first result
# This should create the same execution parameters
result2 = await add_graph_execution(
graph_id=mock_graph_exec.graph_id,
user_id=mock_graph_exec.user_id,
inputs=mock_graph_exec.inputs,
preset_id=mock_graph_exec.preset_id,
graph_version=mock_graph_exec.graph_version,
graph_credentials_inputs=mock_graph_exec.credential_inputs,
nodes_input_masks=mock_graph_exec.nodes_input_masks,
)
# Verify that create_graph_execution was called with identical parameters
second_call_kwargs = mock_edb.create_graph_execution.call_args[1]
# The sanity check: both calls should use identical parameters
assert first_call_kwargs == second_call_kwargs
# Both executions should succeed (though they create different objects)
assert result1 == mock_graph_exec
assert result2 == mock_graph_exec_2

View File

@@ -63,7 +63,7 @@ except ImportError:
# Cost System
try:
from backend.data.block import BlockCost, BlockCostType
except ImportError:
from backend.data.block_cost_config import BlockCost, BlockCostType

View File

@@ -8,7 +8,7 @@ from typing import Callable, List, Optional, Type
from pydantic import SecretStr
from backend.data.block import BlockCost, BlockCostType
from backend.data.model import (
APIKeyCredentials,
Credentials,

View File

@@ -8,9 +8,8 @@ BLOCK_COSTS configuration used by the execution system.
import logging
from typing import List, Type
from backend.data.block import Block, BlockCost
from backend.data.block_cost_config import BLOCK_COSTS
from backend.sdk.registry import AutoRegistry
logger = logging.getLogger(__name__)

View File

@@ -7,7 +7,7 @@ from typing import Any, Callable, List, Optional, Set, Type
from pydantic import BaseModel, SecretStr
from backend.data.block import BlockCost
from backend.data.model import (
APIKeyCredentials,
Credentials,

View File

@@ -11,7 +11,45 @@ def configured_snapshot(snapshot: Snapshot) -> Snapshot:
return snapshot
# Test ID constants
TEST_USER_ID = "test-user-id"
ADMIN_USER_ID = "admin-user-id"
TARGET_USER_ID = "target-user-id"
@pytest.fixture
def test_user_id() -> str:
"""Test user ID fixture."""
return "test-user-id"
@pytest.fixture
def admin_user_id() -> str:
"""Admin user ID fixture."""
return "admin-user-id"
@pytest.fixture
def target_user_id() -> str:
"""Target user ID fixture."""
return "target-user-id"
@pytest.fixture
def mock_jwt_user(test_user_id):
"""Provide mock JWT payload for regular user testing."""
import fastapi
def override_get_jwt_payload(request: fastapi.Request) -> dict[str, str]:
return {"sub": test_user_id, "role": "user", "email": "test@example.com"}
return {"get_jwt_payload": override_get_jwt_payload, "user_id": test_user_id}
@pytest.fixture
def mock_jwt_admin(admin_user_id):
"""Provide mock JWT payload for admin user testing."""
import fastapi
def override_get_jwt_payload(request: fastapi.Request) -> dict[str, str]:
return {
"sub": admin_user_id,
"role": "admin",
"email": "test-admin@example.com",
}
return {"get_jwt_payload": override_get_jwt_payload, "user_id": admin_user_id}

View File

@@ -88,6 +88,7 @@ async def test_send_graph_execution_result(
user_id="user-1",
graph_id="test_graph",
graph_version=1,
preset_id=None,
status=ExecutionStatus.COMPLETED,
started_at=datetime.now(tz=timezone.utc),
ended_at=datetime.now(tz=timezone.utc),
@@ -101,6 +102,8 @@ async def test_send_graph_execution_result(
"input_1": "some input value :)",
"input_2": "some *other* input value",
},
credential_inputs=None,
nodes_input_masks=None,
outputs={
"the_output": ["some output value"],
"other_output": ["sike there was another output"],

View File

@@ -1,16 +1,14 @@
from fastapi import HTTPException, Security
from fastapi.security import APIKeyHeader
from prisma.enums import APIKeyPermission
from backend.data.api_key import APIKeyInfo, has_permission, validate_api_key
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)
async def require_api_key(api_key: str | None = Security(api_key_header)) -> APIKeyInfo:
    """Base middleware for API key authentication"""
    if api_key is None:
        raise HTTPException(status_code=401, detail="Missing API key")
@@ -19,18 +17,19 @@ async def require_api_key(request: Request):
    if not api_key_obj:
        raise HTTPException(status_code=401, detail="Invalid API key")
    return api_key_obj

def require_permission(permission: APIKeyPermission):
    """Dependency function for checking specific permissions"""

    async def check_permission(
        api_key: APIKeyInfo = Security(require_api_key),
    ) -> APIKeyInfo:
        if not has_permission(api_key, permission):
            raise HTTPException(
                status_code=403,
                detail=f"API key lacks the required permission '{permission}'",
            )
        return api_key
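Downstream routers consume this as a FastAPI dependency, as the external API routes below show; a minimal sketch of the wiring (route path illustrative):

from fastapi import APIRouter, Security
from prisma.enums import APIKeyPermission

router = APIRouter()

@router.get("/blocks")  # illustrative path
async def list_blocks(
    api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.READ_BLOCK)),
):
    # The validated APIKeyInfo is injected; user scoping goes through api_key.user_id.
    return {"user_id": api_key.user_id}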

View File

@@ -2,14 +2,14 @@ import logging
from collections import defaultdict
from typing import Annotated, Any, Optional, Sequence
from fastapi import APIRouter, Body, HTTPException, Security
from prisma.enums import AgentExecutionStatus, APIKeyPermission
from typing_extensions import TypedDict
import backend.data.block
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.api_key import APIKeyInfo
from backend.data.block import BlockInput, CompletedBlockOutput
from backend.executor.utils import add_graph_execution
from backend.server.external.middleware import require_permission
@@ -47,7 +47,7 @@ class GraphExecutionResult(TypedDict):
@v1_router.get(
path="/blocks",
tags=["blocks"],
dependencies=[Security(require_permission(APIKeyPermission.READ_BLOCK))],
)
def get_graph_blocks() -> Sequence[dict[Any, Any]]:
blocks = [block() for block in backend.data.block.get_blocks().values()]
@@ -57,12 +57,12 @@ def get_graph_blocks() -> Sequence[dict[Any, Any]]:
@v1_router.post(
path="/blocks/{block_id}/execute",
tags=["blocks"],
dependencies=[Security(require_permission(APIKeyPermission.EXECUTE_BLOCK))],
)
async def execute_graph_block(
block_id: str,
data: BlockInput,
api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.EXECUTE_BLOCK)),
) -> CompletedBlockOutput:
obj = backend.data.block.get_block(block_id)
if not obj:
@@ -82,7 +82,7 @@ async def execute_graph(
graph_id: str,
graph_version: int,
node_input: Annotated[dict[str, Any], Body(..., embed=True, default_factory=dict)],
api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.EXECUTE_GRAPH)),
) -> dict[str, Any]:
try:
graph_exec = await add_graph_execution(
@@ -104,7 +104,7 @@ async def execute_graph(
async def get_graph_execution_results(
graph_id: str,
graph_exec_id: str,
api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.READ_GRAPH)),
) -> GraphExecutionResult:
graph = await graph_db.get_graph(graph_id, user_id=api_key.user_id)
if not graph:

View File

@@ -58,17 +58,13 @@ class ProviderConstants(BaseModel):
default_factory=lambda: {
name.upper().replace("-", "_"): name for name in get_all_provider_names()
},
    examples=[
        {
            "OPENAI": "openai",
            "ANTHROPIC": "anthropic",
            "EXA": "exa",
            "GEM": "gem",
            "EXAMPLE_SERVICE": "example-service",
        }
    ],
)

View File

@@ -3,14 +3,15 @@ import logging
from datetime import datetime, timedelta, timezone
from typing import TYPE_CHECKING, Annotated, Awaitable, List, Literal
from autogpt_libs.auth import get_user_id
from fastapi import (
APIRouter,
Body,
Depends,
HTTPException,
Path,
Query,
Request,
Security,
status,
)
from pydantic import BaseModel, Field, SecretStr
@@ -50,8 +51,6 @@ from backend.util.settings import Settings
if TYPE_CHECKING:
from backend.integrations.oauth import BaseOAuthHandler
logger = logging.getLogger(__name__)
settings = Settings()
router = APIRouter()
@@ -69,7 +68,7 @@ async def login(
provider: Annotated[
ProviderName, Path(title="The provider to initiate an OAuth flow for")
],
user_id: Annotated[str, Security(get_user_id)],
request: Request,
scopes: Annotated[
str, Query(title="Comma-separated list of authorization scopes")
@@ -109,7 +108,7 @@ async def callback(
],
code: Annotated[str, Body(title="Authorization code acquired by user login")],
state_token: Annotated[str, Body(title="Anti-CSRF nonce")],
user_id: Annotated[str, Security(get_user_id)],
request: Request,
) -> CredentialsMetaResponse:
logger.debug(f"Received OAuth callback for provider: {provider}")
@@ -182,7 +181,7 @@ async def callback(
@router.get("/credentials")
async def list_credentials(
user_id: Annotated[str, Security(get_user_id)],
) -> list[CredentialsMetaResponse]:
credentials = await creds_manager.store.get_all_creds(user_id)
return [
@@ -204,7 +203,7 @@ async def list_credentials_by_provider(
provider: Annotated[
ProviderName, Path(title="The provider to list credentials for")
],
user_id: Annotated[str, Security(get_user_id)],
) -> list[CredentialsMetaResponse]:
credentials = await creds_manager.store.get_creds_by_provider(user_id, provider)
return [
@@ -227,7 +226,7 @@ async def get_credential(
ProviderName, Path(title="The provider to retrieve credentials for")
],
cred_id: Annotated[str, Path(title="The ID of the credentials to retrieve")],
user_id: Annotated[str, Security(get_user_id)],
) -> Credentials:
credential = await creds_manager.get(user_id, cred_id)
if not credential:
@@ -244,7 +243,7 @@ async def get_credential(
@router.post("/{provider}/credentials", status_code=201)
async def create_credentials(
user_id: Annotated[str, Security(get_user_id)],
provider: Annotated[
ProviderName, Path(title="The provider to create credentials for")
],
@@ -288,7 +287,7 @@ async def delete_credentials(
ProviderName, Path(title="The provider to delete credentials for")
],
cred_id: Annotated[str, Path(title="The ID of the credentials to delete")],
user_id: Annotated[str, Security(get_user_id)],
force: Annotated[
bool, Query(title="Whether to proceed if any linked webhooks are still in use")
] = False,
@@ -429,7 +428,7 @@ async def webhook_ingress_generic(
@router.post("/webhooks/{webhook_id}/ping")
async def webhook_ping(
webhook_id: Annotated[str, Path(title="Our ID for the webhook")],
user_id: Annotated[str, Security(get_user_id)], # require auth
):
webhook = await get_webhook(webhook_id)
webhook_manager = get_webhook_manager(webhook.provider)
@@ -568,7 +567,7 @@ def _get_provider_oauth_handler(
@router.get("/ayrshare/sso_url")
async def get_ayrshare_sso_url(
user_id: Annotated[str, Security(get_user_id)],
) -> AyrshareSSOResponse:
"""
Generate an SSO URL for Ayrshare social media integration.

View File

@@ -3,7 +3,7 @@ from typing import Any, Optional
import pydantic
from backend.data.api_key import APIKeyInfo, APIKeyPermission
from backend.data.graph import Graph
from backend.util.timezone_name import TimeZoneName
@@ -34,10 +34,6 @@ class WSSubscribeGraphExecutionsRequest(pydantic.BaseModel):
graph_id: str
class CreateGraph(pydantic.BaseModel):
graph: Graph
@@ -49,7 +45,7 @@ class CreateAPIKeyRequest(pydantic.BaseModel):
class CreateAPIKeyResponse(pydantic.BaseModel):
api_key: APIKeyInfo
plain_text_key: str

View File

@@ -3,14 +3,16 @@ import logging
from enum import Enum
from typing import Any, Optional
import fastapi
import fastapi.responses
import pydantic
import starlette.middleware.cors
import uvicorn
from autogpt_libs.auth import add_auth_responses_to_openapi
from autogpt_libs.auth import verify_settings as verify_auth_settings
from fastapi.exceptions import RequestValidationError
from fastapi.routing import APIRoute
from prisma.errors import PrismaError
import backend.data.block
import backend.data.db
@@ -38,6 +40,7 @@ from backend.server.external.api import external_app
from backend.server.middleware.security import SecurityHeadersMiddleware
from backend.util import json
from backend.util.cloud_storage import shutdown_cloud_storage_handler
from backend.util.exceptions import NotAuthorizedError, NotFoundError
from backend.util.feature_flag import initialize_launchdarkly, shutdown_launchdarkly
from backend.util.service import UnhealthyServiceError
@@ -61,6 +64,8 @@ def launch_darkly_context():
@contextlib.asynccontextmanager
async def lifespan_context(app: fastapi.FastAPI):
verify_auth_settings()
await backend.data.db.connect()
# Ensure SDK auto-registration is patched before initializing blocks
@@ -131,6 +136,9 @@ app = fastapi.FastAPI(
app.add_middleware(SecurityHeadersMiddleware)
# Add 401 responses to authenticated endpoints in OpenAPI spec
add_auth_responses_to_openapi(app)
def handle_internal_http_error(status_code: int = 500, log_error: bool = True):
def handler(request: fastapi.Request, exc: Exception):
@@ -189,10 +197,14 @@ async def validation_error_handler(
)
app.add_exception_handler(PrismaError, handle_internal_http_error(500))
app.add_exception_handler(NotFoundError, handle_internal_http_error(404, False))
app.add_exception_handler(NotAuthorizedError, handle_internal_http_error(403, False))
app.add_exception_handler(RequestValidationError, validation_error_handler)
app.add_exception_handler(pydantic.ValidationError, validation_error_handler)
app.add_exception_handler(ValueError, handle_internal_http_error(400))
app.add_exception_handler(Exception, handle_internal_http_error(500))
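The NotAuthorizedError registration added here completes the pattern this file relies on: routers raise domain exceptions, and a single table of handlers maps them to HTTP status codes, which is what lets the API-key routes later in this diff drop their per-endpoint try/except blocks. A hedged sketch of the mechanism (the factory below is illustrative, not the real handle_internal_http_error):

```python
import fastapi
import fastapi.responses


class NotFoundError(Exception):
    """Domain error raised by data-layer code."""


app = fastapi.FastAPI()


def http_error_handler(status_code: int):
    # Build a handler that converts the raised exception into JSON.
    def handler(request: fastapi.Request, exc: Exception):
        return fastapi.responses.JSONResponse(
            status_code=status_code, content={"detail": str(exc)}
        )

    return handler


app.add_exception_handler(NotFoundError, http_error_handler(404))


@app.get("/things/{thing_id}")
async def get_thing(thing_id: str):
    # No try/except needed: the registered handler turns this into a 404.
    raise NotFoundError(f"thing {thing_id} does not exist")
```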
app.include_router(backend.server.routers.v1.v1_router, tags=["v1"], prefix="/api")
app.include_router(
backend.server.v2.store.routes.router, tags=["v2"], prefix="/api/store"
@@ -240,13 +252,19 @@ async def health():
class AgentServer(backend.util.service.AppProcess):
def run(self):
-        server_app = starlette.middleware.cors.CORSMiddleware(
-            app=app,
-            allow_origins=settings.config.backend_cors_allow_origins,
-            allow_credentials=True,
-            allow_methods=["*"],  # Allows all methods
-            allow_headers=["*"],  # Allows all headers
-        )
+        if settings.config.enable_cors_all_origins:
+            server_app = starlette.middleware.cors.CORSMiddleware(
+                app=app,
+                allow_origins=settings.config.backend_cors_allow_origins,
+                allow_credentials=True,
+                allow_methods=["*"],  # Allows all methods
+                allow_headers=["*"],  # Allows all headers
+            )
+        else:
+            logger.info("CORS is disabled")
+            server_app = app
uvicorn.run(
server_app,
host=backend.util.settings.Config().agent_api_host,
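This is the "disable CORS" change from the commit list: the CORSMiddleware wrapper is now applied only when enable_cors_all_origins is set; otherwise the bare app is served, no Access-Control-* headers are emitted, and browsers will refuse cross-origin calls. A condensed sketch of the toggle (the settings values are stand-ins for backend.util.settings):

```python
import uvicorn
from fastapi import FastAPI
from starlette.middleware.cors import CORSMiddleware

app = FastAPI()

# Stand-ins for settings.config.enable_cors_all_origins and
# settings.config.backend_cors_allow_origins.
ENABLE_CORS = True
ALLOWED_ORIGINS = ["http://localhost:3000"]

if ENABLE_CORS:
    # Wrapping the ASGI app adds CORS headers for the allowed origins.
    server_app = CORSMiddleware(
        app=app,
        allow_origins=ALLOWED_ORIGINS,
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )
else:
    # Without the wrapper, cross-origin browser requests are blocked by
    # the browser's same-origin policy.
    server_app = app

if __name__ == "__main__":
    uvicorn.run(server_app, host="127.0.0.1", port=8000)
```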
@@ -357,6 +375,7 @@ class AgentServer(backend.util.service.AppProcess):
preset_id=preset_id,
user_id=user_id,
inputs=inputs or {},
credential_inputs={},
)
@staticmethod
@@ -370,10 +389,10 @@ class AgentServer(backend.util.service.AppProcess):
@staticmethod
async def test_review_store_listing(
request: backend.server.v2.store.model.ReviewSubmissionRequest,
user: autogpt_libs.auth.models.User,
user_id: str,
):
return await backend.server.v2.admin.store_admin_routes.review_submission(
request.store_listing_version_id, request, user
request.store_listing_version_id, request, user_id
)
@staticmethod


@@ -5,9 +5,9 @@ from typing import Annotated
import fastapi
import pydantic
+from autogpt_libs.auth import get_user_id
 import backend.data.analytics
-from backend.server.utils import get_user_id
router = fastapi.APIRouter()
logger = logging.getLogger(__name__)
@@ -21,7 +21,7 @@ class LogRawMetricRequest(pydantic.BaseModel):
@router.post(path="/log_raw_metric")
async def log_raw_metric(
user_id: Annotated[str, fastapi.Depends(get_user_id)],
user_id: Annotated[str, fastapi.Security(get_user_id)],
request: LogRawMetricRequest,
):
try:
@@ -47,7 +47,7 @@ async def log_raw_metric(
@router.post("/log_raw_analytics")
async def log_raw_analytics(
user_id: Annotated[str, fastapi.Depends(get_user_id)],
user_id: Annotated[str, fastapi.Security(get_user_id)],
type: Annotated[str, fastapi.Body(..., embed=True)],
data: Annotated[
dict,


@@ -5,18 +5,17 @@ from unittest.mock import AsyncMock, Mock
import fastapi
import fastapi.testclient
import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
import backend.server.routers.analytics as analytics_routes
-from backend.server.conftest import TEST_USER_ID
from backend.server.test_helpers import (
assert_error_response_structure,
assert_mock_called_with_partial,
assert_response_status,
safe_parse_json,
)
-from backend.server.utils import get_user_id
app = fastapi.FastAPI()
app.include_router(analytics_routes.router)
@@ -24,17 +23,20 @@ app.include_router(analytics_routes.router)
client = fastapi.testclient.TestClient(app)
-def override_get_user_id() -> str:
-    """Override get_user_id for testing"""
-    return TEST_USER_ID
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
+    from autogpt_libs.auth.jwt_utils import get_jwt_payload
-    app.dependency_overrides[get_user_id] = override_get_user_id
+    app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
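These test modules no longer override get_user_id directly; they override get_jwt_payload with a canned payload supplied by a mock_jwt_user fixture that is defined outside this diff. A hedged reconstruction of what that fixture plausibly looks like (the exact shape is an assumption; the payload keys mirror the old override_auth_middleware helper removed further down):

```python
import pytest

TEST_USER_ID = "test-user-id"


@pytest.fixture
def mock_jwt_user():
    # Hypothetical sketch: expose a zero-argument callable that stands in
    # for autogpt_libs.auth.jwt_utils.get_jwt_payload via dependency_overrides.
    def get_jwt_payload_override() -> dict[str, str]:
        return {"sub": TEST_USER_ID, "role": "user", "email": "test@example.com"}

    return {"get_jwt_payload": get_jwt_payload_override}
```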
def test_log_raw_metric_success_improved(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test successful raw metric logging with improved assertions."""
# Mock the analytics function
@@ -63,7 +65,7 @@ def test_log_raw_metric_success_improved(
# Verify the function was called with correct parameters
assert_mock_called_with_partial(
mock_log_metric,
user_id=TEST_USER_ID,
user_id=test_user_id,
metric_name="page_load_time",
metric_value=2.5,
data_string="/dashboard",


@@ -10,8 +10,6 @@ import pytest_mock
from pytest_snapshot.plugin import Snapshot
import backend.server.routers.analytics as analytics_routes
-from backend.server.conftest import TEST_USER_ID
-from backend.server.utils import get_user_id
app = fastapi.FastAPI()
app.include_router(analytics_routes.router)
@@ -19,12 +17,14 @@ app.include_router(analytics_routes.router)
client = fastapi.testclient.TestClient(app)
-def override_get_user_id() -> str:
-    """Override get_user_id for testing"""
-    return TEST_USER_ID
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
+    from autogpt_libs.auth.jwt_utils import get_jwt_payload
-    app.dependency_overrides[get_user_id] = override_get_user_id
+    app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
@pytest.mark.parametrize(


@@ -3,12 +3,11 @@ from unittest.mock import AsyncMock, Mock
import fastapi
import fastapi.testclient
import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
import backend.server.routers.analytics as analytics_routes
-from backend.server.conftest import TEST_USER_ID
-from backend.server.utils import get_user_id
app = fastapi.FastAPI()
app.include_router(analytics_routes.router)
@@ -16,17 +15,20 @@ app.include_router(analytics_routes.router)
client = fastapi.testclient.TestClient(app)
-def override_get_user_id() -> str:
-    """Override get_user_id for testing"""
-    return TEST_USER_ID
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
+    from autogpt_libs.auth.jwt_utils import get_jwt_payload
-    app.dependency_overrides[get_user_id] = override_get_user_id
+    app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
def test_log_raw_metric_success(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test successful raw metric logging"""
@@ -53,7 +55,7 @@ def test_log_raw_metric_success(
# Verify the function was called with correct parameters
mock_log_metric.assert_called_once_with(
user_id=TEST_USER_ID,
user_id=test_user_id,
metric_name="page_load_time",
metric_value=2.5,
data_string="/dashboard",
@@ -121,6 +123,7 @@ def test_log_raw_metric_various_values(
def test_log_raw_analytics_success(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test successful raw analytics logging"""
@@ -155,7 +158,7 @@ def test_log_raw_analytics_success(
# Verify the function was called with correct parameters
mock_log_analytics.assert_called_once_with(
TEST_USER_ID,
test_user_id,
"user_action",
request_data["data"],
"button_click_submit_form",


@@ -1,8 +1,7 @@
import logging
from typing import Annotated
-from autogpt_libs.auth.middleware import APIKeyValidator
-from fastapi import APIRouter, Body, Depends, HTTPException, Query
+from fastapi import APIRouter, Body, HTTPException, Query, Security
from fastapi.responses import JSONResponse
from backend.data.user import (
@@ -20,18 +19,18 @@ from backend.server.routers.postmark.models import (
PostmarkSubscriptionChangeWebhook,
PostmarkWebhook,
)
+from backend.server.utils.api_key_auth import APIKeyAuthenticator
from backend.util.settings import Settings
-logger = logging.getLogger(__name__)
 settings = Settings()
-postmark_validator = APIKeyValidator(
-    "X-Postmark-Webhook-Token",
-    settings.secrets.postmark_webhook_token,
-)
 router = APIRouter()
+logger = logging.getLogger(__name__)
+postmark_api_key_auth = APIKeyAuthenticator(
+    "X-Postmark-Webhook-Token",
+    settings.secrets.postmark_webhook_token,
+)
@router.post("/unsubscribe", summary="One Click Email Unsubscribe")
@@ -50,7 +49,7 @@ async def unsubscribe_via_one_click(token: Annotated[str, Query()]):
@router.post(
"/",
-    dependencies=[Depends(postmark_validator.get_dependency())],
+    dependencies=[Security(postmark_api_key_auth)],
summary="Handle Postmark Email Webhooks",
)
async def postmark_webhook_handler(

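The guard on the webhook route changes accordingly: APIKeyValidator exposed a dependency via get_dependency(), while APIKeyAuthenticator (added later in this diff) subclasses FastAPI's APIKeyHeader and is itself the dependency, so it can be passed straight to Security. A minimal sketch with a made-up header name and token:

```python
from fastapi import FastAPI, Security

from backend.server.utils.api_key_auth import APIKeyAuthenticator

app = FastAPI()

# Illustrative values; the real ones come from Settings().secrets.
webhook_auth = APIKeyAuthenticator(
    "X-Example-Webhook-Token",
    expected_token="super-secret-token",
)


@app.post("/webhook", dependencies=[Security(webhook_auth)])
async def webhook_handler() -> dict[str, str]:
    # Only reached when the header carries the expected token; otherwise
    # the authenticator raises a 401 before the handler runs.
    return {"status": "ok"}
```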

@@ -7,17 +7,18 @@ from typing import Annotated, Any, Sequence
import pydantic
import stripe
-from autogpt_libs.auth.middleware import auth_middleware
+from autogpt_libs.auth import get_user_id, requires_user
+from autogpt_libs.auth.jwt_utils import get_jwt_payload
from fastapi import (
APIRouter,
Body,
-    Depends,
File,
HTTPException,
Path,
Query,
Request,
Response,
+    Security,
UploadFile,
)
from starlette.status import HTTP_204_NO_CONTENT, HTTP_404_NOT_FOUND
@@ -26,20 +27,9 @@ from typing_extensions import Optional, TypedDict
import backend.server.integrations.router
import backend.server.routers.analytics
import backend.server.v2.library.db as library_db
+from backend.data import api_key as api_key_db
from backend.data import execution as execution_db
from backend.data import graph as graph_db
-from backend.data.api_key import (
-    APIKeyError,
-    APIKeyNotFoundError,
-    APIKeyPermissionError,
-    APIKeyWithoutHash,
-    generate_api_key,
-    get_api_key_by_id,
-    list_user_api_keys,
-    revoke_api_key,
-    suspend_api_key,
-    update_api_key_permissions,
-)
from backend.data.block import BlockInput, CompletedBlockOutput, get_block, get_blocks
from backend.data.credit import (
AutoTopUpConfig,
@@ -77,7 +67,6 @@ from backend.server.model import (
CreateAPIKeyRequest,
CreateAPIKeyResponse,
CreateGraph,
-    ExecuteGraphResponse,
RequestTopUp,
SetGraphActiveVersion,
TimezoneResponse,
@@ -85,7 +74,6 @@ from backend.server.model import (
UpdateTimezoneRequest,
UploadFileResponse,
)
-from backend.server.utils import get_user_id
from backend.util.clients import get_scheduler_client
from backend.util.cloud_storage import get_cloud_storage_handler
from backend.util.exceptions import GraphValidationError, NotFoundError
@@ -124,7 +112,7 @@ v1_router.include_router(
backend.server.routers.analytics.router,
prefix="/analytics",
tags=["analytics"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
@@ -137,9 +125,9 @@ v1_router.include_router(
"/auth/user",
summary="Get or create user",
tags=["auth"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_or_create_user_route(user_data: dict = Depends(auth_middleware)):
async def get_or_create_user_route(user_data: dict = Security(get_jwt_payload)):
user = await get_or_create_user(user_data)
return user.model_dump()
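Two distinct usages appear throughout this router. requires_user goes in the route decorator's dependencies list, where auth is enforced but the resolved value is discarded; get_jwt_payload (or get_user_id) is injected as a parameter when the handler needs the value itself. A condensed sketch of both patterns, with stand-in dependencies:

```python
from typing import Annotated

from fastapi import FastAPI, Security

app = FastAPI()


def get_jwt_payload() -> dict:
    # Stand-in for autogpt_libs.auth.jwt_utils.get_jwt_payload.
    return {"sub": "user-123", "email": "user@example.com"}


def requires_user(payload: Annotated[dict, Security(get_jwt_payload)]) -> dict:
    # Stand-in for autogpt_libs.auth.requires_user: rejects anonymous
    # callers and returns the authenticated user's payload.
    return payload


# Route-level: the dependency runs and can reject the request, but its
# return value never reaches the handler.
@app.get("/onboarding", dependencies=[Security(requires_user)])
async def get_onboarding() -> dict:
    return {"ok": True}


# Parameter-level: the handler receives the decoded payload to work with.
@app.get("/auth/user")
async def get_user(user_data: Annotated[dict, Security(get_jwt_payload)]) -> dict:
    return {"email": user_data["email"]}
```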
@@ -148,10 +136,10 @@ async def get_or_create_user_route(user_data: dict = Depends(auth_middleware)):
"/auth/user/email",
summary="Update user email",
tags=["auth"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def update_user_email_route(
user_id: Annotated[str, Depends(get_user_id)], email: str = Body(...)
user_id: Annotated[str, Security(get_user_id)], email: str = Body(...)
) -> dict[str, str]:
await update_user_email(user_id, email)
@@ -162,10 +150,10 @@ async def update_user_email_route(
"/auth/user/timezone",
summary="Get user timezone",
tags=["auth"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_user_timezone_route(
user_data: dict = Depends(auth_middleware),
user_data: dict = Security(get_jwt_payload),
) -> TimezoneResponse:
"""Get user timezone setting."""
user = await get_or_create_user(user_data)
@@ -176,11 +164,10 @@ async def get_user_timezone_route(
"/auth/user/timezone",
summary="Update user timezone",
tags=["auth"],
dependencies=[Depends(auth_middleware)],
response_model=TimezoneResponse,
dependencies=[Security(requires_user)],
)
async def update_user_timezone_route(
user_id: Annotated[str, Depends(get_user_id)], request: UpdateTimezoneRequest
user_id: Annotated[str, Security(get_user_id)], request: UpdateTimezoneRequest
) -> TimezoneResponse:
"""Update user timezone. The timezone should be a valid IANA timezone identifier."""
user = await update_user_timezone(user_id, str(request.timezone))
@@ -191,10 +178,10 @@ async def update_user_timezone_route(
"/auth/user/preferences",
summary="Get notification preferences",
tags=["auth"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_preferences(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> NotificationPreference:
preferences = await get_user_notification_preference(user_id)
return preferences
@@ -204,10 +191,10 @@ async def get_preferences(
"/auth/user/preferences",
summary="Update notification preferences",
tags=["auth"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def update_preferences(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
preferences: NotificationPreferenceDTO = Body(...),
) -> NotificationPreference:
output = await update_user_notification_preference(user_id, preferences)
@@ -223,9 +210,9 @@ async def update_preferences(
"/onboarding",
summary="Get onboarding status",
tags=["onboarding"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_onboarding(user_id: Annotated[str, Depends(get_user_id)]):
async def get_onboarding(user_id: Annotated[str, Security(get_user_id)]):
return await get_user_onboarding(user_id)
@@ -233,10 +220,10 @@ async def get_onboarding(user_id: Annotated[str, Depends(get_user_id)]):
"/onboarding",
summary="Update onboarding progress",
tags=["onboarding"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def update_onboarding(
user_id: Annotated[str, Depends(get_user_id)], data: UserOnboardingUpdate
user_id: Annotated[str, Security(get_user_id)], data: UserOnboardingUpdate
):
return await update_user_onboarding(user_id, data)
@@ -245,10 +232,10 @@ async def update_onboarding(
"/onboarding/agents",
summary="Get recommended agents",
tags=["onboarding"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_onboarding_agents(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
):
return await get_recommended_agents(user_id)
@@ -257,7 +244,7 @@ async def get_onboarding_agents(
"/onboarding/enabled",
summary="Check onboarding enabled",
tags=["onboarding", "public"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def is_onboarding_enabled():
return await onboarding_enabled()
@@ -272,7 +259,7 @@ async def is_onboarding_enabled():
path="/blocks",
summary="List available blocks",
tags=["blocks"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
def get_graph_blocks() -> Sequence[dict[Any, Any]]:
blocks = [block() for block in get_blocks().values()]
@@ -286,7 +273,7 @@ def get_graph_blocks() -> Sequence[dict[Any, Any]]:
path="/blocks/{block_id}/execute",
summary="Execute graph block",
tags=["blocks"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def execute_graph_block(block_id: str, data: BlockInput) -> CompletedBlockOutput:
obj = get_block(block_id)
@@ -303,10 +290,10 @@ async def execute_graph_block(block_id: str, data: BlockInput) -> CompletedBlock
path="/files/upload",
summary="Upload file to cloud storage",
tags=["files"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def upload_file(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
file: UploadFile = File(...),
provider: str = "gcs",
expiration_hours: int = 24,
@@ -394,10 +381,10 @@ async def upload_file(
path="/credits",
tags=["credits"],
summary="Get user credits",
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_user_credits(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> dict[str, int]:
return {"credits": await _user_credit_model.get_credits(user_id)}
@@ -406,10 +393,10 @@ async def get_user_credits(
path="/credits",
summary="Request credit top up",
tags=["credits"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def request_top_up(
request: RequestTopUp, user_id: Annotated[str, Depends(get_user_id)]
request: RequestTopUp, user_id: Annotated[str, Security(get_user_id)]
):
checkout_url = await _user_credit_model.top_up_intent(
user_id, request.credit_amount
@@ -421,10 +408,10 @@ async def request_top_up(
path="/credits/{transaction_key}/refund",
summary="Refund credit transaction",
tags=["credits"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def refund_top_up(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
transaction_key: str,
metadata: dict[str, str],
) -> int:
@@ -435,9 +422,9 @@ async def refund_top_up(
path="/credits",
summary="Fulfill checkout session",
tags=["credits"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def fulfill_checkout(user_id: Annotated[str, Depends(get_user_id)]):
async def fulfill_checkout(user_id: Annotated[str, Security(get_user_id)]):
await _user_credit_model.fulfill_checkout(user_id=user_id)
return Response(status_code=200)
@@ -446,10 +433,10 @@ async def fulfill_checkout(user_id: Annotated[str, Depends(get_user_id)]):
path="/credits/auto-top-up",
summary="Configure auto top up",
tags=["credits"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def configure_user_auto_top_up(
request: AutoTopUpConfig, user_id: Annotated[str, Depends(get_user_id)]
request: AutoTopUpConfig, user_id: Annotated[str, Security(get_user_id)]
) -> str:
if request.threshold < 0:
raise ValueError("Threshold must be greater than 0")
@@ -475,10 +462,10 @@ async def configure_user_auto_top_up(
path="/credits/auto-top-up",
summary="Get auto top up",
tags=["credits"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_user_auto_top_up(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> AutoTopUpConfig:
return await get_auto_top_up(user_id)
@@ -528,10 +515,10 @@ async def stripe_webhook(request: Request):
path="/credits/manage",
tags=["credits"],
summary="Manage payment methods",
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def manage_payment_method(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> dict[str, str]:
return {"url": await _user_credit_model.create_billing_portal_session(user_id)}
@@ -540,10 +527,10 @@ async def manage_payment_method(
path="/credits/transactions",
tags=["credits"],
summary="Get credit history",
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_credit_history(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
transaction_time: datetime | None = None,
transaction_type: str | None = None,
transaction_count_limit: int = 100,
@@ -563,10 +550,10 @@ async def get_credit_history(
path="/credits/refunds",
tags=["credits"],
summary="Get refund requests",
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_refund_requests(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> list[RefundRequest]:
return await _user_credit_model.get_refund_requests(user_id)
@@ -584,10 +571,10 @@ class DeleteGraphResponse(TypedDict):
path="/graphs",
summary="List user graphs",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def list_graphs(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> Sequence[graph_db.GraphMeta]:
return await graph_db.list_graphs(filter_by="active", user_id=user_id)
@@ -596,17 +583,17 @@ async def list_graphs(
path="/graphs/{graph_id}",
summary="Get specific graph",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
@v1_router.get(
path="/graphs/{graph_id}/versions/{version}",
summary="Get graph version",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_graph(
graph_id: str,
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
version: int | None = None,
for_export: bool = False,
) -> graph_db.GraphModel:
@@ -626,10 +613,10 @@ async def get_graph(
path="/graphs/{graph_id}/versions",
summary="Get all graph versions",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_graph_all_versions(
graph_id: str, user_id: Annotated[str, Depends(get_user_id)]
graph_id: str, user_id: Annotated[str, Security(get_user_id)]
) -> Sequence[graph_db.GraphModel]:
graphs = await graph_db.get_graph_all_versions(graph_id, user_id=user_id)
if not graphs:
@@ -641,11 +628,11 @@ async def get_graph_all_versions(
path="/graphs",
summary="Create new graph",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def create_new_graph(
create_graph: CreateGraph,
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> graph_db.GraphModel:
graph = graph_db.make_graph_model(create_graph.graph, user_id)
graph.reassign_ids(user_id=user_id, reassign_graph_id=True)
@@ -662,10 +649,10 @@ async def create_new_graph(
path="/graphs/{graph_id}",
summary="Delete graph permanently",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def delete_graph(
graph_id: str, user_id: Annotated[str, Depends(get_user_id)]
graph_id: str, user_id: Annotated[str, Security(get_user_id)]
) -> DeleteGraphResponse:
if active_version := await graph_db.get_graph(graph_id, user_id=user_id):
await on_graph_deactivate(active_version, user_id=user_id)
@@ -677,12 +664,12 @@ async def delete_graph(
path="/graphs/{graph_id}",
summary="Update graph version",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def update_graph(
graph_id: str,
graph: graph_db.Graph,
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> graph_db.GraphModel:
# Sanity check
if graph.id and graph.id != graph_id:
@@ -733,12 +720,12 @@ async def update_graph(
path="/graphs/{graph_id}/versions/active",
summary="Set active graph version",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def set_graph_active_version(
graph_id: str,
request_body: SetGraphActiveVersion,
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
):
new_active_version = request_body.active_graph_version
new_active_graph = await graph_db.get_graph(
@@ -772,18 +759,18 @@ async def set_graph_active_version(
path="/graphs/{graph_id}/execute/{graph_version}",
summary="Execute graph agent",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def execute_graph(
graph_id: str,
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
inputs: Annotated[dict[str, Any], Body(..., embed=True, default_factory=dict)],
credentials_inputs: Annotated[
dict[str, CredentialsMetaInput], Body(..., embed=True, default_factory=dict)
],
graph_version: Optional[int] = None,
preset_id: Optional[str] = None,
-) -> ExecuteGraphResponse:
+) -> execution_db.GraphExecutionMeta:
current_balance = await _user_credit_model.get_credits(user_id)
if current_balance <= 0:
raise HTTPException(
@@ -792,7 +779,7 @@ async def execute_graph(
)
try:
-        graph_exec = await execution_utils.add_graph_execution(
+        return await execution_utils.add_graph_execution(
graph_id=graph_id,
user_id=user_id,
inputs=inputs,
@@ -800,7 +787,6 @@ async def execute_graph(
graph_version=graph_version,
graph_credentials_inputs=credentials_inputs,
)
-    return ExecuteGraphResponse(graph_exec_id=graph_exec.id)
except GraphValidationError as e:
# Return structured validation errors that the frontend can parse
raise HTTPException(
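This change is visible to API clients: the endpoint used to wrap the execution in ExecuteGraphResponse, exposing only graph_exec_id, and now returns the serialized GraphExecutionMeta record. A hedged sketch of the client-side difference (URL, port, and auth header are illustrative):

```python
import requests  # any HTTP client works; requests is just for illustration

resp = requests.post(
    "http://localhost:8000/api/graphs/<graph-id>/execute/1",
    json={"inputs": {}, "credentials_inputs": {}},
    headers={"Authorization": "Bearer <token>"},
)
payload = resp.json()

# Before: the body was {"graph_exec_id": "..."}.
# After: the body is the full GraphExecutionMeta; its primary key is the
# "id" field (graph_exec.id in the removed wrapper line above).
execution_id = payload["id"]
```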
@@ -818,10 +804,10 @@ async def execute_graph(
path="/graphs/{graph_id}/executions/{graph_exec_id}/stop",
summary="Stop graph execution",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def stop_graph_run(
graph_id: str, graph_exec_id: str, user_id: Annotated[str, Depends(get_user_id)]
graph_id: str, graph_exec_id: str, user_id: Annotated[str, Security(get_user_id)]
) -> execution_db.GraphExecutionMeta | None:
res = await _stop_graph_run(
user_id=user_id,
@@ -860,10 +846,10 @@ async def _stop_graph_run(
path="/executions",
summary="List all executions",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def list_graphs_executions(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> list[execution_db.GraphExecutionMeta]:
return await execution_db.get_graph_executions(user_id=user_id)
@@ -872,11 +858,11 @@ async def list_graphs_executions(
path="/graphs/{graph_id}/executions",
summary="List graph executions",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def list_graph_executions(
graph_id: str,
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
page: int = Query(1, ge=1, description="Page number (1-indexed)"),
page_size: int = Query(
25, ge=1, le=100, description="Number of executions per page"
@@ -894,12 +880,12 @@ async def list_graph_executions(
path="/graphs/{graph_id}/executions/{graph_exec_id}",
summary="Get execution details",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def get_graph_execution(
graph_id: str,
graph_exec_id: str,
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> execution_db.GraphExecution | execution_db.GraphExecutionWithNodes:
graph = await graph_db.get_graph(graph_id=graph_id, user_id=user_id)
if not graph:
@@ -924,12 +910,12 @@ async def get_graph_execution(
path="/executions/{graph_exec_id}",
summary="Delete graph execution",
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
status_code=HTTP_204_NO_CONTENT,
)
async def delete_graph_execution(
graph_exec_id: str,
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> None:
await execution_db.delete_graph_execution(
graph_exec_id=graph_exec_id, user_id=user_id
@@ -953,10 +939,10 @@ class ScheduleCreationRequest(pydantic.BaseModel):
path="/graphs/{graph_id}/schedules",
summary="Create execution schedule",
tags=["schedules"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def create_graph_execution_schedule(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
graph_id: str = Path(..., description="ID of the graph to schedule"),
schedule_params: ScheduleCreationRequest = Body(),
) -> scheduler.GraphExecutionJobInfo:
@@ -1006,10 +992,10 @@ async def create_graph_execution_schedule(
path="/graphs/{graph_id}/schedules",
summary="List execution schedules for a graph",
tags=["schedules"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def list_graph_execution_schedules(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
graph_id: str = Path(),
) -> list[scheduler.GraphExecutionJobInfo]:
schedules = await get_scheduler_client().get_execution_schedules(
@@ -1035,10 +1021,10 @@ async def list_graph_execution_schedules(
path="/schedules",
summary="List execution schedules for a user",
tags=["schedules"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def list_all_graphs_execution_schedules(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
) -> list[scheduler.GraphExecutionJobInfo]:
schedules = await get_scheduler_client().get_execution_schedules(user_id=user_id)
@@ -1060,10 +1046,10 @@ async def list_all_graphs_execution_schedules(
path="/schedules/{schedule_id}",
summary="Delete execution schedule",
tags=["schedules"],
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
)
async def delete_graph_execution_schedule(
user_id: Annotated[str, Depends(get_user_id)],
user_id: Annotated[str, Security(get_user_id)],
schedule_id: str = Path(..., description="ID of the schedule to delete"),
) -> dict[str, Any]:
try:
@@ -1084,159 +1070,89 @@ async def delete_graph_execution_schedule(
@v1_router.post(
"/api-keys",
summary="Create new API key",
-    response_model=CreateAPIKeyResponse,
     tags=["api-keys"],
-    dependencies=[Depends(auth_middleware)],
+    dependencies=[Security(requires_user)],
)
async def create_api_key(
-    request: CreateAPIKeyRequest, user_id: Annotated[str, Depends(get_user_id)]
+    request: CreateAPIKeyRequest, user_id: Annotated[str, Security(get_user_id)]
) -> CreateAPIKeyResponse:
"""Create a new API key"""
-    try:
-        api_key, plain_text = await generate_api_key(
-            name=request.name,
-            user_id=user_id,
-            permissions=request.permissions,
-            description=request.description,
-        )
-        return CreateAPIKeyResponse(api_key=api_key, plain_text_key=plain_text)
-    except APIKeyError as e:
-        logger.error(
-            "Could not create API key for user %s: %s. Review input and permissions.",
-            user_id,
-            e,
-        )
-        raise HTTPException(
-            status_code=400,
-            detail={"message": str(e), "hint": "Verify request payload and try again."},
-        )
+    api_key_info, plain_text_key = await api_key_db.create_api_key(
+        name=request.name,
+        user_id=user_id,
+        permissions=request.permissions,
+        description=request.description,
+    )
+    return CreateAPIKeyResponse(api_key=api_key_info, plain_text_key=plain_text_key)
@v1_router.get(
"/api-keys",
summary="List user API keys",
-    response_model=list[APIKeyWithoutHash] | dict[str, str],
     tags=["api-keys"],
-    dependencies=[Depends(auth_middleware)],
+    dependencies=[Security(requires_user)],
)
async def get_api_keys(
-    user_id: Annotated[str, Depends(get_user_id)],
-) -> list[APIKeyWithoutHash]:
+    user_id: Annotated[str, Security(get_user_id)],
+) -> list[api_key_db.APIKeyInfo]:
"""List all API keys for the user"""
-    try:
-        return await list_user_api_keys(user_id)
-    except APIKeyError as e:
-        logger.error("Failed to list API keys for user %s: %s", user_id, e)
-        raise HTTPException(
-            status_code=400,
-            detail={"message": str(e), "hint": "Check API key service availability."},
-        )
+    return await api_key_db.list_user_api_keys(user_id)
@v1_router.get(
"/api-keys/{key_id}",
summary="Get specific API key",
-    response_model=APIKeyWithoutHash,
     tags=["api-keys"],
-    dependencies=[Depends(auth_middleware)],
+    dependencies=[Security(requires_user)],
)
async def get_api_key(
-    key_id: str, user_id: Annotated[str, Depends(get_user_id)]
-) -> APIKeyWithoutHash:
+    key_id: str, user_id: Annotated[str, Security(get_user_id)]
+) -> api_key_db.APIKeyInfo:
"""Get a specific API key"""
-    try:
-        api_key = await get_api_key_by_id(key_id, user_id)
-        if not api_key:
-            raise HTTPException(status_code=404, detail="API key not found")
-        return api_key
-    except APIKeyError as e:
-        logger.error("Error retrieving API key %s for user %s: %s", key_id, user_id, e)
-        raise HTTPException(
-            status_code=400,
-            detail={"message": str(e), "hint": "Ensure the key ID is correct."},
-        )
+    api_key = await api_key_db.get_api_key_by_id(key_id, user_id)
+    if not api_key:
+        raise HTTPException(status_code=404, detail="API key not found")
+    return api_key
@v1_router.delete(
"/api-keys/{key_id}",
summary="Revoke API key",
-    response_model=APIKeyWithoutHash,
     tags=["api-keys"],
-    dependencies=[Depends(auth_middleware)],
+    dependencies=[Security(requires_user)],
)
async def delete_api_key(
-    key_id: str, user_id: Annotated[str, Depends(get_user_id)]
-) -> Optional[APIKeyWithoutHash]:
+    key_id: str, user_id: Annotated[str, Security(get_user_id)]
+) -> api_key_db.APIKeyInfo:
"""Revoke an API key"""
-    try:
-        return await revoke_api_key(key_id, user_id)
-    except APIKeyNotFoundError:
-        raise HTTPException(status_code=404, detail="API key not found")
-    except APIKeyPermissionError:
-        raise HTTPException(status_code=403, detail="Permission denied")
-    except APIKeyError as e:
-        logger.error("Failed to revoke API key %s for user %s: %s", key_id, user_id, e)
-        raise HTTPException(
-            status_code=400,
-            detail={
-                "message": str(e),
-                "hint": "Verify permissions or try again later.",
-            },
-        )
+    return await api_key_db.revoke_api_key(key_id, user_id)
@v1_router.post(
"/api-keys/{key_id}/suspend",
summary="Suspend API key",
-    response_model=APIKeyWithoutHash,
     tags=["api-keys"],
-    dependencies=[Depends(auth_middleware)],
+    dependencies=[Security(requires_user)],
)
async def suspend_key(
-    key_id: str, user_id: Annotated[str, Depends(get_user_id)]
-) -> Optional[APIKeyWithoutHash]:
+    key_id: str, user_id: Annotated[str, Security(get_user_id)]
+) -> api_key_db.APIKeyInfo:
"""Suspend an API key"""
-    try:
-        return await suspend_api_key(key_id, user_id)
-    except APIKeyNotFoundError:
-        raise HTTPException(status_code=404, detail="API key not found")
-    except APIKeyPermissionError:
-        raise HTTPException(status_code=403, detail="Permission denied")
-    except APIKeyError as e:
-        logger.error("Failed to suspend API key %s for user %s: %s", key_id, user_id, e)
-        raise HTTPException(
-            status_code=400,
-            detail={"message": str(e), "hint": "Check user permissions and retry."},
-        )
+    return await api_key_db.suspend_api_key(key_id, user_id)
@v1_router.put(
"/api-keys/{key_id}/permissions",
summary="Update key permissions",
-    response_model=APIKeyWithoutHash,
     tags=["api-keys"],
-    dependencies=[Depends(auth_middleware)],
+    dependencies=[Security(requires_user)],
)
async def update_permissions(
key_id: str,
request: UpdatePermissionsRequest,
-    user_id: Annotated[str, Depends(get_user_id)],
-) -> Optional[APIKeyWithoutHash]:
+    user_id: Annotated[str, Security(get_user_id)],
+) -> api_key_db.APIKeyInfo:
"""Update API key permissions"""
-    try:
-        return await update_api_key_permissions(key_id, user_id, request.permissions)
-    except APIKeyNotFoundError:
-        raise HTTPException(status_code=404, detail="API key not found")
-    except APIKeyPermissionError:
-        raise HTTPException(status_code=403, detail="Permission denied")
-    except APIKeyError as e:
-        logger.error(
-            "Failed to update permissions for API key %s of user %s: %s",
-            key_id,
-            user_id,
-            e,
-        )
-        raise HTTPException(
-            status_code=400,
-            detail={"message": str(e), "hint": "Ensure permissions list is valid."},
-        )
+    return await api_key_db.update_api_key_permissions(
+        key_id, user_id, request.permissions
+    )

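The net effect of this rewrite: routes call the backend.data.api_key module (imported as api_key_db) directly, per-route try/except blocks are gone, and error translation happens once in app.py, where NotFoundError and NotAuthorizedError were registered as 404 and 403 earlier in this diff. Presumably the db helpers now raise those shared exceptions instead of the removed APIKeyNotFoundError and APIKeyPermissionError. A sketch of the resulting route shape under that assumption:

```python
from typing import Annotated

from autogpt_libs.auth import get_user_id
from fastapi import APIRouter, Security

# Assumed (per the rewrite above) to raise the shared domain exceptions
# internally rather than returning error codes.
from backend.data import api_key as api_key_db

router = APIRouter()


@router.delete("/api-keys/{key_id}")
async def delete_api_key(
    key_id: str, user_id: Annotated[str, Security(get_user_id)]
) -> api_key_db.APIKeyInfo:
    # A missing or foreign key surfaces as a domain exception, which the
    # app-level handlers convert to a 404 or 403 response.
    return await api_key_db.revoke_api_key(key_id, user_id)
```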

@@ -2,7 +2,6 @@ import json
from io import BytesIO
from unittest.mock import AsyncMock, Mock, patch
-import autogpt_libs.auth.depends
import fastapi
import fastapi.testclient
import pytest
@@ -14,9 +13,7 @@ from pytest_snapshot.plugin import Snapshot
import backend.server.routers.v1 as v1_routes
from backend.data.credit import AutoTopUpConfig
from backend.data.graph import GraphModel
-from backend.server.conftest import TEST_USER_ID
 from backend.server.routers.v1 import upload_file
-from backend.server.utils import get_user_id
app = fastapi.FastAPI()
app.include_router(v1_routes.v1_router)
@@ -24,31 +21,26 @@ app.include_router(v1_routes.v1_router)
client = fastapi.testclient.TestClient(app)
-def override_auth_middleware(request: fastapi.Request) -> dict[str, str]:
-    """Override auth middleware for testing"""
-    return {"sub": TEST_USER_ID, "role": "user", "email": "test@example.com"}
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
+    from autogpt_libs.auth.jwt_utils import get_jwt_payload
-    def override_get_user_id() -> str:
-        """Override get_user_id for testing"""
-        return TEST_USER_ID
-    app.dependency_overrides[autogpt_libs.auth.middleware.auth_middleware] = (
-        override_auth_middleware
-    )
-    app.dependency_overrides[get_user_id] = override_get_user_id
+    app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
# Auth endpoints tests
def test_get_or_create_user_route(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test get or create user endpoint"""
mock_user = Mock()
mock_user.model_dump.return_value = {
"id": TEST_USER_ID,
"id": test_user_id,
"email": "test@example.com",
"name": "Test User",
}
@@ -263,6 +255,7 @@ def test_get_auto_top_up(
def test_get_graphs(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test get graphs endpoint"""
mock_graph = GraphModel(
@@ -271,7 +264,7 @@ def test_get_graphs(
is_active=True,
name="Test Graph",
description="A test graph",
user_id="test-user-id",
user_id=test_user_id,
)
mocker.patch(
@@ -296,6 +289,7 @@ def test_get_graphs(
def test_get_graph(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test get single graph endpoint"""
mock_graph = GraphModel(
@@ -304,7 +298,7 @@ def test_get_graph(
is_active=True,
name="Test Graph",
description="A test graph",
user_id="test-user-id",
user_id=test_user_id,
)
mocker.patch(
@@ -343,6 +337,7 @@ def test_get_graph_not_found(
def test_delete_graph(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
test_user_id: str,
) -> None:
"""Test delete graph endpoint"""
# Mock active graph for deactivation
@@ -352,7 +347,7 @@ def test_delete_graph(
is_active=True,
name="Test Graph",
description="A test graph",
user_id="test-user-id",
user_id=test_user_id,
)
mocker.patch(
@@ -399,7 +394,7 @@ def test_missing_required_field() -> None:
@pytest.mark.asyncio
async def test_upload_file_success():
async def test_upload_file_success(test_user_id: str):
"""Test successful file upload."""
# Create mock upload file
file_content = b"test file content"
@@ -425,7 +420,7 @@ async def test_upload_file_success():
result = await upload_file(
file=upload_file_mock,
user_id="test-user-123",
user_id=test_user_id,
provider="gcs",
expiration_hours=24,
)
@@ -446,12 +441,12 @@ async def test_upload_file_success():
filename="test.txt",
provider="gcs",
expiration_hours=24,
user_id="test-user-123",
user_id=test_user_id,
)
@pytest.mark.asyncio
async def test_upload_file_no_filename():
async def test_upload_file_no_filename(test_user_id: str):
"""Test file upload without filename."""
file_content = b"test content"
file_obj = BytesIO(file_content)
@@ -476,7 +471,7 @@ async def test_upload_file_no_filename():
upload_file_mock.read = AsyncMock(return_value=file_content)
result = await upload_file(file=upload_file_mock, user_id="test-user-123")
result = await upload_file(file=upload_file_mock, user_id=test_user_id)
assert result.file_name == "uploaded_file"
assert result.content_type == "application/octet-stream"
@@ -486,7 +481,7 @@ async def test_upload_file_no_filename():
@pytest.mark.asyncio
async def test_upload_file_invalid_expiration():
async def test_upload_file_invalid_expiration(test_user_id: str):
"""Test file upload with invalid expiration hours."""
file_obj = BytesIO(b"content")
upload_file_mock = UploadFile(
@@ -498,7 +493,7 @@ async def test_upload_file_invalid_expiration():
# Test expiration too short
with pytest.raises(HTTPException) as exc_info:
await upload_file(
file=upload_file_mock, user_id="test-user-123", expiration_hours=0
file=upload_file_mock, user_id=test_user_id, expiration_hours=0
)
assert exc_info.value.status_code == 400
assert "between 1 and 48" in exc_info.value.detail
@@ -506,14 +501,14 @@ async def test_upload_file_invalid_expiration():
# Test expiration too long
with pytest.raises(HTTPException) as exc_info:
await upload_file(
file=upload_file_mock, user_id="test-user-123", expiration_hours=49
file=upload_file_mock, user_id=test_user_id, expiration_hours=49
)
assert exc_info.value.status_code == 400
assert "between 1 and 48" in exc_info.value.detail
@pytest.mark.asyncio
async def test_upload_file_virus_scan_failure():
async def test_upload_file_virus_scan_failure(test_user_id: str):
"""Test file upload when virus scan fails."""
file_content = b"malicious content"
file_obj = BytesIO(file_content)
@@ -530,11 +525,11 @@ async def test_upload_file_virus_scan_failure():
upload_file_mock.read = AsyncMock(return_value=file_content)
with pytest.raises(RuntimeError, match="Virus detected!"):
await upload_file(file=upload_file_mock, user_id="test-user-123")
await upload_file(file=upload_file_mock, user_id=test_user_id)
@pytest.mark.asyncio
async def test_upload_file_cloud_storage_failure():
async def test_upload_file_cloud_storage_failure(test_user_id: str):
"""Test file upload when cloud storage fails."""
file_content = b"test content"
file_obj = BytesIO(file_content)
@@ -556,11 +551,11 @@ async def test_upload_file_cloud_storage_failure():
upload_file_mock.read = AsyncMock(return_value=file_content)
with pytest.raises(RuntimeError, match="Storage error!"):
await upload_file(file=upload_file_mock, user_id="test-user-123")
await upload_file(file=upload_file_mock, user_id=test_user_id)
@pytest.mark.asyncio
async def test_upload_file_size_limit_exceeded():
async def test_upload_file_size_limit_exceeded(test_user_id: str):
"""Test file upload when file size exceeds the limit."""
# Create a file that exceeds the default 256MB limit
large_file_content = b"x" * (257 * 1024 * 1024) # 257MB
@@ -574,14 +569,14 @@ async def test_upload_file_size_limit_exceeded():
upload_file_mock.read = AsyncMock(return_value=large_file_content)
with pytest.raises(HTTPException) as exc_info:
await upload_file(file=upload_file_mock, user_id="test-user-123")
await upload_file(file=upload_file_mock, user_id=test_user_id)
assert exc_info.value.status_code == 400
assert "exceeds the maximum allowed size of 256MB" in exc_info.value.detail
@pytest.mark.asyncio
async def test_upload_file_gcs_not_configured_fallback():
async def test_upload_file_gcs_not_configured_fallback(test_user_id: str):
"""Test file upload fallback to base64 when GCS is not configured."""
file_content = b"test file content"
file_obj = BytesIO(file_content)
@@ -602,7 +597,7 @@ async def test_upload_file_gcs_not_configured_fallback():
upload_file_mock.read = AsyncMock(return_value=file_content)
result = await upload_file(file=upload_file_mock, user_id="test-user-123")
result = await upload_file(file=upload_file_mock, user_id=test_user_id)
# Verify fallback behavior
assert result.file_name == "test.txt"


@@ -1,144 +0,0 @@
"""Common test fixtures with proper setup and teardown."""
from contextlib import asynccontextmanager
from typing import AsyncGenerator
from unittest.mock import Mock, patch
import pytest
from prisma import Prisma
@pytest.fixture
async def test_db_connection() -> AsyncGenerator[Prisma, None]:
"""Provide a test database connection with proper cleanup.
This fixture ensures the database connection is properly
closed after the test, even if the test fails.
"""
db = Prisma()
try:
await db.connect()
yield db
finally:
await db.disconnect()
@pytest.fixture
def mock_transaction():
"""Mock database transaction with proper async context manager."""
@asynccontextmanager
async def mock_context(*args, **kwargs):
yield None
with patch("backend.data.db.locked_transaction", side_effect=mock_context) as mock:
yield mock
@pytest.fixture
def isolated_app_state():
"""Fixture that ensures app state is isolated between tests."""
# Example: Save original state
# from backend.server.app import app
# original_overrides = app.dependency_overrides.copy()
# try:
# yield app
# finally:
# # Restore original state
# app.dependency_overrides = original_overrides
# For now, just yield None as this is an example
yield None
@pytest.fixture
def cleanup_files():
"""Fixture to track and cleanup files created during tests."""
created_files = []
def track_file(filepath: str):
created_files.append(filepath)
yield track_file
# Cleanup
import os
for filepath in created_files:
try:
if os.path.exists(filepath):
os.remove(filepath)
except Exception as e:
print(f"Warning: Failed to cleanup {filepath}: {e}")
@pytest.fixture
async def async_mock_with_cleanup():
"""Create async mocks that are properly cleaned up."""
mocks = []
def create_mock(**kwargs):
mock = Mock(**kwargs)
mocks.append(mock)
return mock
yield create_mock
# Reset all mocks
for mock in mocks:
mock.reset_mock()
class TestDatabaseIsolation:
"""Example of proper test isolation with database operations."""
@pytest.fixture(autouse=True)
async def setup_and_teardown(self, test_db_connection):
"""Setup and teardown for each test method."""
# Setup: Clear test data
await test_db_connection.user.delete_many(
where={"email": {"contains": "@test.example"}}
)
yield
# Teardown: Clear test data again
await test_db_connection.user.delete_many(
where={"email": {"contains": "@test.example"}}
)
@pytest.fixture(scope="session")
async def test_create_user(self, test_db_connection):
"""Test that demonstrates proper isolation."""
# This test has access to a clean database
user = await test_db_connection.user.create(
data={
"id": "test-user-id",
"email": "test@test.example",
"name": "Test User",
}
)
assert user.email == "test@test.example"
# User will be cleaned up automatically
@pytest.fixture(scope="function") # Explicitly use function scope
def reset_singleton_state():
"""Reset singleton state between tests."""
# Example: Reset a singleton instance
# from backend.data.some_singleton import SingletonClass
# # Save original state
# original_instance = getattr(SingletonClass, "_instance", None)
# try:
# # Clear singleton
# SingletonClass._instance = None
# yield
# finally:
# # Restore original state
# SingletonClass._instance = original_instance
# For now, just yield None as this is an example
yield None


@@ -1,74 +0,0 @@
"""Common test utilities and constants for server tests."""
from typing import Any, Dict
from unittest.mock import Mock
import pytest
# Test ID constants
TEST_USER_ID = "test-user-id"
ADMIN_USER_ID = "admin-user-id"
TARGET_USER_ID = "target-user-id"
# Common test data constants
FIXED_TIMESTAMP = "2024-01-01T00:00:00Z"
TRANSACTION_UUID = "transaction-123-uuid"
METRIC_UUID = "metric-123-uuid"
ANALYTICS_UUID = "analytics-123-uuid"
def create_mock_with_id(mock_id: str) -> Mock:
"""Create a mock object with an id attribute.
Args:
mock_id: The ID value to set on the mock
Returns:
Mock object with id attribute set
"""
return Mock(id=mock_id)
def assert_status_and_parse_json(
response: Any, expected_status: int = 200
) -> Dict[str, Any]:
"""Assert response status and return parsed JSON.
Args:
response: The HTTP response object
expected_status: Expected status code (default: 200)
Returns:
Parsed JSON response data
Raises:
AssertionError: If status code doesn't match expected
"""
assert (
response.status_code == expected_status
), f"Expected status {expected_status}, got {response.status_code}: {response.text}"
return response.json()
@pytest.mark.parametrize(
"metric_value,metric_name,data_string",
[
(100, "api_calls_count", "external_api"),
(0, "error_count", "no_errors"),
(-5.2, "temperature_delta", "cooling"),
(1.23456789, "precision_test", "float_precision"),
(999999999, "large_number", "max_value"),
],
)
def parametrized_metric_values_decorator(func):
"""Decorator for parametrized metric value tests."""
return pytest.mark.parametrize(
"metric_value,metric_name,data_string",
[
(100, "api_calls_count", "external_api"),
(0, "error_count", "no_errors"),
(-5.2, "temperature_delta", "cooling"),
(1.23456789, "precision_test", "float_precision"),
(999999999, "large_number", "max_value"),
],
)(func)


@@ -1,11 +0,0 @@
from autogpt_libs.auth.depends import requires_user
from autogpt_libs.auth.models import User
from fastapi import Depends
from backend.util.settings import Settings
settings = Settings()
def get_user_id(user: User = Depends(requires_user)) -> str:
return user.user_id


@@ -0,0 +1,120 @@
"""
API Key authentication utilities for FastAPI applications.
"""
import inspect
import logging
import secrets
from typing import Any, Awaitable, Callable, Optional
from fastapi import HTTPException, Request
from fastapi.security import APIKeyHeader
from starlette.status import HTTP_401_UNAUTHORIZED
from backend.util.exceptions import MissingConfigError
logger = logging.getLogger(__name__)
class APIKeyAuthenticator(APIKeyHeader):
"""
Configurable API key authenticator for FastAPI applications,
with support for custom validation functions.
This class provides a flexible way to implement API key authentication with optional
custom validation logic. It can be used for simple token matching
or more complex validation scenarios like database lookups.
Examples:
Simple token validation:
```python
api_key_auth = APIKeyAuthenticator(
header_name="X-API-Key",
expected_token="your-secret-token"
)
@app.get("/protected", dependencies=[Security(api_key_auth)])
def protected_endpoint():
return {"message": "Access granted"}
```
Custom validation with database lookup:
```python
async def validate_with_db(api_key: str):
api_key_obj = await db.get_api_key(api_key)
return api_key_obj if api_key_obj and api_key_obj.is_active else None
api_key_auth = APIKeyAuthenticator(
header_name="X-API-Key",
validator=validate_with_db
)
```
Args:
header_name (str): The name of the header containing the API key
expected_token (Optional[str]): The expected API key value for simple token matching
validator (Optional[Callable]): Custom validation function that takes an API key
string and returns a truthy value if and only if the passed string is a
valid API key. Can be async.
        status_if_missing (int): HTTP status code to use when the API key is missing or fails validation
message_if_invalid (str): Error message to return when validation fails
"""
def __init__(
self,
header_name: str,
expected_token: Optional[str] = None,
validator: Optional[
Callable[[str], Any] | Callable[[str], Awaitable[Any]]
] = None,
status_if_missing: int = HTTP_401_UNAUTHORIZED,
message_if_invalid: str = "Invalid API key",
):
super().__init__(
name=header_name,
scheme_name=f"{__class__.__name__}-{header_name}",
auto_error=False,
)
self.expected_token = expected_token
self.custom_validator = validator
self.status_if_missing = status_if_missing
self.message_if_invalid = message_if_invalid
async def __call__(self, request: Request) -> Any:
api_key = await super().__call__(request)
if api_key is None:
raise HTTPException(
status_code=self.status_if_missing, detail="No API key in request"
)
# Use custom validation if provided, otherwise use default equality check
validator = self.custom_validator or self.default_validator
result = (
await validator(api_key)
if inspect.iscoroutinefunction(validator)
else validator(api_key)
)
if not result:
raise HTTPException(
status_code=self.status_if_missing, detail=self.message_if_invalid
)
# Store validation result in request state if it's not just a boolean
if result is not True:
request.state.api_key = result
return result
async def default_validator(self, api_key: str) -> bool:
if not self.expected_token:
raise MissingConfigError(
f"{self.__class__.__name__}.expected_token is not set; "
"either specify it or provide a custom validator"
)
try:
return secrets.compare_digest(api_key, self.expected_token)
except TypeError as e:
# If value is not an ASCII string, compare_digest raises a TypeError
logger.warning(f"{self.model.name} API key check failed: {e}")
return False

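One subtlety worth calling out: when a custom validator returns something richer than True, __call__ stashes it on request.state.api_key, so route code can consume the validated record. A small sketch building on the object-returning validator style that the tests below exercise:

```python
from fastapi import FastAPI, Request, Security

from backend.server.utils.api_key_auth import APIKeyAuthenticator


async def lookup_api_key(api_key: str):
    # Illustrative lookup; a real validator would hit the database.
    if api_key == "user-key":
        return {"user_id": "123", "permissions": ["read", "write"]}
    return None  # falsy result -> 401 from the authenticator


auth = APIKeyAuthenticator("X-API-Key", validator=lookup_api_key)
app = FastAPI()


@app.get("/whoami", dependencies=[Security(auth)])
async def whoami(request: Request) -> dict:
    # __call__ stored the validator's non-boolean result here.
    record = request.state.api_key
    return {"user_id": record["user_id"]}
```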

@@ -0,0 +1,537 @@
"""
Unit tests for APIKeyAuthenticator class.
"""
from unittest.mock import Mock, patch
import pytest
from fastapi import HTTPException, Request
from starlette.status import HTTP_401_UNAUTHORIZED, HTTP_403_FORBIDDEN
from backend.server.utils.api_key_auth import APIKeyAuthenticator
from backend.util.exceptions import MissingConfigError
@pytest.fixture
def mock_request():
"""Create a mock request object."""
request = Mock(spec=Request)
request.state = Mock()
request.headers = {}
return request
@pytest.fixture
def api_key_auth():
"""Create a basic APIKeyAuthenticator instance."""
return APIKeyAuthenticator(
header_name="X-API-Key", expected_token="test-secret-token"
)
@pytest.fixture
def api_key_auth_custom_validator():
"""Create APIKeyAuthenticator with custom validator."""
def custom_validator(api_key: str) -> bool:
return api_key == "custom-valid-key"
return APIKeyAuthenticator(header_name="X-API-Key", validator=custom_validator)
@pytest.fixture
def api_key_auth_async_validator():
"""Create APIKeyAuthenticator with async custom validator."""
async def async_validator(api_key: str) -> bool:
return api_key == "async-valid-key"
return APIKeyAuthenticator(header_name="X-API-Key", validator=async_validator)
@pytest.fixture
def api_key_auth_object_validator():
"""Create APIKeyAuthenticator that returns objects from validator."""
async def object_validator(api_key: str):
if api_key == "user-key":
return {"user_id": "123", "permissions": ["read", "write"]}
return None
return APIKeyAuthenticator(header_name="X-API-Key", validator=object_validator)
# ========== Basic Initialization Tests ========== #
def test_init_with_expected_token():
"""Test initialization with expected token."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="test-token")
assert auth.model.name == "X-API-Key"
assert auth.expected_token == "test-token"
assert auth.custom_validator is None
assert auth.status_if_missing == HTTP_401_UNAUTHORIZED
assert auth.message_if_invalid == "Invalid API key"
def test_init_with_custom_validator():
"""Test initialization with custom validator."""
def validator(key: str) -> bool:
return True
auth = APIKeyAuthenticator(header_name="Authorization", validator=validator)
assert auth.model.name == "Authorization"
assert auth.expected_token is None
assert auth.custom_validator == validator
assert auth.status_if_missing == HTTP_401_UNAUTHORIZED
assert auth.message_if_invalid == "Invalid API key"
def test_init_with_custom_parameters():
"""Test initialization with custom status and message."""
auth = APIKeyAuthenticator(
header_name="X-Custom-Key",
expected_token="token",
status_if_missing=HTTP_403_FORBIDDEN,
message_if_invalid="Access denied",
)
assert auth.model.name == "X-Custom-Key"
assert auth.status_if_missing == HTTP_403_FORBIDDEN
assert auth.message_if_invalid == "Access denied"
def test_scheme_name_generation():
"""Test that scheme_name is generated correctly."""
auth = APIKeyAuthenticator(header_name="X-Custom-Header", expected_token="token")
assert auth.scheme_name == "APIKeyAuthenticator-X-Custom-Header"
# ========== Authentication Flow Tests ========== #
@pytest.mark.asyncio
async def test_api_key_missing(api_key_auth, mock_request):
"""Test behavior when API key is missing from request."""
# No header is set on the mock request, so the parent class returns None (no API key)
with pytest.raises(HTTPException) as exc_info:
await api_key_auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "No API key in request"
@pytest.mark.asyncio
async def test_api_key_valid(api_key_auth, mock_request):
"""Test behavior with valid API key."""
# Mock the parent class to return the API key
with patch.object(
api_key_auth.__class__.__bases__[0],
"__call__",
return_value="test-secret-token",
):
result = await api_key_auth(mock_request)
assert result is True
@pytest.mark.asyncio
async def test_api_key_invalid(api_key_auth, mock_request):
"""Test behavior with invalid API key."""
# Mock the parent class to return an invalid API key
with patch.object(
api_key_auth.__class__.__bases__[0], "__call__", return_value="invalid-token"
):
with pytest.raises(HTTPException) as exc_info:
await api_key_auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
# ========== Custom Validator Tests ========== #
@pytest.mark.asyncio
async def test_custom_status_and_message(mock_request):
"""Test custom status code and message."""
auth = APIKeyAuthenticator(
header_name="X-API-Key",
expected_token="valid-token",
status_if_missing=HTTP_403_FORBIDDEN,
message_if_invalid="Access forbidden",
)
# Test missing key
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_403_FORBIDDEN
assert exc_info.value.detail == "No API key in request"
# Test invalid key
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value="invalid-token"
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_403_FORBIDDEN
assert exc_info.value.detail == "Access forbidden"
@pytest.mark.asyncio
async def test_custom_sync_validator(api_key_auth_custom_validator, mock_request):
"""Test with custom synchronous validator."""
# Mock the parent class to return the API key
with patch.object(
api_key_auth_custom_validator.__class__.__bases__[0],
"__call__",
return_value="custom-valid-key",
):
result = await api_key_auth_custom_validator(mock_request)
assert result is True
@pytest.mark.asyncio
async def test_custom_sync_validator_invalid(
api_key_auth_custom_validator, mock_request
):
"""Test custom synchronous validator with invalid key."""
with patch.object(
api_key_auth_custom_validator.__class__.__bases__[0],
"__call__",
return_value="invalid-key",
):
with pytest.raises(HTTPException) as exc_info:
await api_key_auth_custom_validator(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_custom_async_validator(api_key_auth_async_validator, mock_request):
"""Test with custom async validator."""
with patch.object(
api_key_auth_async_validator.__class__.__bases__[0],
"__call__",
return_value="async-valid-key",
):
result = await api_key_auth_async_validator(mock_request)
assert result is True
@pytest.mark.asyncio
async def test_custom_async_validator_invalid(
api_key_auth_async_validator, mock_request
):
"""Test custom async validator with invalid key."""
with patch.object(
api_key_auth_async_validator.__class__.__bases__[0],
"__call__",
return_value="invalid-key",
):
with pytest.raises(HTTPException) as exc_info:
await api_key_auth_async_validator(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_validator_returns_object(api_key_auth_object_validator, mock_request):
"""Test validator that returns an object instead of boolean."""
with patch.object(
api_key_auth_object_validator.__class__.__bases__[0],
"__call__",
return_value="user-key",
):
result = await api_key_auth_object_validator(mock_request)
expected_result = {"user_id": "123", "permissions": ["read", "write"]}
assert result == expected_result
# Verify the object is stored in request state
assert mock_request.state.api_key == expected_result
@pytest.mark.asyncio
async def test_validator_returns_none(api_key_auth_object_validator, mock_request):
"""Test validator that returns None (falsy)."""
with patch.object(
api_key_auth_object_validator.__class__.__bases__[0],
"__call__",
return_value="invalid-key",
):
with pytest.raises(HTTPException) as exc_info:
await api_key_auth_object_validator(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_validator_database_lookup_simulation(mock_request):
"""Test simulation of database lookup validator."""
# Simulate database records
valid_api_keys = {
"key123": {"user_id": "user1", "active": True},
"key456": {"user_id": "user2", "active": False},
}
async def db_validator(api_key: str):
record = valid_api_keys.get(api_key)
return record if record and record["active"] else None
auth = APIKeyAuthenticator(header_name="X-API-Key", validator=db_validator)
# Test valid active key
with patch.object(auth.__class__.__bases__[0], "__call__", return_value="key123"):
result = await auth(mock_request)
assert result == {"user_id": "user1", "active": True}
assert mock_request.state.api_key == {"user_id": "user1", "active": True}
# Test inactive key
mock_request.state = Mock() # Reset state
with patch.object(auth.__class__.__bases__[0], "__call__", return_value="key456"):
with pytest.raises(HTTPException):
await auth(mock_request)
# Test non-existent key
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value="nonexistent"
):
with pytest.raises(HTTPException):
await auth(mock_request)
# ========== Default Validator Tests ========== #
@pytest.mark.asyncio
async def test_default_validator_key_valid(api_key_auth):
"""Test default validator with valid token."""
result = await api_key_auth.default_validator("test-secret-token")
assert result is True
@pytest.mark.asyncio
async def test_default_validator_key_invalid(api_key_auth):
"""Test default validator with invalid token."""
result = await api_key_auth.default_validator("wrong-token")
assert result is False
@pytest.mark.asyncio
async def test_default_validator_missing_expected_token():
"""Test default validator when expected_token is not set."""
auth = APIKeyAuthenticator(header_name="X-API-Key")
with pytest.raises(MissingConfigError) as exc_info:
await auth.default_validator("any-token")
assert "expected_token is not set" in str(exc_info.value)
assert "either specify it or provide a custom validator" in str(exc_info.value)
@pytest.mark.asyncio
async def test_default_validator_uses_constant_time_comparison(api_key_auth):
"""
Test that default validator uses secrets.compare_digest for timing attack protection
"""
with patch("secrets.compare_digest") as mock_compare:
mock_compare.return_value = True
await api_key_auth.default_validator("test-token")
mock_compare.assert_called_once_with("test-token", "test-secret-token")
@pytest.mark.asyncio
async def test_api_key_empty(mock_request):
"""Test behavior with empty string API key."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
with patch.object(auth.__class__.__bases__[0], "__call__", return_value=""):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_whitespace_only(mock_request):
"""Test behavior with whitespace-only API key."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=" \t\n "
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_very_long(mock_request):
"""Test behavior with extremely long API key (potential DoS protection)."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
# Create a very long API key (10MB)
long_api_key = "a" * (10 * 1024 * 1024)
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=long_api_key
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_null_bytes(mock_request):
"""Test behavior with API key containing null bytes."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
api_key_with_null = "valid\x00token"
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=api_key_with_null
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_control_characters(mock_request):
"""Test behavior with API key containing control characters."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
# API key with various control characters
api_key_with_control = "valid\r\n\t\x1b[31mtoken"
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=api_key_with_control
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_unicode_characters(mock_request):
"""Test behavior with Unicode characters in API key."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
# API key with Unicode characters
unicode_api_key = "validтокен🔑"
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=unicode_api_key
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_unicode_characters_normalization_attack(mock_request):
"""Test that Unicode normalization doesn't bypass validation."""
# Create auth with composed Unicode character
auth = APIKeyAuthenticator(
header_name="X-API-Key", expected_token="café" # é is composed
)
# Try with decomposed version (c + a + f + e + ´)
decomposed_key = "cafe\u0301" # é as combining character
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=decomposed_key
):
# Should fail: compare_digest doesn't normalize Unicode, and the
# non-ASCII input raises TypeError, which is treated as an invalid key
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_binary_data(mock_request):
"""Test behavior with binary data in API key."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
# Binary data that might cause encoding issues
binary_api_key = bytes([0xFF, 0xFE, 0xFD, 0xFC, 0x80, 0x81]).decode(
"latin1", errors="ignore"
)
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=binary_api_key
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_key_with_regex_dos_attack_pattern(mock_request):
"""Test behavior with API key of repeated characters (pattern attack)."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
# Pattern that might cause regex DoS in poorly implemented validators
repeated_key = "a" * 1000 + "b" * 1000 + "c" * 1000
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=repeated_key
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"
@pytest.mark.asyncio
async def test_api_keys_with_newline_variations(mock_request):
"""Test different newline characters in API key."""
auth = APIKeyAuthenticator(header_name="X-API-Key", expected_token="valid-token")
newline_variations = [
"valid\ntoken", # Unix newline
"valid\r\ntoken", # Windows newline
"valid\rtoken", # Mac newline
"valid\x85token", # NEL (Next Line)
"valid\x0Btoken", # Vertical Tab
"valid\x0Ctoken", # Form Feed
]
for api_key in newline_variations:
with patch.object(
auth.__class__.__bases__[0], "__call__", return_value=api_key
):
with pytest.raises(HTTPException) as exc_info:
await auth(mock_request)
assert exc_info.value.status_code == HTTP_401_UNAUTHORIZED
assert exc_info.value.detail == "Invalid API key"

View File

@@ -1,9 +1,8 @@
import logging
import typing
from autogpt_libs.auth import requires_admin_user
from autogpt_libs.auth.depends import get_user_id
from fastapi import APIRouter, Body, Depends
from autogpt_libs.auth import get_user_id, requires_admin_user
from fastapi import APIRouter, Body, Security
from prisma.enums import CreditTransactionType
from backend.data.credit import admin_get_user_history, get_user_credit_model
@@ -18,7 +17,7 @@ _user_credit_model = get_user_credit_model()
router = APIRouter(
prefix="/admin",
tags=["credits", "admin"],
dependencies=[Depends(requires_admin_user)],
dependencies=[Security(requires_admin_user)],
)
@@ -29,18 +28,16 @@ async def add_user_credits(
user_id: typing.Annotated[str, Body()],
amount: typing.Annotated[int, Body()],
comments: typing.Annotated[str, Body()],
admin_user: typing.Annotated[
str,
Depends(get_user_id),
],
admin_user_id: str = Security(get_user_id),
):
""" """
logger.info(f"Admin user {admin_user} is adding {amount} credits to user {user_id}")
logger.info(
f"Admin user {admin_user_id} is adding {amount} credits to user {user_id}"
)
new_balance, transaction_key = await _user_credit_model._add_transaction(
user_id,
amount,
transaction_type=CreditTransactionType.GRANT,
metadata=SafeJson({"admin_id": admin_user, "reason": comments}),
metadata=SafeJson({"admin_id": admin_user_id, "reason": comments}),
)
return {
"new_balance": new_balance,
@@ -54,17 +51,14 @@ async def add_user_credits(
summary="Get All Users History",
)
async def admin_get_all_user_history(
admin_user: typing.Annotated[
str,
Depends(get_user_id),
],
admin_user_id: str = Security(get_user_id),
search: typing.Optional[str] = None,
page: int = 1,
page_size: int = 20,
transaction_filter: typing.Optional[CreditTransactionType] = None,
):
""" """
logger.info(f"Admin user {admin_user} is getting grant history")
logger.info(f"Admin user {admin_user_id} is getting grant history")
try:
resp = await admin_get_user_history(
@@ -73,7 +67,7 @@ async def admin_get_all_user_history(
search=search,
transaction_filter=transaction_filter,
)
logger.info(f"Admin user {admin_user} got {len(resp.history)} grant history")
logger.info(f"Admin user {admin_user_id} got {len(resp.history)} grant history")
return resp
except Exception as e:
logger.exception(f"Error getting grant history: {e}")
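
The `Depends` → `Security` swap in this file is behavior-preserving at runtime; `Security` additionally registers the dependency with the OpenAPI security scheme, so the generated docs advertise the auth requirement. A self-contained sketch with stand-in dependencies (the real ones live in `autogpt_libs.auth`):

```python
from fastapi import APIRouter, FastAPI, Security

async def requires_admin_user() -> None:
    """Stand-in for autogpt_libs.auth.requires_admin_user (assumed shape)."""

async def get_user_id() -> str:
    """Stand-in for autogpt_libs.auth.get_user_id (assumed shape)."""
    return "admin-user-id"

router = APIRouter(
    prefix="/admin",
    # Router-level guard: every route below requires an admin user.
    dependencies=[Security(requires_admin_user)],
)

@router.post("/add_credits")
async def add_credits(admin_user_id: str = Security(get_user_id)):
    return {"admin": admin_user_id}

app = FastAPI()
app.include_router(router)
```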

View File

@@ -1,19 +1,18 @@
import json
from unittest.mock import AsyncMock
import autogpt_libs.auth
import autogpt_libs.auth.depends
import fastapi
import fastapi.testclient
import prisma.enums
import pytest
import pytest_mock
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from prisma import Json
from pytest_snapshot.plugin import Snapshot
import backend.server.v2.admin.credit_admin_routes as credit_admin_routes
import backend.server.v2.admin.model as admin_model
from backend.data.model import UserTransaction
from backend.server.conftest import ADMIN_USER_ID, TARGET_USER_ID
from backend.util.models import Pagination
app = fastapi.FastAPI()
@@ -22,25 +21,19 @@ app.include_router(credit_admin_routes.router)
client = fastapi.testclient.TestClient(app)
def override_requires_admin_user() -> dict[str, str]:
"""Override admin user check for testing"""
return {"sub": ADMIN_USER_ID, "role": "admin"}
def override_get_user_id() -> str:
"""Override get_user_id for testing"""
return ADMIN_USER_ID
app.dependency_overrides[autogpt_libs.auth.requires_admin_user] = (
override_requires_admin_user
)
app.dependency_overrides[autogpt_libs.auth.depends.get_user_id] = override_get_user_id
@pytest.fixture(autouse=True)
def setup_app_admin_auth(mock_jwt_admin):
"""Setup admin auth overrides for all tests in this module"""
app.dependency_overrides[get_jwt_payload] = mock_jwt_admin["get_jwt_payload"]
yield
app.dependency_overrides.clear()
def test_add_user_credits_success(
mocker: pytest_mock.MockFixture,
configured_snapshot: Snapshot,
admin_user_id: str,
target_user_id: str,
) -> None:
"""Test successful credit addition by admin"""
# Mock the credit model
@@ -52,7 +45,7 @@ def test_add_user_credits_success(
)
request_data = {
"user_id": TARGET_USER_ID,
"user_id": target_user_id,
"amount": 500,
"comments": "Test credit grant for debugging",
}
@@ -67,12 +60,12 @@ def test_add_user_credits_success(
# Verify the function was called with correct parameters
mock_credit_model._add_transaction.assert_called_once()
call_args = mock_credit_model._add_transaction.call_args
assert call_args[0] == (TARGET_USER_ID, 500)
assert call_args[0] == (target_user_id, 500)
assert call_args[1]["transaction_type"] == prisma.enums.CreditTransactionType.GRANT
# Check that metadata is a Json object with the expected content
assert isinstance(call_args[1]["metadata"], Json)
assert call_args[1]["metadata"] == Json(
{"admin_id": ADMIN_USER_ID, "reason": "Test credit grant for debugging"}
{"admin_id": admin_user_id, "reason": "Test credit grant for debugging"}
)
# Snapshot test the response
@@ -290,18 +283,10 @@ def test_add_credits_invalid_request() -> None:
assert response.status_code == 422
def test_admin_endpoints_require_admin_role(mocker: pytest_mock.MockFixture) -> None:
def test_admin_endpoints_require_admin_role(mock_jwt_user) -> None:
"""Test that admin endpoints require admin role"""
# Clear the admin override to test authorization
app.dependency_overrides.clear()
# Mock requires_admin_user to raise an exception
mocker.patch(
"autogpt_libs.auth.requires_admin_user",
side_effect=fastapi.HTTPException(
status_code=403, detail="Admin access required"
),
)
# Simulate regular non-admin user
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
# Test add_credits endpoint
response = client.post(
@@ -312,20 +297,8 @@ def test_admin_endpoints_require_admin_role(mocker: pytest_mock.MockFixture) ->
"comments": "test",
},
)
assert (
response.status_code == 401
) # Auth middleware returns 401 when auth is disabled
assert response.status_code == 403
# Test users_history endpoint
response = client.get("/admin/users_history")
assert (
response.status_code == 401
) # Auth middleware returns 401 when auth is disabled
# Restore the override
app.dependency_overrides[autogpt_libs.auth.requires_admin_user] = (
override_requires_admin_user
)
app.dependency_overrides[autogpt_libs.auth.depends.get_user_id] = (
override_get_user_id
)
assert response.status_code == 403
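
The rewritten tests override one dependency, `get_jwt_payload`, instead of the old pair (`requires_admin_user` plus `get_user_id`), because both of those derive from the decoded JWT. A runnable sketch of the pattern, using a stand-in for the real dependency:

```python
import fastapi
import fastapi.testclient

def get_jwt_payload() -> dict:
    """Stand-in for autogpt_libs.auth.jwt_utils.get_jwt_payload."""
    raise NotImplementedError  # the real dependency decodes the request's JWT

app = fastapi.FastAPI()

@app.get("/whoami")
async def whoami(payload: dict = fastapi.Depends(get_jwt_payload)):
    return {"sub": payload["sub"]}

# Overriding at the JWT layer flows through to every dependency built on it.
app.dependency_overrides[get_jwt_payload] = lambda: {"sub": "test-user-id", "role": "admin"}

client = fastapi.testclient.TestClient(app)
assert client.get("/whoami").json() == {"sub": "test-user-id"}
```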

View File

@@ -2,26 +2,28 @@ import logging
import tempfile
import typing
import autogpt_libs.auth.depends
import autogpt_libs.auth
import fastapi
import fastapi.responses
import prisma.enums
import backend.server.v2.store.db
import backend.server.v2.store.exceptions
import backend.server.v2.store.model
import backend.util.json
logger = logging.getLogger(__name__)
router = fastapi.APIRouter(prefix="/admin", tags=["store", "admin"])
router = fastapi.APIRouter(
prefix="/admin",
tags=["store", "admin"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_admin_user)],
)
@router.get(
"/listings",
summary="Get Admin Listings History",
response_model=backend.server.v2.store.model.StoreListingsWithVersionsResponse,
dependencies=[fastapi.Depends(autogpt_libs.auth.depends.requires_admin_user)],
)
async def get_admin_listings_with_versions(
status: typing.Optional[prisma.enums.SubmissionStatus] = None,
@@ -66,15 +68,11 @@ async def get_admin_listings_with_versions(
"/submissions/{store_listing_version_id}/review",
summary="Review Store Submission",
response_model=backend.server.v2.store.model.StoreSubmission,
dependencies=[fastapi.Depends(autogpt_libs.auth.depends.requires_admin_user)],
)
async def review_submission(
store_listing_version_id: str,
request: backend.server.v2.store.model.ReviewSubmissionRequest,
user: typing.Annotated[
autogpt_libs.auth.models.User,
fastapi.Depends(autogpt_libs.auth.depends.requires_admin_user),
],
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
Review a store listing submission.
@@ -82,7 +80,7 @@ async def review_submission(
Args:
store_listing_version_id: ID of the submission to review
request: Review details including approval status and comments
user: Authenticated admin user performing the review
user_id: Authenticated admin user performing the review
Returns:
StoreSubmission with updated review information
@@ -93,7 +91,7 @@ async def review_submission(
is_approved=request.is_approved,
external_comments=request.comments,
internal_comments=request.internal_comments or "",
reviewer_id=user.user_id,
reviewer_id=user_id,
)
return submission
except Exception as e:
@@ -108,13 +106,9 @@ async def review_submission(
"/submissions/download/{store_listing_version_id}",
summary="Admin Download Agent File",
tags=["store", "admin"],
dependencies=[fastapi.Depends(autogpt_libs.auth.depends.requires_admin_user)],
)
async def admin_download_agent_file(
user: typing.Annotated[
autogpt_libs.auth.models.User,
fastapi.Depends(autogpt_libs.auth.depends.requires_admin_user),
],
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
store_listing_version_id: str = fastapi.Path(
..., description="The ID of the agent to download"
),
@@ -132,7 +126,7 @@ async def admin_download_agent_file(
HTTPException: If the agent is not found or an unexpected error occurs.
"""
graph_data = await backend.server.v2.store.db.get_agent_as_admin(
user_id=user.user_id,
user_id=user_id,
store_listing_version_id=store_listing_version_id,
)
file_name = f"agent_{graph_data.id}_v{graph_data.version or 'latest'}.json"

View File

@@ -7,12 +7,10 @@ import prisma
import backend.data.block
from backend.blocks import load_all_blocks
from backend.blocks.llm import LlmModel
from backend.data.block import Block, BlockCategory, BlockSchema
from backend.data.credit import get_block_costs
from backend.data.block import Block, BlockCategory, BlockInfo, BlockSchema
from backend.integrations.providers import ProviderName
from backend.server.v2.builder.model import (
BlockCategoryResponse,
BlockData,
BlockResponse,
BlockType,
CountResponse,
@@ -25,7 +23,7 @@ from backend.util.models import Pagination
logger = logging.getLogger(__name__)
llm_models = [name.name.lower().replace("_", " ") for name in LlmModel]
_static_counts_cache: dict | None = None
_suggested_blocks: list[BlockData] | None = None
_suggested_blocks: list[BlockInfo] | None = None
def get_block_categories(category_blocks: int = 3) -> list[BlockCategoryResponse]:
@@ -53,7 +51,7 @@ def get_block_categories(category_blocks: int = 3) -> list[BlockCategoryResponse
# Append if the category has less than the specified number of blocks
if len(categories[category].blocks) < category_blocks:
categories[category].blocks.append(block.to_dict())
categories[category].blocks.append(block.get_info())
# Sort categories by name
return sorted(categories.values(), key=lambda x: x.name)
@@ -109,10 +107,8 @@ def get_blocks(
take -= 1
blocks.append(block)
costs = get_block_costs()
return BlockResponse(
blocks=[{**b.to_dict(), "costs": costs.get(b.id, [])} for b in blocks],
blocks=[b.get_info() for b in blocks],
pagination=Pagination(
total_items=total,
total_pages=(total + page_size - 1) // page_size,
@@ -174,11 +170,9 @@ def search_blocks(
take -= 1
blocks.append(block)
costs = get_block_costs()
return SearchBlocksResponse(
blocks=BlockResponse(
blocks=[{**b.to_dict(), "costs": costs.get(b.id, [])} for b in blocks],
blocks=[b.get_info() for b in blocks],
pagination=Pagination(
total_items=total,
total_pages=(total + page_size - 1) // page_size,
@@ -323,7 +317,7 @@ def _get_all_providers() -> dict[ProviderName, Provider]:
return providers
async def get_suggested_blocks(count: int = 5) -> list[BlockData]:
async def get_suggested_blocks(count: int = 5) -> list[BlockInfo]:
global _suggested_blocks
if _suggested_blocks is not None and len(_suggested_blocks) >= count:
@@ -351,7 +345,7 @@ async def get_suggested_blocks(count: int = 5) -> list[BlockData]:
# Get the top blocks based on execution count
# But ignore Input and Output blocks
blocks: list[tuple[BlockData, int]] = []
blocks: list[tuple[BlockInfo, int]] = []
for block_type in load_all_blocks().values():
block: Block[BlockSchema, BlockSchema] = block_type()
@@ -366,7 +360,7 @@ async def get_suggested_blocks(count: int = 5) -> list[BlockData]:
(row["execution_count"] for row in results if row["block_id"] == block.id),
0,
)
blocks.append((block.to_dict(), execution_count))
blocks.append((block.get_info(), execution_count))
# Sort blocks by execution count
blocks.sort(key=lambda x: x[1], reverse=True)
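
The pagination arithmetic repeated in these hunks is integer ceiling division; for example:

```python
def total_pages(total_items: int, page_size: int) -> int:
    # (total + page_size - 1) // page_size rounds up without floats:
    # 21 items at page_size 10 -> 3 pages; 20 -> 2; 0 -> 0.
    return (total_items + page_size - 1) // page_size

assert total_pages(21, 10) == 3
assert total_pages(20, 10) == 2
assert total_pages(0, 10) == 0
```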

View File

@@ -1,9 +1,10 @@
from typing import Any, Literal
from typing import Literal
from pydantic import BaseModel
import backend.server.v2.library.model as library_model
import backend.server.v2.store.model as store_model
from backend.data.block import BlockInfo
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination
@@ -16,29 +17,27 @@ FilterType = Literal[
BlockType = Literal["all", "input", "action", "output"]
BlockData = dict[str, Any]
# Suggestions
class SuggestionsResponse(BaseModel):
otto_suggestions: list[str]
recent_searches: list[str]
providers: list[ProviderName]
top_blocks: list[BlockData]
top_blocks: list[BlockInfo]
# All blocks
class BlockCategoryResponse(BaseModel):
name: str
total_blocks: int
blocks: list[BlockData]
blocks: list[BlockInfo]
model_config = {"use_enum_values": False} # <== use enum names like "AI"
# Input/Action/Output and see all for block categories
class BlockResponse(BaseModel):
blocks: list[BlockData]
blocks: list[BlockInfo]
pagination: Pagination
@@ -71,7 +70,7 @@ class SearchBlocksResponse(BaseModel):
class SearchResponse(BaseModel):
items: list[BlockData | library_model.LibraryAgent | store_model.StoreAgent]
items: list[BlockInfo | library_model.LibraryAgent | store_model.StoreAgent]
total_items: dict[FilterType, int]
page: int
more_pages: bool
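
Replacing the `BlockData = dict[str, Any]` alias with `BlockInfo` moves block payloads from untyped dicts to a validated model. A minimal sketch of the idea; the real `BlockInfo` is defined in `backend.data.block`, so the fields below are assumptions for illustration only:

```python
from typing import Any

import pydantic

class BlockInfoSketch(pydantic.BaseModel):
    # Hypothetical fields; the real BlockInfo lives in backend.data.block.
    id: str
    name: str
    costs: list[dict[str, Any]] = []

class BlockResponseSketch(pydantic.BaseModel):
    # list[BlockInfoSketch] is validated; the old list[dict[str, Any]] was not.
    blocks: list[BlockInfoSketch]

resp = BlockResponseSketch(blocks=[{"id": "b1", "name": "HTTP Request"}])
assert resp.blocks[0].name == "HTTP Request"
```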

View File

@@ -2,7 +2,7 @@ import logging
from typing import Annotated, Sequence
import fastapi
from autogpt_libs.auth.depends import auth_middleware, get_user_id
from autogpt_libs.auth.dependencies import get_user_id, requires_user
import backend.server.v2.builder.db as builder_db
import backend.server.v2.builder.model as builder_model
@@ -15,7 +15,9 @@ from backend.util.models import Pagination
logger = logging.getLogger(__name__)
router = fastapi.APIRouter()
router = fastapi.APIRouter(
dependencies=[fastapi.Security(requires_user)],
)
# Taken from backend/server/v2/store/db.py
@@ -41,12 +43,9 @@ def sanitize_query(query: str | None) -> str | None:
@router.get(
"/suggestions",
summary="Get Builder suggestions",
dependencies=[fastapi.Depends(auth_middleware)],
response_model=builder_model.SuggestionsResponse,
)
async def get_suggestions(
user_id: Annotated[str, fastapi.Depends(get_user_id)],
) -> builder_model.SuggestionsResponse:
async def get_suggestions() -> builder_model.SuggestionsResponse:
"""
Get all suggestions for the Blocks Menu.
"""
@@ -76,7 +75,6 @@ async def get_suggestions(
@router.get(
"/categories",
summary="Get Builder block categories",
dependencies=[fastapi.Depends(auth_middleware)],
response_model=Sequence[builder_model.BlockCategoryResponse],
)
async def get_block_categories(
@@ -91,7 +89,6 @@ async def get_block_categories(
@router.get(
"/blocks",
summary="Get Builder blocks",
dependencies=[fastapi.Depends(auth_middleware)],
response_model=builder_model.BlockResponse,
)
async def get_blocks(
@@ -116,7 +113,6 @@ async def get_blocks(
@router.get(
"/providers",
summary="Get Builder integration providers",
dependencies=[fastapi.Depends(auth_middleware)],
response_model=builder_model.ProviderResponse,
)
async def get_providers(
@@ -136,12 +132,11 @@ async def get_providers(
"/search",
summary="Builder search",
tags=["store", "private"],
dependencies=[fastapi.Depends(auth_middleware)],
response_model=builder_model.SearchResponse,
)
async def search(
options: builder_model.SearchRequest,
user_id: Annotated[str, fastapi.Depends(get_user_id)],
user_id: Annotated[str, fastapi.Security(get_user_id)],
) -> builder_model.SearchResponse:
"""
Search for blocks (including integrations), marketplace agents, and user library agents.
@@ -227,11 +222,10 @@ async def search(
@router.get(
"/counts",
summary="Get Builder item counts",
dependencies=[fastapi.Depends(auth_middleware)],
response_model=builder_model.CountResponse,
)
async def get_counts(
user_id: Annotated[str, fastapi.Depends(get_user_id)],
user_id: Annotated[str, fastapi.Security(get_user_id)],
) -> builder_model.CountResponse:
"""
Get item counts for the menu categories in the Blocks Menu.

View File

@@ -16,7 +16,7 @@ import backend.server.v2.store.media as store_media
from backend.data.block import BlockInput
from backend.data.db import transaction
from backend.data.execution import get_graph_execution
from backend.data.includes import library_agent_include
from backend.data.includes import AGENT_PRESET_INCLUDE, library_agent_include
from backend.data.model import CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.webhooks.graph_lifecycle_hooks import on_graph_activate
@@ -617,7 +617,7 @@ async def list_presets(
where=query_filter,
skip=(page - 1) * page_size,
take=page_size,
include={"InputPresets": True},
include=AGENT_PRESET_INCLUDE,
)
total_items = await prisma.models.AgentPreset.prisma().count(where=query_filter)
total_pages = (total_items + page_size - 1) // page_size
@@ -662,7 +662,7 @@ async def get_preset(
try:
preset = await prisma.models.AgentPreset.prisma().find_unique(
where={"id": preset_id},
include={"InputPresets": True},
include=AGENT_PRESET_INCLUDE,
)
if not preset or preset.userId != user_id or preset.isDeleted:
return None
@@ -717,7 +717,7 @@ async def create_preset(
]
},
),
include={"InputPresets": True},
include=AGENT_PRESET_INCLUDE,
)
return library_model.LibraryAgentPreset.from_db(new_preset)
except prisma.errors.PrismaError as e:
@@ -747,6 +747,25 @@ async def create_preset_from_graph_execution(
if not graph_execution:
raise NotFoundError(f"Graph execution #{graph_exec_id} not found")
# Sanity check: credential inputs must be available if required for this preset
if graph_execution.credential_inputs is None:
graph = await graph_db.get_graph(
graph_id=graph_execution.graph_id,
version=graph_execution.graph_version,
user_id=graph_execution.user_id,
include_subgraphs=True,
)
if not graph:
raise NotFoundError(
f"Graph #{graph_execution.graph_id} not found or accessible"
)
elif len(graph.aggregate_credentials_inputs()) > 0:
raise ValueError(
f"Graph execution #{graph_exec_id} can't be turned into a preset "
"because it was run before this feature existed "
"and so the input credentials were not saved."
)
logger.debug(
f"Creating preset for user #{user_id} from graph execution #{graph_exec_id}",
)
@@ -754,7 +773,7 @@ async def create_preset_from_graph_execution(
user_id=user_id,
preset=library_model.LibraryAgentPresetCreatable(
inputs=graph_execution.inputs,
credentials={}, # FIXME
credentials=graph_execution.credential_inputs or {},
graph_id=graph_execution.graph_id,
graph_version=graph_execution.graph_version,
name=create_request.name,
@@ -834,7 +853,7 @@ async def update_preset(
updated = await prisma.models.AgentPreset.prisma(tx).update(
where={"id": preset_id},
data=update_data,
include={"InputPresets": True},
include=AGENT_PRESET_INCLUDE,
)
if not updated:
raise RuntimeError(f"AgentPreset #{preset_id} vanished while updating")
@@ -849,7 +868,7 @@ async def set_preset_webhook(
) -> library_model.LibraryAgentPreset:
current = await prisma.models.AgentPreset.prisma().find_unique(
where={"id": preset_id},
include={"InputPresets": True},
include=AGENT_PRESET_INCLUDE,
)
if not current or current.userId != user_id:
raise NotFoundError(f"Preset #{preset_id} not found")
@@ -861,7 +880,7 @@ async def set_preset_webhook(
if webhook_id
else {"Webhook": {"disconnect": True}}
),
include={"InputPresets": True},
include=AGENT_PRESET_INCLUDE,
)
if not updated:
raise RuntimeError(f"AgentPreset #{preset_id} vanished while updating")
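
`AGENT_PRESET_INCLUDE` centralizes the relations every `AgentPreset` query must load; the old inline `{"InputPresets": True}` missed `Webhook`, which `LibraryAgentPreset.from_db` (see the model changes below) requires whenever `webhookId` is set. The actual constant lives in `backend.data.includes`; the dict here is an assumption based on the relations used in this file:

```python
# Assumed shape of backend.data.includes.AGENT_PRESET_INCLUDE:
AGENT_PRESET_INCLUDE = {
    "InputPresets": True,
    "Webhook": True,
}

# Every query in this file then shares one include, e.g.:
# await prisma.models.AgentPreset.prisma().find_unique(
#     where={"id": preset_id}, include=AGENT_PRESET_INCLUDE
# )
```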

View File

@@ -1,6 +1,6 @@
import datetime
from enum import Enum
from typing import Any, Optional
from typing import TYPE_CHECKING, Any, Optional
import prisma.enums
import prisma.models
@@ -9,9 +9,11 @@ import pydantic
import backend.data.block as block_model
import backend.data.graph as graph_model
from backend.data.model import CredentialsMetaInput, is_credentials_field_name
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination
if TYPE_CHECKING:
from backend.data.integrations import Webhook
class LibraryAgentStatus(str, Enum):
COMPLETED = "COMPLETED" # All runs completed
@@ -20,14 +22,6 @@ class LibraryAgentStatus(str, Enum):
ERROR = "ERROR" # Agent is in an error state
class LibraryAgentTriggerInfo(pydantic.BaseModel):
provider: ProviderName
config_schema: dict[str, Any] = pydantic.Field(
description="Input schema for the trigger block"
)
credentials_input_name: Optional[str]
class LibraryAgent(pydantic.BaseModel):
"""
Represents an agent in the library, including metadata for display and
@@ -59,7 +53,7 @@ class LibraryAgent(pydantic.BaseModel):
has_external_trigger: bool = pydantic.Field(
description="Whether the agent has an external trigger (e.g. webhook) node"
)
trigger_setup_info: Optional[LibraryAgentTriggerInfo] = None
trigger_setup_info: Optional[graph_model.GraphTriggerInfo] = None
# Indicates whether there's a new output (based on recent runs)
new_output: bool
@@ -70,6 +64,9 @@ class LibraryAgent(pydantic.BaseModel):
# Indicates if this agent is the latest version
is_latest_version: bool
# Recommended schedule cron (from marketplace agents)
recommended_schedule_cron: str | None = None
@staticmethod
def from_db(
agent: prisma.models.LibraryAgent,
@@ -132,33 +129,11 @@ class LibraryAgent(pydantic.BaseModel):
graph.credentials_input_schema if sub_graphs is not None else None
),
has_external_trigger=graph.has_external_trigger,
trigger_setup_info=(
LibraryAgentTriggerInfo(
provider=trigger_block.webhook_config.provider,
config_schema={
**(json_schema := trigger_block.input_schema.jsonschema()),
"properties": {
pn: sub_schema
for pn, sub_schema in json_schema["properties"].items()
if not is_credentials_field_name(pn)
},
"required": [
pn
for pn in json_schema.get("required", [])
if not is_credentials_field_name(pn)
],
},
credentials_input_name=next(
iter(trigger_block.input_schema.get_credentials_fields()), None
),
)
if graph.webhook_input_node
and (trigger_block := graph.webhook_input_node.block).webhook_config
else None
),
trigger_setup_info=graph.trigger_setup_info,
new_output=new_output,
can_access_graph=can_access_graph,
is_latest_version=is_latest_version,
recommended_schedule_cron=agent.AgentGraph.recommendedScheduleCron,
)
@@ -284,10 +259,18 @@ class LibraryAgentPreset(LibraryAgentPresetCreatable):
user_id: str
updated_at: datetime.datetime
webhook: "Webhook | None"
@classmethod
def from_db(cls, preset: prisma.models.AgentPreset) -> "LibraryAgentPreset":
from backend.data.integrations import Webhook
if preset.InputPresets is None:
raise ValueError("InputPresets must be included in AgentPreset query")
if preset.webhookId and preset.Webhook is None:
raise ValueError(
"Webhook must be included in AgentPreset query when webhookId is set"
)
input_data: block_model.BlockInput = {}
input_credentials: dict[str, CredentialsMetaInput] = {}
@@ -312,6 +295,7 @@ class LibraryAgentPreset(LibraryAgentPresetCreatable):
inputs=input_data,
credentials=input_credentials,
webhook_id=preset.webhookId,
webhook=Webhook.from_db(preset.Webhook) if preset.Webhook else None,
)

View File

@@ -1,6 +1,5 @@
import datetime
import prisma.fields
import prisma.models
import pytest
@@ -8,7 +7,7 @@ import backend.server.v2.library.model as library_model
@pytest.mark.asyncio
async def test_agent_preset_from_db():
async def test_agent_preset_from_db(test_user_id: str):
# Create mock DB agent
db_agent = prisma.models.AgentPreset(
id="test-agent-123",
@@ -19,7 +18,7 @@ async def test_agent_preset_from_db():
name="Test Agent",
description="Test agent description",
isActive=True,
userId="test-user-123",
userId=test_user_id,
isDeleted=False,
InputPresets=[
prisma.models.AgentNodeExecutionInputOutput.model_validate(

View File

@@ -2,7 +2,7 @@ import logging
from typing import Optional
import autogpt_libs.auth as autogpt_auth_lib
from fastapi import APIRouter, Body, Depends, HTTPException, Query, status
from fastapi import APIRouter, Body, HTTPException, Query, Security, status
from fastapi.responses import Response
import backend.server.v2.library.db as library_db
@@ -15,7 +15,7 @@ logger = logging.getLogger(__name__)
router = APIRouter(
prefix="/agents",
tags=["library", "private"],
dependencies=[Depends(autogpt_auth_lib.auth_middleware)],
dependencies=[Security(autogpt_auth_lib.requires_user)],
)
@@ -27,7 +27,7 @@ router = APIRouter(
},
)
async def list_library_agents(
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
search_term: Optional[str] = Query(
None, description="Search term to filter agents"
),
@@ -82,7 +82,7 @@ async def list_library_agents(
@router.get("/{library_agent_id}", summary="Get Library Agent")
async def get_library_agent(
library_agent_id: str,
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> library_model.LibraryAgent:
return await library_db.get_library_agent(id=library_agent_id, user_id=user_id)
@@ -91,7 +91,7 @@ async def get_library_agent(
async def get_library_agent_by_graph_id(
graph_id: str,
version: Optional[int] = Query(default=None),
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> library_model.LibraryAgent:
library_agent = await library_db.get_library_agent_by_graph_id(
user_id, graph_id, version
@@ -111,7 +111,7 @@ async def get_library_agent_by_graph_id(
)
async def get_library_agent_by_store_listing_version_id(
store_listing_version_id: str,
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> library_model.LibraryAgent | None:
"""
Get Library Agent from Store Listing Version ID.
@@ -145,7 +145,7 @@ async def get_library_agent_by_store_listing_version_id(
)
async def add_marketplace_agent_to_library(
store_listing_version_id: str = Body(embed=True),
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> library_model.LibraryAgent:
"""
Add an agent from the marketplace to the user's library.
@@ -201,7 +201,7 @@ async def add_marketplace_agent_to_library(
async def update_library_agent(
library_agent_id: str,
payload: library_model.LibraryAgentUpdateRequest,
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> library_model.LibraryAgent:
"""
Update the library agent with the given fields.
@@ -252,7 +252,7 @@ async def update_library_agent(
)
async def delete_library_agent(
library_agent_id: str,
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> Response:
"""
Soft-delete the specified library agent.
@@ -283,7 +283,7 @@ async def delete_library_agent(
@router.post("/{library_agent_id}/fork", summary="Fork Library Agent")
async def fork_library_agent(
library_agent_id: str,
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> library_model.LibraryAgent:
return await library_db.fork_library_agent(
library_agent_id=library_agent_id,

View File

@@ -2,12 +2,14 @@ import logging
from typing import Any, Optional
import autogpt_libs.auth as autogpt_auth_lib
from fastapi import APIRouter, Body, Depends, HTTPException, Query, status
from fastapi import APIRouter, Body, HTTPException, Query, Security, status
import backend.server.v2.library.db as db
import backend.server.v2.library.model as models
from backend.data.execution import GraphExecutionMeta
from backend.data.graph import get_graph
from backend.data.integrations import get_webhook
from backend.data.model import CredentialsMetaInput
from backend.executor.utils import add_graph_execution, make_node_credentials_input_map
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.webhooks import get_webhook_manager
@@ -17,7 +19,10 @@ from backend.util.exceptions import NotFoundError
logger = logging.getLogger(__name__)
credentials_manager = IntegrationCredentialsManager()
router = APIRouter(tags=["presets"])
router = APIRouter(
tags=["presets"],
dependencies=[Security(autogpt_auth_lib.requires_user)],
)
@router.get(
@@ -26,7 +31,7 @@ router = APIRouter(tags=["presets"])
description="Retrieve a paginated list of presets for the current user.",
)
async def list_presets(
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
page: int = Query(default=1, ge=1),
page_size: int = Query(default=10, ge=1),
graph_id: Optional[str] = Query(
@@ -66,7 +71,7 @@ async def list_presets(
)
async def get_preset(
preset_id: str,
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> models.LibraryAgentPreset:
"""
Retrieve details for a specific preset by its ID.
@@ -109,7 +114,7 @@ async def create_preset(
models.LibraryAgentPresetCreatable
| models.LibraryAgentPresetCreatableFromGraphExecution
),
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> models.LibraryAgentPreset:
"""
Create a new library agent preset. Automatically corrects node_input format if needed.
@@ -141,7 +146,7 @@ async def create_preset(
@router.post("/presets/setup-trigger")
async def setup_trigger(
params: models.TriggeredPresetSetupRequest = Body(),
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> models.LibraryAgentPreset:
"""
Sets up a webhook-triggered `LibraryAgentPreset` for a `LibraryAgent`.
@@ -206,7 +211,7 @@ async def setup_trigger(
async def update_preset(
preset_id: str,
preset: models.LibraryAgentPresetUpdatable,
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> models.LibraryAgentPreset:
"""
Update an existing library agent preset.
@@ -310,7 +315,7 @@ async def update_preset(
)
async def delete_preset(
preset_id: str,
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
) -> None:
"""
Delete a preset by its ID. Returns 204 No Content on success.
@@ -364,50 +369,43 @@ async def delete_preset(
)
async def execute_preset(
preset_id: str,
user_id: str = Depends(autogpt_auth_lib.depends.get_user_id),
user_id: str = Security(autogpt_auth_lib.get_user_id),
inputs: dict[str, Any] = Body(..., embed=True, default_factory=dict),
) -> dict[str, Any]: # FIXME: add proper return type
credential_inputs: dict[str, CredentialsMetaInput] = Body(
..., embed=True, default_factory=dict
),
) -> GraphExecutionMeta:
"""
Execute a preset, optionally overriding its stored inputs and credentials, and return the created execution.
Args:
preset_id (str): ID of the preset to execute.
user_id (str): ID of the authenticated user.
inputs (dict[str, Any]): Optionally, additional input data for the graph execution.
preset_id: ID of the preset to execute.
user_id: ID of the authenticated user.
inputs: Optionally, inputs to override the preset for execution.
credential_inputs: Optionally, credentials to override the preset for execution.
Returns:
{id: graph_exec_id}: A response containing the execution ID.
GraphExecutionMeta: Object representing the created execution.
Raises:
HTTPException: If the preset is not found or an error occurs while executing the preset.
"""
try:
preset = await db.get_preset(user_id, preset_id)
if not preset:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Preset #{preset_id} not found",
)
# Merge input overrides with preset inputs
merged_node_input = preset.inputs | inputs
execution = await add_graph_execution(
user_id=user_id,
graph_id=preset.graph_id,
graph_version=preset.graph_version,
preset_id=preset_id,
inputs=merged_node_input,
)
logger.debug(f"Execution added: {execution} with input: {merged_node_input}")
return {"id": execution.id}
except HTTPException:
raise
except Exception as e:
logger.exception("Preset execution failed for user %s: %s", user_id, e)
preset = await db.get_preset(user_id, preset_id)
if not preset:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e),
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Preset #{preset_id} not found",
)
# Merge input overrides with preset inputs
merged_node_input = preset.inputs | inputs
merged_credential_inputs = preset.credentials | credential_inputs
return await add_graph_execution(
user_id=user_id,
graph_id=preset.graph_id,
graph_version=preset.graph_version,
preset_id=preset_id,
inputs=merged_node_input,
graph_credentials_inputs=merged_credential_inputs,
)
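
The rewritten `execute_preset` merges the stored preset values with request-level overrides using dict union (PEP 584), where the right-hand operand wins:

```python
preset_inputs = {"query": "daily report", "limit": 10}
override_inputs = {"limit": 25}

# Right-hand operand wins on key collisions, so request overrides beat
# the stored preset values (the credential inputs merge the same way).
merged = preset_inputs | override_inputs
assert merged == {"query": "daily report", "limit": 25}
```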

View File

@@ -1,7 +1,6 @@
import datetime
import json
import autogpt_libs.auth as autogpt_auth_lib
import fastapi.testclient
import pytest
import pytest_mock
@@ -19,24 +18,21 @@ client = fastapi.testclient.TestClient(app)
FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0)
def override_auth_middleware():
"""Override auth middleware for testing"""
return {"sub": "test-user-id"}
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
def override_get_user_id():
"""Override get_user_id for testing"""
return "test-user-id"
app.dependency_overrides[autogpt_auth_lib.auth_middleware] = override_auth_middleware
app.dependency_overrides[autogpt_auth_lib.depends.get_user_id] = override_get_user_id
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
@pytest.mark.asyncio
async def test_get_library_agents_success(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
test_user_id: str,
) -> None:
mocked_value = library_model.LibraryAgentResponse(
agents=[
@@ -54,6 +50,7 @@ async def test_get_library_agents_success(
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
can_access_graph=True,
is_latest_version=True,
@@ -73,6 +70,7 @@ async def test_get_library_agents_success(
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
can_access_graph=False,
is_latest_version=True,
@@ -100,7 +98,7 @@ async def test_get_library_agents_success(
snapshot.assert_match(json.dumps(response.json(), indent=2), "lib_agts_search")
mock_db_call.assert_called_once_with(
user_id="test-user-id",
user_id=test_user_id,
search_term="test",
sort_by=library_model.LibraryAgentSort.UPDATED_AT,
page=1,
@@ -108,14 +106,14 @@ async def test_get_library_agents_success(
)
def test_get_library_agents_error(mocker: pytest_mock.MockFixture):
def test_get_library_agents_error(mocker: pytest_mock.MockFixture, test_user_id: str):
mock_db_call = mocker.patch("backend.server.v2.library.db.list_library_agents")
mock_db_call.side_effect = Exception("Test error")
response = client.get("/agents?search_term=test")
assert response.status_code == 500
mock_db_call.assert_called_once_with(
user_id="test-user-id",
user_id=test_user_id,
search_term="test",
sort_by=library_model.LibraryAgentSort.UPDATED_AT,
page=1,
@@ -123,7 +121,9 @@ def test_get_library_agents_error(mocker: pytest_mock.MockFixture):
)
def test_add_agent_to_library_success(mocker: pytest_mock.MockFixture):
def test_add_agent_to_library_success(
mocker: pytest_mock.MockFixture, test_user_id: str
):
mock_library_agent = library_model.LibraryAgent(
id="test-library-agent-id",
graph_id="test-agent-1",
@@ -160,11 +160,11 @@ def test_add_agent_to_library_success(mocker: pytest_mock.MockFixture):
assert data.graph_id == "test-agent-1"
mock_db_call.assert_called_once_with(
store_listing_version_id="test-version-id", user_id="test-user-id"
store_listing_version_id="test-version-id", user_id=test_user_id
)
def test_add_agent_to_library_error(mocker: pytest_mock.MockFixture):
def test_add_agent_to_library_error(mocker: pytest_mock.MockFixture, test_user_id: str):
mock_db_call = mocker.patch(
"backend.server.v2.library.db.add_store_agent_to_library"
)
@@ -176,5 +176,5 @@ def test_add_agent_to_library_error(mocker: pytest_mock.MockFixture):
assert response.status_code == 500
assert "detail" in response.json() # Verify error response structure
mock_db_call.assert_called_once_with(
store_listing_version_id="test-version-id", user_id="test-user-id"
store_listing_version_id="test-version-id", user_id=test_user_id
)

View File

@@ -1,9 +1,7 @@
import logging
from autogpt_libs.auth.middleware import auth_middleware
from fastapi import APIRouter, Depends, HTTPException
from backend.server.utils import get_user_id
from autogpt_libs.auth import get_user_id, requires_user
from fastapi import APIRouter, HTTPException, Security
from .models import ApiResponse, ChatRequest
from .service import OttoService
@@ -16,11 +14,11 @@ router = APIRouter()
@router.post(
"/ask",
response_model=ApiResponse,
dependencies=[Depends(auth_middleware)],
dependencies=[Security(requires_user)],
summary="Proxy Otto Chat Request",
)
async def proxy_otto_request(
request: ChatRequest, user_id: str = Depends(get_user_id)
request: ChatRequest, user_id: str = Security(get_user_id)
) -> ApiResponse:
"""
Proxy requests to Otto API while adding necessary security headers and logging.

View File

@@ -1,15 +1,13 @@
import json
import autogpt_libs.auth.depends
import autogpt_libs.auth.middleware
import fastapi
import fastapi.testclient
import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
import backend.server.v2.otto.models as otto_models
import backend.server.v2.otto.routes as otto_routes
from backend.server.utils import get_user_id
from backend.server.v2.otto.service import OttoService
app = fastapi.FastAPI()
@@ -18,20 +16,14 @@ app.include_router(otto_routes.router)
client = fastapi.testclient.TestClient(app)
def override_auth_middleware():
"""Override auth middleware for testing"""
return {"sub": "test-user-id"}
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
def override_get_user_id():
"""Override get_user_id for testing"""
return "test-user-id"
app.dependency_overrides[autogpt_libs.auth.middleware.auth_middleware] = (
override_auth_middleware
)
app.dependency_overrides[get_user_id] = override_get_user_id
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
def test_ask_otto_success(
@@ -241,31 +233,14 @@ def test_ask_otto_invalid_request() -> None:
assert response.status_code == 422
def test_ask_otto_unauthenticated(mocker: pytest_mock.MockFixture) -> None:
"""Test Otto API request without authentication"""
# Remove the auth override to test unauthenticated access
app.dependency_overrides.clear()
# Mock auth_middleware to raise an exception
mocker.patch(
"autogpt_libs.auth.middleware.auth_middleware",
side_effect=fastapi.HTTPException(status_code=401, detail="Unauthorized"),
)
def test_ask_otto_unconfigured() -> None:
"""Test Otto API request without configuration"""
request_data = {
"query": "Test",
"conversation_history": [],
"message_id": "123",
}
# When Otto API URL is not configured, we get 502
response = client.post("/ask", json=request_data)
# When auth is disabled and Otto API URL is not configured, we get 502 (wrapped from 503)
assert response.status_code == 502
# Restore the override
app.dependency_overrides[autogpt_libs.auth.middleware.auth_middleware] = (
override_auth_middleware
)
app.dependency_overrides[autogpt_libs.auth.depends.get_user_id] = (
override_get_user_id
)

View File

@@ -1,3 +1,4 @@
import asyncio
import logging
from datetime import datetime, timezone
@@ -9,6 +10,7 @@ import prisma.types
import backend.server.v2.store.exceptions
import backend.server.v2.store.model
from backend.data.db import transaction
from backend.data.graph import (
GraphMeta,
GraphModel,
@@ -70,7 +72,7 @@ async def get_store_agents(
)
sanitized_query = sanitize_query(search_query)
where_clause = {}
where_clause: prisma.types.StoreAgentWhereInput = {"is_available": True}
if featured:
where_clause["featured"] = featured
if creators:
@@ -94,15 +96,13 @@ async def get_store_agents(
try:
agents = await prisma.models.StoreAgent.prisma().find_many(
where=prisma.types.StoreAgentWhereInput(**where_clause),
where=where_clause,
order=order_by,
skip=(page - 1) * page_size,
take=page_size,
)
total = await prisma.models.StoreAgent.prisma().count(
where=prisma.types.StoreAgentWhereInput(**where_clause)
)
total = await prisma.models.StoreAgent.prisma().count(where=where_clause)
total_pages = (total + page_size - 1) // page_size
store_agents: list[backend.server.v2.store.model.StoreAgent] = []
@@ -183,6 +183,13 @@ async def get_store_agent_details(
store_listing.hasApprovedVersion if store_listing else False
)
if store_listing and store_listing.ActiveVersion:
recommended_schedule_cron = (
store_listing.ActiveVersion.recommendedScheduleCron
)
else:
recommended_schedule_cron = None
logger.debug(f"Found agent details for {username}/{agent_name}")
return backend.server.v2.store.model.StoreAgentDetails(
store_listing_version_id=agent.storeListingVersionId,
@@ -190,8 +197,8 @@ async def get_store_agent_details(
agent_name=agent.agent_name,
agent_video=agent.agent_video or "",
agent_image=agent.agent_image,
creator=agent.creator_username,
creator_avatar=agent.creator_avatar,
creator=agent.creator_username or "",
creator_avatar=agent.creator_avatar or "",
sub_heading=agent.sub_heading,
description=agent.description,
categories=agent.categories,
@@ -201,6 +208,7 @@ async def get_store_agent_details(
last_updated=agent.updated_at,
active_version_id=active_version_id,
has_approved_version=has_approved_version,
recommended_schedule_cron=recommended_schedule_cron,
)
except backend.server.v2.store.exceptions.AgentNotFoundError:
raise
@@ -263,8 +271,8 @@ async def get_store_agent_by_version_id(
agent_name=agent.agent_name,
agent_video=agent.agent_video or "",
agent_image=agent.agent_image,
creator=agent.creator_username,
creator_avatar=agent.creator_avatar,
creator=agent.creator_username or "",
creator_avatar=agent.creator_avatar or "",
sub_heading=agent.sub_heading,
description=agent.description,
categories=agent.categories,
@@ -562,6 +570,7 @@ async def create_store_submission(
sub_heading: str = "",
categories: list[str] = [],
changes_summary: str | None = "Initial Submission",
recommended_schedule_cron: str | None = None,
) -> backend.server.v2.store.model.StoreSubmission:
"""
Create the first (and only) store listing and thus submission as a normal user
@@ -655,6 +664,7 @@ async def create_store_submission(
submissionStatus=prisma.enums.SubmissionStatus.PENDING,
submittedAt=datetime.now(tz=timezone.utc),
changesSummary=changes_summary,
recommendedScheduleCron=recommended_schedule_cron,
)
]
},
@@ -710,6 +720,7 @@ async def edit_store_submission(
sub_heading: str = "",
categories: list[str] = [],
changes_summary: str | None = "Update submission",
recommended_schedule_cron: str | None = None,
) -> backend.server.v2.store.model.StoreSubmission:
"""
Edit an existing store listing submission.
@@ -789,6 +800,7 @@ async def edit_store_submission(
sub_heading=sub_heading,
categories=categories,
changes_summary=changes_summary,
+recommended_schedule_cron=recommended_schedule_cron,
)
# For PENDING submissions, we can update the existing version
@@ -804,6 +816,7 @@ async def edit_store_submission(
categories=categories,
subHeading=sub_heading,
changesSummary=changes_summary,
+recommendedScheduleCron=recommended_schedule_cron,
),
)
@@ -866,6 +879,7 @@ async def create_store_version(
sub_heading: str = "",
categories: list[str] = [],
changes_summary: str | None = "Initial submission",
+recommended_schedule_cron: str | None = None,
) -> backend.server.v2.store.model.StoreSubmission:
"""
Create a new version for an existing store listing
@@ -935,6 +949,7 @@ async def create_store_version(
submissionStatus=prisma.enums.SubmissionStatus.PENDING,
submittedAt=datetime.now(),
changesSummary=changes_summary,
+recommendedScheduleCron=recommended_schedule_cron,
storeListingId=store_listing_id,
)
)
@@ -1150,6 +1165,7 @@ async def get_my_agents(
last_edited=graph.updatedAt or graph.createdAt,
description=graph.description or "",
agent_image=library_agent.imageUrl,
+recommended_schedule_cron=graph.recommendedScheduleCron,
)
for library_agent in library_agents
if (graph := library_agent.AgentGraph)
@@ -1171,10 +1187,7 @@ async def get_my_agents(
) from e
-async def get_agent(
-    user_id: str | None,
-    store_listing_version_id: str,
-) -> GraphModel:
+async def get_agent(store_listing_version_id: str) -> GraphModel:
"""Get agent using the version ID and store listing version ID."""
store_listing_version = (
await prisma.models.StoreListingVersion.prisma().find_unique(
@@ -1186,7 +1199,6 @@ async def get_agent(
raise ValueError(f"Store listing version {store_listing_version_id} not found")
graph = await get_graph(
-user_id=user_id,
graph_id=store_listing_version.agentGraphId,
version=store_listing_version.agentGraphVersion,
for_export=True,
@@ -1204,40 +1216,103 @@ async def get_agent(
#####################################################
-async def _get_missing_sub_store_listing(
-    graph: prisma.models.AgentGraph,
-) -> list[prisma.models.AgentGraph]:
-    """
-    Agent graph can have sub-graphs, and those sub-graphs also need to be store listed.
-    This method fetches the sub-graphs, and returns the ones not listed in the store.
-    """
-    sub_graphs = await get_sub_graphs(graph)
-    if not sub_graphs:
-        return []
-
-    # Fetch all the sub-graphs that are listed, and return the ones missing.
-    store_listed_sub_graphs = {
-        (listing.agentGraphId, listing.agentGraphVersion)
-        for listing in await prisma.models.StoreListingVersion.prisma().find_many(
-            where={
-                "OR": [
-                    {
-                        "agentGraphId": sub_graph.id,
-                        "agentGraphVersion": sub_graph.version,
-                    }
-                    for sub_graph in sub_graphs
-                ],
-                "submissionStatus": prisma.enums.SubmissionStatus.APPROVED,
-                "isDeleted": False,
-            }
-        )
-    }
-    return [
-        sub_graph
-        for sub_graph in sub_graphs
-        if (sub_graph.id, sub_graph.version) not in store_listed_sub_graphs
-    ]
+async def _approve_sub_agent(
+    tx,
+    sub_graph: prisma.models.AgentGraph,
+    main_agent_name: str,
+    main_agent_version: int,
+    main_agent_user_id: str,
+) -> None:
+    """Approve a single sub-agent by creating/updating store listings as needed"""
+    heading = f"Sub-agent of {main_agent_name} v{main_agent_version}"
+
+    # Find existing listing for this sub-agent
+    listing = await prisma.models.StoreListing.prisma(tx).find_first(
+        where={"agentGraphId": sub_graph.id, "isDeleted": False},
+        include={"Versions": True},
+    )
+
+    # Early return: Create new listing if none exists
+    if not listing:
+        await prisma.models.StoreListing.prisma(tx).create(
+            data=prisma.types.StoreListingCreateInput(
+                slug=f"sub-agent-{sub_graph.id[:8]}",
+                agentGraphId=sub_graph.id,
+                agentGraphVersion=sub_graph.version,
+                owningUserId=main_agent_user_id,
+                hasApprovedVersion=True,
+                Versions={
+                    "create": [
+                        _create_sub_agent_version_data(
+                            sub_graph, heading, main_agent_name
+                        )
+                    ]
+                },
+            )
+        )
+        return
+
+    # Find version matching this sub-graph
+    matching_version = next(
+        (
+            v
+            for v in listing.Versions or []
+            if v.agentGraphId == sub_graph.id
+            and v.agentGraphVersion == sub_graph.version
+        ),
+        None,
+    )
+
+    # Early return: Approve existing version if found and not already approved
+    if matching_version:
+        if matching_version.submissionStatus == prisma.enums.SubmissionStatus.APPROVED:
+            return  # Already approved, nothing to do
+
+        await prisma.models.StoreListingVersion.prisma(tx).update(
+            where={"id": matching_version.id},
+            data={
+                "submissionStatus": prisma.enums.SubmissionStatus.APPROVED,
+                "reviewedAt": datetime.now(tz=timezone.utc),
+            },
+        )
+        await prisma.models.StoreListing.prisma(tx).update(
+            where={"id": listing.id}, data={"hasApprovedVersion": True}
+        )
+        return
+
+    # Create new version if no matching version found
+    next_version = max((v.version for v in listing.Versions or []), default=0) + 1
+    await prisma.models.StoreListingVersion.prisma(tx).create(
+        data={
+            **_create_sub_agent_version_data(sub_graph, heading, main_agent_name),
+            "version": next_version,
+            "storeListingId": listing.id,
+        }
+    )
+    await prisma.models.StoreListing.prisma(tx).update(
+        where={"id": listing.id}, data={"hasApprovedVersion": True}
+    )
+
+
+def _create_sub_agent_version_data(
+    sub_graph: prisma.models.AgentGraph, heading: str, main_agent_name: str
+) -> prisma.types.StoreListingVersionCreateInput:
+    """Create store listing version data for a sub-agent"""
+    return prisma.types.StoreListingVersionCreateInput(
+        agentGraphId=sub_graph.id,
+        agentGraphVersion=sub_graph.version,
+        name=sub_graph.name or heading,
+        submissionStatus=prisma.enums.SubmissionStatus.APPROVED,
+        subHeading=heading,
+        description=(
+            f"{heading}: {sub_graph.description}" if sub_graph.description else heading
+        ),
+        changesSummary=f"Auto-approved as sub-agent of {main_agent_name}",
+        isAvailable=False,  # Sub-agents are hidden from the store
+        submittedAt=datetime.now(tz=timezone.utc),
+        imageUrls=[],  # Sub-agents don't need images
+        categories=[],  # Sub-agents don't need categories
+    )
async def review_store_submission(
@@ -1275,33 +1350,45 @@ async def review_store_submission(
# If approving, update the listing to indicate it has an approved version
if is_approved and store_listing_version.AgentGraph:
heading = f"Sub-graph of {store_listing_version.name}v{store_listing_version.agentGraphVersion}"
sub_store_listing_versions = [
prisma.types.StoreListingVersionCreateWithoutRelationsInput(
agentGraphId=sub_graph.id,
agentGraphVersion=sub_graph.version,
name=sub_graph.name or heading,
submissionStatus=prisma.enums.SubmissionStatus.APPROVED,
subHeading=heading,
description=f"{heading}: {sub_graph.description}",
changesSummary=f"This listing is added as a {heading} / #{store_listing_version.agentGraphId}.",
isAvailable=False, # Hide sub-graphs from the store by default.
submittedAt=datetime.now(tz=timezone.utc),
async with transaction() as tx:
# Handle sub-agent approvals in transaction
await asyncio.gather(
*[
_approve_sub_agent(
tx,
sub_graph,
store_listing_version.name,
store_listing_version.agentGraphVersion,
store_listing_version.StoreListing.owningUserId,
)
for sub_graph in await get_sub_graphs(
store_listing_version.AgentGraph
)
]
)
for sub_graph in await _get_missing_sub_store_listing(
store_listing_version.AgentGraph
)
]
await prisma.models.StoreListing.prisma().update(
where={"id": store_listing_version.StoreListing.id},
data={
"hasApprovedVersion": True,
"ActiveVersion": {"connect": {"id": store_listing_version_id}},
"Versions": {"create": sub_store_listing_versions},
},
)
# Update the AgentGraph with store listing data
await prisma.models.AgentGraph.prisma().update(
where={
"graphVersionId": {
"id": store_listing_version.agentGraphId,
"version": store_listing_version.agentGraphVersion,
}
},
data={
"name": store_listing_version.name,
"description": store_listing_version.description,
"recommendedScheduleCron": store_listing_version.recommendedScheduleCron,
},
)
await prisma.models.StoreListing.prisma(tx).update(
where={"id": store_listing_version.StoreListing.id},
data={
"hasApprovedVersion": True,
"ActiveVersion": {"connect": {"id": store_listing_version_id}},
},
)
# If rejecting an approved agent, update the StoreListing accordingly
if is_rejecting_approved:
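The new approval flow above fans out one `_approve_sub_agent` coroutine per sub-graph with `asyncio.gather`, all sharing the same transaction handle, and commits when the `async with` block exits. A self-contained sketch of that shape, with stand-ins for `backend.data.db.transaction` and the Prisma-backed approval coroutine:

```python
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def transaction():
    # Stand-in for backend.data.db.transaction: yields a transaction
    # handle and commits when the async-with block exits cleanly.
    tx = object()
    yield tx


async def approve_sub_agent(tx, sub_graph_id: str) -> None:
    # Placeholder for the find_first/create/update queries above.
    await asyncio.sleep(0)
    print(f"approved sub-agent {sub_graph_id}")


async def review(sub_graph_ids: list[str]) -> None:
    async with transaction() as tx:
        # One coroutine per sub-graph, all sharing the same tx handle;
        # gather() finishes before the transaction context closes.
        await asyncio.gather(*(approve_sub_agent(tx, gid) for gid in sub_graph_ids))


asyncio.run(review(["graph-1", "graph-2"]))
```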

View File

@@ -41,6 +41,7 @@ async def test_get_store_agents(mocker):
rating=4.5,
versions=["1.0"],
updated_at=datetime.now(),
+is_available=False,
)
]
@@ -82,12 +83,15 @@ async def test_get_store_agent_details(mocker):
rating=4.5,
versions=["1.0"],
updated_at=datetime.now(),
+is_available=False,
)
# Create a mock StoreListing result
mock_store_listing = mocker.MagicMock()
mock_store_listing.activeVersionId = "active-version-id"
mock_store_listing.hasApprovedVersion = True
+mock_store_listing.ActiveVersion = mocker.MagicMock()
+mock_store_listing.ActiveVersion.recommendedScheduleCron = None
# Mock StoreAgent prisma call
mock_store_agent = mocker.patch("prisma.models.StoreAgent.prisma")
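The test setup above leans on `MagicMock` auto-creating nested attributes, so `ActiveVersion.recommendedScheduleCron` can be pinned without constructing real Prisma models. A minimal standalone equivalent using `unittest.mock` directly (the attribute names mirror the diff, not a real schema):

```python
from unittest import mock

mock_store_listing = mock.MagicMock()
mock_store_listing.activeVersionId = "active-version-id"
mock_store_listing.hasApprovedVersion = True
mock_store_listing.ActiveVersion = mock.MagicMock()
mock_store_listing.ActiveVersion.recommendedScheduleCron = None

# The code under test now reads the pinned nested attribute:
assert mock_store_listing.ActiveVersion.recommendedScheduleCron is None
```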

View File

@@ -82,7 +82,7 @@ async def generate_agent_image_v2(graph: BaseGraph | AgentGraph) -> io.BytesIO:
type=ideogram_credentials.type,
),
prompt=prompt,
-ideogram_model_name=IdeogramModelName.V2,
+ideogram_model_name=IdeogramModelName.V3,
aspect_ratio=AspectRatio.ASPECT_16_9,
magic_prompt_option=MagicPromptOption.OFF,
style_type=StyleType.AUTO,

View File

@@ -14,6 +14,7 @@ class MyAgent(pydantic.BaseModel):
agent_image: str | None = None
description: str
last_edited: datetime.datetime
+recommended_schedule_cron: str | None = None
class MyAgentsResponse(pydantic.BaseModel):
@@ -53,6 +54,7 @@ class StoreAgentDetails(pydantic.BaseModel):
rating: float
versions: list[str]
last_updated: datetime.datetime
+recommended_schedule_cron: str | None = None
active_version_id: str | None = None
has_approved_version: bool = False
@@ -157,6 +159,7 @@ class StoreSubmissionRequest(pydantic.BaseModel):
description: str = ""
categories: list[str] = []
changes_summary: str | None = None
+recommended_schedule_cron: str | None = None
class StoreSubmissionEditRequest(pydantic.BaseModel):
@@ -167,6 +170,7 @@ class StoreSubmissionEditRequest(pydantic.BaseModel):
description: str = ""
categories: list[str] = []
changes_summary: str | None = None
+recommended_schedule_cron: str | None = None
class ProfileDetails(pydantic.BaseModel):
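All four models above gain the same shape of field: an optional cron string defaulting to `None`, so existing clients that omit it keep validating. A small stand-in model (not the project's `MyAgent`) showing the behavior:

```python
import datetime

import pydantic


class MyAgent(pydantic.BaseModel):
    # Reduced stand-in; only the new optional field matters here.
    description: str
    last_edited: datetime.datetime
    recommended_schedule_cron: str | None = None  # e.g. "0 9 * * 1-5"


agent = MyAgent(description="demo", last_edited=datetime.datetime.now())
print(agent.recommended_schedule_cron)  # None when the caller omits the field
```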

View File

@@ -3,13 +3,10 @@ import tempfile
import typing
import urllib.parse
-import autogpt_libs.auth.depends
-import autogpt_libs.auth.middleware
+import autogpt_libs.auth
import fastapi
import fastapi.responses
-from autogpt_libs.auth.depends import auth_middleware, get_user_id
import backend.data.block
import backend.data.graph
import backend.server.v2.store.db
import backend.server.v2.store.exceptions
@@ -32,12 +29,11 @@ router = fastapi.APIRouter()
"/profile",
summary="Get user profile",
tags=["store", "private"],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.ProfileDetails,
)
async def get_profile(
-user_id: typing.Annotated[
-    str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)
-],
+user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
Get the profile details for the authenticated user.
@@ -65,14 +61,12 @@ async def get_profile(
"/profile",
summary="Update user profile",
tags=["store", "private"],
-dependencies=[fastapi.Depends(autogpt_libs.auth.middleware.auth_middleware)],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.CreatorDetails,
)
async def update_or_create_profile(
profile: backend.server.v2.store.model.Profile,
-user_id: typing.Annotated[
-    str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)
-],
+user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
Update the store profile for the authenticated user.
@@ -215,11 +209,9 @@ async def get_agent(username: str, agent_name: str):
"/graph/{store_listing_version_id}",
summary="Get agent graph",
tags=["store"],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
)
-async def get_graph_meta_by_store_listing_version_id(
-    store_listing_version_id: str,
-    _: typing.Annotated[str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)],
-):
+async def get_graph_meta_by_store_listing_version_id(store_listing_version_id: str):
"""
Get Agent Graph from Store Listing Version ID.
"""
@@ -240,12 +232,10 @@ async def get_graph_meta_by_store_listing_version_id(
"/agents/{store_listing_version_id}",
summary="Get agent by version",
tags=["store"],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.StoreAgentDetails,
)
-async def get_store_agent(
-    store_listing_version_id: str,
-    _: typing.Annotated[str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)],
-):
+async def get_store_agent(store_listing_version_id: str):
"""
Get Store Agent Details from Store Listing Version ID.
"""
@@ -266,16 +256,14 @@ async def get_store_agent(
"/agents/{username}/{agent_name}/review",
summary="Create agent review",
tags=["store"],
-dependencies=[fastapi.Depends(autogpt_libs.auth.middleware.auth_middleware)],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.StoreReview,
)
async def create_review(
username: str,
agent_name: str,
review: backend.server.v2.store.model.StoreReviewCreate,
-user_id: typing.Annotated[
-    str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)
-],
+user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
Create a review for a store agent.
@@ -402,13 +390,11 @@ async def get_creator(
"/myagents",
summary="Get my agents",
tags=["store", "private"],
-dependencies=[fastapi.Depends(autogpt_libs.auth.middleware.auth_middleware)],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.MyAgentsResponse,
)
async def get_my_agents(
-user_id: typing.Annotated[
-    str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)
-],
+user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
page: typing.Annotated[int, fastapi.Query(ge=1)] = 1,
page_size: typing.Annotated[int, fastapi.Query(ge=1)] = 20,
):
@@ -429,14 +415,12 @@ async def get_my_agents(
"/submissions/{submission_id}",
summary="Delete store submission",
tags=["store", "private"],
-dependencies=[fastapi.Depends(autogpt_libs.auth.middleware.auth_middleware)],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=bool,
)
async def delete_submission(
-user_id: typing.Annotated[
-    str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)
-],
submission_id: str,
+user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
Delete a store listing submission.
@@ -466,13 +450,11 @@ async def delete_submission(
"/submissions",
summary="List my submissions",
tags=["store", "private"],
-dependencies=[fastapi.Depends(autogpt_libs.auth.middleware.auth_middleware)],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.StoreSubmissionsResponse,
)
async def get_submissions(
-user_id: typing.Annotated[
-    str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)
-],
+user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
page: int = 1,
page_size: int = 20,
):
@@ -520,14 +502,12 @@ async def get_submissions(
"/submissions",
summary="Create store submission",
tags=["store", "private"],
-dependencies=[fastapi.Depends(autogpt_libs.auth.middleware.auth_middleware)],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.StoreSubmission,
)
async def create_submission(
submission_request: backend.server.v2.store.model.StoreSubmissionRequest,
-user_id: typing.Annotated[
-    str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)
-],
+user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
Create a new store listing submission.
@@ -555,6 +535,7 @@ async def create_submission(
sub_heading=submission_request.sub_heading,
categories=submission_request.categories,
changes_summary=submission_request.changes_summary or "Initial Submission",
+recommended_schedule_cron=submission_request.recommended_schedule_cron,
)
except Exception:
logger.exception("Exception occurred whilst creating store submission")
@@ -568,15 +549,13 @@ async def create_submission(
"/submissions/{store_listing_version_id}",
summary="Edit store submission",
tags=["store", "private"],
-dependencies=[fastapi.Depends(autogpt_libs.auth.middleware.auth_middleware)],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
response_model=backend.server.v2.store.model.StoreSubmission,
)
async def edit_submission(
store_listing_version_id: str,
submission_request: backend.server.v2.store.model.StoreSubmissionEditRequest,
-user_id: typing.Annotated[
-    str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)
-],
+user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
Edit an existing store listing submission.
@@ -602,6 +581,7 @@ async def edit_submission(
sub_heading=submission_request.sub_heading,
categories=submission_request.categories,
changes_summary=submission_request.changes_summary,
+recommended_schedule_cron=submission_request.recommended_schedule_cron,
)
@@ -609,13 +589,11 @@ async def edit_submission(
"/submissions/media",
summary="Upload submission media",
tags=["store", "private"],
-dependencies=[fastapi.Depends(autogpt_libs.auth.middleware.auth_middleware)],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
)
async def upload_submission_media(
file: fastapi.UploadFile,
-user_id: typing.Annotated[
-    str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)
-],
+user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
):
"""
Upload media (images/videos) for a store listing submission.
@@ -666,13 +644,11 @@ async def upload_submission_media(
"/submissions/generate_image",
summary="Generate submission image",
tags=["store", "private"],
-dependencies=[fastapi.Depends(autogpt_libs.auth.middleware.auth_middleware)],
+dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
)
async def generate_image(
agent_id: str,
-user_id: typing.Annotated[
-    str, fastapi.Depends(autogpt_libs.auth.depends.get_user_id)
-],
+user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
) -> fastapi.responses.Response:
"""
Generate an image for a store listing submission.
@@ -730,7 +706,6 @@ async def generate_image(
tags=["store", "public"],
)
async def download_agent_file(
-request: fastapi.Request,
store_listing_version_id: str = fastapi.Path(
..., description="The ID of the agent to download"
),
@@ -747,15 +722,7 @@ async def download_agent_file(
Raises:
HTTPException: If the agent is not found or an unexpected error occurs.
"""
-try:
-    user_id = get_user_id(await auth_middleware(request))
-except fastapi.HTTPException:
-    user_id = None
-graph_data = await backend.server.v2.store.db.get_agent(
-    user_id=user_id,
-    store_listing_version_id=store_listing_version_id,
-)
+graph_data = await backend.server.v2.store.db.get_agent(store_listing_version_id)
file_name = f"agent_{graph_data.id}_v{graph_data.version or 'latest'}.json"
# Sending graph as a stream (similar to marketplace v1)
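Across these routes, the change swaps `fastapi.Depends(auth_middleware)` plus a `typing.Annotated` `get_user_id` parameter for a single `fastapi.Security(...)`, used both at decorator level (to enforce auth) and as a parameter default (to inject the user id). A runnable sketch with simplified stand-ins for `autogpt_libs.auth.requires_user` and `get_user_id`:

```python
import fastapi


async def requires_user() -> dict:
    # Stand-in for autogpt_libs.auth.requires_user: a real implementation
    # would validate the request's JWT and raise 401 on failure.
    return {"sub": "user-123"}


async def get_user_id(payload: dict = fastapi.Security(requires_user)) -> str:
    # Stand-in for autogpt_libs.auth.get_user_id.
    return payload["sub"]


app = fastapi.FastAPI()


# Decorator-level Security enforces auth even when the handler does not
# need the user id; the parameter default injects it when it does.
@app.get("/myagents", dependencies=[fastapi.Security(requires_user)])
async def get_my_agents(user_id: str = fastapi.Security(get_user_id)):
    return {"user_id": user_id}
```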

View File

@@ -1,11 +1,10 @@
import datetime
import json
-import autogpt_libs.auth.depends
-import autogpt_libs.auth.middleware
import fastapi
import fastapi.testclient
import prisma.enums
import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot
@@ -22,20 +21,14 @@ app.include_router(backend.server.v2.store.routes.router)
client = fastapi.testclient.TestClient(app)
-def override_auth_middleware() -> dict[str, str]:
-    """Override auth middleware for testing"""
-    return {"sub": "test-user-id"}
-
-
-def override_get_user_id() -> str:
-    """Override get_user_id for testing"""
-    return "test-user-id"
-
-
-app.dependency_overrides[autogpt_libs.auth.middleware.auth_middleware] = (
-    override_auth_middleware
-)
-app.dependency_overrides[autogpt_libs.auth.depends.get_user_id] = override_get_user_id
+@pytest.fixture(autouse=True)
+def setup_app_auth(mock_jwt_user):
+    """Setup auth overrides for all tests in this module"""
+    from autogpt_libs.auth.jwt_utils import get_jwt_payload
+
+    app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
+    yield
+    app.dependency_overrides.clear()
def test_get_agents_defaults(
@@ -536,6 +529,7 @@ def test_get_creator_details(
def test_get_submissions_success(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
+test_user_id: str,
) -> None:
mocked_value = backend.server.v2.store.model.StoreSubmissionsResponse(
submissions=[
@@ -576,12 +570,13 @@ def test_get_submissions_success(
assert data.pagination.current_page == 1
snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(json.dumps(response.json(), indent=2), "sub_success")
mock_db_call.assert_called_once_with(user_id="test-user-id", page=1, page_size=20)
mock_db_call.assert_called_once_with(user_id=test_user_id, page=1, page_size=20)
def test_get_submissions_pagination(
mocker: pytest_mock.MockFixture,
snapshot: Snapshot,
+test_user_id: str,
) -> None:
mocked_value = backend.server.v2.store.model.StoreSubmissionsResponse(
submissions=[],
@@ -605,7 +600,7 @@ def test_get_submissions_pagination(
assert data.pagination.page_size == 5
snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(json.dumps(response.json(), indent=2), "sub_pagination")
mock_db_call.assert_called_once_with(user_id="test-user-id", page=2, page_size=5)
mock_db_call.assert_called_once_with(user_id=test_user_id, page=2, page_size=5)
def test_get_submissions_malformed_request(mocker: pytest_mock.MockFixture):
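The module-level overrides above become a single autouse fixture: the fake JWT payload is installed before each test and cleared afterwards, so no test can leak auth state into another. A self-contained sketch of the pattern, with a stand-in `get_jwt_payload` rather than the one from `autogpt_libs.auth.jwt_utils`:

```python
import fastapi
import fastapi.testclient
import pytest

app = fastapi.FastAPI()


def get_jwt_payload() -> dict:
    # Stand-in for autogpt_libs.auth.jwt_utils.get_jwt_payload.
    raise NotImplementedError  # always overridden by the fixture below


@app.get("/me")
def me(payload: dict = fastapi.Depends(get_jwt_payload)):
    return {"user_id": payload["sub"]}


@pytest.fixture(autouse=True)
def setup_app_auth():
    # Install the fake payload before every test, remove it afterwards.
    app.dependency_overrides[get_jwt_payload] = lambda: {"sub": "test-user-id"}
    yield
    app.dependency_overrides.clear()


def test_me():
    client = fastapi.testclient.TestClient(app)
    assert client.get("/me").json() == {"user_id": "test-user-id"}
```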

Some files were not shown because too many files have changed in this diff.