Compare commits

...

154 Commits

Author SHA1 Message Date
Zamil Majdy
47a996e75d lint 2024-12-09 15:17:35 -06:00
Zamil Majdy
2ff033aeb2 Address comments 2024-12-09 15:13:34 -06:00
abhi
95b1beecaf allowing default empty string to pass the test 2024-12-08 11:01:12 +05:30
abhi1992002
cb5452f7ae fix(graph): formatting 2024-12-03 11:32:14 +05:30
abhi1992002
02a60966d4 feat(validation): Enhance input validation for dependencies in GraphModel
- Updated GraphModel to validate field dependencies and required fields in input data.
2024-12-03 11:25:28 +05:30
abhi1992002
f28112f5e2 refactor(validation): Remove InputValidationBlock class
- Deleted the InputValidationBlock class from checking_input_validation.py, which was previously used for input validation with required and dependent fields.
2024-12-02 22:39:47 +05:30
abhi1992002
20f018b2e5 feat(validation): Enhance input validation with 'depends_on' support
- Updated InputValidationBlock to include default values for required fields.
- Implemented validation logic in manager.py to ensure dependent fields are validated based on their dependencies.
2024-12-02 22:36:11 +05:30
Abhimanyu Yadav
6f144ef77a Update useAgentGraph.ts 2024-12-02 11:15:28 +05:30
abhi1992002
bd1fa8f6c2 feat(validation): Enhance schema validation with 'depends_on' support
- Added 'depends_on' parameter to SchemaField in model.py to specify field dependencies.
- Updated useAgentGraph hook to validate input fields based on their dependencies, ensuring required fields are set when dependent fields are filled.
- Modified BlockIOSubSchemaMeta to include 'depends_on' as an optional property.
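
A minimal sketch of the `depends_on` rule described above (hypothetical helper and field shapes; the actual checks live in `model.py` and the `useAgentGraph` hook): when a field that declares dependencies is filled, every field it depends on must also be set.

```python
def validate_depends_on(
    input_data: dict, depends_on: dict[str, list[str]]
) -> list[str]:
    """Return one error per filled field whose declared dependencies are unset."""
    errors = []
    for field, deps in depends_on.items():
        if not input_data.get(field):
            continue  # field left empty, so its dependencies are not required
        for dep in deps:
            if not input_data.get(dep):
                errors.append(f"'{field}' is set, so '{dep}' is required")
    return errors

# validate_depends_on({"api_key": "sk-123", "org_id": ""}, {"api_key": ["org_id"]})
# -> ["'api_key' is set, so 'org_id' is required"]
```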
2024-12-02 11:02:00 +05:30
Zamil Majdy
2121ffd06b chore(platform): Bump version to v0.3.4 2024-12-02 09:08:19 +07:00
Zamil Majdy
0c2940353f hotfix(backend): Fix month credit calculation on December (#8851) 2024-12-02 09:00:01 +07:00
Zamil Majdy
d26105d382 hotfix(backend): Fix month credit calculation on December (#8851)
When calculating the next month, we were not rolling over the month
number, which caused an error in the credit calculation.

### Changes 🏗️

Add a modulo when calculating the next month.
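
A minimal sketch of the fix (assumed names; the actual backend code may differ): without the modulo, `month + 1` yields 13 in December and `datetime.replace()` raises `ValueError`.

```python
from datetime import datetime

def next_month_start(now: datetime) -> datetime:
    """First instant of the month after `now`, rolling December into January."""
    month = now.month % 12 + 1                     # 12 -> 1 instead of 13
    year = now.year + (1 if now.month == 12 else 0)
    return now.replace(year=year, month=month, day=1,
                       hour=0, minute=0, second=0, microsecond=0)

# next_month_start(datetime(2024, 12, 2)) -> datetime(2025, 1, 1, 0, 0)
```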

2024-12-02 08:47:40 +07:00
dependabot[bot]
7d48eebc78 build(deps): bump pydantic from 2.9.2 to 2.10.2 in /autogpt_platform/autogpt_libs in the production-dependencies group across 1 directory (#8787)
Bumps the production-dependencies group with 1 update in the
/autogpt_platform/autogpt_libs directory:
[pydantic](https://github.com/pydantic/pydantic).

Updates `pydantic` from 2.9.2 to 2.10.2
<details>
<summary>Release notes</summary>

*Sourced from [pydantic's releases](https://github.com/pydantic/pydantic/releases).*

**v2.10.2 (2024-11-26)**

Fixes:

- Only evaluate `FieldInfo` annotations if required during schema building by [@Viicos](https://github.com/Viicos) in [#10769](https://redirect.github.com/pydantic/pydantic/pull/10769)
- Do not evaluate annotations for private fields by [@Viicos](https://github.com/Viicos) in [#10962](https://redirect.github.com/pydantic/pydantic/pull/10962)
- Support serialization as any for `Secret` types and `Url` types by [@sydney-runkle](https://github.com/sydney-runkle) in [#10947](https://redirect.github.com/pydantic/pydantic/pull/10947)
- Fix type hint of `Field.default` to be compatible with Python 3.8 and 3.9 by [@Viicos](https://github.com/Viicos) in [#10972](https://redirect.github.com/pydantic/pydantic/pull/10972)
- Add hashing support for URL types by [@sydney-runkle](https://github.com/sydney-runkle) in [#10975](https://redirect.github.com/pydantic/pydantic/pull/10975)
- Hide `BaseModel.__replace__` definition from type checkers by [@Viicos](https://github.com/Viicos) in [#10979](https://redirect.github.com/pydantic/pydantic/pull/10979)

**Full Changelog**: https://github.com/pydantic/pydantic/compare/v2.10.1...v2.10.2

**v2.10.1 (2024-11-21)**

Packaging:

- Bump `pydantic-core` version to `v2.27.1` by [@sydney-runkle](https://github.com/sydney-runkle) in [#10938](https://redirect.github.com/pydantic/pydantic/pull/10938)

Fixes:

- Use the correct frame when instantiating a parametrized `TypeAdapter` by [@Viicos](https://github.com/Viicos) in [#10893](https://redirect.github.com/pydantic/pydantic/pull/10893)
- Relax check for validated data in `default_factory` utils by [@sydney-runkle](https://github.com/sydney-runkle) in [#10909](https://redirect.github.com/pydantic/pydantic/pull/10909)
- Fix type checking issue with `model_fields` and `model_computed_fields` by [@sydney-runkle](https://github.com/sydney-runkle) in [#10911](https://redirect.github.com/pydantic/pydantic/pull/10911)
- Use the parent configuration during schema generation for stdlib `dataclass`es by [@sydney-runkle](https://github.com/sydney-runkle) in [#10928](https://redirect.github.com/pydantic/pydantic/pull/10928)
- Use the `globals` of the function when evaluating the return type of serializers and `computed_field`s by [@Viicos](https://github.com/Viicos) in [#10929](https://redirect.github.com/pydantic/pydantic/pull/10929)
- Fix URL constraint application by [@sydney-runkle](https://github.com/sydney-runkle) in [#10922](https://redirect.github.com/pydantic/pydantic/pull/10922)
- Fix URL equality with different validation methods by [@sydney-runkle](https://github.com/sydney-runkle) in [#10934](https://redirect.github.com/pydantic/pydantic/pull/10934)
- Fix JSON schema title when specified as `''` by [@sydney-runkle](https://github.com/sydney-runkle) in [#10936](https://redirect.github.com/pydantic/pydantic/pull/10936)
- Fix `python` mode serialization for `complex` inference by [@sydney-runkle](https://github.com/sydney-runkle) in [pydantic-core#1549](https://redirect.github.com/pydantic/pydantic-core/pull/1549)

**Full Changelog**: https://github.com/pydantic/pydantic/compare/v2.10.0...v2.10.1

**v2.10.0 (2024-11-20)**

The code released in v2.10.0 is practically identical to that of v2.10.0b2. See the [v2.10 release blog post](https://pydantic.dev/articles/pydantic-v2-10-release) for the highlights!

Packaging:

- Bump `pydantic-core` to `v2.27.0` by [@sydney-runkle](https://github.com/sydney-runkle) in [#10825](https://redirect.github.com/pydantic/pydantic/pull/10825)
- Replaced pdm with uv by [@frfahim](https://github.com/frfahim) in [#10727](https://redirect.github.com/pydantic/pydantic/pull/10727)

New Features:

- Support `fractions.Fraction` by [@sydney-runkle](https://github.com/sydney-runkle) in [#10318](https://redirect.github.com/pydantic/pydantic/pull/10318)
- Support `Hashable` for json validation by [@sydney-runkle](https://github.com/sydney-runkle) in [#10324](https://redirect.github.com/pydantic/pydantic/pull/10324)

... (truncated)
</details>
<details>
<summary>Commits</summary>

- `fe32515498` Prepare for v2.10.2 release ([#10982](https://redirect.github.com/pydantic/pydantic/issues/10982))
- `226cfaf62b` Hide `BaseModel.__replace__` definition from type checkers ([#10979](https://redirect.github.com/pydantic/pydantic/issues/10979))
- `02229a6ab1` hashing support for urls ([#10975](https://redirect.github.com/pydantic/pydantic/issues/10975))
- `a9cf39c32a` Fix type hint of `Field.default` to be compatible with Python 3.8 and 3.9 ([#1](https://redirect.github.com/pydantic/pydantic/issues/1)...)
- `869eafd70b` Support serialization as any for `Secret` types and `Url` types ([#10947](https://redirect.github.com/pydantic/pydantic/issues/10947))
- `7c0ed72aa2` Do not evaluate annotations for private fields ([#10962](https://redirect.github.com/pydantic/pydantic/issues/10962))
- `d6fc7fce7d` Only evaluate `FieldInfo` annotations if required during schema building ([#10](https://redirect.github.com/pydantic/pydantic/issues/10)...)
- `17e60fafd6` spacing
- `369b355dba` remove typo
- `4c75404d6b` Prepare for v2.10.1 release ([#10939](https://redirect.github.com/pydantic/pydantic/issues/10939))
- Additional commits viewable in [compare view](https://github.com/pydantic/pydantic/compare/v2.9.2...v2.10.2)
</details>


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pydantic&package-manager=pip&previous-version=2.9.2&new-version=2.10.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-11-29 13:41:00 +00:00
dependabot[bot]
c6b36fbad7 build(deps): bump the production-dependencies group in /autogpt_platform/market with 2 updates (#8757)
Bumps the production-dependencies group in /autogpt_platform/market with
2 updates: [uvicorn](https://github.com/encode/uvicorn) and
[sentry-sdk](https://github.com/getsentry/sentry-python).

Updates `uvicorn` from 0.32.0 to 0.32.1
<details>
<summary>Release notes</summary>

*Sourced from [uvicorn's releases](https://github.com/encode/uvicorn/releases).*

**Version 0.32.1**

- Enable httptools lenient data by [@vvanglro](https://github.com/vvanglro) in [encode/uvicorn#2488](https://redirect.github.com/encode/uvicorn/pull/2488)
- Drop ASGI spec version to 2.3 on HTTP scope by [@Kludex](https://github.com/Kludex) in [encode/uvicorn#2513](https://redirect.github.com/encode/uvicorn/pull/2513)

**Full Changelog**: https://github.com/encode/uvicorn/compare/0.32.0...0.32.1
</details>
<details>
<summary>Commits</summary>

- `5279296e62` Upgrade upload/download GitHub Actions ([#2517](https://redirect.github.com/encode/uvicorn/issues/2517))
- `8c3402dd22` Update `publish.yaml` with latest PyPI recommendations ([#2516](https://redirect.github.com/encode/uvicorn/issues/2516))
- `04c6320f39` Version 0.32.1 ([#2515](https://redirect.github.com/encode/uvicorn/issues/2515))
- `fc6c51b8bb` Drop ASGI spec version to 2.3 on HTTP ([#2513](https://redirect.github.com/encode/uvicorn/issues/2513))
- `2aea8354ea` fix(http): enable httptools lenient data ([#2488](https://redirect.github.com/encode/uvicorn/issues/2488))
- See full diff in [compare view](https://github.com/encode/uvicorn/compare/0.32.0...0.32.1)
</details>

Updates `sentry-sdk` from 2.18.0 to 2.19.0
<details>
<summary>Release notes</summary>

*Sourced from [sentry-sdk's releases](https://github.com/getsentry/sentry-python/releases).*

**2.19.0**

Various fixes & improvements:

- New: introduce `rust_tracing` integration. See https://docs.sentry.io/platforms/python/integrations/rust_tracing/ ([#3717](https://redirect.github.com/getsentry/sentry-python/issues/3717)) by [@matt-codecov](https://github.com/matt-codecov)
- Auto enable Litestar integration ([#3540](https://redirect.github.com/getsentry/sentry-python/issues/3540)) by [@provinzkraut](https://github.com/provinzkraut)
- Deprecate `sentry_sdk.init` context manager ([#3729](https://redirect.github.com/getsentry/sentry-python/issues/3729)) by [@szokeasaurusrex](https://github.com/szokeasaurusrex)
- feat(spotlight): Send PII to Spotlight when no DSN is set ([#3804](https://redirect.github.com/getsentry/sentry-python/issues/3804)) by [@BYK](https://github.com/BYK)
- feat(spotlight): Add info logs when Sentry is enabled ([#3735](https://redirect.github.com/getsentry/sentry-python/issues/3735)) by [@BYK](https://github.com/BYK)
- feat(spotlight): Inject Spotlight button on Django ([#3751](https://redirect.github.com/getsentry/sentry-python/issues/3751)) by [@BYK](https://github.com/BYK)
- feat(spotlight): Auto enable cache_spans for Spotlight on DEBUG ([#3791](https://redirect.github.com/getsentry/sentry-python/issues/3791)) by [@BYK](https://github.com/BYK)
- fix(logging): Handle parameter `stack_info` for the `LoggingIntegration` ([#3745](https://redirect.github.com/getsentry/sentry-python/issues/3745)) by [@gmcrocetti](https://github.com/gmcrocetti)
- fix(pure-eval): Make sentry-sdk[pure-eval] installable with pip==24.0 ([#3757](https://redirect.github.com/getsentry/sentry-python/issues/3757)) by [@sentrivana](https://github.com/sentrivana)
- fix(rust_tracing): include_tracing_fields arg to control unvetted data in rust_tracing integration ([#3780](https://redirect.github.com/getsentry/sentry-python/issues/3780)) by [@matt-codecov](https://github.com/matt-codecov)
- fix(aws) Fix aws lambda tests (by reducing event size) ([#3770](https://redirect.github.com/getsentry/sentry-python/issues/3770)) by [@antonpirker](https://github.com/antonpirker)
- fix(arq): fix integration with Worker settings as a dict ([#3742](https://redirect.github.com/getsentry/sentry-python/issues/3742)) by [@saber-solooki](https://github.com/saber-solooki)
- fix(httpx): Prevent Sentry baggage duplication ([#3728](https://redirect.github.com/getsentry/sentry-python/issues/3728)) by [@szokeasaurusrex](https://github.com/szokeasaurusrex)
- fix(falcon): Don't exhaust request body stream ([#3768](https://redirect.github.com/getsentry/sentry-python/issues/3768)) by [@szokeasaurusrex](https://github.com/szokeasaurusrex)
- fix(integrations): Check `retries_left` before capturing exception ([#3803](https://redirect.github.com/getsentry/sentry-python/issues/3803)) by [@malkovro](https://github.com/malkovro)
- fix(openai): Use name instead of description ([#3807](https://redirect.github.com/getsentry/sentry-python/issues/3807)) by [@sourceful-rob](https://github.com/sourceful-rob)
- test(gcp): Only run GCP tests when they should ([#3721](https://redirect.github.com/getsentry/sentry-python/issues/3721)) by [@szokeasaurusrex](https://github.com/szokeasaurusrex)
- chore: Shorten CI workflow names ([#3805](https://redirect.github.com/getsentry/sentry-python/issues/3805)) by [@sentrivana](https://github.com/sentrivana)
- chore: Test with pyspark prerelease ([#3760](https://redirect.github.com/getsentry/sentry-python/issues/3760)) by [@sentrivana](https://github.com/sentrivana)
- build(deps): bump codecov/codecov-action from 4.6.0 to 5.0.2 ([#3792](https://redirect.github.com/getsentry/sentry-python/issues/3792)) by [@dependabot](https://github.com/dependabot)
- build(deps): bump actions/checkout from 4.2.1 to 4.2.2 ([#3691](https://redirect.github.com/getsentry/sentry-python/issues/3691)) by [@dependabot](https://github.com/dependabot)
</details>
<details>
<summary>Commits</summary>

- `039c220bcb` Updated changelog
- `c83e7428f4` release: 2.19.0
- `8fe5bb4b19` feat: Send PII to Spotlight when no DSN is set ([#3804](https://redirect.github.com/getsentry/sentry-python/issues/3804))
- `295dd8d50f` Auto enable Litestar integration ([#3540](https://redirect.github.com/getsentry/sentry-python/issues/3540))
- `bd50c38652` fix(httpx): Prevent Sentry baggage duplication ([#3728](https://redirect.github.com/getsentry/sentry-python/issues/3728))
- `e9ec6c1812` test(gcp): Only run GCP tests when they should ([#3721](https://redirect.github.com/getsentry/sentry-python/issues/3721))
- `aa6e8fd05c` fix(falcon): Don't exhaust request body stream ([#3768](https://redirect.github.com/getsentry/sentry-python/issues/3768))
- `3e2885322a` fix(integrations): Check retries_left before capturing exception ([#3803](https://redirect.github.com/getsentry/sentry-python/issues/3803))
- `01146bd3ad` fix(openai): Use name instead of description ([#3807](https://redirect.github.com/getsentry/sentry-python/issues/3807))
- `d894fc2320` Shorten CI workflow names ([#3805](https://redirect.github.com/getsentry/sentry-python/issues/3805))
- Additional commits viewable in [compare view](https://github.com/getsentry/sentry-python/compare/2.18.0...2.19.0)
</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-29 13:38:30 +00:00
Toran Bruce Richards
4aa5f53710 feat(block): Add AI video generator block with Fal txt 2 vid (#8528)
### Background

Implements an AI Video Generator Block for text-to-video models hosted
on Fal


![image](https://github.com/user-attachments/assets/9cb70015-4174-4419-8c1a-4144f324442f)

---------

Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
Co-authored-by: Aarushi <aarushik93@gmail.com>
2024-11-29 13:04:42 +00:00
Nicholas Tindle
75f9b072a6 refactor(backend): Rename & move IntegrationCredentialsStore to backend (#8648)
- Move `autogpt_libs.supabase_integration_credentials_store` into `backend`
  - `.store` -> `backend.integrations.credentials_store`
  - `.types` -> added to `backend.data.model`
- Rename `SupabaseIntegrationCredentialsStore` to `IntegrationCredentialsStore`
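
For illustration, the (assumed) before/after import paths implied by the mapping above:

```python
# Before:
#   from autogpt_libs.supabase_integration_credentials_store import (
#       SupabaseIntegrationCredentialsStore,
#   )
#
# After:
#   from backend.integrations.credentials_store import IntegrationCredentialsStore
```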

We wanted to get a few security things in quickly in #8403 and had to
make some compromises to do so. This picks those up and fixes them.

- Resolves #8540

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-11-29 11:48:04 +00:00
Zamil Majdy
63af42dafb fix(backend): Fix conn_retry decorator possible incorrect behaviour on failed async function (#8836)
This fix was triggered by an error observed on DB connection failure on
Supabase:
```
2024-11-28 07:45:24,724 INFO  [DatabaseManager] Starting...
2024-11-28 07:45:24,726 INFO  [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection started...
2024-11-28 07:45:24,726 INFO  [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection completed successfully.
{"is_panic":false,"message":"Can't reach database server at `...pooler.supabase.com:5432`\n\nPlease make sure your database server is running at `....pooler.supabase.com:5432`.","meta":{"database_host":"...pooler.supabase.com","database_port":5432},"error_code":"P1001"}
2024-11-28 07:45:35,153 INFO  [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection failed: Could not connect to the query engine. Retrying now...
2024-11-28 07:45:36,155 INFO  [PID-18|DatabaseManager|Redis-e14a33de-2d81-4536-b48b-a8aa4b1f4766] Acquiring connection started...
2024-11-28 07:45:36,181 INFO  [PID-18|DatabaseManager|Redis-e14a33de-2d81-4536-b48b-a8aa4b1f4766] Acquiring connection completed successfully.
2024-11-28 07:45:36,183 INFO  [PID-18|DatabaseManager|Pyro-2722cd29-4dbd-4cf9-882f-73842658599d] Starting Pyro Service started...
2024-11-28 07:45:36,189 INFO  [DatabaseManager] Connected to Pyro; URI = PYRO:DatabaseManager@0.0.0.0:8005
2024-11-28 07:46:28,241 ERROR  Error in get_user_integrations: All connection attempts failed
```

Even though the retry log
```
2024-11-28 07:45:35,153 INFO  [PID-18|DatabaseManager|Prisma-7f32369c-6432-4edb-8e71-ef820332b9e4] Acquiring connection failed: Could not connect to the query engine. Retrying now...
```
is present, the Redis connection still proceeds without waiting for the
retry to complete. This was likely caused by Tenacity not fully
awaiting the DB connection acquisition command.

### Changes 🏗️

* Add special handling for the async function to explicitly await the
function execution result on each retry.
* Explicitly raise an exception on `db.connect()` if the DB is still not
connected even after the `prisma.connect()` command.
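
A minimal sketch of the async-aware retry handling (hypothetical decorator; the real `conn_retry` builds on Tenacity and differs in detail). The key point is that for a coroutine function the wrapper itself must be async and `await` the call, so a connection failure surfaces inside the retry loop instead of escaping as an un-awaited coroutine:

```python
import asyncio
import functools
import time

def conn_retry(retries: int = 3, delay: float = 1.0):
    def decorator(func):
        if asyncio.iscoroutinefunction(func):
            @functools.wraps(func)
            async def async_wrapper(*args, **kwargs):
                for attempt in range(retries):
                    try:
                        # Await here so the exception is raised inside the
                        # retry loop, not when the caller awaits later.
                        return await func(*args, **kwargs)
                    except Exception:
                        if attempt == retries - 1:
                            raise
                        await asyncio.sleep(delay)
            return async_wrapper

        @functools.wraps(func)
        def sync_wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == retries - 1:
                        raise
                    time.sleep(delay)
        return sync_wrapper
    return decorator
```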

2024-11-29 09:30:36 +00:00
Aarushi
29f177e70d feat(blocks): Add Hubspot blocks (#8786)
This PR adds the first few Hubspot blocks so we can create _real_ sales
and marketing agents.

### Changes 🏗️

Added HubSpot blocks:

- Added auth for HubSpot
- Added Company block
- Added Contact block
- Added Engagement block

2024-11-29 09:26:43 +00:00
Zamil Majdy
eeb5b4aa46 fix(backend): Fix credentials cost filter not able to filter the block cost (#8837)
We've started applying block cost based on the *partial value* of the
`credentials` field, but this logic was never supported.

### Changes 🏗️

* Add partial object matching on the input data filter for evaluating
the block cost.
* Add missing credentials for `ExtractWebsiteContentBlock`
* Remove fallback cost on LLM blocks.
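
A minimal sketch of the partial object matching from the first bullet (hypothetical helper name; the actual filter in the backend may differ): a cost filter matches when every key it specifies is present in the input data with an equal, or recursively matching, value.

```python
def matches_cost_filter(input_data: dict, cost_filter: dict) -> bool:
    """True if `input_data` contains everything `cost_filter` specifies."""
    for key, expected in cost_filter.items():
        actual = input_data.get(key)
        if isinstance(expected, dict) and isinstance(actual, dict):
            if not matches_cost_filter(actual, expected):  # partial, nested match
                return False
        elif actual != expected:
            return False
    return True

# A filter on part of the credentials object still matches:
# matches_cost_filter(
#     {"credentials": {"provider": "openai", "id": "abc"}, "model": "gpt-4"},
#     {"credentials": {"provider": "openai"}},
# )  -> True
```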

2024-11-29 08:46:33 +00:00
Zamil Majdy
520b1d7940 feat(frontend): Make Block description searchable on Block list palette (#8839)
Blocks should be easy to search; the name is sometimes not
straightforward, but the description usually is.

<img width="576" alt="image"
src="https://github.com/user-attachments/assets/0528b019-0ebc-4e6f-8a3c-40323a671b13">


### Changes 🏗️

Make the block description searchable.
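
A minimal sketch of the filter logic, written in Python for illustration even though the actual change lives in the frontend block-list palette; `Block` and its field names here are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    description: str

def filter_blocks(blocks: list[Block], query: str) -> list[Block]:
    """Match the query against both the block name and its description."""
    q = query.lower()
    return [
        b for b in blocks
        if q in b.name.lower() or q in b.description.lower()  # description now searchable
    ]
```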

2024-11-29 07:41:02 +00:00
Zamil Majdy
f8b00e55d0 Merge branch 'dev' of github.com:Significant-Gravitas/AutoGPT into dev 2024-11-28 13:22:03 +07:00
Aarushi
4b8087c067 feat(platform/featureflags): Setting up feature flagging (#8718)
* feature flag formatting and linting

* add tests

* update poetry lock

* remove unneeded changes

* fix pyproject

* fix formatting and linting

* pydantic settings

* address comments and format

* alphabetize

* fix lockfile

* fix conflicts
2024-11-27 19:29:57 +00:00
Reinier van der Leer
d2f3f53f57 fix(frontend): Fix missing credentials input when no credentials available (#8834)
Fixes breakage from f1414550 (#8772)

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-11-27 18:26:15 +00:00
Reinier van der Leer
ab3643388f ci: Fix workflow status checker script to work with merge_group runs 2024-11-27 19:25:36 +01:00
Reinier van der Leer
845c8c51e5 ci: Add merge_group trigger to status checker 2024-11-27 19:15:28 +01:00
Reinier van der Leer
118fdeeb1d ci: Add merge_group triggers 2024-11-27 17:57:44 +01:00
dependabot[bot]
97d00455ef chore(backend): Update 12 dependencies (#8763)
* build(deps): bump the production-dependencies group across 1 directory with 12 updates

Bumps the production-dependencies group with 12 updates in the /autogpt_platform/backend directory:

| Package | From | To |
| --- | --- | --- |
| [aio-pika](https://github.com/mosquito/aio-pika) | `9.4.3` | `9.5.0` |
| [apscheduler](https://github.com/agronholm/apscheduler) | `3.10.4` | `3.11.0` |
| [fastapi](https://github.com/fastapi/fastapi) | `0.115.4` | `0.115.5` |
| [google-api-python-client](https://github.com/googleapis/google-api-python-client) | `2.151.0` | `2.154.0` |
| [groq](https://github.com/groq/groq-python) | `0.11.0` | `0.12.0` |
| [ollama](https://github.com/ollama/ollama-python) | `0.3.3` | `0.4.1` |
| [openai](https://github.com/openai/openai-python) | `1.54.3` | `1.55.1` |
| [pydantic](https://github.com/pydantic/pydantic) | `2.9.2` | `2.10.1` |
| [sentry-sdk](https://github.com/getsentry/sentry-python) | `2.18.0` | `2.19.0` |
| [uvicorn](https://github.com/encode/uvicorn) | `0.32.0` | `0.32.1` |
| [replicate](https://github.com/replicate/replicate-python) | `1.0.3` | `1.0.4` |
| [pinecone](https://github.com/pinecone-io/pinecone-python-client) | `5.3.1` | `5.4.0` |



Updates `aio-pika` from 9.4.3 to 9.5.0
- [Release notes](https://github.com/mosquito/aio-pika/releases)
- [Changelog](https://github.com/mosquito/aio-pika/blob/master/CHANGELOG.md)
- [Commits](https://github.com/mosquito/aio-pika/compare/9.4.3...9.5.0)

Updates `apscheduler` from 3.10.4 to 3.11.0
- [Release notes](https://github.com/agronholm/apscheduler/releases)
- [Changelog](https://github.com/agronholm/apscheduler/blob/3.11.0/docs/versionhistory.rst)
- [Commits](https://github.com/agronholm/apscheduler/compare/3.10.4...3.11.0)

Updates `fastapi` from 0.115.4 to 0.115.5
- [Release notes](https://github.com/fastapi/fastapi/releases)
- [Commits](https://github.com/fastapi/fastapi/compare/0.115.4...0.115.5)

Updates `google-api-python-client` from 2.151.0 to 2.154.0
- [Release notes](https://github.com/googleapis/google-api-python-client/releases)
- [Commits](https://github.com/googleapis/google-api-python-client/compare/v2.151.0...v2.154.0)

Updates `groq` from 0.11.0 to 0.12.0
- [Release notes](https://github.com/groq/groq-python/releases)
- [Changelog](https://github.com/groq/groq-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/groq/groq-python/compare/v0.11.0...v0.12.0)

Updates `ollama` from 0.3.3 to 0.4.1
- [Release notes](https://github.com/ollama/ollama-python/releases)
- [Commits](https://github.com/ollama/ollama-python/compare/v0.3.3...v0.4.1)

Updates `openai` from 1.54.3 to 1.55.1
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.54.3...v1.55.1)

Updates `pydantic` from 2.9.2 to 2.10.1
- [Release notes](https://github.com/pydantic/pydantic/releases)
- [Changelog](https://github.com/pydantic/pydantic/blob/main/HISTORY.md)
- [Commits](https://github.com/pydantic/pydantic/compare/v2.9.2...v2.10.1)

Updates `sentry-sdk` from 2.18.0 to 2.19.0
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.18.0...2.19.0)

Updates `uvicorn` from 0.32.0 to 0.32.1
- [Release notes](https://github.com/encode/uvicorn/releases)
- [Changelog](https://github.com/encode/uvicorn/blob/master/CHANGELOG.md)
- [Commits](https://github.com/encode/uvicorn/compare/0.32.0...0.32.1)

Updates `replicate` from 1.0.3 to 1.0.4
- [Release notes](https://github.com/replicate/replicate-python/releases)
- [Commits](https://github.com/replicate/replicate-python/compare/1.0.3...1.0.4)

Updates `pinecone` from 5.3.1 to 5.4.0
- [Release notes](https://github.com/pinecone-io/pinecone-python-client/releases)
- [Changelog](https://github.com/pinecone-io/pinecone-python-client/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pinecone-io/pinecone-python-client/compare/v5.3.1...v5.4.0)

---
updated-dependencies:
- dependency-name: aio-pika
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: apscheduler
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: fastapi
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
- dependency-name: google-api-python-client
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: groq
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: ollama
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: pydantic
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: uvicorn
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
- dependency-name: replicate
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
- dependency-name: pinecone
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>

* Downgrade pydantic & pinecone

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-11-27 16:49:23 +00:00
Zamil Majdy
ae9bd87161 fix(backend): Spin-up Database manager on rest.py (#8832) 2024-11-27 16:39:08 +00:00
dependabot[bot]
a556995d1f chore(frontend): Update 5 dependencies (#8755)
build(deps): bump the production-dependencies group

Bumps the production-dependencies group in /autogpt_platform/frontend with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [@sentry/nextjs](https://github.com/getsentry/sentry-javascript) | `8.38.0` | `8.40.0` |
| [cookie](https://github.com/jshttp/cookie) | `1.0.1` | `1.0.2` |
| [react-day-picker](https://github.com/gpbl/react-day-picker) | `9.3.2` | `9.4.0` |
| [react-shepherd](https://github.com/shepherd-pro/shepherd) | `6.1.4` | `6.1.6` |
| [tailwind-merge](https://github.com/dcastil/tailwind-merge) | `2.5.4` | `2.5.5` |


Updates `@sentry/nextjs` from 8.38.0 to 8.40.0
- [Release notes](https://github.com/getsentry/sentry-javascript/releases)
- [Changelog](https://github.com/getsentry/sentry-javascript/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-javascript/compare/8.38.0...8.40.0)

Updates `cookie` from 1.0.1 to 1.0.2
- [Release notes](https://github.com/jshttp/cookie/releases)
- [Commits](https://github.com/jshttp/cookie/compare/v1.0.1...v1.0.2)

Updates `react-day-picker` from 9.3.2 to 9.4.0
- [Release notes](https://github.com/gpbl/react-day-picker/releases)
- [Changelog](https://github.com/gpbl/react-day-picker/blob/main/CHANGELOG.md)
- [Commits](https://github.com/gpbl/react-day-picker/compare/v9.3.2...v9.4.0)

Updates `react-shepherd` from 6.1.4 to 6.1.6
- [Release notes](https://github.com/shepherd-pro/shepherd/releases)
- [Changelog](https://github.com/shipshapecode/shepherd/blob/main/CHANGELOG.md)
- [Commits](https://github.com/shepherd-pro/shepherd/commits)

Updates `tailwind-merge` from 2.5.4 to 2.5.5
- [Release notes](https://github.com/dcastil/tailwind-merge/releases)
- [Commits](https://github.com/dcastil/tailwind-merge/compare/v2.5.4...v2.5.5)

---
updated-dependencies:
- dependency-name: "@sentry/nextjs"
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: cookie
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
- dependency-name: react-day-picker
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: react-shepherd
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
- dependency-name: tailwind-merge
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-11-27 16:33:21 +00:00
Nicholas Tindle
fd6c1d9f4f feat(blocks): Various block QoL improvements (#8749)
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Resolves #8357
Resolves #8738
2024-11-27 16:18:53 +00:00
dependabot[bot]
14cc21a843 chore(frontend): Update 14 dev dependencies (#8756)
build(deps-dev): bump the development-dependencies group across 1 directory with 16 updates

Bumps the development-dependencies group with 14 updates in the /autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
| [@playwright/test](https://github.com/microsoft/playwright) | `1.48.2` | `1.49.0` |
| [@storybook/addon-essentials](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/essentials) | `8.4.2` | `8.4.5` |
| [@storybook/addon-interactions](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/interactions) | `8.4.2` | `8.4.5` |
| [@storybook/addon-links](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links) | `8.4.2` | `8.4.5` |
| [@storybook/addon-onboarding](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding) | `8.4.2` | `8.4.5` |
| [@storybook/blocks](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/blocks) | `8.4.2` | `8.4.5` |
| [@storybook/nextjs](https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs) | `8.4.2` | `8.4.5` |
| [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) | `22.9.0` | `22.9.3` |
| [eslint-plugin-storybook](https://github.com/storybookjs/eslint-plugin-storybook) | `0.11.0` | `0.11.1` |
| [postcss](https://github.com/postcss/postcss) | `8.4.48` | `8.4.49` |
| [prettier-plugin-tailwindcss](https://github.com/tailwindlabs/prettier-plugin-tailwindcss) | `0.6.8` | `0.6.9` |
| [storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/cli) | `8.4.2` | `8.4.5` |
| [tailwindcss](https://github.com/tailwindlabs/tailwindcss) | `3.4.14` | `3.4.15` |
| [typescript](https://github.com/microsoft/TypeScript) | `5.6.3` | `5.7.2` |



Updates `@playwright/test` from 1.48.2 to 1.49.0
- [Release notes](https://github.com/microsoft/playwright/releases)
- [Commits](https://github.com/microsoft/playwright/compare/v1.48.2...v1.49.0)

Updates `@storybook/addon-essentials` from 8.4.2 to 8.4.5
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v8.4.5/code/addons/essentials)

Updates `@storybook/addon-interactions` from 8.4.2 to 8.4.5
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v8.4.5/code/addons/interactions)

Updates `@storybook/addon-links` from 8.4.2 to 8.4.5
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v8.4.5/code/addons/links)

Updates `@storybook/addon-onboarding` from 8.4.2 to 8.4.5
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v8.4.5/code/addons/onboarding)

Updates `@storybook/blocks` from 8.4.2 to 8.4.5
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v8.4.5/code/lib/blocks)

Updates `@storybook/nextjs` from 8.4.2 to 8.4.5
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v8.4.5/code/frameworks/nextjs)

Updates `@storybook/react` from 8.4.2 to 8.4.5
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v8.4.5/code/renderers/react)

Updates `@storybook/test` from 8.4.2 to 8.4.5
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v8.4.5/code/lib/test)

Updates `@types/node` from 22.9.0 to 22.9.3
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

Updates `eslint-plugin-storybook` from 0.11.0 to 0.11.1
- [Release notes](https://github.com/storybookjs/eslint-plugin-storybook/releases)
- [Changelog](https://github.com/storybookjs/eslint-plugin-storybook/blob/main/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/eslint-plugin-storybook/compare/v0.11.0...v0.11.1)

Updates `postcss` from 8.4.48 to 8.4.49
- [Release notes](https://github.com/postcss/postcss/releases)
- [Changelog](https://github.com/postcss/postcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/postcss/postcss/compare/8.4.48...8.4.49)

Updates `prettier-plugin-tailwindcss` from 0.6.8 to 0.6.9
- [Release notes](https://github.com/tailwindlabs/prettier-plugin-tailwindcss/releases)
- [Changelog](https://github.com/tailwindlabs/prettier-plugin-tailwindcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/tailwindlabs/prettier-plugin-tailwindcss/compare/v0.6.8...v0.6.9)

Updates `storybook` from 8.4.2 to 8.4.5
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v8.4.5/code/lib/cli)

Updates `tailwindcss` from 3.4.14 to 3.4.15
- [Release notes](https://github.com/tailwindlabs/tailwindcss/releases)
- [Changelog](https://github.com/tailwindlabs/tailwindcss/blob/v3.4.15/CHANGELOG.md)
- [Commits](https://github.com/tailwindlabs/tailwindcss/compare/v3.4.14...v3.4.15)

Updates `typescript` from 5.6.3 to 5.7.2
- [Release notes](https://github.com/microsoft/TypeScript/releases)
- [Changelog](https://github.com/microsoft/TypeScript/blob/main/azure-pipelines.release.yml)
- [Commits](https://github.com/microsoft/TypeScript/compare/v5.6.3...v5.7.2)

---
updated-dependencies:
- dependency-name: "@playwright/test"
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: development-dependencies
- dependency-name: "@storybook/addon-essentials"
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: "@storybook/addon-interactions"
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: "@storybook/addon-links"
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: "@storybook/addon-onboarding"
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: "@storybook/blocks"
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: "@storybook/nextjs"
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: "@storybook/react"
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: "@storybook/test"
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: "@types/node"
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: eslint-plugin-storybook
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: postcss
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: prettier-plugin-tailwindcss
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: storybook
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: tailwindcss
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: typescript
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: development-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-11-27 16:06:25 +00:00
dependabot[bot]
772baff6db build(deps): bump docker/build-push-action from 2 to 6 (#8465)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 2 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v2...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-11-27 22:52:58 +07:00
Nicholas Tindle
5dd151b41e feat(tests): add baseline utility for integration testing from frontend ui (#8765) 2024-11-27 09:44:19 +00:00
Zamil Majdy
86fbbae65c fix(frontend): Add text length limit when displaying Graph & Block name with different length in different places (#8746) 2024-11-27 08:37:26 +00:00
Abhimanyu Yadav
6bfe7ff497 fix(frontend): Add integer type definition for node handles (#8803) 2024-11-27 08:16:29 +01:00
dependabot[bot]
effd1e35a3 chore(libs): Update dev dependency Ruff from 0.7.4 to 0.8.0 (#8760)
build(deps-dev): bump ruff

Bumps the development-dependencies group in /autogpt_platform/autogpt_libs with 1 update: [ruff](https://github.com/astral-sh/ruff).


Updates `ruff` from 0.7.4 to 0.8.0
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.7.4...0.8.0)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: development-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-26 22:43:06 +00:00
Bently
4aae15d769 feat(blocks): Add Word Character Count Block (#8781)
* Adds Word Character Count Block

Co-Authored-By: SerchioSD <69461657+serchiosd@users.noreply.github.com>

* update test_output

---------

Co-authored-by: SerchioSD <69461657+serchiosd@users.noreply.github.com>
2024-11-26 21:38:43 +00:00
oxygen-fragment
f62fa3e1e3 updated URL on README.md (#8767)
* docs(backend): Add `--build` to docker command in Getting Started guide (#8762)

* updated URL on README.md

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-11-26 20:49:47 +00:00
Abhimanyu Yadav
708ed9a91c fix(platform): handle None value in issue body when fetching GitHub issues (#8773)
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-11-26 20:46:13 +00:00
Abhimanyu Yadav
951948d239 fix(platform): allowing condition block to compare 2 strings (#8771)
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Bently <tomnoon9@gmail.com>
2024-11-26 20:40:51 +00:00
Reinier van der Leer
f1414550f9 refactor(platform): Combine per-provider credentials API calls (#8772)
- Add `/integrations/credentials` endpoint which lists all credentials for the authenticated user
- Amend credential fetching logic in front end to fetch all at once instead of per provider

- Resolves #8770
- Resolves (hopefully) #8613
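
For illustration, a minimal sketch of the shape of such a list-all endpoint; the router wiring, store, and field names below are assumptions, not the merged code:

```python
# Hypothetical sketch of a list-all credentials endpoint; names are illustrative.
from dataclasses import dataclass

from fastapi import APIRouter

router = APIRouter()


@dataclass
class CredentialsMeta:
    id: str
    provider: str
    type: str
    title: str


# Stand-in for the real credentials store
_STORE: dict[str, list[CredentialsMeta]] = {}


@router.get("/integrations/credentials")
def list_all_credentials(user_id: str) -> list[CredentialsMeta]:
    # One response covers every provider, replacing the previous
    # one-request-per-provider pattern in the frontend.
    return _STORE.get(user_id, [])
```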
2024-11-26 17:03:06 +00:00
dependabot[bot]
c6e838da37 chore(market): Update Ruff from 0.7.4 to 0.8.0 (#8758) 2024-11-26 08:12:27 +00:00
Reinier van der Leer
06b403f2b0 docs(backend): Add --build to docker command in Getting Started guide (#8762) 2024-11-25 22:51:24 -06:00
dependabot[bot]
03f776681a build(deps-dev): bump the development-dependencies group in /autogpt_platform/backend with 2 updates (#8761)
build(deps-dev): bump the development-dependencies group

Bumps the development-dependencies group in /autogpt_platform/backend with 2 updates: [poethepoet](https://github.com/nat-n/poethepoet) and [ruff](https://github.com/astral-sh/ruff).


Updates `poethepoet` from 0.30.0 to 0.31.0
- [Release notes](https://github.com/nat-n/poethepoet/releases)
- [Commits](https://github.com/nat-n/poethepoet/compare/v0.30.0...v0.31.0)

Updates `ruff` from 0.7.4 to 0.8.0
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.7.4...0.8.0)

---
updated-dependencies:
- dependency-name: poethepoet
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: development-dependencies
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: development-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-25 22:49:45 +00:00
Reinier van der Leer
3d21d54dab fix(backend): Add missing strenum dependency
Follow-up hotfix for #8358
2024-11-25 18:26:13 +00:00
Reinier van der Leer
eef9bbe991 feat(platform, blocks): Webhook-triggered blocks (#8358)
- feat(blocks): Add GitHub Pull Request Trigger block

## feat(platform): Add support for Webhook-triggered blocks
- ⚠️ Add `PLATFORM_BASE_URL` setting

- Add webhook config option and `BlockType.WEBHOOK` to `Block`
  - Add check to `Block.__init__` to enforce type and shape of webhook event filter
  - Add check to `Block.__init__` to enforce `payload` input on webhook blocks
  - Add check to `Block.__init__` to disable webhook blocks if `PLATFORM_BASE_URL` is not set

- Add `Webhook` model + CRUD functions in `backend.data.integrations` to represent webhooks created by our system
  - Add `IntegrationWebhook` to DB schema + reference `AgentGraphNode.webhook_id`
    - Add `set_node_webhook(..)` in `backend.data.graph`

- Add webhook-related endpoints:
  - `POST /integrations/{provider}/webhooks/{webhook_id}/ingress` endpoint, to receive webhook payloads, and for all associated nodes create graph executions
    - Add `Node.is_triggered_by_event_type(..)` helper method
  - `POST /integrations/{provider}/webhooks/{webhook_id}/ping` endpoint, to allow testing a webhook
  - Add `WebhookEvent` + pub/sub functions in `backend.data.integrations`

- Add `backend.integrations.webhooks` module, including:
  - `graph_lifecycle_hooks`, e.g. `on_graph_activate(..)`, to handle corresponding webhook creation etc.
    - Add calls to these hooks in the graph create/update endpoints
  - `BaseWebhooksManager` + `GithubWebhooksManager` to handle creating + registering, removing + deregistering, and retrieving existing webhooks, and validating incoming payloads

## Other improvements
- fix(blocks): Allow having an input and output pin with the same name
- fix(blocks): Add tooltip with description in places where block inputs are rendered without `NodeHandle`
- feat(blocks): Allow hiding inputs (e.g. `payload`) with `SchemaField(hidden=True)`
- fix(frontend): Fix `MultiSelector` component styling
- feat(frontend): Add `AlertDialog` UI component
- feat(frontend): Add `NodeMultiSelectInput` component
- feat(backend/data): Add `NodeModel` with `graph_id`, `graph_version`; `GraphModel` with `user_id`
  - Add `make_graph_model(..)` helper function in `backend.data.graph`
- refactor(backend/data): Make `RedisEventQueue` generic and move to `backend.data.execution`
- refactor(frontend): Deduplicate & clean up code for different block types in `generateInputHandles(..)` in `CustomNode`
- dx(backend): Add `MissingConfigError`, `NeedConfirmation` exception
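
For illustration, a minimal sketch of what a webhook-triggered block could look like under this design; the import paths and constructor arguments are assumptions inferred from the description above, not the platform's exact API:

```python
# Illustrative sketch only: import paths and constructor arguments are
# assumptions based on the PR description, not the exact platform API.
from backend.data.block import Block, BlockSchema, BlockType  # assumed path
from backend.data.model import SchemaField  # assumed path


class GithubPullRequestTriggerBlock(Block):
    class Input(BlockSchema):
        repo: str = SchemaField(description="Repository to watch, e.g. 'owner/repo'")
        # Webhook blocks must declare a `payload` input; hidden=True keeps it
        # out of the rendered node UI.
        payload: dict = SchemaField(hidden=True, default={})

    class Output(BlockSchema):
        event: str = SchemaField(description="Event type, e.g. 'opened'")
        payload: dict = SchemaField(description="Full webhook payload")

    def __init__(self):
        super().__init__(
            id="...",  # block UUID elided
            # BlockType.WEBHOOK triggers the shape checks in Block.__init__
            # and disables the block if PLATFORM_BASE_URL is not set.
            block_type=BlockType.WEBHOOK,
            input_schema=GithubPullRequestTriggerBlock.Input,
            output_schema=GithubPullRequestTriggerBlock.Output,
        )

    def run(self, input_data: Input, **kwargs):
        yield "event", input_data.payload.get("action", "")
        yield "payload", input_data.payload
```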

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-11-25 18:42:36 +01:00
Reinier van der Leer
464b5309d7 fix(forge): Fix double model kwarg error in AnthropicProvider.create_chat_completion(..) (#8666) 2024-11-25 15:41:50 +00:00
Zamil Majdy
f00654cb2c fix(backend): Fix .env file read contention on pyro connection setup (#8736) 2024-11-25 16:55:52 +07:00
thecosmicmuffet
bc8ae1f542 docs(platform): Fix url in README.md (#8747) 2024-11-23 02:37:41 +00:00
Reinier van der Leer
f2816f98e9 Merge branch 'master' into dev 2024-11-21 18:06:08 +00:00
Abhimanyu Yadav
5ee8b62d67 fix: hide content except login when not authenticated to prevent errors (#8398)
* fix: hide content except login when not authenticated to prevent errors

* Remove supabase folder from tracking

* Remove supabase folder from Git tracking

* adding git submodule

* adding git submodule

* Discard changes to .gitignore

* only showing AutoGPT logo if user is not present

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2024-11-21 15:57:35 +00:00
Zamil Majdy
8b4bb27077 fix(backend): Re-work the connection input consumption logic for Agent Executor Block (#8710) 2024-11-21 11:05:41 +00:00
Zamil Majdy
6954f4eb0e fix(backend): Revert non-async routes that are changed to async (#8734) 2024-11-21 11:46:55 +01:00
Zamil Majdy
c14ab0c37a refactor(backend): Remove un-needed join in fix_llm_provider_credentials query (#8728) 2024-11-21 09:35:17 +00:00
Reinier van der Leer
13da8af170 chore(platform): Bump version to v0.3.3 2024-11-20 23:25:14 +00:00
Bently
63e3244e7e feat(platform): Updates to Runner Output UI (#8717)
* feat(platform): Updates to Runner Output UI

* add copy text button to output boxes

* prettier

---------

Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-11-20 19:35:29 +00:00
Simone Busoli
19095be249 docs: replace docker-compose with docker compose (#8502)
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-11-20 19:20:46 +00:00
Nicholas Tindle
26a6bd4d10 docs(platform): update docs for security ssrf (#8675) 2024-11-20 15:29:45 +00:00
Nicholas Tindle
92bfbfad57 feat: generate simple auth tests (#8709) 2024-11-20 15:21:16 +00:00
Krzysztof Czerwinski
cf43248ab8 feat(frontend): Show Agent Output on Monitor page (#8501)
* Show Output in Monitor

* Updates

* Updates

* Move hardcoded ids to a dedicated enum

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-11-20 15:08:22 +00:00
dependabot[bot]
aea6e7caed build(deps): bump the production-dependencies group in /autogpt_platform/frontend with 7 updates (#8703)
build(deps): bump the production-dependencies group

Bumps the production-dependencies group in /autogpt_platform/frontend with 7 updates:

| Package | From | To |
| --- | --- | --- |
| @radix-ui/react-icons | `1.3.1` | `1.3.2` |
| [@radix-ui/react-scroll-area](https://github.com/radix-ui/primitives) | `1.2.0` | `1.2.1` |
| [@radix-ui/react-tooltip](https://github.com/radix-ui/primitives) | `1.1.3` | `1.1.4` |
| [@sentry/nextjs](https://github.com/getsentry/sentry-javascript) | `8.37.1` | `8.38.0` |
| [elliptic](https://github.com/indutny/elliptic) | `6.6.0` | `6.6.1` |
| [lucide-react](https://github.com/lucide-icons/lucide/tree/HEAD/packages/lucide-react) | `0.456.0` | `0.460.0` |
| [react-day-picker](https://github.com/gpbl/react-day-picker) | `9.3.0` | `9.3.2` |


Updates `@radix-ui/react-icons` from 1.3.1 to 1.3.2

Updates `@radix-ui/react-scroll-area` from 1.2.0 to 1.2.1
- [Changelog](https://github.com/radix-ui/primitives/blob/main/release-process.md)
- [Commits](https://github.com/radix-ui/primitives/commits)

Updates `@radix-ui/react-tooltip` from 1.1.3 to 1.1.4
- [Changelog](https://github.com/radix-ui/primitives/blob/main/release-process.md)
- [Commits](https://github.com/radix-ui/primitives/commits)

Updates `@sentry/nextjs` from 8.37.1 to 8.38.0
- [Release notes](https://github.com/getsentry/sentry-javascript/releases)
- [Changelog](https://github.com/getsentry/sentry-javascript/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-javascript/compare/8.37.1...8.38.0)

Updates `elliptic` from 6.6.0 to 6.6.1
- [Commits](https://github.com/indutny/elliptic/compare/v6.6.0...v6.6.1)

Updates `lucide-react` from 0.456.0 to 0.460.0
- [Release notes](https://github.com/lucide-icons/lucide/releases)
- [Commits](https://github.com/lucide-icons/lucide/commits/0.460.0/packages/lucide-react)

Updates `react-day-picker` from 9.3.0 to 9.3.2
- [Release notes](https://github.com/gpbl/react-day-picker/releases)
- [Changelog](https://github.com/gpbl/react-day-picker/blob/main/CHANGELOG.md)
- [Commits](https://github.com/gpbl/react-day-picker/compare/v9.3.0...v9.3.2)

---
updated-dependencies:
- dependency-name: "@radix-ui/react-icons"
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
- dependency-name: "@radix-ui/react-scroll-area"
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
- dependency-name: "@radix-ui/react-tooltip"
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
- dependency-name: "@sentry/nextjs"
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: elliptic
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
- dependency-name: lucide-react
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
- dependency-name: react-day-picker
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-11-20 17:20:27 +04:00
dependabot[bot]
c84cc292f1 build(deps): bump fastapi from 0.115.4 to 0.115.5 in /autogpt_platform/market in the production-dependencies group (#8705)
build(deps): bump fastapi

Bumps the production-dependencies group in /autogpt_platform/market with 1 update: [fastapi](https://github.com/fastapi/fastapi).


Updates `fastapi` from 0.115.4 to 0.115.5
- [Release notes](https://github.com/fastapi/fastapi/releases)
- [Commits](https://github.com/fastapi/fastapi/compare/0.115.4...0.115.5)

---
updated-dependencies:
- dependency-name: fastapi
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: production-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-11-20 17:08:07 +04:00
Swifty
d84ddfcf1a fix(block): Updated model_version to prevent conflicts with pydantic naming (#8729)
changed model_version name to avoid conflicts
2024-11-20 12:51:06 +00:00
Zamil Majdy
5fa5b7104a fix(frontend): Monitor page doubled every request on each page refresh (#8727)
fix(frontend): Avoid refreshing page on each auth state change event
2024-11-20 10:43:34 +00:00
Reinier van der Leer
33dd2eb919 dx(backend): Fix pre-commit isort step (#8726)
- Set `tool.isort.profile = "black"`
- Explicitly pass the first-party package name in the `isort` jobs in the `pre-commit` config
2024-11-19 19:51:20 -06:00
Aarushi
a5734a57d5 fix(platform): Remove settings endpoint (#8715)
remove settings endpoint
2024-11-19 23:39:40 +00:00
Zamil Majdy
274419d393 fix(backend): Improve typing for blocks StepThroughItemsBlock, CountdownTimerBlock, AddToListBlock, AddToDictionaryBlock (#8713) 2024-11-19 20:53:34 +00:00
Aarushi
520d0ca0e4 delete infra folder (#8555)
* delete infra folder

* remove ci
2024-11-19 14:21:57 +00:00
dependabot[bot]
84076ebee1 build(deps-dev): bump the development-dependencies group in /autogpt_platform/backend with 2 updates (#8699)
build(deps-dev): bump the development-dependencies group

Bumps the development-dependencies group in /autogpt_platform/backend with 2 updates: [ruff](https://github.com/astral-sh/ruff) and [pyright](https://github.com/RobertCraigie/pyright-python).


Updates `ruff` from 0.7.3 to 0.7.4
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.7.3...0.7.4)

Updates `pyright` from 1.1.388 to 1.1.389
- [Release notes](https://github.com/RobertCraigie/pyright-python/releases)
- [Commits](https://github.com/RobertCraigie/pyright-python/compare/v1.1.388...v1.1.389)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: pyright
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-19 13:28:34 +00:00
dependabot[bot]
2e934dfff3 build(deps-dev): bump the development-dependencies group in /autogpt_platform/market with 2 updates (#8706)
build(deps-dev): bump the development-dependencies group

Bumps the development-dependencies group in /autogpt_platform/market with 2 updates: [ruff](https://github.com/astral-sh/ruff) and [pyright](https://github.com/RobertCraigie/pyright-python).


Updates `ruff` from 0.7.3 to 0.7.4
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.7.3...0.7.4)

Updates `pyright` from 1.1.388 to 1.1.389
- [Release notes](https://github.com/RobertCraigie/pyright-python/releases)
- [Commits](https://github.com/RobertCraigie/pyright-python/compare/v1.1.388...v1.1.389)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
- dependency-name: pyright
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-19 13:08:08 +00:00
dependabot[bot]
c1c3345bc0 build(deps-dev): bump ruff from 0.7.3 to 0.7.4 in /autogpt_platform/autogpt_libs in the development-dependencies group (#8701)
build(deps-dev): bump ruff

Bumps the development-dependencies group in /autogpt_platform/autogpt_libs with 1 update: [ruff](https://github.com/astral-sh/ruff).


Updates `ruff` from 0.7.3 to 0.7.4
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.7.3...0.7.4)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: development-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-19 10:10:26 +00:00
dependabot[bot]
fb9a543e35 build(deps): bump pyjwt from 2.9.0 to 2.10.0 in /autogpt_platform/autogpt_libs in the production-dependencies group (#8700)
build(deps): bump pyjwt

Bumps the production-dependencies group in /autogpt_platform/autogpt_libs with 1 update: [pyjwt](https://github.com/jpadilla/pyjwt).


Updates `pyjwt` from 2.9.0 to 2.10.0
- [Release notes](https://github.com/jpadilla/pyjwt/releases)
- [Changelog](https://github.com/jpadilla/pyjwt/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/jpadilla/pyjwt/compare/2.9.0...2.10.0)

---
updated-dependencies:
- dependency-name: pyjwt
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: production-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-19 10:06:05 +00:00
Reinier van der Leer
e81083d9ab fix(frontend): Allow importing agent file with empty description (#8670)
Check structure of agent file with `obj[key] != null` rather than `!!obj[key]` to allow empty strings
2024-11-18 23:57:02 +00:00
Coenraad Loubser
865e3c056d ref(classic): Do not 'rm -rf <unquoted variable>' when removing classic env (#8417)
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2024-11-18 17:09:10 -06:00
Aarushi
8fccf2eed3 fix(platform/builder): Add heartbeat mechanism (#8665)
* add heartbeat mechanism

* formatting data

* import List

* another import fix

* wip

* formatting and linting
2024-11-18 16:33:15 +00:00
Reinier van der Leer
1f34f78e4e build(frontend): Optimize Docker build time and image size (#8695)
This PR reduces image size by 4.9GB (93%) and reduces uncached build time from ~7m to ~5m20s.

- Use cache mount to prevent Yarn cache from being included in `yarn install` layer
- Leverage Next.js output tracing to generate minimal application w/ tree-shaken dependencies
- Add non-root user following the Next.js reference Dockerfile
2024-11-18 15:07:03 +00:00
Toran Bruce Richards
29cff1bb4e feat(blocks): Add Open Router integration with a large selection of new models (#8653)
* feat: Add Open Router integration credentials

- Added support for Open Router integration credentials in the Supabase integration credentials store.
- Updated the LLM provider field to include "open_router" as a valid provider option.
- Added Open Router API key field to the backend settings.
- Updated the profile page to display the Open Router integration credentials.
- Updated the credentials input and provider components to include Open Router as a provider option.
- Updated the autogpt-server-api types to include "open_router" as a provider name.
- Updated the LLM provider schema to include "open_router" as a valid provider name.

- Added GEMINI_FLASH_1_5_8B as the first Open Router LLM

* Add type ignore to new llm prompt to match the rest of them.

* Update LlmModel with a selection of new OpenRouter models

* format
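
A schematic of the kind of change involved; only the `open_router` provider name and `GEMINI_FLASH_1_5_8B` come from the commits above, while the other members and value strings are assumptions:

```python
# Illustrative sketch; members and value strings other than "open_router"
# and GEMINI_FLASH_1_5_8B are assumptions, not the merged code.
from enum import Enum


class LlmModel(str, Enum):
    GPT4_TURBO = "gpt-4-turbo"
    GEMINI_FLASH_1_5_8B = "google/gemini-flash-1.5-8b"  # first Open Router model


MODEL_PROVIDER: dict[LlmModel, str] = {
    LlmModel.GPT4_TURBO: "openai",
    LlmModel.GEMINI_FLASH_1_5_8B: "open_router",  # newly valid provider name
}
```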
2024-11-18 14:03:50 +00:00
Zamil Majdy
402789d8cd fix(frontend): avoid displaying long description text for block (#8688)
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2024-11-18 13:25:21 +00:00
Zamil Majdy
6fa4b8cb11 fix(backend): Add the lower cap of the user credits to zero (#8682)
fix(backend): Add lower cap of the user credits to zero
2024-11-18 13:12:42 +00:00
Zamil Majdy
f36d95aaa8 fix(backend): Avoid falling back to default user unless ENABLED_AUTH is set to False (#8691) 2024-11-18 13:01:21 +00:00
Krzysztof Czerwinski
a660833744 refactor(frontend): Update buttons to edit agents in Monitor (#8687)
Update Monitor buttons

Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-11-18 12:56:40 +00:00
Reinier van der Leer
e840106949 fix(backend): Resolve Pydantic warning about missing secrets_dir (#8692)
- Remove `secrets_dir` and other references to `get_secrets_path()`
- Remove unused `get_config_path()`

Follow-up to #8521, which removed the `secrets` dir but not the references to it.
2024-11-18 12:41:18 +00:00
Reinier van der Leer
6c109adf0b Merge branch 'master' into dev 2024-11-18 12:32:24 +00:00
Reinier van der Leer
bff0dc3d82 chore(platform): Bump version to v0.3.1 2024-11-18 12:11:07 +01:00
Zamil Majdy
cd7dfbb8b3 fix(frontend): Typing in the NodeKeyValueInput field causes the field to un-focus (#8680)
* fix(frontend): Typing in the "Prompt Values" input field causes the field to un-focus

* Add comment

* Rephrase
2024-11-18 10:51:54 +00:00
Zamil Majdy
a2895a2ca0 fix(backend): Define executionmanager hostname for local docker-mode (#8681) 2024-11-18 09:16:00 +00:00
Craig Perkins
e30dac575d Update links to images in FORGE-QUICKSTART.md (#8517)
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-11-16 21:23:54 -06:00
Reinier van der Leer
918538147c fix(backend): Add migrations to fix credentials inputs with invalid provider "llm" (#8674)
In #8524, the "llm" credentials provider was replaced. There are still entries with `"provider": "llm"` in the system though, and those break if not migrated.

- SQL migration to fix the obvious ones where we know the provider from `credentials.id`
- Non-SQL migration to fix the rest
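
In spirit, the non-SQL half does something like the following sketch; the id-to-provider mapping here is hypothetical, and the real code runs against Prisma models:

```python
# Hypothetical sketch of the fix-up logic; the credential ids are invented.
KNOWN_DEFAULT_CREDENTIALS = {
    "openai-default-credentials-id": "openai",
    "anthropic-default-credentials-id": "anthropic",
}


def fix_credentials_provider(credentials_input: dict) -> dict:
    """Rewrite the removed 'llm' provider to the credential's real provider."""
    if credentials_input.get("provider") == "llm":
        real = KNOWN_DEFAULT_CREDENTIALS.get(credentials_input.get("id", ""))
        if real:
            credentials_input["provider"] = real
        # Entries whose id is unknown need case-by-case handling.
    return credentials_input
```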
2024-11-16 01:07:05 +01:00
Reinier van der Leer
1e8a272ac6 fix(backend): Add migrations to fix credentials inputs with invalid provider "llm" (vol. 5)
Five times the charm
2024-11-16 00:41:56 +01:00
Reinier van der Leer
1c6890486f fix(backend): Add migrations to fix credentials inputs with invalid provider "llm" (vol. 4)
Another attempt at unbreaking this raw Prisma query
2024-11-16 00:17:43 +01:00
Reinier van der Leer
29688758c4 fix(backend): Add migrations to fix credentials inputs with invalid provider "llm" (vol. 3)
Fix User table reference in raw SQL query in non-Prisma migration
2024-11-15 20:56:04 +01:00
Reinier van der Leer
2a66295a92 fix(backend): Add migrations to fix credentials inputs with invalid provider "llm" (vol. 2)
Fix breaking SQL double-casting issue in the SQL migration
2024-11-15 20:32:24 +01:00
Reinier van der Leer
4db8e746d7 fix(backend): Add migrations to fix credentials inputs with invalid provider "llm" (#8674)
In #8524, the "llm" credentials provider was replaced. There are still entries with `"provider": "llm"` in the system though, and those break if not migrated.

- SQL migration to fix the obvious ones where we know the provider from `credentials.id`
- Non-SQL migration to fix the rest
2024-11-15 20:18:02 +01:00
Reinier van der Leer
0551bec096 Merge branch 'master' into dev 2024-11-15 15:17:45 +01:00
Toran Bruce Richards
bd2f172e6d tweak(docs): Update Block File Path in Documentation (#8662)
Update new_blocks.md documentation
2024-11-15 12:46:31 +00:00
Reinier van der Leer
9a4ff9023d bump version to v0.3.0 2024-11-15 11:54:42 +01:00
Zamil Majdy
8987fdd48c feat(backend): Enable json parsing with typing & conversion (#8578) 2024-11-15 17:28:59 +07:00
Zamil Majdy
6a1cea4c4e fix(backend): Add execution persistence for execution scheduler service (#8649)
* fix(backend): Add execution persistence for execution scheduler service

* scheduler REST API cleanup

* Fix to binary

* Adapt UI with new API

* Remove schedule.py

* Remove unused class

* Fix linting
2024-11-15 11:17:37 +01:00
Zamil Majdy
f27f596f58 fix(frontend): Newly typed text in Input fields vanishes on scroll (#8657) 2024-11-15 08:00:59 +00:00
Nicholas Tindle
ea214d9168 ci: fix classic ci (#8338)
* ci(frontend,backend,classic): update branch from develop to dev

* ci(frontend, infra): enable ci on other tools

* Update classic-autogpt-docker-ci.yml

* fix: don't error if the folder exists

* fix: drop bad test

* Revert "fix: drop bad test"

This reverts commit c478d3cf4c.

* fix: turn off the correct test 👀

* fix: remove more

* Discard changes to .github/workflows/classic-autogpt-ci.yml

* Update classic-autogpt-docker-ci.yml

* Update classic-autogpt-docker-release.yml

* Update classic-autogpts-ci.yml

* Discard changes to .github/workflows/classic-forge-ci.yml

* Discard changes to .github/workflows/classic-autogpts-ci.yml

* Discard changes to .github/workflows/classic-python-checks.yml

* Discard changes to .github/workflows/repo-pr-label.yml

* Discard changes to .github/workflows/platform-backend-ci.yml

* Update classic-benchmark-ci.yml

* Update classic-frontend-ci.yml
2024-11-15 01:48:00 -06:00
Reinier van der Leer
f9633ffb71 Revert "fix(platform): Remove migrate and encrypt function" (#8654)
Reverts c707ee9 (#8646)

The problem analysis that led to #8646 contained some errors, so the migration removed in the PR doesn't seem to have been the cause of the problem we were hunting. Also, this migration is an essential part of the security improvement that we made 2 weeks ago.
2024-11-14 23:34:30 +00:00
Bently
e140873dd4 Feat(Builder/tutorial): Updates to fix tutorial (#8655)
Feat(Builder/tutorial): Updates to fix tutorial
2024-11-14 22:25:32 +00:00
Abhimanyu Yadav
dd0081ab35 feat(platform): scheduling agent runner (#8634)
* add: ui for scheduling agent

* adding requests and type for schedule endpoints

* feat: monitor schedules on monitor page

* add: Complete monitor page

* fix filter on monitor page

* fix linting

* PR nits

* Added Docker Compose env var

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-11-14 16:18:56 +01:00
Abhimanyu Yadav
bbbdb5665b feat(platform): Add api generator functions and endpoints (#8597)
* add: api generator functions and endpoints

* Rebase onto dev, refactor API manager location, remove suspended key revoke, and update API code for Prisma compatibility

* add: key_manager

* reverting changes to poetry.lock

* add: changing hash mechanism in API Manager

* add: changing hash mechanism in API Manager

* fixing some simple bugs

* fix linting and adding better error handling
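
Read together with the prefix/postfix columns from #8593 below, the hashing change presumably looks something like this sketch; the `agpt_` key format and helper names are assumptions, not the merged implementation:

```python
# Hypothetical sketch of prefix/postfix storage plus hash-only persistence;
# the "agpt_" format and field names are assumptions.
import hashlib
import secrets


def generate_api_key() -> tuple[str, dict]:
    raw = "agpt_" + secrets.token_urlsafe(32)
    record = {
        "prefix": raw[:8],    # stored for lookup
        "postfix": raw[-8:],  # lets users recognize their key later
        "hash": hashlib.sha256(raw.encode()).hexdigest(),  # only the hash is stored
    }
    return raw, record  # the raw key is displayed once, then discarded


def verify_api_key(raw: str, record: dict) -> bool:
    return hashlib.sha256(raw.encode()).hexdigest() == record["hash"]
```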

---------

Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-11-14 14:33:27 +00:00
Toran Bruce Richards
e628a25533 feat(blocks): Add AIImageGeneratorBlock (#8525)
* feat(block): Add AIImageGeneratorBlock

This commit adds the AIImageGeneratorBlock class to the backend. The AIImageGeneratorBlock is responsible for generating images using various AI models through a unified interface.

* Remove unsupported inputs and add more styles

* Update autogpt_platform/backend/backend/blocks/ai_image_generator_block.py

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>

* run format

* Add test mock

* mock client run

* Refactor AIImageGeneratorBlock to use a separate function for running the client

* Update Credential description

* Rename ModelProvider to ImageGenModel

* Add missing block run function

* fix mock

* .

* Refactor AIImageGeneratorBlock to move run_client function inside class

* Fix broken reference to run client and tidy code.

* Refactor AIImageGeneratorBlock to improve code structure and error handling

* Move client into run client instantiation function.

* Refactor AIImageGeneratorBlock to handle output as FileOutput and improve error handling

* run format

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-11-14 12:28:24 +00:00
Nicholas Tindle
52b3148196 feat(frontend): check auth before allowing actions to run (#8633) 2024-11-14 09:45:31 +00:00
Toran Bruce Richards
05c76738a4 tweak(frontend): Add jina and unreal to hidden credentials list in frontend (#8642)
Adds missing hidden credentials
2024-11-14 09:33:00 +00:00
vishesh10
639242ac68 Add provision for other languages in Youtube Video Block (#8630)
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2024-11-14 09:20:46 +00:00
Zamil Majdy
ce667f6287 feat(frontend): Center initial canvas (#8644)
* fix(frontend): Fix client-side validation for Agent Executor Block

* Fix zoom scale calculation

* Fix zoom scale calculation
2024-11-14 09:15:30 +00:00
Zamil Majdy
98ab525e39 fix(frontend): Fix input-field update on empty & default value (#8647)
* fix(frontend): Fix input-field update on empty & default value

* Fix error message

* Revert
2024-11-14 09:03:12 +00:00
Aarushi
c707ee9cb6 fix(platform): Remove migrate and encrypt function (#8646)
remove migrate and encrypt function
2024-11-13 23:09:26 +00:00
Nicholas Tindle
b64c536eca Create SECURITY.md (#8645) 2024-11-13 22:28:52 +00:00
Reinier van der Leer
5c0f979b9c fix(frontend): Remove double title on credentials input (#8638)
- Add condition to hide `credentials` input title in `CustomNode:generateInputHandles`
- Add `title={schema.description}` to `<CredentialsInput>` title element
2024-11-13 19:37:55 +00:00
Nicholas Tindle
b048385091 fix(classic): update docs for security deprecation (#8632)
* Create README.md

* Update README.md

* Update index.md

* Update README.md

* Update index.md

* Update index.md
2024-11-13 18:52:21 +00:00
Zamil Majdy
67244759c7 fix(frontend): Fix client-side validation for Agent Executor Block (#8643)
* feat(frontend): Center initial canvas & add option to open graph on agent executor block

* Removed unused variable
2024-11-13 18:31:22 +00:00
Kaitlyn Barnard
a3655b8a85 Adding Google Analytics to docs site (#8640)
Adding GA tag to docs site

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2024-11-13 17:03:28 +00:00
Toran Bruce Richards
aafc101224 tweak(backend): Update all block costs (#8639)
* Add support for default credentials to unreal block

* Refactor block cost configuration and add new blocks

This commit refactors the block cost configuration file and adds support for new blocks. The changes include:
- Importing the `AIMusicGeneratorBlock`, `JinaEmbeddingBlock`, and `UnrealTextToSpeechBlock` classes
- Updating the `BLOCK_COSTS` dictionary to include costs for the new blocks

These changes enable the usage of the newly introduced blocks.
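
Schematically, the centralized mapping has the shape of the sketch below; only the three block names come from this commit, while the class layout and the cost amounts are assumptions:

```python
# Illustrative sketch; the cost amounts are made up, and only the block
# names are taken from the commit message above.
from dataclasses import dataclass


@dataclass
class BlockCost:
    cost_amount: int  # credits charged per run


BLOCK_COSTS: dict[str, list[BlockCost]] = {
    "AIMusicGeneratorBlock": [BlockCost(cost_amount=11)],
    "JinaEmbeddingBlock": [BlockCost(cost_amount=2)],
    "UnrealTextToSpeechBlock": [BlockCost(cost_amount=5)],
}
```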
2024-11-13 16:05:32 +00:00
Reinier van der Leer
ef3f7aad18 fix(frontend): Unbreak credentials input on single-provider blocks (vol. 2)
Fix bad condition introduced in aaa0b79f (#8636) to resolve #8635
2024-11-13 15:30:36 +01:00
Reinier van der Leer
aaa0b79f08 fix(frontend): Unbreak credentials input on single-provider blocks (#8636)
- Resolves #8635

- fix(frontend): Fix type mismatch of `CredentialsField` schema between frontend and backend
   - Fix usages of `credentialsSchema.credentials_provider`

- refactor(backend): Create `CredentialsFieldSchemaExtra` model in backend so it can be mirrored directly in frontend
   - Add check to enforce multi-provider `CredentialsField` always has `discriminator`

- dx: Add type checking shortcut `yarn type-check` / `npm run type-check` for frontend
2024-11-13 13:48:15 +00:00
Krzysztof Czerwinski
e907ffda6e feat(platform): Simplify Credentials UX (#8524)
- Change `provider` of default credentials to actual provider names (e.g. `anthropic`), remove `llm` provider
- Add `discriminator` and `discriminator_mapping` to `CredentialsField` that allows to filter credentials input to only allow  providers for matching models in `useCredentials` hook (thanks @ntindle for the idea!); e.g. user chooses `GPT4_TURBO` so then only OpenAI credentials are allowed
- Choose credentials automatically and hide credentials input on the node completely if there's only one possible option
- Move `getValue` and `parseKeys` to utils
- Add `ANTHROPIC`, `GROQ` and `OLLAMA` to providers in frontend `types.ts`
- Add `hidden` field to credentials that is used for default system keys to hide them in user profile
- Now `provider` field in `CredentialsField` can accept multiple providers as a list
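
The filtering behavior this describes can be sketched as follows; the mechanism (the chosen model narrows the allowed providers) is as stated above, but the concrete model-to-provider entries are illustrative examples:

```python
# Sketch of the discriminator mechanism; the mapping entries are examples.
DISCRIMINATOR_MAPPING = {
    "gpt-4-turbo": "openai",          # choosing GPT4_TURBO -> only OpenAI creds
    "claude-3-5-sonnet": "anthropic",
    "llama-3.1-70b": "groq",
}


def allowed_credentials(selected_model: str, creds: list[dict]) -> list[dict]:
    provider = DISCRIMINATOR_MAPPING.get(selected_model)
    return [c for c in creds if c["provider"] == provider]


creds = [{"id": "1", "provider": "openai"}, {"id": "2", "provider": "anthropic"}]
assert allowed_credentials("gpt-4-turbo", creds) == [{"id": "1", "provider": "openai"}]
```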

-----------------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2024-11-12 16:55:48 +01:00
Zamil Majdy
ef7e50403e refactor(backend): Centralize Block Cost into a Single File (#8623) 2024-11-12 14:09:59 +00:00
Zamil Majdy
1e872406ca feat(platform): Introduced Agent Execution Block (#8533) 2024-11-12 13:03:15 +07:00
Abhimanyu Yadav
5ee909f687 fix: show error toast on Run failure when inputs or credentials are i… (#8391)
* fix: show error toast on Run failure when inputs or credentials are invalid

* Remove supabase folder from tracking

* revert supabase

* remove toast

---------

Co-authored-by: Avhimanyu <2023ebcs396@online.bits-pilani.ac.in>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-11-11 22:43:15 -06:00
Nicholas Tindle
ff1fa2af2d [Snyk] Security upgrade zipp from 3.15.0 to 3.19.1 (#8622)
fix: docs/requirements.txt to reduce vulnerabilities


The following vulnerabilities are fixed by pinning transitive dependencies:
- https://snyk.io/vuln/SNYK-PYTHON-ZIPP-7430899

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
2024-11-11 18:22:03 -06:00
dependabot[bot]
ee3252bdb1 build(deps): bump cookie from 0.7.0 to 1.0.1 in /autogpt_platform/frontend (#8616)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-12 00:15:45 +00:00
Nicholas Tindle
4a8f3dbbb1 [Snyk] Security upgrade urllib3 from 2.0.7 to 2.2.2 (#8620)
Co-authored-by: snyk-bot <snyk-bot@snyk.io>
2024-11-11 17:45:56 -06:00
dependabot[bot]
4a163e5b54 build(deps-dev): bump the development-dependencies group in /autogpt_platform/backend with 3 updates (#8611)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-11-11 17:31:13 -06:00
dependabot[bot]
ca0b2311e8 build(deps): bump the production-dependencies group in /autogpt_platform/frontend with 8 updates (#8614)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-11 17:25:25 -06:00
dependabot[bot]
d03fd930c6 build(deps-dev): bump the development-dependencies group in /autogpt_platform/frontend with 12 updates (#8615)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-11-11 22:34:43 +00:00
dependabot[bot]
603fec3467 build(deps): bump the production-dependencies group in /autogpt_platform/backend with 2 updates (#8610)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2024-11-11 22:21:44 +00:00
dependabot[bot]
c53c7f8dd8 build(deps-dev): bump the development-dependencies group in /autogpt_platform/market with 2 updates (#8612)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-11 22:13:43 +00:00
dependabot[bot]
ce3539ff16 build(deps-dev): bump ruff from 0.7.2 to 0.7.3 in /autogpt_platform/autogpt_libs in the development-dependencies group (#8618)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-11 16:10:08 -06:00
dependabot[bot]
ea8f164b93 build(deps): bump supabase from 2.9.1 to 2.10.0 in /autogpt_platform/autogpt_libs in the production-dependencies group (#8617)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-11 21:48:57 +00:00
Nicholas Tindle
3c0dea0017 [Snyk] Security upgrade python from 3.11-slim-buster to 3.11.10-slim-bookworm (#8619)
fix: autogpt_platform/backend/Dockerfile to reduce vulnerabilities

The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-DEBIAN10-GNUTLS28-6159414
- https://snyk.io/vuln/SNYK-DEBIAN10-NCURSES-1655739
- https://snyk.io/vuln/SNYK-DEBIAN10-NCURSES-5421196

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
2024-11-11 20:33:39 +00:00
dependabot[bot]
9e4246602d build(deps): bump peter-evans/repository-dispatch from 2 to 3 (#8609)
Bumps [peter-evans/repository-dispatch](https://github.com/peter-evans/repository-dispatch) from 2 to 3.
- [Release notes](https://github.com/peter-evans/repository-dispatch/releases)
- [Commits](https://github.com/peter-evans/repository-dispatch/compare/v2...v3)

---
updated-dependencies:
- dependency-name: peter-evans/repository-dispatch
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-11 19:59:12 +00:00
Toran Bruce Richards
1f0cbc6500 feat(block): Add AI Music Generator Block with Meta Music Gen (#8532)
* feat(platform): Add AIMusicGeneratorBlock for music generation

* refactor(platform): Refactor AIMusicGeneratorBlock for improved error handling and logging

* refactor(ui): Refactor ContentRenderer to support audio rendering

* format

* Frontend format and lint
2024-11-11 09:36:39 -08:00
Aarushi
e6e47373ac feat(blocks/jina): Add default credentials for Jina (#8603)
add jina defaults
2024-11-10 17:18:53 +00:00
Aarushi
f981a74a10 Merge master into dev 2024-11-09 19:10:03 -06:00
Aarushi
09dd391041 Merge dev into master 2024-11-09 19:09:09 -06:00
Nicholas Tindle
0b5b95eff5 build: add launch.json debugging for vscode (#8496)
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
2024-11-08 20:49:13 +00:00
Aarushi
4adbbc52f2 Merge branch 'master' into dev 2024-11-08 11:52:14 -06:00
Aarushi
359ae8307a feat(backend): Add API key DB table (#8593)
* add api key db tables

* remove uniqueness constraint on prefix

* add postfix
2024-11-08 17:48:37 +00:00
Aarushi
f719c7e70e feat(blocks): Pinecone blocks (#8535)
* update pinecone

* update blocks

* fix linting

* update test

* update requests

* mock function
2024-11-08 17:13:55 +00:00
Reinier van der Leer
c960bd870c dx: Clean up PR template (#8541)
* dx: Clean up PR template
2024-11-08 17:44:55 +01:00
Reinier van der Leer
dfb7cf19f7 dx: Fix pre-commit config (#8584)
- fix naming of hooks
- fix `pyright` hooks (b0rked by repo restructure)
- fix `forge` path (b0rked by faulty replace-all when the repo was restructured)
- fix `black` hook to work on all Python versions
- add `poetry install` hooks
- add `ruff`, `isort`, `pyright`, `pytest`, and `prisma generate` hooks for `backend/`
- add `ruff` and `pyright` hooks for `autogpt_libs/`
2024-11-08 16:06:23 +00:00
Aarushi
d6ecf80197 Merge branch 'dev' 2024-11-07 22:43:26 -06:00
Aarushi
c0f77c8e7a ci(platform): Update pipelines to run from infra repo (#8595)
* deploy trigger

* update name

* test path change

* update prod deploy too

* remove envvars

* update prod deploy name
2024-11-07 22:39:59 -06:00
Aarushi
47759f6951 refactor(backend): Remove config.default.json (#8581)
remove config default json

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2024-11-07 21:20:15 -06:00
jackfromeast
bcaf3241da fix (backend): Patching the SSRF vulnerability in Github/Web Search/Request related blocks (#8531) 2024-11-08 00:29:18 +00:00
Aarushi
9a2664be35 fix(platform): Add local enc key (#8568)
add local enc key
2024-11-05 12:44:59 -08:00
Aarushi
44f73078f7 Update docker-compose.platform.yml (#8560) 2024-11-04 21:38:05 -08:00
SwiftyOS
db44d8c2ec fix(platform): Add ENCRYPTION_KEY Env Var to docker compose file 2024-11-04 10:06:43 -08:00
SwiftyOS
952f6f58ef docs(platform): correct readme 2024-11-01 09:13:55 +00:00
Swifty
151fad5ced docs(platform): Update frontend instructions (#8514)
update readme
2024-10-31 13:41:54 +00:00
308 changed files with 11807 additions and 7330 deletions


@@ -1,36 +1,38 @@
-### Background
-<!-- Clearly explain the need for these changes: -->
 ### Changes 🏗️
 <!-- Concisely describe all of the changes made in this pull request: -->
+### Checklist 📋
-### Testing 🔍
-> [!NOTE]
-Only for the new autogpt platform, currently in autogpt_platform/
+#### For code changes:
+- [ ] I have clearly listed my changes in the PR description
+- [ ] I have made a test plan
+- [ ] I have tested my changes according to the test plan:
+  <!-- Put your test plan here: -->
+  - [ ] ...
-<!--
-Please make sure your changes have been tested and are in good working condition.
-Here is a list of our critical paths, if you need some inspiration on what and how to test:
--->
+<details>
+  <summary>Example test plan</summary>
+  - [ ] Create from scratch and execute an agent with at least 3 blocks
+  - [ ] Import an agent from file upload, and confirm it executes correctly
+  - [ ] Upload agent to marketplace
+  - [ ] Import an agent from marketplace and confirm it executes correctly
+  - [ ] Edit an agent from monitor, and confirm it executes correctly
+</details>
-- Create from scratch and execute an agent with at least 3 blocks
-- Import an agent from file upload, and confirm it executes correctly
-- Upload agent to marketplace
-- Import an agent from marketplace and confirm it executes correctly
-- Edit an agent from monitor, and confirm it executes correctly
+#### For configuration changes:
+- [ ] `.env.example` is updated or already compatible with my changes
+- [ ] `docker-compose.yml` is updated or already compatible with my changes
+- [ ] I have included a list of my configuration changes in the PR description (under **Changes**)
-### Configuration Changes 📝
-> [!NOTE]
-Only for the new autogpt platform, currently in autogpt_platform/
+<details>
+  <summary>Examples of configuration changes</summary>
-If you're making configuration or infrastructure changes, please remember to check you've updated the related infrastructure code in the autogpt_platform/infra folder.
-Examples of such changes might include:
-- Changing ports
-- Adding new services that need to communicate with each other
-- Secrets or environment variable changes
-- New or infrastructure changes such as databases
+  - Changing ports
+  - Adding new services that need to communicate with each other
+  - Secrets or environment variable changes
+  - New or infrastructure changes such as databases
+</details>


@@ -5,7 +5,7 @@ on:
     - cron: 20 4 * * 1,4
 env:
-  BASE_BRANCH: development
+  BASE_BRANCH: dev
   IMAGE_NAME: auto-gpt
 jobs:
@@ -15,46 +15,46 @@ jobs:
       matrix:
         build-type: [release, dev]
     steps:
-      - name: Checkout repository
-        uses: actions/checkout@v4
+      - name: Checkout repository
+        uses: actions/checkout@v4
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
-      - id: build
-        name: Build image
-        uses: docker/build-push-action@v5
-        with:
-          context: classic/
-          file: classic/Dockerfile.autogpt
-          build-args: BUILD_TYPE=${{ matrix.build-type }}
-          load: true # save to docker images
-          # use GHA cache as read-only
-          cache-to: type=gha,scope=autogpt-docker-${{ matrix.build-type }},mode=max
+      - id: build
+        name: Build image
+        uses: docker/build-push-action@v6
+        with:
+          context: classic/
+          file: classic/Dockerfile.autogpt
+          build-args: BUILD_TYPE=${{ matrix.build-type }}
+          load: true # save to docker images
+          # use GHA cache as read-only
+          cache-to: type=gha,scope=autogpt-docker-${{ matrix.build-type }},mode=max
-      - name: Generate build report
-        env:
-          event_name: ${{ github.event_name }}
-          event_ref: ${{ github.event.schedule }}
+      - name: Generate build report
+        env:
+          event_name: ${{ github.event_name }}
+          event_ref: ${{ github.event.schedule }}
-          build_type: ${{ matrix.build-type }}
+          build_type: ${{ matrix.build-type }}
-          prod_branch: master
-          dev_branch: development
-          repository: ${{ github.repository }}
-          base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'development' && 'development' || 'master' }}
+          prod_branch: master
+          dev_branch: dev
+          repository: ${{ github.repository }}
+          base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'dev' && 'dev' || 'master' }}
-          current_ref: ${{ github.ref_name }}
-          commit_hash: ${{ github.sha }}
-          source_url: ${{ format('{0}/tree/{1}', github.event.repository.url, github.sha) }}
-          push_forced_label:
+          current_ref: ${{ github.ref_name }}
+          commit_hash: ${{ github.sha }}
+          source_url: ${{ format('{0}/tree/{1}', github.event.repository.url, github.sha) }}
+          push_forced_label:
-          new_commits_json: ${{ null }}
-          compare_url_template: ${{ format('/{0}/compare/{{base}}...{{head}}', github.repository) }}
+          new_commits_json: ${{ null }}
+          compare_url_template: ${{ format('/{0}/compare/{{base}}...{{head}}', github.repository) }}
-          github_context_json: ${{ toJSON(github) }}
-          job_env_json: ${{ toJSON(env) }}
-          vars_json: ${{ toJSON(vars) }}
+          github_context_json: ${{ toJSON(github) }}
+          job_env_json: ${{ toJSON(env) }}
+          vars_json: ${{ toJSON(vars) }}
-        run: .github/workflows/scripts/docker-ci-summary.sh >> $GITHUB_STEP_SUMMARY
-        continue-on-error: true
+        run: .github/workflows/scripts/docker-ci-summary.sh >> $GITHUB_STEP_SUMMARY
+        continue-on-error: true


@@ -2,7 +2,7 @@ name: Classic - AutoGPT Docker CI
on:
push:
branches: [ master, development ]
branches: [master, dev]
paths:
- '.github/workflows/classic-autogpt-docker-ci.yml'
- 'classic/original_autogpt/**'
@@ -34,58 +34,58 @@ jobs:
matrix:
build-type: [release, dev]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- if: runner.debug
run: |
ls -al
du -hs *
- if: runner.debug
run: |
ls -al
du -hs *
- id: build
name: Build image
uses: docker/build-push-action@v5
with:
context: classic/
file: classic/Dockerfile.autogpt
build-args: BUILD_TYPE=${{ matrix.build-type }}
tags: ${{ env.IMAGE_NAME }}
labels: GIT_REVISION=${{ github.sha }}
load: true # save to docker images
# cache layers in GitHub Actions cache to speed up builds
cache-from: type=gha,scope=autogpt-docker-${{ matrix.build-type }}
cache-to: type=gha,scope=autogpt-docker-${{ matrix.build-type }},mode=max
- id: build
name: Build image
uses: docker/build-push-action@v6
with:
context: classic/
file: classic/Dockerfile.autogpt
build-args: BUILD_TYPE=${{ matrix.build-type }}
tags: ${{ env.IMAGE_NAME }}
labels: GIT_REVISION=${{ github.sha }}
load: true # save to docker images
# cache layers in GitHub Actions cache to speed up builds
cache-from: type=gha,scope=autogpt-docker-${{ matrix.build-type }}
cache-to: type=gha,scope=autogpt-docker-${{ matrix.build-type }},mode=max
- name: Generate build report
env:
event_name: ${{ github.event_name }}
event_ref: ${{ github.event.ref }}
event_ref_type: ${{ github.event.ref}}
build_type: ${{ matrix.build-type }}
prod_branch: master
-dev_branch: development
+dev_branch: dev
repository: ${{ github.repository }}
-base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'development' && 'development' || 'master' }}
+base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'dev' && 'dev' || 'master' }}
current_ref: ${{ github.ref_name }}
commit_hash: ${{ github.event.after }}
source_url: ${{ format('{0}/tree/{1}', github.event.repository.url, github.event.release && github.event.release.tag_name || github.sha) }}
push_forced_label: ${{ github.event.forced && '☢️ forced' || '' }}
new_commits_json: ${{ toJSON(github.event.commits) }}
compare_url_template: ${{ format('/{0}/compare/{{base}}...{{head}}', github.repository) }}
github_context_json: ${{ toJSON(github) }}
job_env_json: ${{ toJSON(env) }}
vars_json: ${{ toJSON(vars) }}
run: .github/workflows/scripts/docker-ci-summary.sh >> $GITHUB_STEP_SUMMARY
continue-on-error: true
test:
runs-on: ubuntu-latest
@@ -117,16 +117,16 @@ jobs:
- id: build
name: Build image
-uses: docker/build-push-action@v5
+uses: docker/build-push-action@v6
with:
context: classic/
file: classic/Dockerfile.autogpt
build-args: BUILD_TYPE=dev # include pytest
tags: >
${{ env.IMAGE_NAME }},
${{ env.DEPLOY_IMAGE_NAME }}:${{ env.DEV_IMAGE_TAG }}
labels: GIT_REVISION=${{ github.sha }}
load: true # save to docker images
# cache layers in GitHub Actions cache to speed up builds
cache-from: type=gha,scope=autogpt-docker-dev
cache-to: type=gha,scope=autogpt-docker-dev,mode=max

View File

@@ -2,7 +2,7 @@ name: Classic - AutoGPT Docker Release
on:
release:
-types: [ published, edited ]
+types: [published, edited]
workflow_dispatch:
inputs:
@@ -19,69 +19,69 @@ jobs:
if: startsWith(github.ref, 'refs/tags/autogpt-')
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Log in to Docker hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
# slashes are not allowed in image tags, but can appear in git branch or tag names
- id: sanitize_tag
name: Sanitize image tag
run: |
tag=${raw_tag//\//-}
echo tag=${tag#autogpt-} >> $GITHUB_OUTPUT
env:
raw_tag: ${{ github.ref_name }}
- id: build
name: Build image
uses: docker/build-push-action@v5
with:
context: classic/
file: Dockerfile.autogpt
build-args: BUILD_TYPE=release
load: true # save to docker images
# push: true # TODO: uncomment when this issue is fixed: https://github.com/moby/buildkit/issues/1555
tags: >
${{ env.IMAGE_NAME }},
${{ env.DEPLOY_IMAGE_NAME }}:latest,
${{ env.DEPLOY_IMAGE_NAME }}:${{ steps.sanitize_tag.outputs.tag }}
labels: GIT_REVISION=${{ github.sha }}
- id: build
name: Build image
uses: docker/build-push-action@v6
with:
context: classic/
file: Dockerfile.autogpt
build-args: BUILD_TYPE=release
load: true # save to docker images
# push: true # TODO: uncomment when this issue is fixed: https://github.com/moby/buildkit/issues/1555
tags: >
${{ env.IMAGE_NAME }},
${{ env.DEPLOY_IMAGE_NAME }}:latest,
${{ env.DEPLOY_IMAGE_NAME }}:${{ steps.sanitize_tag.outputs.tag }}
labels: GIT_REVISION=${{ github.sha }}
# cache layers in GitHub Actions cache to speed up builds
cache-from: ${{ !inputs.no_cache && 'type=gha' || '' }},scope=autogpt-docker-release
cache-to: type=gha,scope=autogpt-docker-release,mode=max
- name: Push image to Docker Hub
run: docker push --all-tags ${{ env.DEPLOY_IMAGE_NAME }}
- name: Generate build report
env:
event_name: ${{ github.event_name }}
event_ref: ${{ github.event.ref }}
event_ref_type: ${{ github.event.ref}}
inputs_no_cache: ${{ inputs.no_cache }}
prod_branch: master
-dev_branch: development
+dev_branch: dev
repository: ${{ github.repository }}
-base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'development' && 'development' || 'master' }}
+base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'dev' && 'dev' || 'master' }}
ref_type: ${{ github.ref_type }}
current_ref: ${{ github.ref_name }}
commit_hash: ${{ github.sha }}
source_url: ${{ format('{0}/tree/{1}', github.event.repository.url, github.event.release && github.event.release.tag_name || github.sha) }}
github_context_json: ${{ toJSON(github) }}
job_env_json: ${{ toJSON(env) }}
vars_json: ${{ toJSON(vars) }}
run: .github/workflows/scripts/docker-release-summary.sh >> $GITHUB_STEP_SUMMARY
continue-on-error: true

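The `sanitize_tag` step above rewrites a git ref into a valid Docker tag. An equivalent Python sketch of the two shell substitutions (for illustration only):

```python
# tag=${raw_tag//\//-} replaces every slash; ${tag#autogpt-} strips the
# release-tag prefix only when it occurs at the start of the string.
def sanitize_tag(raw_tag: str) -> str:
    tag = raw_tag.replace("/", "-")
    return tag.removeprefix("autogpt-")  # Python 3.9+

assert sanitize_tag("autogpt-v0.5.0") == "v0.5.0"
assert sanitize_tag("release/v0.5.0") == "release-v0.5.0"
```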
View File

@@ -102,7 +102,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
-agent-name: [ forge ]
+agent-name: [forge]
fail-fast: false
timeout-minutes: 20
steps:
@@ -146,23 +146,23 @@ jobs:
echo "Running the following command: poetry run agbenchmark --mock --category=coding"
poetry run agbenchmark --mock --category=coding
echo "Running the following command: poetry run agbenchmark --test=WriteFile"
poetry run agbenchmark --test=WriteFile
# echo "Running the following command: poetry run agbenchmark --test=WriteFile"
# poetry run agbenchmark --test=WriteFile
cd ../benchmark
poetry install
echo "Adding the BUILD_SKILL_TREE environment variable. This will attempt to add new elements in the skill tree. If new elements are added, the CI fails because they should have been pushed"
export BUILD_SKILL_TREE=true
-poetry run agbenchmark --mock
+# poetry run agbenchmark --mock
-CHANGED=$(git diff --name-only | grep -E '(agbenchmark/challenges)|(../classic/frontend/assets)') || echo "No diffs"
-if [ ! -z "$CHANGED" ]; then
-echo "There are unstaged changes please run agbenchmark and commit those changes since they are needed."
-echo "$CHANGED"
-exit 1
-else
-echo "No unstaged changes."
-fi
+# CHANGED=$(git diff --name-only | grep -E '(agbenchmark/challenges)|(../classic/frontend/assets)') || echo "No diffs"
+# if [ ! -z "$CHANGED" ]; then
+# echo "There are unstaged changes please run agbenchmark and commit those changes since they are needed."
+# echo "$CHANGED"
+# exit 1
+# else
+# echo "No unstaged changes."
+# fi
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
TELEMETRY_ENVIRONMENT: autogpt-benchmark-ci

View File

@@ -4,7 +4,7 @@ on:
push:
branches:
- master
-      - development
+      - dev
- 'ci-test*' # This will match any branch that starts with "ci-test"
paths:
- 'classic/frontend/**'
@@ -24,37 +24,37 @@ jobs:
BUILD_BRANCH: ${{ format('classic-frontend-build/{0}', github.ref_name) }}
steps:
- name: Checkout Repo
uses: actions/checkout@v4
- name: Setup Flutter
uses: subosito/flutter-action@v2
with:
flutter-version: '3.13.2'
- name: Build Flutter to Web
run: |
cd classic/frontend
flutter build web --base-href /app/
# - name: Commit and Push to ${{ env.BUILD_BRANCH }}
# if: github.event_name == 'push'
# run: |
# git config --local user.email "action@github.com"
# git config --local user.name "GitHub Action"
# git add classic/frontend/build/web
# git checkout -B ${{ env.BUILD_BRANCH }}
# git commit -m "Update frontend build to ${GITHUB_SHA:0:7}" -a
# git push -f origin ${{ env.BUILD_BRANCH }}
- name: Create PR ${{ env.BUILD_BRANCH }} -> ${{ github.ref_name }}
if: github.event_name == 'push'
uses: peter-evans/create-pull-request@v7
with:
add-paths: classic/frontend/build/web
base: ${{ github.ref_name }}
branch: ${{ env.BUILD_BRANCH }}
delete-branch: true
title: "Update frontend build in `${{ github.ref_name }}`"
body: "This PR updates the frontend build based on commit ${{ github.sha }}."
commit-message: "Update frontend build based on commit ${{ github.sha }}"

View File

@@ -16,6 +16,7 @@ on:
branches: [ "master", "release-*", "dev" ]
pull_request:
branches: [ "master", "release-*", "dev" ]
merge_group:
schedule:
- cron: '15 4 * * 0'

View File

@@ -1,4 +1,4 @@
-name: AutoGPT Platform - Build, Push, and Deploy Prod Environment
+name: AutoGPT Platform - Deploy Prod Environment
on:
release:
@@ -8,12 +8,6 @@ permissions:
contents: 'read'
id-token: 'write'
env:
PROJECT_ID: ${{ secrets.GCP_PROJECT_ID }}
GKE_CLUSTER: prod-gke-cluster
GKE_ZONE: us-central1-a
NAMESPACE: prod-agpt
jobs:
migrate:
environment: production
@@ -48,135 +42,14 @@ jobs:
env:
DATABASE_URL: ${{ secrets.MARKET_DATABASE_URL }}
build-push-deploy:
environment: production
name: Build, Push, and Deploy
trigger:
needs: migrate
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- id: 'auth'
uses: 'google-github-actions/auth@v2'
with:
workload_identity_provider: 'projects/1021527134101/locations/global/workloadIdentityPools/prod-pool/providers/github'
service_account: 'prod-github-actions-sa@agpt-prod.iam.gserviceaccount.com'
token_format: 'access_token'
create_credentials_file: true
- name: 'Set up Cloud SDK'
uses: 'google-github-actions/setup-gcloud@v2'
- name: 'Configure Docker'
run: |
gcloud auth configure-docker us-east1-docker.pkg.dev
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Cache Docker layers
uses: actions/cache@v4
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Check for changes
id: check_changes
run: |
git fetch origin master
BACKEND_CHANGED=$(git diff --name-only origin/master HEAD | grep "^autogpt_platform/backend/" && echo "true" || echo "false")
FRONTEND_CHANGED=$(git diff --name-only origin/master HEAD | grep "^autogpt_platform/frontend/" && echo "true" || echo "false")
MARKET_CHANGED=$(git diff --name-only origin/master HEAD | grep "^autogpt_platform/market/" && echo "true" || echo "false")
echo "backend_changed=$BACKEND_CHANGED" >> $GITHUB_OUTPUT
echo "frontend_changed=$FRONTEND_CHANGED" >> $GITHUB_OUTPUT
echo "market_changed=$MARKET_CHANGED" >> $GITHUB_OUTPUT
- name: Get GKE credentials
uses: 'google-github-actions/get-gke-credentials@v2'
with:
cluster_name: ${{ env.GKE_CLUSTER }}
location: ${{ env.GKE_ZONE }}
- name: Build and Push Backend
if: steps.check_changes.outputs.backend_changed == 'true'
uses: docker/build-push-action@v2
with:
context: .
file: ./autogpt_platform/backend/Dockerfile
push: true
tags: us-east1-docker.pkg.dev/agpt-prod/agpt-backend-prod/agpt-backend-prod:${{ github.sha }}
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
- name: Build and Push Frontend
if: steps.check_changes.outputs.frontend_changed == 'true'
uses: docker/build-push-action@v2
with:
context: .
file: ./autogpt_platform/frontend/Dockerfile
push: true
tags: us-east1-docker.pkg.dev/agpt-prod/agpt-frontend-prod/agpt-frontend-prod:${{ github.sha }}
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
- name: Build and Push Market
if: steps.check_changes.outputs.market_changed == 'true'
uses: docker/build-push-action@v2
with:
context: .
file: ./autogpt_platform/market/Dockerfile
push: true
tags: us-east1-docker.pkg.dev/agpt-prod/agpt-market-prod/agpt-market-prod:${{ github.sha }}
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
- name: Move cache
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
- name: Set up Helm
uses: azure/setup-helm@v4
with:
version: v3.4.0
- name: Deploy Backend
if: steps.check_changes.outputs.backend_changed == 'true'
run: |
helm upgrade autogpt-server ./autogpt-server \
--namespace ${{ env.NAMESPACE }} \
-f autogpt-server/values.yaml \
-f autogpt-server/values.prod.yaml \
--set image.tag=${{ github.sha }}
- name: Deploy Websocket
if: steps.check_changes.outputs.backend_changed == 'true'
run: |
helm upgrade autogpt-websocket-server ./autogpt-websocket-server \
--namespace ${{ env.NAMESPACE }} \
-f autogpt-websocket-server/values.yaml \
-f autogpt-websocket-server/values.prod.yaml \
--set image.tag=${{ github.sha }}
- name: Deploy Market
if: steps.check_changes.outputs.market_changed == 'true'
run: |
helm upgrade autogpt-market ./autogpt-market \
--namespace ${{ env.NAMESPACE }} \
-f autogpt-market/values.yaml \
-f autogpt-market/values.prod.yaml \
--set image.tag=${{ github.sha }}
- name: Deploy Frontend
if: steps.check_changes.outputs.frontend_changed == 'true'
run: |
helm upgrade autogpt-builder ./autogpt-builder \
--namespace ${{ env.NAMESPACE }} \
-f autogpt-builder/values.yaml \
-f autogpt-builder/values.prod.yaml \
--set image.tag=${{ github.sha }}
- name: Trigger deploy workflow
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.DEPLOY_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
event-type: build_deploy_prod
client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}", "repository": "${{ github.repository }}"}'

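The new `trigger` job hands deployment off to a separate infrastructure repo via `peter-evans/repository-dispatch`. Under the hood, that action POSTs a `repository_dispatch` event to the GitHub API; a hedged sketch of the equivalent call (token and payload values are placeholders):

```python
import json
import urllib.request

def trigger_deploy(repo: str, event_type: str, payload: dict, token: str) -> None:
    """Fire a repository_dispatch event in `repo` (formatted as owner/name)."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/dispatches",
        data=json.dumps({"event_type": event_type, "client_payload": payload}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)  # GitHub answers 204 No Content on success
```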
View File

@@ -0,0 +1,57 @@
name: AutoGPT Platform - Deploy Dev Environment
on:
push:
branches: [ dev ]
paths:
- 'autogpt_platform/**'
permissions:
contents: 'read'
id-token: 'write'
jobs:
migrate:
environment: develop
name: Run migrations for AutoGPT Platform
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install Python dependencies
run: |
python -m pip install --upgrade pip
pip install prisma
- name: Run Backend Migrations
working-directory: ./autogpt_platform/backend
run: |
python -m prisma migrate deploy
env:
DATABASE_URL: ${{ secrets.BACKEND_DATABASE_URL }}
- name: Run Market Migrations
working-directory: ./autogpt_platform/market
run: |
python -m prisma migrate deploy
env:
DATABASE_URL: ${{ secrets.MARKET_DATABASE_URL }}
trigger:
needs: migrate
runs-on: ubuntu-latest
steps:
- name: Trigger deploy workflow
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.DEPLOY_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
event-type: build_deploy_dev
client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}", "repository": "${{ github.repository }}"}'

View File

@@ -1,186 +0,0 @@
name: AutoGPT Platform - Build, Push, and Deploy Dev Environment
on:
push:
branches: [ dev ]
paths:
- 'autogpt_platform/backend/**'
- 'autogpt_platform/frontend/**'
- 'autogpt_platform/market/**'
permissions:
contents: 'read'
id-token: 'write'
env:
PROJECT_ID: ${{ secrets.GCP_PROJECT_ID }}
GKE_CLUSTER: dev-gke-cluster
GKE_ZONE: us-central1-a
NAMESPACE: dev-agpt
jobs:
migrate:
environment: develop
name: Run migrations for AutoGPT Platform
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install Python dependencies
run: |
python -m pip install --upgrade pip
pip install prisma
- name: Run Backend Migrations
working-directory: ./autogpt_platform/backend
run: |
python -m prisma migrate deploy
env:
DATABASE_URL: ${{ secrets.BACKEND_DATABASE_URL }}
- name: Run Market Migrations
working-directory: ./autogpt_platform/market
run: |
python -m prisma migrate deploy
env:
DATABASE_URL: ${{ secrets.MARKET_DATABASE_URL }}
build-push-deploy:
name: Build, Push, and Deploy
needs: migrate
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- id: 'auth'
uses: 'google-github-actions/auth@v2'
with:
workload_identity_provider: 'projects/638488734936/locations/global/workloadIdentityPools/dev-pool/providers/github'
service_account: 'dev-github-actions-sa@agpt-dev.iam.gserviceaccount.com'
token_format: 'access_token'
create_credentials_file: true
- name: 'Set up Cloud SDK'
uses: 'google-github-actions/setup-gcloud@v2'
- name: 'Configure Docker'
run: |
gcloud auth configure-docker us-east1-docker.pkg.dev
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Cache Docker layers
uses: actions/cache@v4
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Check for changes
id: check_changes
run: |
git fetch origin dev
BACKEND_CHANGED=$(git diff --name-only origin/dev HEAD | grep "^autogpt_platform/backend/" && echo "true" || echo "false")
FRONTEND_CHANGED=$(git diff --name-only origin/dev HEAD | grep "^autogpt_platform/frontend/" && echo "true" || echo "false")
MARKET_CHANGED=$(git diff --name-only origin/dev HEAD | grep "^autogpt_platform/market/" && echo "true" || echo "false")
echo "backend_changed=$BACKEND_CHANGED" >> $GITHUB_OUTPUT
echo "frontend_changed=$FRONTEND_CHANGED" >> $GITHUB_OUTPUT
echo "market_changed=$MARKET_CHANGED" >> $GITHUB_OUTPUT
- name: Get GKE credentials
uses: 'google-github-actions/get-gke-credentials@v2'
with:
cluster_name: ${{ env.GKE_CLUSTER }}
location: ${{ env.GKE_ZONE }}
- name: Build and Push Backend
if: steps.check_changes.outputs.backend_changed == 'true'
uses: docker/build-push-action@v2
with:
context: .
file: ./autogpt_platform/backend/Dockerfile
push: true
tags: us-east1-docker.pkg.dev/agpt-dev/agpt-backend-dev/agpt-backend-dev:${{ github.sha }}
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
- name: Build and Push Frontend
if: steps.check_changes.outputs.frontend_changed == 'true'
uses: docker/build-push-action@v2
with:
context: .
file: ./autogpt_platform/frontend/Dockerfile
push: true
tags: us-east1-docker.pkg.dev/agpt-dev/agpt-frontend-dev/agpt-frontend-dev:${{ github.sha }}
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
- name: Build and Push Market
if: steps.check_changes.outputs.market_changed == 'true'
uses: docker/build-push-action@v2
with:
context: .
file: ./autogpt_platform/market/Dockerfile
push: true
tags: us-east1-docker.pkg.dev/agpt-dev/agpt-market-dev/agpt-market-dev:${{ github.sha }}
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
- name: Move cache
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
- name: Set up Helm
uses: azure/setup-helm@v4
with:
version: v3.4.0
- name: Deploy Backend
if: steps.check_changes.outputs.backend_changed == 'true'
run: |
helm upgrade autogpt-server ./autogpt-server \
--namespace ${{ env.NAMESPACE }} \
-f autogpt-server/values.yaml \
-f autogpt-server/values.dev.yaml \
--set image.tag=${{ github.sha }}
- name: Deploy Websocket
if: steps.check_changes.outputs.backend_changed == 'true'
run: |
helm upgrade autogpt-websocket-server ./autogpt-websocket-server \
--namespace ${{ env.NAMESPACE }} \
-f autogpt-websocket-server/values.yaml \
-f autogpt-websocket-server/values.dev.yaml \
--set image.tag=${{ github.sha }}
- name: Deploy Market
if: steps.check_changes.outputs.market_changed == 'true'
run: |
helm upgrade autogpt-market ./autogpt-market \
--namespace ${{ env.NAMESPACE }} \
-f autogpt-market/values.yaml \
-f autogpt-market/values.dev.yaml \
--set image.tag=${{ github.sha }}
- name: Deploy Frontend
if: steps.check_changes.outputs.frontend_changed == 'true'
run: |
helm upgrade autogpt-builder ./autogpt-builder \
--namespace ${{ env.NAMESPACE }} \
-f autogpt-builder/values.yaml \
-f autogpt-builder/values.dev.yaml \
--set image.tag=${{ github.sha }}

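The removed `Check for changes` step decided per-service whether a rebuild was needed by grepping `git diff` output for path prefixes. An illustrative Python version of that logic (not from the repo):

```python
import subprocess

def changed_components(base: str = "origin/dev") -> dict[str, bool]:
    """Return which platform services have file changes relative to `base`."""
    changed = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    prefixes = {
        "backend": "autogpt_platform/backend/",
        "frontend": "autogpt_platform/frontend/",
        "market": "autogpt_platform/market/",
    }
    return {name: any(path.startswith(p) for path in changed)
            for name, p in prefixes.items()}
```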
View File

@@ -1,56 +0,0 @@
name: AutoGPT Platform - Infra
on:
push:
branches: [ master, dev ]
paths:
- '.github/workflows/platform-autogpt-infra-ci.yml'
- 'autogpt_platform/infra/**'
pull_request:
paths:
- '.github/workflows/platform-autogpt-infra-ci.yml'
- 'autogpt_platform/infra/**'
defaults:
run:
shell: bash
working-directory: autogpt_platform/infra
jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: TFLint
uses: pauloconnor/tflint-action@v0.0.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tflint_path: terraform/
tflint_recurse: true
tflint_changed_only: false
- name: Set up Helm
uses: azure/setup-helm@v4
with:
version: v3.14.4
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.6.1
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --target-branch ${{ github.event.repository.default_branch }})
if [[ -n "$changed" ]]; then
echo "changed=true" >> "$GITHUB_OUTPUT"
fi
- name: Run chart-testing (lint)
if: steps.list-changed.outputs.changed == 'true'
run: ct lint --target-branch ${{ github.event.repository.default_branch }}

View File

@@ -11,6 +11,7 @@ on:
paths:
- ".github/workflows/platform-backend-ci.yml"
- "autogpt_platform/backend/**"
merge_group:
concurrency:
group: ${{ format('backend-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}

View File

@@ -10,6 +10,7 @@ on:
paths:
- ".github/workflows/platform-frontend-ci.yml"
- "autogpt_platform/frontend/**"
merge_group:
defaults:
run:
@@ -69,6 +70,10 @@ jobs:
run: |
cp ../supabase/docker/.env.example ../.env
- name: Copy backend .env
run: |
cp ../backend/.env.example ../backend/.env
- name: Run docker compose
run: |
docker compose -f ../docker-compose.yml up -d

View File

@@ -11,6 +11,7 @@ on:
paths:
- ".github/workflows/platform-market-ci.yml"
- "autogpt_platform/market/**"
merge_group:
concurrency:
group: ${{ format('backend-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}

View File

@@ -2,6 +2,7 @@ name: Repo - PR Status Checker
on:
pull_request:
types: [opened, synchronize, reopened]
merge_group:
jobs:
status-check:

View File

@@ -7,13 +7,18 @@ from typing import Dict, List, Tuple
CHECK_INTERVAL = 30
def get_environment_variables() -> Tuple[str, str, str, str, str]:
"""Retrieve and return necessary environment variables."""
try:
with open(os.environ["GITHUB_EVENT_PATH"]) as f:
event = json.load(f)
sha = event["pull_request"]["head"]["sha"]
# Handle both PR and merge group events
if "pull_request" in event:
sha = event["pull_request"]["head"]["sha"]
else:
sha = os.environ["GITHUB_SHA"]
return (
os.environ["GITHUB_API_URL"],

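The change above makes the status checker work for `merge_group` events, whose payload carries no `pull_request` object. Restated as a self-contained function, assuming the standard variables GitHub sets for each run:

```python
import json
import os

def resolve_sha() -> str:
    with open(os.environ["GITHUB_EVENT_PATH"]) as f:
        event = json.load(f)
    if "pull_request" in event:          # pull_request-triggered runs
        return event["pull_request"]["head"]["sha"]
    return os.environ["GITHUB_SHA"]      # merge_group and other events
```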
.gitignore (vendored)
View File

@@ -171,3 +171,5 @@ ig*
.github_access_token
LICENSE.rtf
autogpt_platform/backend/settings.py
/.auth
/autogpt_platform/frontend/.auth

View File

@@ -9,7 +9,7 @@ repos:
- id: check-merge-conflict
- id: check-symlinks
- id: debug-statements
- repo: https://github.com/Yelp/detect-secrets
rev: v1.5.0
hooks:
@@ -19,27 +19,117 @@ repos:
files: ^autogpt_platform/
stages: [push]
- repo: local
# For proper type checking, all dependencies need to be up-to-date.
# It's also a good idea to check that poetry.lock is consistent with pyproject.toml.
hooks:
- id: poetry-install
name: Check & Install dependencies - AutoGPT Platform - Backend
alias: poetry-install-platform-backend
entry: poetry -C autogpt_platform/backend install
# include autogpt_libs source (since it's a path dependency)
files: ^autogpt_platform/(backend|autogpt_libs)/poetry\.lock$
types: [file]
language: system
pass_filenames: false
- id: poetry-install
name: Check & Install dependencies - AutoGPT Platform - Libs
alias: poetry-install-platform-libs
entry: poetry -C autogpt_platform/autogpt_libs install
files: ^autogpt_platform/autogpt_libs/poetry\.lock$
types: [file]
language: system
pass_filenames: false
- id: poetry-install
name: Check & Install dependencies - Classic - AutoGPT
alias: poetry-install-classic-autogpt
entry: poetry -C classic/original_autogpt install
# include forge source (since it's a path dependency)
files: ^classic/(original_autogpt|forge)/poetry\.lock$
types: [file]
language: system
pass_filenames: false
- id: poetry-install
name: Check & Install dependencies - Classic - Forge
alias: poetry-install-classic-forge
entry: poetry -C classic/forge install
files: ^classic/forge/poetry\.lock$
types: [file]
language: system
pass_filenames: false
- id: poetry-install
name: Check & Install dependencies - Classic - Benchmark
alias: poetry-install-classic-benchmark
entry: poetry -C classic/benchmark install
files: ^classic/benchmark/poetry\.lock$
types: [file]
language: system
pass_filenames: false
- repo: local
# For proper type checking, Prisma client must be up-to-date.
hooks:
- id: prisma-generate
name: Prisma Generate - AutoGPT Platform - Backend
alias: prisma-generate-platform-backend
entry: bash -c 'cd autogpt_platform/backend && poetry run prisma generate'
# include everything that triggers poetry install + the prisma schema
files: ^autogpt_platform/((backend|autogpt_libs)/poetry\.lock|backend/schema.prisma)$
types: [file]
language: system
pass_filenames: false
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.7.2
hooks:
- id: ruff
name: Lint (Ruff) - AutoGPT Platform - Backend
alias: ruff-lint-platform-backend
files: ^autogpt_platform/backend/
args: [--fix]
- id: ruff
name: Lint (Ruff) - AutoGPT Platform - Libs
alias: ruff-lint-platform-libs
files: ^autogpt_platform/autogpt_libs/
args: [--fix]
- repo: local
# isort needs the context of which packages are installed to function, so we
# can't use a vendored isort pre-commit hook (which runs in its own isolated venv).
hooks:
- id: isort-autogpt
name: Lint (isort) - AutoGPT
entry: poetry -C classic/original_autogpt run isort
- id: isort
name: Lint (isort) - AutoGPT Platform - Backend
alias: isort-platform-backend
entry: poetry -C autogpt_platform/backend run isort -p backend
files: ^autogpt_platform/backend/
types: [file, python]
language: system
- id: isort
name: Lint (isort) - Classic - AutoGPT
alias: isort-classic-autogpt
entry: poetry -C classic/original_autogpt run isort -p autogpt
files: ^classic/original_autogpt/
types: [file, python]
language: system
- id: isort-forge
name: Lint (isort) - Forge
entry: poetry -C classic/forge run isort
- id: isort
name: Lint (isort) - Classic - Forge
alias: isort-classic-forge
entry: poetry -C classic/forge run isort -p forge
files: ^classic/forge/
types: [file, python]
language: system
- id: isort-benchmark
name: Lint (isort) - Benchmark
entry: poetry -C classic/benchmark run isort
- id: isort
name: Lint (isort) - Classic - Benchmark
alias: isort-classic-benchmark
entry: poetry -C classic/benchmark run isort -p agbenchmark
files: ^classic/benchmark/
types: [file, python]
language: system
@@ -51,7 +141,6 @@ repos:
hooks:
- id: black
name: Lint (Black)
language_version: python3.12
- repo: https://github.com/PyCQA/flake8
rev: 7.0.0
@@ -59,20 +148,20 @@ repos:
# them separately.
hooks:
- id: flake8
name: Lint (Flake8) - AutoGPT
alias: flake8-autogpt
name: Lint (Flake8) - Classic - AutoGPT
alias: flake8-classic-autogpt
files: ^classic/original_autogpt/(autogpt|scripts|tests)/
args: [--config=classic/original_autogpt/.flake8]
- id: flake8
name: Lint (Flake8) - Forge
alias: flake8-forge
name: Lint (Flake8) - Classic - Forge
alias: flake8-classic-forge
files: ^classic/forge/(forge|tests)/
args: [--config=classic/forge/.flake8]
- id: flake8
name: Lint (Flake8) - Benchmark
alias: flake8-benchmark
name: Lint (Flake8) - Classic - Benchmark
alias: flake8-classic-benchmark
files: ^classic/benchmark/(agbenchmark|tests)/((?!reports).)*[/.]
args: [--config=classic/benchmark/.flake8]
@@ -81,31 +170,52 @@ repos:
# project. To trigger on poetry.lock we also reset the file `types` filter.
hooks:
- id: pyright
name: Typecheck - AutoGPT
alias: pyright-autogpt
entry: poetry -C classic/original_autogpt run pyright
args: [-p, autogpt, autogpt]
name: Typecheck - AutoGPT Platform - Backend
alias: pyright-platform-backend
entry: poetry -C autogpt_platform/backend run pyright
args: [-p, autogpt_platform/backend, autogpt_platform/backend]
# include forge source (since it's a path dependency) but exclude *_test.py files:
files: ^(classic/original_autogpt/((autogpt|scripts|tests)/|poetry\.lock$)|classic/forge/(classic/forge/.*(?<!_test)\.py|poetry\.lock)$)
files: ^autogpt_platform/(backend/((backend|test)/|(\w+\.py|poetry\.lock)$)|autogpt_libs/(autogpt_libs/.*(?<!_test)\.py|poetry\.lock)$)
types: [file]
language: system
pass_filenames: false
- id: pyright
name: Typecheck - Forge
alias: pyright-forge
name: Typecheck - AutoGPT Platform - Libs
alias: pyright-platform-libs
entry: poetry -C autogpt_platform/autogpt_libs run pyright
args: [-p, autogpt_platform/autogpt_libs, autogpt_platform/autogpt_libs]
files: ^autogpt_platform/autogpt_libs/(autogpt_libs/|poetry\.lock$)
types: [file]
language: system
pass_filenames: false
- id: pyright
name: Typecheck - Classic - AutoGPT
alias: pyright-classic-autogpt
entry: poetry -C classic/original_autogpt run pyright
args: [-p, classic/original_autogpt, classic/original_autogpt]
# include forge source (since it's a path dependency) but exclude *_test.py files:
files: ^(classic/original_autogpt/((autogpt|scripts|tests)/|poetry\.lock$)|classic/forge/(forge/.*(?<!_test)\.py|poetry\.lock)$)
types: [file]
language: system
pass_filenames: false
- id: pyright
name: Typecheck - Classic - Forge
alias: pyright-classic-forge
entry: poetry -C classic/forge run pyright
args: [-p, forge, forge]
files: ^classic/forge/(classic/forge/|poetry\.lock$)
args: [-p, classic/forge, classic/forge]
files: ^classic/forge/(forge/|poetry\.lock$)
types: [file]
language: system
pass_filenames: false
- id: pyright
name: Typecheck - Benchmark
alias: pyright-benchmark
name: Typecheck - Classic - Benchmark
alias: pyright-classic-benchmark
entry: poetry -C classic/benchmark run pyright
args: [-p, benchmark, benchmark]
args: [-p, classic/benchmark, classic/benchmark]
files: ^classic/benchmark/(agbenchmark/|tests/|poetry\.lock$)
types: [file]
language: system
@@ -113,23 +223,35 @@ repos:
- repo: local
hooks:
- id: pytest-autogpt
name: Run tests - AutoGPT (excl. slow tests)
- id: pytest
name: Run tests - AutoGPT Platform - Backend
alias: pytest-platform-backend
entry: bash -c 'cd autogpt_platform/backend && poetry run pytest'
# include autogpt_libs source (since it's a path dependency) but exclude *_test.py files:
files: ^autogpt_platform/(backend/((backend|test)/|poetry\.lock$)|autogpt_libs/(autogpt_libs/.*(?<!_test)\.py|poetry\.lock)$)
language: system
pass_filenames: false
- id: pytest
name: Run tests - Classic - AutoGPT (excl. slow tests)
alias: pytest-classic-autogpt
entry: bash -c 'cd classic/original_autogpt && poetry run pytest --cov=autogpt -m "not slow" tests/unit tests/integration'
# include forge source (since it's a path dependency) but exclude *_test.py files:
-files: ^(classic/original_autogpt/((autogpt|tests)/|poetry\.lock$)|classic/forge/(classic/forge/.*(?<!_test)\.py|poetry\.lock)$)
+files: ^(classic/original_autogpt/((autogpt|tests)/|poetry\.lock$)|classic/forge/(forge/.*(?<!_test)\.py|poetry\.lock)$)
language: system
pass_filenames: false
- id: pytest-forge
name: Run tests - Forge (excl. slow tests)
- id: pytest
name: Run tests - Classic - Forge (excl. slow tests)
alias: pytest-classic-forge
entry: bash -c 'cd classic/forge && poetry run pytest --cov=forge -m "not slow"'
-files: ^classic/forge/(classic/forge/|tests/|poetry\.lock$)
+files: ^classic/forge/(forge/|tests/|poetry\.lock$)
language: system
pass_filenames: false
- id: pytest-benchmark
name: Run tests - Benchmark
- id: pytest
name: Run tests - Classic - Benchmark
alias: pytest-classic-benchmark
entry: bash -c 'cd classic/benchmark && poetry run pytest --cov=benchmark'
files: ^classic/benchmark/(agbenchmark/|tests/|poetry\.lock$)
language: system

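Each hook above is gated by a `files:` regex that pre-commit matches against staged paths, which is what keeps, say, the platform backend hooks from firing on classic code. A quick illustration of how one of those anchored patterns behaves:

```python
import re

# Pattern from the poetry-install-platform-backend hook above.
pattern = re.compile(r"^autogpt_platform/(backend|autogpt_libs)/poetry\.lock$")

assert pattern.match("autogpt_platform/backend/poetry.lock")
assert pattern.match("autogpt_platform/autogpt_libs/poetry.lock")
assert not pattern.match("classic/forge/poetry.lock")
```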
.vscode/launch.json (new file)
View File

@@ -0,0 +1,67 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Frontend: Server Side",
"type": "node-terminal",
"request": "launch",
"cwd": "${workspaceFolder}/autogpt_platform/frontend",
"command": "yarn dev"
},
{
"name": "Frontend: Client Side",
"type": "msedge",
"request": "launch",
"url": "http://localhost:3000"
},
{
"name": "Frontend: Full Stack",
"type": "node-terminal",
"request": "launch",
"command": "yarn dev",
"cwd": "${workspaceFolder}/autogpt_platform/frontend",
"serverReadyAction": {
"pattern": "- Local:.+(https?://.+)",
"uriFormat": "%s",
"action": "debugWithEdge"
}
},
{
"name": "Backend",
"type": "debugpy",
"request": "launch",
"module": "backend.app",
// "env": {
// "ENV": "dev"
// },
"envFile": "${workspaceFolder}/backend/.env",
"justMyCode": false,
"cwd": "${workspaceFolder}/autogpt_platform/backend"
},
{
"name": "Marketplace",
"type": "debugpy",
"request": "launch",
"module": "autogpt_platform.market.main",
"env": {
"ENV": "dev"
},
"envFile": "${workspaceFolder}/market/.env",
"justMyCode": false,
"cwd": "${workspaceFolder}/market"
}
],
"compounds": [
{
"name": "Everything",
"configurations": ["Backend", "Frontend: Full Stack"],
// "preLaunchTask": "${defaultBuildTask}",
"stopAll": true,
"presentation": {
"hidden": false,
"order": 0
}
}
]
}

View File

@@ -35,7 +35,7 @@ The AutoGPT frontend is where users interact with our powerful AI automation pla
**Monitoring and Analytics:** Keep track of your agents' performance and gain insights to continually improve your automation processes.
-[Read this guide](https://docs.agpt.co/server/new_blocks/) to learn how to build your own custom blocks.
+[Read this guide](https://docs.agpt.co/platform/new_blocks/) to learn how to build your own custom blocks.
### 💽 AutoGPT Server

SECURITY.md (new file)
View File

@@ -0,0 +1,47 @@
# Security Policy
## Reporting Security Issues
We take the security of our project seriously. If you believe you have found a security vulnerability, please report it to us privately. **Please do not report security vulnerabilities through public GitHub issues, discussions, or pull requests.**
> **Important Note**: Any code within the `classic/` folder is considered legacy, unsupported, and out of scope for security reports. We will not address security vulnerabilities in this deprecated code.
Instead, please report them via:
- [GitHub Security Advisory](https://github.com/Significant-Gravitas/AutoGPT/security/advisories/new)
- [Huntr.dev](https://huntr.com/repos/significant-gravitas/autogpt) - where you may be eligible for a bounty
### Reporting Process
1. **Submit Report**: Use one of the above channels to submit your report
2. **Response Time**: Our team will acknowledge receipt of your report within 14 business days.
3. **Collaboration**: We will collaborate with you to understand and validate the issue
4. **Resolution**: We will work on a fix and coordinate the release process
### Disclosure Policy
- Please provide detailed reports with reproducible steps
- Include the version/commit hash where you discovered the vulnerability
- Allow us a 90-day security fix window before any public disclosure
- Share any potential mitigations or workarounds if known
## Supported Versions
Only the following versions are eligible for security updates:
| Version | Supported |
|---------|-----------|
| Latest release on master branch | ✅ |
| Development commits (pre-master) | ✅ |
| Classic folder (deprecated) | ❌ |
| All other versions | ❌ |
## Security Best Practices
When using this project:
1. Always use the latest stable version
2. Review security advisories before updating
3. Follow our security documentation and guidelines
4. Keep your dependencies up to date
5. Do not use code from the `classic/` folder as it is deprecated and unsupported
## Past Security Advisories
For a list of past security advisories, please visit our [Security Advisory Page](https://github.com/Significant-Gravitas/AutoGPT/security/advisories) and [Huntr Disclosures Page](https://huntr.com/repos/significant-gravitas/autogpt).
---
Last updated: November 2024

View File

@@ -0,0 +1,34 @@
import hashlib
import secrets
from typing import NamedTuple


class APIKeyContainer(NamedTuple):
    """Container for API key parts."""

    raw: str
    prefix: str
    postfix: str
    hash: str


class APIKeyManager:
    PREFIX: str = "agpt_"
    PREFIX_LENGTH: int = 8
    POSTFIX_LENGTH: int = 8

    def generate_api_key(self) -> APIKeyContainer:
        """Generate a new API key with all its parts."""
        raw_key = f"{self.PREFIX}{secrets.token_urlsafe(32)}"
        return APIKeyContainer(
            raw=raw_key,
            prefix=raw_key[: self.PREFIX_LENGTH],
            postfix=raw_key[-self.POSTFIX_LENGTH :],
            hash=hashlib.sha256(raw_key.encode()).hexdigest(),
        )

    def verify_api_key(self, provided_key: str, stored_hash: str) -> bool:
        """Verify if a provided API key matches the stored hash."""
        if not provided_key.startswith(self.PREFIX):
            return False
        return hashlib.sha256(provided_key.encode()).hexdigest() == stored_hash

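A usage sketch for the `APIKeyManager` above (illustrative): the raw key is shown to the caller once, while only the prefix, postfix, and hash would be persisted.

```python
manager = APIKeyManager()
key = manager.generate_api_key()

print(f"Your key: {key.raw}")                     # e.g. agpt_...
print(f"Stored:   {key.prefix}...{key.postfix}")  # displayable fragments

assert manager.verify_api_key(key.raw, key.hash)
assert not manager.verify_api_key("agpt_not-the-key", key.hash)
# Hardening idea (not in the code above): comparing digests with
# secrets.compare_digest() would rule out timing side channels.
```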
View File

@@ -1,7 +1,8 @@
import fastapi
from .config import Settings
from .middleware import auth_middleware
from .models import User
from .models import DEFAULT_USER_ID, User
def requires_user(payload: dict = fastapi.Depends(auth_middleware)) -> User:
@@ -16,8 +17,12 @@ def requires_admin_user(
def verify_user(payload: dict | None, admin_only: bool) -> User:
if not payload:
if Settings.ENABLE_AUTH:
raise fastapi.HTTPException(
status_code=401, detail="Authorization header is missing"
)
# This handles the case when authentication is disabled
payload = {"sub": "3e53486c-cf57-477e-ba2a-cb02dc828e1a", "role": "admin"}
payload = {"sub": DEFAULT_USER_ID, "role": "admin"}
user_id = payload.get("sub")

View File

@@ -1,5 +1,8 @@
from dataclasses import dataclass
DEFAULT_USER_ID = "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
DEFAULT_EMAIL = "default@example.com"
# Using dataclass here to avoid adding dependency on pydantic
@dataclass(frozen=True)

View File

@@ -0,0 +1,167 @@
import asyncio
import contextlib
import logging
from functools import wraps
from typing import Any, Awaitable, Callable, Dict, Optional, TypeVar, Union, cast
import ldclient
from fastapi import HTTPException
from ldclient import Context, LDClient
from ldclient.config import Config
from typing_extensions import ParamSpec
from .config import SETTINGS
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)
P = ParamSpec("P")
T = TypeVar("T")
def get_client() -> LDClient:
"""Get the LaunchDarkly client singleton."""
return ldclient.get()
def initialize_launchdarkly() -> None:
sdk_key = SETTINGS.launch_darkly_sdk_key
logger.debug(
f"Initializing LaunchDarkly with SDK key: {'present' if sdk_key else 'missing'}"
)
if not sdk_key:
logger.warning("LaunchDarkly SDK key not configured")
return
config = Config(sdk_key)
ldclient.set_config(config)
if ldclient.get().is_initialized():
logger.info("LaunchDarkly client initialized successfully")
else:
logger.error("LaunchDarkly client failed to initialize")
def shutdown_launchdarkly() -> None:
"""Shutdown the LaunchDarkly client."""
if ldclient.get().is_initialized():
ldclient.get().close()
logger.info("LaunchDarkly client closed successfully")
def create_context(
user_id: str, additional_attributes: Optional[Dict[str, Any]] = None
) -> Context:
"""Create LaunchDarkly context with optional additional attributes."""
builder = Context.builder(str(user_id)).kind("user")
if additional_attributes:
for key, value in additional_attributes.items():
builder.set(key, value)
return builder.build()
def feature_flag(
flag_key: str,
default: bool = False,
) -> Callable[
[Callable[P, Union[T, Awaitable[T]]]], Callable[P, Union[T, Awaitable[T]]]
]:
"""
Decorator for feature flag protected endpoints.
"""
def decorator(
func: Callable[P, Union[T, Awaitable[T]]]
) -> Callable[P, Union[T, Awaitable[T]]]:
@wraps(func)
async def async_wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
try:
user_id = kwargs.get("user_id")
if not user_id:
raise ValueError("user_id is required")
if not get_client().is_initialized():
logger.warning(
f"LaunchDarkly not initialized, using default={default}"
)
is_enabled = default
else:
context = create_context(str(user_id))
is_enabled = get_client().variation(flag_key, context, default)
if not is_enabled:
raise HTTPException(status_code=404, detail="Feature not available")
result = func(*args, **kwargs)
if asyncio.iscoroutine(result):
return await result
return cast(T, result)
except Exception as e:
logger.error(f"Error evaluating feature flag {flag_key}: {e}")
raise
@wraps(func)
def sync_wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
try:
user_id = kwargs.get("user_id")
if not user_id:
raise ValueError("user_id is required")
if not get_client().is_initialized():
logger.warning(
f"LaunchDarkly not initialized, using default={default}"
)
is_enabled = default
else:
context = create_context(str(user_id))
is_enabled = get_client().variation(flag_key, context, default)
if not is_enabled:
raise HTTPException(status_code=404, detail="Feature not available")
return cast(T, func(*args, **kwargs))
except Exception as e:
logger.error(f"Error evaluating feature flag {flag_key}: {e}")
raise
return cast(
Callable[P, Union[T, Awaitable[T]]],
async_wrapper if asyncio.iscoroutinefunction(func) else sync_wrapper,
)
return decorator
def percentage_rollout(
flag_key: str,
default: bool = False,
) -> Callable[
[Callable[P, Union[T, Awaitable[T]]]], Callable[P, Union[T, Awaitable[T]]]
]:
"""Decorator for percentage-based rollouts."""
return feature_flag(flag_key, default)
def beta_feature(
flag_key: Optional[str] = None,
unauthorized_response: Any = {"message": "Not available in beta"},
) -> Callable[
[Callable[P, Union[T, Awaitable[T]]]], Callable[P, Union[T, Awaitable[T]]]
]:
"""Decorator for beta features."""
actual_key = f"beta-{flag_key}" if flag_key else "beta"
return feature_flag(actual_key, False)
@contextlib.contextmanager
def mock_flag_variation(flag_key: str, return_value: Any):
"""Context manager for testing feature flags."""
original_variation = get_client().variation
get_client().variation = lambda key, context, default: (
return_value if key == flag_key else original_variation(key, context, default)
)
try:
yield
finally:
get_client().variation = original_variation

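A hypothetical usage of the decorator and test helper defined above, gating a FastAPI route behind a flag (the route path and flag key are made up):

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/agents")
@feature_flag("beta-agents", default=False)
async def list_agents(user_id: str):
    # The decorator requires user_id as a keyword argument; FastAPI
    # supplies it here as a query parameter.
    return {"agents": [], "user_id": user_id}

# In tests, mock_flag_variation pins the flag without a LaunchDarkly
# connection:
#   with mock_flag_variation("beta-agents", True):
#       ...  # calls now see the flag as enabled
```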
View File

@@ -0,0 +1,44 @@
import pytest
from autogpt_libs.feature_flag.client import feature_flag, mock_flag_variation
from fastapi import HTTPException
from ldclient import LDClient


@pytest.fixture
def ld_client(mocker):
    client = mocker.Mock(spec=LDClient)
    mocker.patch("ldclient.get", return_value=client)
    client.is_initialized.return_value = True
    return client


@pytest.mark.asyncio
async def test_feature_flag_enabled(ld_client):
    ld_client.variation.return_value = True

    @feature_flag("test-flag")
    async def test_function(user_id: str):
        return "success"

    result = await test_function(user_id="test-user")
    assert result == "success"
    ld_client.variation.assert_called_once()


@pytest.mark.asyncio
async def test_feature_flag_unauthorized_response(ld_client):
    ld_client.variation.return_value = False

    @feature_flag("test-flag")
    async def test_function(user_id: str):
        return "success"

    # A disabled flag makes the decorator raise HTTPException(404)
    with pytest.raises(HTTPException):
        await test_function(user_id="test-user")


def test_mock_flag_variation(ld_client):
    with mock_flag_variation("test-flag", True):
        assert ld_client.variation("test-flag", None, False)
    with mock_flag_variation("test-flag", False):
        assert not ld_client.variation("test-flag", None, False)

View File

@@ -0,0 +1,15 @@
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    launch_darkly_sdk_key: str = Field(
        default="",
        description="The Launch Darkly SDK key",
        validation_alias="LAUNCH_DARKLY_SDK_KEY",
    )

    model_config = SettingsConfigDict(case_sensitive=True, extra="ignore")


SETTINGS = Settings()

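A quick illustration of the `validation_alias` above, assuming the `Settings` class is importable: the key is read from the `LAUNCH_DARKLY_SDK_KEY` environment variable and falls back to an empty string when unset.

```python
import os

os.environ["LAUNCH_DARKLY_SDK_KEY"] = "sdk-example-key"  # placeholder value
assert Settings().launch_darkly_sdk_key == "sdk-example-key"
```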
View File

@@ -6,6 +6,7 @@ from pathlib import Path
from pydantic import Field, field_validator
from pydantic_settings import BaseSettings, SettingsConfigDict
from .filters import BelowLevelFilter
from .formatters import AGPTFormatter, StructuredLoggingFormatter

View File

@@ -1,9 +0,0 @@
from .store import SupabaseIntegrationCredentialsStore
from .types import Credentials, APIKeyCredentials, OAuth2Credentials
__all__ = [
"SupabaseIntegrationCredentialsStore",
"Credentials",
"APIKeyCredentials",
"OAuth2Credentials",
]

View File

@@ -1,75 +0,0 @@
from typing import Annotated, Any, Literal, Optional, TypedDict
from uuid import uuid4
from pydantic import BaseModel, Field, SecretStr, field_serializer
class _BaseCredentials(BaseModel):
id: str = Field(default_factory=lambda: str(uuid4()))
provider: str
title: Optional[str]
@field_serializer("*")
def dump_secret_strings(value: Any, _info):
if isinstance(value, SecretStr):
return value.get_secret_value()
return value
class OAuth2Credentials(_BaseCredentials):
type: Literal["oauth2"] = "oauth2"
username: Optional[str]
"""Username of the third-party service user that these credentials belong to"""
access_token: SecretStr
access_token_expires_at: Optional[int]
"""Unix timestamp (seconds) indicating when the access token expires (if at all)"""
refresh_token: Optional[SecretStr]
refresh_token_expires_at: Optional[int]
"""Unix timestamp (seconds) indicating when the refresh token expires (if at all)"""
scopes: list[str]
metadata: dict[str, Any] = Field(default_factory=dict)
def bearer(self) -> str:
return f"Bearer {self.access_token.get_secret_value()}"
class APIKeyCredentials(_BaseCredentials):
type: Literal["api_key"] = "api_key"
api_key: SecretStr
expires_at: Optional[int]
"""Unix timestamp (seconds) indicating when the API key expires (if at all)"""
def bearer(self) -> str:
return f"Bearer {self.api_key.get_secret_value()}"
Credentials = Annotated[
OAuth2Credentials | APIKeyCredentials,
Field(discriminator="type"),
]
CredentialsType = Literal["api_key", "oauth2"]
class OAuthState(BaseModel):
token: str
provider: str
expires_at: int
scopes: list[str]
"""Unix timestamp (seconds) indicating when this OAuth state expires"""
class UserMetadata(BaseModel):
integration_credentials: list[Credentials] = Field(default_factory=list)
integration_oauth_states: list[OAuthState] = Field(default_factory=list)
class UserMetadataRaw(TypedDict, total=False):
integration_credentials: list[dict]
integration_oauth_states: list[dict]
class UserIntegrations(BaseModel):
credentials: list[Credentials] = Field(default_factory=list)
oauth_states: list[OAuthState] = Field(default_factory=list)

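The deleted module leaned on a Pydantic discriminated union: the `type` field selects which credentials model validates a payload. A minimal sketch of that pattern, assuming the types as defined above and Pydantic v2's `TypeAdapter`:

```python
from pydantic import TypeAdapter

creds = TypeAdapter(Credentials).validate_python({
    "provider": "github",
    "title": None,
    "type": "api_key",         # discriminator picks APIKeyCredentials
    "api_key": "placeholder",  # coerced into a SecretStr
    "expires_at": None,
})
assert isinstance(creds, APIKeyCredentials)
```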
View File

@@ -1,5 +1,5 @@
from typing import Callable, TypeVar, ParamSpec
import threading
from typing import Callable, ParamSpec, TypeVar
P = ParamSpec("P")
R = TypeVar("R")

View File

@@ -626,13 +626,13 @@ grpc = ["grpcio (>=1.44.0,<2.0.0.dev0)"]
[[package]]
name = "gotrue"
version = "2.9.3"
version = "2.10.0"
description = "Python Client Library for Supabase Auth"
optional = false
python-versions = "<4.0,>=3.9"
files = [
{file = "gotrue-2.9.3-py3-none-any.whl", hash = "sha256:9d2e9c74405d879f4828e0a7b94daf167a6e109c10ae6e5c59a0e21446f6e423"},
{file = "gotrue-2.9.3.tar.gz", hash = "sha256:051551d80e642bdd2ab42cac78207745d89a2a08f429a1512d82624e675d8255"},
{file = "gotrue-2.10.0-py3-none-any.whl", hash = "sha256:768e58207488e5184ffbdc4351b7280d913daf97962f4e9f2cca05c80004b042"},
{file = "gotrue-2.10.0.tar.gz", hash = "sha256:4edf4c251da3535f2b044e23deba221e848ca1210c17d0c7a9b19f79a1e3f3c0"},
]
[package.dependencies]
@@ -854,6 +854,17 @@ doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linke
perf = ["ipython"]
test = ["flufl.flake8", "importlib-resources (>=1.3)", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-perf (>=0.9.2)", "pytest-ruff (>=0.2.1)"]
[[package]]
name = "iniconfig"
version = "2.0.0"
description = "brain-dead simple config-ini parsing"
optional = false
python-versions = ">=3.7"
files = [
{file = "iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374"},
{file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"},
]
[[package]]
name = "multidict"
version = "6.1.0"
@@ -984,15 +995,30 @@ files = [
{file = "packaging-24.1.tar.gz", hash = "sha256:026ed72c8ed3fcce5bf8950572258698927fd1dbda10a5e981cdf0ac37f4f002"},
]
[[package]]
name = "pluggy"
version = "1.5.0"
description = "plugin and hook calling mechanisms for python"
optional = false
python-versions = ">=3.8"
files = [
{file = "pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669"},
{file = "pluggy-1.5.0.tar.gz", hash = "sha256:2cffa88e94fdc978c4c574f15f9e59b7f4201d439195c3715ca9e2486f1d0cf1"},
]
[package.extras]
dev = ["pre-commit", "tox"]
testing = ["pytest", "pytest-benchmark"]
[[package]]
name = "postgrest"
version = "0.17.2"
version = "0.18.0"
description = "PostgREST client for Python. This library provides an ORM interface to PostgREST."
optional = false
python-versions = "<4.0,>=3.9"
files = [
{file = "postgrest-0.17.2-py3-none-any.whl", hash = "sha256:f7c4f448e5a5e2d4c1dcf192edae9d1007c4261e9a6fb5116783a0046846ece2"},
{file = "postgrest-0.17.2.tar.gz", hash = "sha256:445cd4e4a191e279492549df0c4e827d32f9d01d0852599bb8a6efb0f07fcf78"},
{file = "postgrest-0.18.0-py3-none-any.whl", hash = "sha256:200baad0d23fee986b3a0ffd3e07bfe0cdd40e09760f11e8e13a6c0c2376d5fa"},
{file = "postgrest-0.18.0.tar.gz", hash = "sha256:29c1a94801a17eb9ad590189993fe5a7a6d8c1bfc11a3c9d0ce7ba146454ebb3"},
]
[package.dependencies]
@@ -1065,22 +1091,19 @@ pyasn1 = ">=0.4.6,<0.7.0"
[[package]]
name = "pydantic"
version = "2.9.2"
version = "2.10.2"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.8"
files = [
{file = "pydantic-2.9.2-py3-none-any.whl", hash = "sha256:f048cec7b26778210e28a0459867920654d48e5e62db0958433636cde4254f12"},
{file = "pydantic-2.9.2.tar.gz", hash = "sha256:d155cef71265d1e9807ed1c32b4c8deec042a44a50a4188b25ac67ecd81a9c0f"},
{file = "pydantic-2.10.2-py3-none-any.whl", hash = "sha256:cfb96e45951117c3024e6b67b25cdc33a3cb7b2fa62e239f7af1378358a1d99e"},
{file = "pydantic-2.10.2.tar.gz", hash = "sha256:2bc2d7f17232e0841cbba4641e65ba1eb6fafb3a08de3a091ff3ce14a197c4fa"},
]
[package.dependencies]
annotated-types = ">=0.6.0"
pydantic-core = "2.23.4"
typing-extensions = [
{version = ">=4.12.2", markers = "python_version >= \"3.13\""},
{version = ">=4.6.1", markers = "python_version < \"3.13\""},
]
pydantic-core = "2.27.1"
typing-extensions = ">=4.12.2"
[package.extras]
email = ["email-validator (>=2.0.0)"]
@@ -1088,100 +1111,111 @@ timezone = ["tzdata"]
[[package]]
name = "pydantic-core"
version = "2.23.4"
version = "2.27.1"
description = "Core functionality for Pydantic validation and serialization"
optional = false
python-versions = ">=3.8"
files = [
{file = "pydantic_core-2.23.4-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:b10bd51f823d891193d4717448fab065733958bdb6a6b351967bd349d48d5c9b"},
{file = "pydantic_core-2.23.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:4fc714bdbfb534f94034efaa6eadd74e5b93c8fa6315565a222f7b6f42ca1166"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:63e46b3169866bd62849936de036f901a9356e36376079b05efa83caeaa02ceb"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ed1a53de42fbe34853ba90513cea21673481cd81ed1be739f7f2efb931b24916"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cfdd16ab5e59fc31b5e906d1a3f666571abc367598e3e02c83403acabc092e07"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:255a8ef062cbf6674450e668482456abac99a5583bbafb73f9ad469540a3a232"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4a7cd62e831afe623fbb7aabbb4fe583212115b3ef38a9f6b71869ba644624a2"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f09e2ff1f17c2b51f2bc76d1cc33da96298f0a036a137f5440ab3ec5360b624f"},
{file = "pydantic_core-2.23.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e38e63e6f3d1cec5a27e0afe90a085af8b6806ee208b33030e65b6516353f1a3"},
{file = "pydantic_core-2.23.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:0dbd8dbed2085ed23b5c04afa29d8fd2771674223135dc9bc937f3c09284d071"},
{file = "pydantic_core-2.23.4-cp310-none-win32.whl", hash = "sha256:6531b7ca5f951d663c339002e91aaebda765ec7d61b7d1e3991051906ddde119"},
{file = "pydantic_core-2.23.4-cp310-none-win_amd64.whl", hash = "sha256:7c9129eb40958b3d4500fa2467e6a83356b3b61bfff1b414c7361d9220f9ae8f"},
{file = "pydantic_core-2.23.4-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:77733e3892bb0a7fa797826361ce8a9184d25c8dffaec60b7ffe928153680ba8"},
{file = "pydantic_core-2.23.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1b84d168f6c48fabd1f2027a3d1bdfe62f92cade1fb273a5d68e621da0e44e6d"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:df49e7a0861a8c36d089c1ed57d308623d60416dab2647a4a17fe050ba85de0e"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ff02b6d461a6de369f07ec15e465a88895f3223eb75073ffea56b84d9331f607"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:996a38a83508c54c78a5f41456b0103c30508fed9abcad0a59b876d7398f25fd"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d97683ddee4723ae8c95d1eddac7c192e8c552da0c73a925a89fa8649bf13eea"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:216f9b2d7713eb98cb83c80b9c794de1f6b7e3145eef40400c62e86cee5f4e1e"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6f783e0ec4803c787bcea93e13e9932edab72068f68ecffdf86a99fd5918878b"},
{file = "pydantic_core-2.23.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:d0776dea117cf5272382634bd2a5c1b6eb16767c223c6a5317cd3e2a757c61a0"},
{file = "pydantic_core-2.23.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d5f7a395a8cf1621939692dba2a6b6a830efa6b3cee787d82c7de1ad2930de64"},
{file = "pydantic_core-2.23.4-cp311-none-win32.whl", hash = "sha256:74b9127ffea03643e998e0c5ad9bd3811d3dac8c676e47db17b0ee7c3c3bf35f"},
{file = "pydantic_core-2.23.4-cp311-none-win_amd64.whl", hash = "sha256:98d134c954828488b153d88ba1f34e14259284f256180ce659e8d83e9c05eaa3"},
{file = "pydantic_core-2.23.4-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:f3e0da4ebaef65158d4dfd7d3678aad692f7666877df0002b8a522cdf088f231"},
{file = "pydantic_core-2.23.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f69a8e0b033b747bb3e36a44e7732f0c99f7edd5cea723d45bc0d6e95377ffee"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:723314c1d51722ab28bfcd5240d858512ffd3116449c557a1336cbe3919beb87"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:bb2802e667b7051a1bebbfe93684841cc9351004e2badbd6411bf357ab8d5ac8"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d18ca8148bebe1b0a382a27a8ee60350091a6ddaf475fa05ef50dc35b5df6327"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33e3d65a85a2a4a0dc3b092b938a4062b1a05f3a9abde65ea93b233bca0e03f2"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:128585782e5bfa515c590ccee4b727fb76925dd04a98864182b22e89a4e6ed36"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:68665f4c17edcceecc112dfed5dbe6f92261fb9d6054b47d01bf6371a6196126"},
{file = "pydantic_core-2.23.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:20152074317d9bed6b7a95ade3b7d6054845d70584216160860425f4fbd5ee9e"},
{file = "pydantic_core-2.23.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:9261d3ce84fa1d38ed649c3638feefeae23d32ba9182963e465d58d62203bd24"},
{file = "pydantic_core-2.23.4-cp312-none-win32.whl", hash = "sha256:4ba762ed58e8d68657fc1281e9bb72e1c3e79cc5d464be146e260c541ec12d84"},
{file = "pydantic_core-2.23.4-cp312-none-win_amd64.whl", hash = "sha256:97df63000f4fea395b2824da80e169731088656d1818a11b95f3b173747b6cd9"},
{file = "pydantic_core-2.23.4-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:7530e201d10d7d14abce4fb54cfe5b94a0aefc87da539d0346a484ead376c3cc"},
{file = "pydantic_core-2.23.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:df933278128ea1cd77772673c73954e53a1c95a4fdf41eef97c2b779271bd0bd"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0cb3da3fd1b6a5d0279a01877713dbda118a2a4fc6f0d821a57da2e464793f05"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:42c6dcb030aefb668a2b7009c85b27f90e51e6a3b4d5c9bc4c57631292015b0d"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:696dd8d674d6ce621ab9d45b205df149399e4bb9aa34102c970b721554828510"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2971bb5ffe72cc0f555c13e19b23c85b654dd2a8f7ab493c262071377bfce9f6"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8394d940e5d400d04cad4f75c0598665cbb81aecefaca82ca85bd28264af7f9b"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0dff76e0602ca7d4cdaacc1ac4c005e0ce0dcfe095d5b5259163a80d3a10d327"},
{file = "pydantic_core-2.23.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:7d32706badfe136888bdea71c0def994644e09fff0bfe47441deaed8e96fdbc6"},
{file = "pydantic_core-2.23.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ed541d70698978a20eb63d8c5d72f2cc6d7079d9d90f6b50bad07826f1320f5f"},
{file = "pydantic_core-2.23.4-cp313-none-win32.whl", hash = "sha256:3d5639516376dce1940ea36edf408c554475369f5da2abd45d44621cb616f769"},
{file = "pydantic_core-2.23.4-cp313-none-win_amd64.whl", hash = "sha256:5a1504ad17ba4210df3a045132a7baeeba5a200e930f57512ee02909fc5c4cb5"},
{file = "pydantic_core-2.23.4-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:d4488a93b071c04dc20f5cecc3631fc78b9789dd72483ba15d423b5b3689b555"},
{file = "pydantic_core-2.23.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:81965a16b675b35e1d09dd14df53f190f9129c0202356ed44ab2728b1c905658"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4ffa2ebd4c8530079140dd2d7f794a9d9a73cbb8e9d59ffe24c63436efa8f271"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:61817945f2fe7d166e75fbfb28004034b48e44878177fc54d81688e7b85a3665"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:29d2c342c4bc01b88402d60189f3df065fb0dda3654744d5a165a5288a657368"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5e11661ce0fd30a6790e8bcdf263b9ec5988e95e63cf901972107efc49218b13"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d18368b137c6295db49ce7218b1a9ba15c5bc254c96d7c9f9e924a9bc7825ad"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ec4e55f79b1c4ffb2eecd8a0cfba9955a2588497d96851f4c8f99aa4a1d39b12"},
{file = "pydantic_core-2.23.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:374a5e5049eda9e0a44c696c7ade3ff355f06b1fe0bb945ea3cac2bc336478a2"},
{file = "pydantic_core-2.23.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:5c364564d17da23db1106787675fc7af45f2f7b58b4173bfdd105564e132e6fb"},
{file = "pydantic_core-2.23.4-cp38-none-win32.whl", hash = "sha256:d7a80d21d613eec45e3d41eb22f8f94ddc758a6c4720842dc74c0581f54993d6"},
{file = "pydantic_core-2.23.4-cp38-none-win_amd64.whl", hash = "sha256:5f5ff8d839f4566a474a969508fe1c5e59c31c80d9e140566f9a37bba7b8d556"},
{file = "pydantic_core-2.23.4-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:a4fa4fc04dff799089689f4fd502ce7d59de529fc2f40a2c8836886c03e0175a"},
{file = "pydantic_core-2.23.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0a7df63886be5e270da67e0966cf4afbae86069501d35c8c1b3b6c168f42cb36"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dcedcd19a557e182628afa1d553c3895a9f825b936415d0dbd3cd0bbcfd29b4b"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5f54b118ce5de9ac21c363d9b3caa6c800341e8c47a508787e5868c6b79c9323"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:86d2f57d3e1379a9525c5ab067b27dbb8a0642fb5d454e17a9ac434f9ce523e3"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:de6d1d1b9e5101508cb37ab0d972357cac5235f5c6533d1071964c47139257df"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1278e0d324f6908e872730c9102b0112477a7f7cf88b308e4fc36ce1bdb6d58c"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9a6b5099eeec78827553827f4c6b8615978bb4b6a88e5d9b93eddf8bb6790f55"},
{file = "pydantic_core-2.23.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:e55541f756f9b3ee346b840103f32779c695a19826a4c442b7954550a0972040"},
{file = "pydantic_core-2.23.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a5c7ba8ffb6d6f8f2ab08743be203654bb1aaa8c9dcb09f82ddd34eadb695605"},
{file = "pydantic_core-2.23.4-cp39-none-win32.whl", hash = "sha256:37b0fe330e4a58d3c58b24d91d1eb102aeec675a3db4c292ec3928ecd892a9a6"},
{file = "pydantic_core-2.23.4-cp39-none-win_amd64.whl", hash = "sha256:1498bec4c05c9c787bde9125cfdcc63a41004ff167f495063191b863399b1a29"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:f455ee30a9d61d3e1a15abd5068827773d6e4dc513e795f380cdd59932c782d5"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1e90d2e3bd2c3863d48525d297cd143fe541be8bbf6f579504b9712cb6b643ec"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e203fdf807ac7e12ab59ca2bfcabb38c7cf0b33c41efeb00f8e5da1d86af480"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e08277a400de01bc72436a0ccd02bdf596631411f592ad985dcee21445bd0068"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f220b0eea5965dec25480b6333c788fb72ce5f9129e8759ef876a1d805d00801"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:d06b0c8da4f16d1d1e352134427cb194a0a6e19ad5db9161bf32b2113409e728"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:ba1a0996f6c2773bd83e63f18914c1de3c9dd26d55f4ac302a7efe93fb8e7433"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:9a5bce9d23aac8f0cf0836ecfc033896aa8443b501c58d0602dbfd5bd5b37753"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:78ddaaa81421a29574a682b3179d4cf9e6d405a09b99d93ddcf7e5239c742e21"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:883a91b5dd7d26492ff2f04f40fbb652de40fcc0afe07e8129e8ae779c2110eb"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:88ad334a15b32a791ea935af224b9de1bf99bcd62fabf745d5f3442199d86d59"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:233710f069d251feb12a56da21e14cca67994eab08362207785cf8c598e74577"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:19442362866a753485ba5e4be408964644dd6a09123d9416c54cd49171f50744"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:624e278a7d29b6445e4e813af92af37820fafb6dcc55c012c834f9e26f9aaaef"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f5ef8f42bec47f21d07668a043f077d507e5bf4e668d5c6dfe6aaba89de1a5b8"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:aea443fffa9fbe3af1a9ba721a87f926fe548d32cab71d188a6ede77d0ff244e"},
{file = "pydantic_core-2.23.4.tar.gz", hash = "sha256:2584f7cf844ac4d970fba483a717dbe10c1c1c96a969bf65d61ffe94df1b2863"},
{file = "pydantic_core-2.27.1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:71a5e35c75c021aaf400ac048dacc855f000bdfed91614b4a726f7432f1f3d6a"},
{file = "pydantic_core-2.27.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f82d068a2d6ecfc6e054726080af69a6764a10015467d7d7b9f66d6ed5afa23b"},
{file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:121ceb0e822f79163dd4699e4c54f5ad38b157084d97b34de8b232bcaad70278"},
{file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4603137322c18eaf2e06a4495f426aa8d8388940f3c457e7548145011bb68e05"},
{file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a33cd6ad9017bbeaa9ed78a2e0752c5e250eafb9534f308e7a5f7849b0b1bfb4"},
{file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:15cc53a3179ba0fcefe1e3ae50beb2784dede4003ad2dfd24f81bba4b23a454f"},
{file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45d9c5eb9273aa50999ad6adc6be5e0ecea7e09dbd0d31bd0c65a55a2592ca08"},
{file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8bf7b66ce12a2ac52d16f776b31d16d91033150266eb796967a7e4621707e4f6"},
{file = "pydantic_core-2.27.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:655d7dd86f26cb15ce8a431036f66ce0318648f8853d709b4167786ec2fa4807"},
{file = "pydantic_core-2.27.1-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:5556470f1a2157031e676f776c2bc20acd34c1990ca5f7e56f1ebf938b9ab57c"},
{file = "pydantic_core-2.27.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:f69ed81ab24d5a3bd93861c8c4436f54afdf8e8cc421562b0c7504cf3be58206"},
{file = "pydantic_core-2.27.1-cp310-none-win32.whl", hash = "sha256:f5a823165e6d04ccea61a9f0576f345f8ce40ed533013580e087bd4d7442b52c"},
{file = "pydantic_core-2.27.1-cp310-none-win_amd64.whl", hash = "sha256:57866a76e0b3823e0b56692d1a0bf722bffb324839bb5b7226a7dbd6c9a40b17"},
{file = "pydantic_core-2.27.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:ac3b20653bdbe160febbea8aa6c079d3df19310d50ac314911ed8cc4eb7f8cb8"},
{file = "pydantic_core-2.27.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a5a8e19d7c707c4cadb8c18f5f60c843052ae83c20fa7d44f41594c644a1d330"},
{file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7f7059ca8d64fea7f238994c97d91f75965216bcbe5f695bb44f354893f11d52"},
{file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:bed0f8a0eeea9fb72937ba118f9db0cb7e90773462af7962d382445f3005e5a4"},
{file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a3cb37038123447cf0f3ea4c74751f6a9d7afef0eb71aa07bf5f652b5e6a132c"},
{file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:84286494f6c5d05243456e04223d5a9417d7f443c3b76065e75001beb26f88de"},
{file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:acc07b2cfc5b835444b44a9956846b578d27beeacd4b52e45489e93276241025"},
{file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:4fefee876e07a6e9aad7a8c8c9f85b0cdbe7df52b8a9552307b09050f7512c7e"},
{file = "pydantic_core-2.27.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:258c57abf1188926c774a4c94dd29237e77eda19462e5bb901d88adcab6af919"},
{file = "pydantic_core-2.27.1-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:35c14ac45fcfdf7167ca76cc80b2001205a8d5d16d80524e13508371fb8cdd9c"},
{file = "pydantic_core-2.27.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d1b26e1dff225c31897696cab7d4f0a315d4c0d9e8666dbffdb28216f3b17fdc"},
{file = "pydantic_core-2.27.1-cp311-none-win32.whl", hash = "sha256:2cdf7d86886bc6982354862204ae3b2f7f96f21a3eb0ba5ca0ac42c7b38598b9"},
{file = "pydantic_core-2.27.1-cp311-none-win_amd64.whl", hash = "sha256:3af385b0cee8df3746c3f406f38bcbfdc9041b5c2d5ce3e5fc6637256e60bbc5"},
{file = "pydantic_core-2.27.1-cp311-none-win_arm64.whl", hash = "sha256:81f2ec23ddc1b476ff96563f2e8d723830b06dceae348ce02914a37cb4e74b89"},
{file = "pydantic_core-2.27.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:9cbd94fc661d2bab2bc702cddd2d3370bbdcc4cd0f8f57488a81bcce90c7a54f"},
{file = "pydantic_core-2.27.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5f8c4718cd44ec1580e180cb739713ecda2bdee1341084c1467802a417fe0f02"},
{file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:15aae984e46de8d376df515f00450d1522077254ef6b7ce189b38ecee7c9677c"},
{file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1ba5e3963344ff25fc8c40da90f44b0afca8cfd89d12964feb79ac1411a260ac"},
{file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:992cea5f4f3b29d6b4f7f1726ed8ee46c8331c6b4eed6db5b40134c6fe1768bb"},
{file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0325336f348dbee6550d129b1627cb8f5351a9dc91aad141ffb96d4937bd9529"},
{file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7597c07fbd11515f654d6ece3d0e4e5093edc30a436c63142d9a4b8e22f19c35"},
{file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3bbd5d8cc692616d5ef6fbbbd50dbec142c7e6ad9beb66b78a96e9c16729b089"},
{file = "pydantic_core-2.27.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:dc61505e73298a84a2f317255fcc72b710b72980f3a1f670447a21efc88f8381"},
{file = "pydantic_core-2.27.1-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:e1f735dc43da318cad19b4173dd1ffce1d84aafd6c9b782b3abc04a0d5a6f5bb"},
{file = "pydantic_core-2.27.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:f4e5658dbffe8843a0f12366a4c2d1c316dbe09bb4dfbdc9d2d9cd6031de8aae"},
{file = "pydantic_core-2.27.1-cp312-none-win32.whl", hash = "sha256:672ebbe820bb37988c4d136eca2652ee114992d5d41c7e4858cdd90ea94ffe5c"},
{file = "pydantic_core-2.27.1-cp312-none-win_amd64.whl", hash = "sha256:66ff044fd0bb1768688aecbe28b6190f6e799349221fb0de0e6f4048eca14c16"},
{file = "pydantic_core-2.27.1-cp312-none-win_arm64.whl", hash = "sha256:9a3b0793b1bbfd4146304e23d90045f2a9b5fd5823aa682665fbdaf2a6c28f3e"},
{file = "pydantic_core-2.27.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:f216dbce0e60e4d03e0c4353c7023b202d95cbaeff12e5fd2e82ea0a66905073"},
{file = "pydantic_core-2.27.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a2e02889071850bbfd36b56fd6bc98945e23670773bc7a76657e90e6b6603c08"},
{file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42b0e23f119b2b456d07ca91b307ae167cc3f6c846a7b169fca5326e32fdc6cf"},
{file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:764be71193f87d460a03f1f7385a82e226639732214b402f9aa61f0d025f0737"},
{file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1c00666a3bd2f84920a4e94434f5974d7bbc57e461318d6bb34ce9cdbbc1f6b2"},
{file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3ccaa88b24eebc0f849ce0a4d09e8a408ec5a94afff395eb69baf868f5183107"},
{file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c65af9088ac534313e1963443d0ec360bb2b9cba6c2909478d22c2e363d98a51"},
{file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:206b5cf6f0c513baffaeae7bd817717140770c74528f3e4c3e1cec7871ddd61a"},
{file = "pydantic_core-2.27.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:062f60e512fc7fff8b8a9d680ff0ddaaef0193dba9fa83e679c0c5f5fbd018bc"},
{file = "pydantic_core-2.27.1-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:a0697803ed7d4af5e4c1adf1670af078f8fcab7a86350e969f454daf598c4960"},
{file = "pydantic_core-2.27.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:58ca98a950171f3151c603aeea9303ef6c235f692fe555e883591103da709b23"},
{file = "pydantic_core-2.27.1-cp313-none-win32.whl", hash = "sha256:8065914ff79f7eab1599bd80406681f0ad08f8e47c880f17b416c9f8f7a26d05"},
{file = "pydantic_core-2.27.1-cp313-none-win_amd64.whl", hash = "sha256:ba630d5e3db74c79300d9a5bdaaf6200172b107f263c98a0539eeecb857b2337"},
{file = "pydantic_core-2.27.1-cp313-none-win_arm64.whl", hash = "sha256:45cf8588c066860b623cd11c4ba687f8d7175d5f7ef65f7129df8a394c502de5"},
{file = "pydantic_core-2.27.1-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:5897bec80a09b4084aee23f9b73a9477a46c3304ad1d2d07acca19723fb1de62"},
{file = "pydantic_core-2.27.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d0165ab2914379bd56908c02294ed8405c252250668ebcb438a55494c69f44ab"},
{file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6b9af86e1d8e4cfc82c2022bfaa6f459381a50b94a29e95dcdda8442d6d83864"},
{file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5f6c8a66741c5f5447e047ab0ba7a1c61d1e95580d64bce852e3df1f895c4067"},
{file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9a42d6a8156ff78981f8aa56eb6394114e0dedb217cf8b729f438f643608cbcd"},
{file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:64c65f40b4cd8b0e049a8edde07e38b476da7e3aaebe63287c899d2cff253fa5"},
{file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdcf339322a3fae5cbd504edcefddd5a50d9ee00d968696846f089b4432cf78"},
{file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bf99c8404f008750c846cb4ac4667b798a9f7de673ff719d705d9b2d6de49c5f"},
{file = "pydantic_core-2.27.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:8f1edcea27918d748c7e5e4d917297b2a0ab80cad10f86631e488b7cddf76a36"},
{file = "pydantic_core-2.27.1-cp38-cp38-musllinux_1_1_armv7l.whl", hash = "sha256:159cac0a3d096f79ab6a44d77a961917219707e2a130739c64d4dd46281f5c2a"},
{file = "pydantic_core-2.27.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:029d9757eb621cc6e1848fa0b0310310de7301057f623985698ed7ebb014391b"},
{file = "pydantic_core-2.27.1-cp38-none-win32.whl", hash = "sha256:a28af0695a45f7060e6f9b7092558a928a28553366519f64083c63a44f70e618"},
{file = "pydantic_core-2.27.1-cp38-none-win_amd64.whl", hash = "sha256:2d4567c850905d5eaaed2f7a404e61012a51caf288292e016360aa2b96ff38d4"},
{file = "pydantic_core-2.27.1-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:e9386266798d64eeb19dd3677051f5705bf873e98e15897ddb7d76f477131967"},
{file = "pydantic_core-2.27.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4228b5b646caa73f119b1ae756216b59cc6e2267201c27d3912b592c5e323b60"},
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b3dfe500de26c52abe0477dde16192ac39c98f05bf2d80e76102d394bd13854"},
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:aee66be87825cdf72ac64cb03ad4c15ffef4143dbf5c113f64a5ff4f81477bf9"},
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b748c44bb9f53031c8cbc99a8a061bc181c1000c60a30f55393b6e9c45cc5bd"},
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5ca038c7f6a0afd0b2448941b6ef9d5e1949e999f9e5517692eb6da58e9d44be"},
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6e0bd57539da59a3e4671b90a502da9a28c72322a4f17866ba3ac63a82c4498e"},
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ac6c2c45c847bbf8f91930d88716a0fb924b51e0c6dad329b793d670ec5db792"},
{file = "pydantic_core-2.27.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b94d4ba43739bbe8b0ce4262bcc3b7b9f31459ad120fb595627eaeb7f9b9ca01"},
{file = "pydantic_core-2.27.1-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:00e6424f4b26fe82d44577b4c842d7df97c20be6439e8e685d0d715feceb9fb9"},
{file = "pydantic_core-2.27.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:38de0a70160dd97540335b7ad3a74571b24f1dc3ed33f815f0880682e6880131"},
{file = "pydantic_core-2.27.1-cp39-none-win32.whl", hash = "sha256:7ccebf51efc61634f6c2344da73e366c75e735960b5654b63d7e6f69a5885fa3"},
{file = "pydantic_core-2.27.1-cp39-none-win_amd64.whl", hash = "sha256:a57847b090d7892f123726202b7daa20df6694cbd583b67a592e856bff603d6c"},
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:3fa80ac2bd5856580e242dbc202db873c60a01b20309c8319b5c5986fbe53ce6"},
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d950caa237bb1954f1b8c9227b5065ba6875ac9771bb8ec790d956a699b78676"},
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0e4216e64d203e39c62df627aa882f02a2438d18a5f21d7f721621f7a5d3611d"},
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:02a3d637bd387c41d46b002f0e49c52642281edacd2740e5a42f7017feea3f2c"},
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:161c27ccce13b6b0c8689418da3885d3220ed2eae2ea5e9b2f7f3d48f1d52c27"},
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:19910754e4cc9c63bc1c7f6d73aa1cfee82f42007e407c0f413695c2f7ed777f"},
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:e173486019cc283dc9778315fa29a363579372fe67045e971e89b6365cc035ed"},
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:af52d26579b308921b73b956153066481f064875140ccd1dfd4e77db89dbb12f"},
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:981fb88516bd1ae8b0cbbd2034678a39dedc98752f264ac9bc5839d3923fa04c"},
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5fde892e6c697ce3e30c61b239330fc5d569a71fefd4eb6512fc6caec9dd9e2f"},
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:816f5aa087094099fff7edabb5e01cc370eb21aa1a1d44fe2d2aefdfb5599b31"},
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9c10c309e18e443ddb108f0ef64e8729363adbfd92d6d57beec680f6261556f3"},
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:98476c98b02c8e9b2eec76ac4156fd006628b1b2d0ef27e548ffa978393fd154"},
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c3027001c28434e7ca5a6e1e527487051136aa81803ac812be51802150d880dd"},
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:7699b1df36a48169cdebda7ab5a2bac265204003f153b4bd17276153d997670a"},
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:1c39b07d90be6b48968ddc8c19e7585052088fd7ec8d568bb31ff64c70ae3c97"},
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:46ccfe3032b3915586e469d4972973f893c0a2bb65669194a5bdea9bacc088c2"},
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:62ba45e21cf6571d7f716d903b5b7b6d2617e2d5d67c0923dc47b9d41369f840"},
{file = "pydantic_core-2.27.1.tar.gz", hash = "sha256:62a763352879b84aa31058fc931884055fd75089cccbd9d58bb6afd01141b235"},
]
[package.dependencies]
@@ -1209,13 +1243,13 @@ yaml = ["pyyaml (>=6.0.1)"]
[[package]]
name = "pyjwt"
version = "2.9.0"
version = "2.10.0"
description = "JSON Web Token implementation in Python"
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
files = [
{file = "PyJWT-2.9.0-py3-none-any.whl", hash = "sha256:3b02fb0f44517787776cf48f2ae25d8e14f300e6d7545a4315cee571a415e850"},
{file = "pyjwt-2.9.0.tar.gz", hash = "sha256:7e1e5b56cc735432a7369cbfa0efe50fa113ebecdc04ae6922deba8b84582d0c"},
{file = "PyJWT-2.10.0-py3-none-any.whl", hash = "sha256:543b77207db656de204372350926bed5a86201c4cbff159f623f79c7bb487a15"},
{file = "pyjwt-2.10.0.tar.gz", hash = "sha256:7628a7eb7938959ac1b26e819a1df0fd3259505627b575e4bad6d08f76db695c"},
]
[package.extras]
@@ -1224,6 +1258,63 @@ dev = ["coverage[toml] (==5.0.4)", "cryptography (>=3.4.0)", "pre-commit", "pyte
docs = ["sphinx", "sphinx-rtd-theme", "zope.interface"]
tests = ["coverage[toml] (==5.0.4)", "pytest (>=6.0.0,<7.0.0)"]
[[package]]
name = "pytest"
version = "8.3.3"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "pytest-8.3.3-py3-none-any.whl", hash = "sha256:a6853c7375b2663155079443d2e45de913a911a11d669df02a50814944db57b2"},
{file = "pytest-8.3.3.tar.gz", hash = "sha256:70b98107bd648308a7952b06e6ca9a50bc660be218d53c257cc1fc94fda10181"},
]
[package.dependencies]
colorama = {version = "*", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=1.5,<2"
tomli = {version = ">=1", markers = "python_version < \"3.11\""}
[package.extras]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"]
[[package]]
name = "pytest-asyncio"
version = "0.24.0"
description = "Pytest support for asyncio"
optional = false
python-versions = ">=3.8"
files = [
{file = "pytest_asyncio-0.24.0-py3-none-any.whl", hash = "sha256:a811296ed596b69bf0b6f3dc40f83bcaf341b155a269052d82efa2b25ac7037b"},
{file = "pytest_asyncio-0.24.0.tar.gz", hash = "sha256:d081d828e576d85f875399194281e92bf8a68d60d72d1a2faf2feddb6c46b276"},
]
[package.dependencies]
pytest = ">=8.2,<9"
[package.extras]
docs = ["sphinx (>=5.3)", "sphinx-rtd-theme (>=1.0)"]
testing = ["coverage (>=6.2)", "hypothesis (>=5.7.1)"]
[[package]]
name = "pytest-mock"
version = "3.14.0"
description = "Thin-wrapper around the mock package for easier use with pytest"
optional = false
python-versions = ">=3.8"
files = [
{file = "pytest-mock-3.14.0.tar.gz", hash = "sha256:2719255a1efeceadbc056d6bf3df3d1c5015530fb40cf347c0f9afac88410bd0"},
{file = "pytest_mock-3.14.0-py3-none-any.whl", hash = "sha256:0b72c38033392a5f4621342fe11e9219ac11ec9d375f8e2a0c164539e0d70f6f"},
]
[package.dependencies]
pytest = ">=6.2.5"
[package.extras]
dev = ["pre-commit", "pytest-asyncio", "tox"]
[[package]]
name = "python-dateutil"
version = "2.9.0.post0"
@@ -1322,6 +1413,33 @@ files = [
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "ruff"
version = "0.8.0"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
files = [
{file = "ruff-0.8.0-py3-none-linux_armv6l.whl", hash = "sha256:fcb1bf2cc6706adae9d79c8d86478677e3bbd4ced796ccad106fd4776d395fea"},
{file = "ruff-0.8.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:295bb4c02d58ff2ef4378a1870c20af30723013f441c9d1637a008baaf928c8b"},
{file = "ruff-0.8.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b1f1c76b47c18fa92ee78b60d2d20d7e866c55ee603e7d19c1e991fad933a9a"},
{file = "ruff-0.8.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eb0d4f250a7711b67ad513fde67e8870109e5ce590a801c3722580fe98c33a99"},
{file = "ruff-0.8.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:0e55cce9aa93c5d0d4e3937e47b169035c7e91c8655b0974e61bb79cf398d49c"},
{file = "ruff-0.8.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3f4cd64916d8e732ce6b87f3f5296a8942d285bbbc161acee7fe561134af64f9"},
{file = "ruff-0.8.0-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:c5c1466be2a2ebdf7c5450dd5d980cc87c8ba6976fb82582fea18823da6fa362"},
{file = "ruff-0.8.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2dabfd05b96b7b8f2da00d53c514eea842bff83e41e1cceb08ae1966254a51df"},
{file = "ruff-0.8.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:facebdfe5a5af6b1588a1d26d170635ead6892d0e314477e80256ef4a8470cf3"},
{file = "ruff-0.8.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:87a8e86bae0dbd749c815211ca11e3a7bd559b9710746c559ed63106d382bd9c"},
{file = "ruff-0.8.0-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:85e654f0ded7befe2d61eeaf3d3b1e4ef3894469cd664ffa85006c7720f1e4a2"},
{file = "ruff-0.8.0-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:83a55679c4cb449fa527b8497cadf54f076603cc36779b2170b24f704171ce70"},
{file = "ruff-0.8.0-py3-none-musllinux_1_2_i686.whl", hash = "sha256:812e2052121634cf13cd6fddf0c1871d0ead1aad40a1a258753c04c18bb71bbd"},
{file = "ruff-0.8.0-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:780d5d8523c04202184405e60c98d7595bdb498c3c6abba3b6d4cdf2ca2af426"},
{file = "ruff-0.8.0-py3-none-win32.whl", hash = "sha256:5fdb6efecc3eb60bba5819679466471fd7d13c53487df7248d6e27146e985468"},
{file = "ruff-0.8.0-py3-none-win_amd64.whl", hash = "sha256:582891c57b96228d146725975fbb942e1f30a0c4ba19722e692ca3eb25cc9b4f"},
{file = "ruff-0.8.0-py3-none-win_arm64.whl", hash = "sha256:ba93e6294e9a737cd726b74b09a6972e36bb511f9a102f1d9a7e1ce94dd206a6"},
{file = "ruff-0.8.0.tar.gz", hash = "sha256:a7ccfe6331bf8c8dad715753e157457faf7351c2b69f62f32c165c2dbcbacd44"},
]
[[package]]
name = "six"
version = "1.16.0"
@@ -1346,19 +1464,18 @@ files = [
[[package]]
name = "storage3"
version = "0.8.2"
version = "0.9.0"
description = "Supabase Storage client for Python."
optional = false
python-versions = "<4.0,>=3.9"
files = [
{file = "storage3-0.8.2-py3-none-any.whl", hash = "sha256:f2e995b18c77a2a9265d1a33047d43e4d6abb11eb3ca5067959f68281c305de3"},
{file = "storage3-0.8.2.tar.gz", hash = "sha256:db05d3fe8fb73bd30c814c4c4749664f37a5dfc78b629e8c058ef558c2b89f5a"},
{file = "storage3-0.9.0-py3-none-any.whl", hash = "sha256:8b2fb91f0c61583a2f4eac74a8bae67e00d41ff38095c8a6cd3f2ce5e0ab76e7"},
{file = "storage3-0.9.0.tar.gz", hash = "sha256:e16697f60894c94e1d9df0d2e4af783c1b3f7dd08c9013d61978825c624188c4"},
]
[package.dependencies]
httpx = {version = ">=0.26,<0.28", extras = ["http2"]}
python-dateutil = ">=2.8.2,<3.0.0"
typing-extensions = ">=4.2.0,<5.0.0"
[[package]]
name = "strenum"
@@ -1378,37 +1495,48 @@ test = ["pylint", "pytest", "pytest-black", "pytest-cov", "pytest-pylint"]
[[package]]
name = "supabase"
version = "2.9.1"
version = "2.10.0"
description = "Supabase client for Python."
optional = false
python-versions = "<4.0,>=3.9"
files = [
{file = "supabase-2.9.1-py3-none-any.whl", hash = "sha256:a96f857a465712cb551679c1df66ba772c834f861756ce4aa2aa4cb703f6aeb7"},
{file = "supabase-2.9.1.tar.gz", hash = "sha256:51fce39c9eb50573126dabb342541ec5e1f13e7476938768f4b0ccfdb8c522cd"},
{file = "supabase-2.10.0-py3-none-any.whl", hash = "sha256:183fb23c04528593f8f81c24ceb8178f3a56bff40fec7ed873b6c55ebc2e420a"},
{file = "supabase-2.10.0.tar.gz", hash = "sha256:9ac095f8947bf60780e67c0edcbab53e2db3f6f3f022329397b093500bf2607c"},
]
[package.dependencies]
gotrue = ">=2.9.0,<3.0.0"
gotrue = ">=2.10.0,<3.0.0"
httpx = ">=0.26,<0.28"
postgrest = ">=0.17.0,<0.18.0"
postgrest = ">=0.18,<0.19"
realtime = ">=2.0.0,<3.0.0"
storage3 = ">=0.8.0,<0.9.0"
supafunc = ">=0.6.0,<0.7.0"
storage3 = ">=0.9.0,<0.10.0"
supafunc = ">=0.7.0,<0.8.0"
[[package]]
name = "supafunc"
version = "0.6.2"
version = "0.7.0"
description = "Library for Supabase Functions"
optional = false
python-versions = "<4.0,>=3.9"
files = [
{file = "supafunc-0.6.2-py3-none-any.whl", hash = "sha256:101b30616b0a1ce8cf938eca1df362fa4cf1deacb0271f53ebbd674190fb0da5"},
{file = "supafunc-0.6.2.tar.gz", hash = "sha256:c7dfa20db7182f7fe4ae436e94e05c06cd7ed98d697fed75d68c7b9792822adc"},
{file = "supafunc-0.7.0-py3-none-any.whl", hash = "sha256:4160260dc02bdd906be1e2ffd7cb3ae8b74ae437c892bb475352b6a99d9ff8eb"},
{file = "supafunc-0.7.0.tar.gz", hash = "sha256:5b1c415fba1395740b2b4eedd1d786384bd58b98f6333a11ba7889820a48b6a7"},
]
[package.dependencies]
httpx = {version = ">=0.26,<0.28", extras = ["http2"]}
[[package]]
name = "tomli"
version = "2.1.0"
description = "A lil' TOML parser"
optional = false
python-versions = ">=3.8"
files = [
{file = "tomli-2.1.0-py3-none-any.whl", hash = "sha256:a5c57c3d1c56f5ccdf89f6523458f60ef716e210fc47c4cfb188c5ba473e0391"},
{file = "tomli-2.1.0.tar.gz", hash = "sha256:3f646cae2aec94e17d04973e4249548320197cfabdf130015d023de4b74d8ab8"},
]
[[package]]
name = "typing-extensions"
version = "4.12.2"
@@ -1724,4 +1852,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.10,<4.0"
content-hash = "f80654aae542b1f2f3a44a01f197f87ffbaea52f474dd2cc2b72b8d56b155563"
content-hash = "54bf6e076ec4d09be2307f07240018459dd6594efdc55a2dc2dc1d673184587e"

View File

@@ -10,16 +10,25 @@ packages = [{ include = "autogpt_libs" }]
colorama = "^0.4.6"
expiringdict = "^1.2.2"
google-cloud-logging = "^3.11.3"
pydantic = "^2.9.2"
pydantic = "^2.10.2"
pydantic-settings = "^2.6.1"
pyjwt = "^2.8.0"
pyjwt = "^2.10.0"
pytest-asyncio = "^0.24.0"
pytest-mock = "^3.14.0"
python = ">=3.10,<4.0"
python-dotenv = "^1.0.1"
supabase = "^2.9.1"
supabase = "^2.10.0"
[tool.poetry.group.dev.dependencies]
redis = "^5.2.0"
ruff = "^0.8.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
[tool.ruff]
line-length = 88
[tool.ruff.lint]
extend-select = ["I"] # sort imports (isort rules)
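
For reference, rule group "I" enables Ruff's isort-compatible checks, so `ruff check --fix` will reorder imports. A minimal illustration (not part of this PR):

# Flagged by rule group "I" (unsorted imports):
import sys
import collections

# After `ruff check --fix` (alphabetized within the standard-library group):
import collections
import sys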

View File

@@ -28,8 +28,15 @@ SUPABASE_URL=http://localhost:8000
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
SUPABASE_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
# For local development, you may need to set FRONTEND_BASE_URL for the OAuth flow for integrations to work.
FRONTEND_BASE_URL=http://localhost:3000
## For local development, you may need to set FRONTEND_BASE_URL for the OAuth flow
## for integrations to work. Defaults to the value of PLATFORM_BASE_URL if not set.
# FRONTEND_BASE_URL=http://localhost:3000
## PLATFORM_BASE_URL must be set to a *publicly accessible* URL pointing to your backend
## to use the platform's webhook-related functionality.
## If you are developing locally, you can use something like ngrok to get a public URL
## and tunnel it to your locally running backend.
PLATFORM_BASE_URL=https://your-public-url-here
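## Example (illustrative only, not part of this PR): if you expose a locally
## running backend with an ngrok tunnel, e.g. `ngrok http 8000` (substitute
## whatever port your backend listens on), set this to the public forwarding
## URL that ngrok prints, for instance:
# PLATFORM_BASE_URL=https://abc123.ngrok-free.app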
## == INTEGRATION CREDENTIALS == ##
# Each set of server side credentials is required for the corresponding 3rd party
@@ -57,6 +64,7 @@ GOOGLE_CLIENT_SECRET=
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GROQ_API_KEY=
OPEN_ROUTER_API_KEY=
# Reddit
REDDIT_CLIENT_ID=

View File

@@ -1,4 +1,4 @@
FROM python:3.11-slim-buster AS builder
FROM python:3.11.10-slim-bookworm AS builder
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
@@ -35,7 +35,7 @@ COPY autogpt_platform/backend/schema.prisma ./
RUN poetry config virtualenvs.create false \
&& poetry run prisma generate
FROM python:3.11-slim-buster AS server_dependencies
FROM python:3.11.10-slim-bookworm AS server_dependencies
WORKDIR /app

View File

@@ -60,13 +60,6 @@ for block_cls in all_subclasses(Block):
input_schema = block.input_schema.model_fields
output_schema = block.output_schema.model_fields
# Prevent duplicate field name in input_schema and output_schema
duplicate_field_names = set(input_schema.keys()) & set(output_schema.keys())
if duplicate_field_names:
raise ValueError(
f"{block.name} has duplicate field names in input_schema and output_schema: {duplicate_field_names}"
)
# Make sure `error` field is a string in the output schema
if "error" in output_schema and output_schema["error"].annotation is not str:
raise ValueError(

View File

@@ -0,0 +1,100 @@
import logging
from autogpt_libs.utils.cache import thread_cached
from backend.data.block import (
Block,
BlockCategory,
BlockInput,
BlockOutput,
BlockSchema,
BlockType,
get_block,
)
from backend.data.execution import ExecutionStatus
from backend.data.model import SchemaField
logger = logging.getLogger(__name__)
@thread_cached
def get_executor_manager_client():
from backend.executor import ExecutionManager
from backend.util.service import get_service_client
return get_service_client(ExecutionManager)
@thread_cached
def get_event_bus():
from backend.data.execution import RedisExecutionEventBus
return RedisExecutionEventBus()
class AgentExecutorBlock(Block):
class Input(BlockSchema):
user_id: str = SchemaField(description="User ID")
graph_id: str = SchemaField(description="Graph ID")
graph_version: int = SchemaField(description="Graph Version")
data: BlockInput = SchemaField(description="Input data for the graph")
input_schema: dict = SchemaField(description="Input schema for the graph")
output_schema: dict = SchemaField(description="Output schema for the graph")
class Output(BlockSchema):
pass
def __init__(self):
super().__init__(
id="e189baac-8c20-45a1-94a7-55177ea42565",
description="Executes an existing agent inside your agent",
input_schema=AgentExecutorBlock.Input,
output_schema=AgentExecutorBlock.Output,
block_type=BlockType.AGENT,
categories={BlockCategory.AGENT},
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
executor_manager = get_executor_manager_client()
event_bus = get_event_bus()
graph_exec = executor_manager.add_execution(
graph_id=input_data.graph_id,
graph_version=input_data.graph_version,
user_id=input_data.user_id,
data=input_data.data,
)
log_id = f"Graph #{input_data.graph_id}-V{input_data.graph_version}, exec-id: {graph_exec.graph_exec_id}"
logger.info(f"Starting execution of {log_id}")
for event in event_bus.listen(
graph_id=graph_exec.graph_id, graph_exec_id=graph_exec.graph_exec_id
):
logger.info(
f"Execution {log_id} produced input {event.input_data} output {event.output_data}"
)
if not event.node_id:
if event.status in [ExecutionStatus.COMPLETED, ExecutionStatus.FAILED]:
logger.info(f"Execution {log_id} ended with status {event.status}")
break
else:
continue
if not event.block_id:
logger.warning(f"{log_id} received event without block_id {event}")
continue
block = get_block(event.block_id)
if not block or block.block_type != BlockType.OUTPUT:
continue
output_name = event.input_data.get("name")
if not output_name:
logger.warning(f"{log_id} produced an output with no name {event}")
continue
for output_data in event.output_data.get("output", []):
logger.info(f"Execution {log_id} produced {output_name}: {output_data}")
yield output_name, output_data
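
The run loop above forwards only the child graph's OUTPUT-block events to the parent agent. A stripped-down sketch of that filtering logic, with stand-in event objects instead of the real backend.data.execution types (illustrative, not part of this PR):

from dataclasses import dataclass, field

@dataclass
class StubEvent:
    # Minimal stand-in for the real execution event type.
    node_id: str | None
    block_id: str | None
    status: str
    input_data: dict = field(default_factory=dict)
    output_data: dict = field(default_factory=dict)

def iter_child_outputs(events, is_output_block):
    # Mirrors AgentExecutorBlock.run: stop on a terminal graph-level event,
    # skip non-OUTPUT blocks, and yield each named output value.
    for event in events:
        if not event.node_id:
            if event.status in ("COMPLETED", "FAILED"):
                break  # graph finished; stop listening
            continue
        if not event.block_id or not is_output_block(event.block_id):
            continue
        name = event.input_data.get("name")
        if not name:
            continue
        for value in event.output_data.get("output", []):
            yield name, value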

View File

@@ -0,0 +1,326 @@
from enum import Enum
from typing import Literal
import replicate
from pydantic import SecretStr
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockSchema
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
class ImageSize(str, Enum):
"""
Semantic sizes that map reliably across all models
"""
SQUARE = "square" # For profile pictures, icons, etc.
LANDSCAPE = "landscape" # For traditional photos, scenes
PORTRAIT = "portrait" # For vertical photos, portraits
WIDE = "wide" # For cinematic, desktop wallpapers
TALL = "tall" # For mobile wallpapers, stories
# Mapping semantic sizes to model-specific formats
SIZE_TO_SD_RATIO = {
ImageSize.SQUARE: "1:1",
ImageSize.LANDSCAPE: "4:3",
ImageSize.PORTRAIT: "3:4",
ImageSize.WIDE: "16:9",
ImageSize.TALL: "9:16",
}
SIZE_TO_FLUX_RATIO = {
ImageSize.SQUARE: "1:1",
ImageSize.LANDSCAPE: "4:3",
ImageSize.PORTRAIT: "3:4",
ImageSize.WIDE: "16:9",
ImageSize.TALL: "9:16",
}
SIZE_TO_FLUX_DIMENSIONS = {
ImageSize.SQUARE: (1024, 1024),
ImageSize.LANDSCAPE: (1365, 1024),
ImageSize.PORTRAIT: (1024, 1365),
ImageSize.WIDE: (1440, 810), # Adjusted to maintain 16:9 within 1440 limit
ImageSize.TALL: (810, 1440), # Adjusted to maintain 9:16 within 1440 limit
}
SIZE_TO_RECRAFT_DIMENSIONS = {
ImageSize.SQUARE: "1024x1024",
ImageSize.LANDSCAPE: "1365x1024",
ImageSize.PORTRAIT: "1024x1365",
ImageSize.WIDE: "1536x1024",
ImageSize.TALL: "1024x1536",
}
class ImageStyle(str, Enum):
"""
Complete set of supported styles
"""
ANY = "any"
# Realistic image styles
REALISTIC = "realistic_image"
REALISTIC_BW = "realistic_image/b_and_w"
REALISTIC_HDR = "realistic_image/hdr"
REALISTIC_NATURAL = "realistic_image/natural_light"
REALISTIC_STUDIO = "realistic_image/studio_portrait"
REALISTIC_ENTERPRISE = "realistic_image/enterprise"
REALISTIC_HARD_FLASH = "realistic_image/hard_flash"
REALISTIC_MOTION_BLUR = "realistic_image/motion_blur"
# Digital illustration styles
DIGITAL_ART = "digital_illustration"
PIXEL_ART = "digital_illustration/pixel_art"
HAND_DRAWN = "digital_illustration/hand_drawn"
GRAIN = "digital_illustration/grain"
SKETCH = "digital_illustration/infantile_sketch"
POSTER = "digital_illustration/2d_art_poster"
POSTER_2 = "digital_illustration/2d_art_poster_2"
HANDMADE_3D = "digital_illustration/handmade_3d"
HAND_DRAWN_OUTLINE = "digital_illustration/hand_drawn_outline"
ENGRAVING_COLOR = "digital_illustration/engraving_color"
class ImageGenModel(str, Enum):
"""
Available model providers
"""
FLUX = "Flux 1.1 Pro"
FLUX_ULTRA = "Flux 1.1 Pro Ultra"
RECRAFT = "Recraft v3"
SD3_5 = "Stable Diffusion 3.5 Medium"
class AIImageGeneratorBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput[Literal["replicate"], Literal["api_key"]] = (
CredentialsField(
provider="replicate",
supported_credential_types={"api_key"},
description="Enter your Replicate API key to access the image generation API. You can obtain an API key from https://replicate.com/account/api-tokens.",
)
)
prompt: str = SchemaField(
description="Text prompt for image generation",
placeholder="e.g., 'A red panda using a laptop in a snowy forest'",
title="Prompt",
)
model: ImageGenModel = SchemaField(
description="The AI model to use for image generation",
default=ImageGenModel.SD3_5,
title="Model",
)
size: ImageSize = SchemaField(
description=(
"Format of the generated image:\n"
"- Square: Perfect for profile pictures, icons\n"
"- Landscape: Traditional photo format\n"
"- Portrait: Vertical photos, portraits\n"
"- Wide: Cinematic format, desktop wallpapers\n"
"- Tall: Mobile wallpapers, social media stories"
),
default=ImageSize.SQUARE,
title="Image Format",
)
style: ImageStyle = SchemaField(
description="Visual style for the generated image",
default=ImageStyle.ANY,
title="Image Style",
)
class Output(BlockSchema):
image_url: str = SchemaField(description="URL of the generated image")
error: str = SchemaField(description="Error message if generation failed")
def __init__(self):
super().__init__(
id="ed1ae7a0-b770-4089-b520-1f0005fad19a",
description="Generate images using various AI models through a unified interface",
categories={BlockCategory.AI},
input_schema=AIImageGeneratorBlock.Input,
output_schema=AIImageGeneratorBlock.Output,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
"prompt": "An octopus using a laptop in a snowy forest with 'AutoGPT' clearly visible on the screen",
"model": ImageGenModel.RECRAFT,
"size": ImageSize.SQUARE,
"style": ImageStyle.REALISTIC,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"image_url",
"https://replicate.delivery/generated-image.webp",
),
],
test_mock={
"_run_client": lambda *args, **kwargs: "https://replicate.delivery/generated-image.webp"
},
)
def _run_client(
self, credentials: APIKeyCredentials, model_name: str, input_params: dict
):
try:
# Initialize Replicate client
client = replicate.Client(api_token=credentials.api_key.get_secret_value())
# Run the model with input parameters
output = client.run(model_name, input=input_params, wait=False)
# Process output
if isinstance(output, list) and len(output) > 0:
if isinstance(output[0], FileOutput):
result_url = output[0].url
else:
result_url = output[0]
elif isinstance(output, FileOutput):
result_url = output.url
elif isinstance(output, str):
result_url = output
else:
result_url = None
return result_url
except TypeError as e:
raise TypeError(f"Error during model execution: {e}")
except Exception as e:
raise RuntimeError(f"Unexpected error during model execution: {e}")
def generate_image(self, input_data: Input, credentials: APIKeyCredentials):
try:
# Handle style-based prompt modification for models without native style support
modified_prompt = input_data.prompt
if input_data.model not in [ImageGenModel.RECRAFT]:
style_prefix = self._style_to_prompt_prefix(input_data.style)
modified_prompt = f"{style_prefix} {modified_prompt}".strip()
if input_data.model == ImageGenModel.SD3_5:
# Use Stable Diffusion 3.5 with aspect ratio
input_params = {
"prompt": modified_prompt,
"aspect_ratio": SIZE_TO_SD_RATIO[input_data.size],
"output_format": "webp",
"output_quality": 90,
"steps": 40,
"cfg_scale": 7.0,
}
output = self._run_client(
credentials,
"stability-ai/stable-diffusion-3.5-medium",
input_params,
)
return output
elif input_data.model == ImageGenModel.FLUX:
# Use Flux-specific dimensions with 'jpg' format to avoid ReplicateError
width, height = SIZE_TO_FLUX_DIMENSIONS[input_data.size]
input_params = {
"prompt": modified_prompt,
"width": width,
"height": height,
"aspect_ratio": SIZE_TO_FLUX_RATIO[input_data.size],
"output_format": "jpg", # Set to jpg for Flux models
"output_quality": 90,
}
output = self._run_client(
credentials, "black-forest-labs/flux-1.1-pro", input_params
)
return output
elif input_data.model == ImageGenModel.FLUX_ULTRA:
width, height = SIZE_TO_FLUX_DIMENSIONS[input_data.size]
input_params = {
"prompt": modified_prompt,
"width": width,
"height": height,
"aspect_ratio": SIZE_TO_FLUX_RATIO[input_data.size],
"output_format": "jpg",
"output_quality": 90,
}
output = self._run_client(
credentials, "black-forest-labs/flux-1.1-pro-ultra", input_params
)
return output
elif input_data.model == ImageGenModel.RECRAFT:
input_params = {
"prompt": input_data.prompt,
"size": SIZE_TO_RECRAFT_DIMENSIONS[input_data.size],
"style": input_data.style.value,
}
output = self._run_client(
credentials, "recraft-ai/recraft-v3", input_params
)
return output
except Exception as e:
raise RuntimeError(f"Failed to generate image: {str(e)}")
def _style_to_prompt_prefix(self, style: ImageStyle) -> str:
"""
Convert a style enum to a prompt prefix for models without native style support.
"""
if style == ImageStyle.ANY:
return ""
style_map = {
ImageStyle.REALISTIC: "photorealistic",
ImageStyle.REALISTIC_BW: "black and white photograph",
ImageStyle.REALISTIC_HDR: "HDR photograph",
ImageStyle.REALISTIC_NATURAL: "natural light photograph",
ImageStyle.REALISTIC_STUDIO: "studio portrait photograph",
ImageStyle.REALISTIC_ENTERPRISE: "enterprise photograph",
ImageStyle.REALISTIC_HARD_FLASH: "hard flash photograph",
ImageStyle.REALISTIC_MOTION_BLUR: "motion blur photograph",
ImageStyle.DIGITAL_ART: "digital art",
ImageStyle.PIXEL_ART: "pixel art",
ImageStyle.HAND_DRAWN: "hand drawn illustration",
ImageStyle.GRAIN: "grainy digital illustration",
ImageStyle.SKETCH: "sketchy illustration",
ImageStyle.POSTER: "2D art poster",
ImageStyle.POSTER_2: "alternate 2D art poster",
ImageStyle.HANDMADE_3D: "handmade 3D illustration",
ImageStyle.HAND_DRAWN_OUTLINE: "hand drawn outline illustration",
ImageStyle.ENGRAVING_COLOR: "color engraving illustration",
}
style_text = style_map.get(style, "")
return f"{style_text} of" if style_text else ""
def run(self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs):
try:
url = self.generate_image(input_data, credentials)
if url:
yield "image_url", url
else:
yield "error", "Image generation returned an empty result."
except Exception as e:
# Capture and return only the message of the exception, avoiding serialization of non-serializable objects
yield "error", str(e)
# Test credentials stay the same
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="replicate",
api_key=SecretStr("mock-replicate-api-key"),
title="Mock Replicate API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
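
To summarize the branching in generate_image above: Recraft v3 receives the style natively via input_params["style"], while the Flux and Stable Diffusion paths fold it into the prompt through _style_to_prompt_prefix. A tiny illustration of the resulting prompts (the function name is ours; the prefix value matches the mapping above):

# Recraft keeps the raw prompt (style is a separate parameter); other models
# get a textual style prefix prepended instead.
def effective_prompt(prompt: str, style_prefix: str, native_style: bool) -> str:
    return prompt if native_style else f"{style_prefix} {prompt}".strip()

# _style_to_prompt_prefix(ImageStyle.PIXEL_ART) returns "pixel art of", so:
print(effective_prompt("a red panda", "pixel art of", native_style=False))
# -> "pixel art of a red panda"
print(effective_prompt("a red panda", "pixel art of", native_style=True))
# -> "a red panda" (style sent as input_params["style"] for Recraft)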

View File

@@ -0,0 +1,228 @@
import logging
import time
from enum import Enum
from typing import Literal
import replicate
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
logger = logging.getLogger(__name__)
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="replicate",
api_key=SecretStr("mock-replicate-api-key"),
title="Mock Replicate API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.type,
}
# Model version enum
class MusicGenModelVersion(str, Enum):
STEREO_LARGE = "stereo-large"
MELODY_LARGE = "melody-large"
LARGE = "large"
# Audio format enum
class AudioFormat(str, Enum):
WAV = "wav"
MP3 = "mp3"
# Normalization strategy enum
class NormalizationStrategy(str, Enum):
LOUDNESS = "loudness"
CLIP = "clip"
PEAK = "peak"
RMS = "rms"
class AIMusicGeneratorBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput[Literal["replicate"], Literal["api_key"]] = (
CredentialsField(
provider="replicate",
supported_credential_types={"api_key"},
description="The Replicate integration can be used with "
"any API key with sufficient permissions for the blocks it is used on.",
)
)
prompt: str = SchemaField(
description="A description of the music you want to generate",
placeholder="e.g., 'An upbeat electronic dance track with heavy bass'",
title="Prompt",
)
music_gen_model_version: MusicGenModelVersion = SchemaField(
description="Model to use for generation",
default=MusicGenModelVersion.STEREO_LARGE,
title="Model Version",
)
duration: int = SchemaField(
description="Duration of the generated audio in seconds",
default=8,
title="Duration",
)
temperature: float = SchemaField(
description="Controls the 'conservativeness' of the sampling process. Higher temperature means more diversity",
default=1.0,
title="Temperature",
)
top_k: int = SchemaField(
description="Reduces sampling to the k most likely tokens",
default=250,
title="Top K",
)
top_p: float = SchemaField(
description="Reduces sampling to tokens with cumulative probability of p. When set to 0 (default), top_k sampling is used",
default=0.0,
title="Top P",
)
classifier_free_guidance: int = SchemaField(
description="Increases the influence of inputs on the output. Higher values produce lower-variance outputs that adhere more closely to inputs",
default=3,
title="Classifier Free Guidance",
)
output_format: AudioFormat = SchemaField(
description="Output format for generated audio",
default=AudioFormat.WAV,
title="Output Format",
)
normalization_strategy: NormalizationStrategy = SchemaField(
description="Strategy for normalizing audio",
default=NormalizationStrategy.LOUDNESS,
title="Normalization Strategy",
)
class Output(BlockSchema):
result: str = SchemaField(description="URL of the generated audio file")
error: str = SchemaField(description="Error message if the model run failed")
def __init__(self):
super().__init__(
id="44f6c8ad-d75c-4ae1-8209-aad1c0326928",
description="This block generates music using Meta's MusicGen model on Replicate.",
categories={BlockCategory.AI},
input_schema=AIMusicGeneratorBlock.Input,
output_schema=AIMusicGeneratorBlock.Output,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
"prompt": "An upbeat electronic dance track with heavy bass",
"music_gen_model_version": MusicGenModelVersion.STEREO_LARGE,
"duration": 8,
"temperature": 1.0,
"top_k": 250,
"top_p": 0.0,
"classifier_free_guidance": 3,
"output_format": AudioFormat.WAV,
"normalization_strategy": NormalizationStrategy.LOUDNESS,
},
test_output=[
(
"result",
"https://replicate.com/output/generated-audio-url.wav",
),
],
test_mock={
"run_model": lambda api_key, music_gen_model_version, prompt, duration, temperature, top_k, top_p, classifier_free_guidance, output_format, normalization_strategy: "https://replicate.com/output/generated-audio-url.wav",
},
test_credentials=TEST_CREDENTIALS,
)
def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
max_retries = 3
retry_delay = 5 # seconds
last_error = None
for attempt in range(max_retries):
try:
logger.debug(
f"[AIMusicGeneratorBlock] - Running model (attempt {attempt + 1})"
)
result = self.run_model(
api_key=credentials.api_key,
music_gen_model_version=input_data.music_gen_model_version,
prompt=input_data.prompt,
duration=input_data.duration,
temperature=input_data.temperature,
top_k=input_data.top_k,
top_p=input_data.top_p,
classifier_free_guidance=input_data.classifier_free_guidance,
output_format=input_data.output_format,
normalization_strategy=input_data.normalization_strategy,
)
if result and result != "No output received":
yield "result", result
return
else:
last_error = "Model returned empty or invalid response"
raise ValueError(last_error)
except Exception as e:
last_error = f"Unexpected error: {str(e)}"
logger.error(f"[AIMusicGeneratorBlock] - Error: {last_error}")
if attempt < max_retries - 1:
time.sleep(retry_delay)
continue
# If we've exhausted all retries, yield the error
yield "error", f"Failed after {max_retries} attempts. Last error: {last_error}"
def run_model(
self,
api_key: SecretStr,
music_gen_model_version: MusicGenModelVersion,
prompt: str,
duration: int,
temperature: float,
top_k: int,
top_p: float,
classifier_free_guidance: int,
output_format: AudioFormat,
normalization_strategy: NormalizationStrategy,
):
# Initialize Replicate client with the API key
client = replicate.Client(api_token=api_key.get_secret_value())
# Run the model with parameters
output = client.run(
"meta/musicgen:671ac645ce5e552cc63a54a2bbff63fcf798043055d2dac5fc9e36a837eedcfb",
input={
"prompt": prompt,
"music_gen_model_version": music_gen_model_version,
"duration": duration,
"temperature": temperature,
"top_k": top_k,
"top_p": top_p,
"classifier_free_guidance": classifier_free_guidance,
"output_format": output_format,
"normalization_strategy": normalization_strategy,
},
)
# Handle the output
if isinstance(output, list) and len(output) > 0:
result_url = output[0] # If output is a list, get the first element
elif isinstance(output, str):
result_url = output # If output is a string, use it directly
else:
result_url = (
"No output received" # Fallback message if output is not as expected
)
return result_url
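
The run() method above retries run_model up to three times with a fixed delay before yielding an error. A minimal standalone sketch of the same pattern (run_with_retries and operation are illustrative names, not part of the block):

import logging
import time

logger = logging.getLogger(__name__)

def run_with_retries(operation, max_retries: int = 3, retry_delay: int = 5):
    """Call `operation` up to `max_retries` times, sleeping between failed attempts."""
    last_error = None
    for attempt in range(max_retries):
        try:
            result = operation()
            if result:  # treat falsy results as failures, like the block does
                return result
            raise ValueError("Empty or invalid response")
        except Exception as e:
            last_error = e
            logger.error("Attempt %d failed: %s", attempt + 1, e)
            if attempt < max_retries - 1:
                time.sleep(retry_delay)
    raise RuntimeError(f"Failed after {max_retries} attempts. Last error: {last_error}")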

View File

@@ -3,12 +3,16 @@ import time
from enum import Enum
from typing import Literal
import requests
from autogpt_libs.supabase_integration_credentials_store.types import APIKeyCredentials
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import CredentialsField, CredentialsMetaInput, SchemaField
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
from backend.util.request import requests
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
@@ -217,7 +221,6 @@ class AIShortformVideoCreatorBlock(Block):
url = "https://webhook.site/token"
headers = {"Accept": "application/json", "Content-Type": "application/json"}
response = requests.post(url, headers=headers)
response.raise_for_status()
webhook_data = response.json()
return webhook_data["uuid"], f"https://webhook.site/{webhook_data['uuid']}"
@@ -228,14 +231,12 @@ class AIShortformVideoCreatorBlock(Block):
logger.debug(
f"API Response Status Code: {response.status_code}, Content: {response.text}"
)
response.raise_for_status()
return response.json()
def check_video_status(self, api_key: SecretStr, pid: str) -> dict:
url = f"https://www.revid.ai/api/public/v2/status?pid={pid}"
headers = {"key": api_key.get_secret_value()}
response = requests.get(url, headers=headers)
response.raise_for_status()
return response.json()
def wait_for_video(

View File

@@ -233,7 +233,9 @@ class AgentOutputBlock(Block):
)
name: str = SchemaField(description="The name of the output.")
title: str | None = SchemaField(
description="The title of the input.", default=None, advanced=True
description="The title of the output.",
default=None,
advanced=True,
)
description: str | None = SchemaField(
description="The description of the output.",
@@ -262,7 +264,7 @@ class AgentOutputBlock(Block):
def __init__(self):
super().__init__(
id="363ae599-353e-4804-937e-b2ee3cef3da4",
description=("Stores the output of the graph for users to see."),
description="Stores the output of the graph for users to see.",
input_schema=AgentOutputBlock.Input,
output_schema=AgentOutputBlock.Output,
test_input=[
@@ -313,16 +315,26 @@ class AgentOutputBlock(Block):
class AddToDictionaryBlock(Block):
class Input(BlockSchema):
dictionary: dict | None = SchemaField(
default=None,
dictionary: dict[Any, Any] = SchemaField(
default={},
description="The dictionary to add the entry to. If not provided, a new dictionary will be created.",
placeholder='{"key1": "value1", "key2": "value2"}',
)
key: str = SchemaField(
description="The key for the new entry.", placeholder="new_key"
default="",
description="The key for the new entry.",
placeholder="new_key",
advanced=False,
)
value: Any = SchemaField(
description="The value for the new entry.", placeholder="new_value"
default=None,
description="The value for the new entry.",
placeholder="new_value",
advanced=False,
)
entries: dict[Any, Any] = SchemaField(
default={},
description="The entries to add to the dictionary. This is the batch version of the `key` and `value` fields.",
advanced=True,
)
class Output(BlockSchema):
@@ -345,6 +357,10 @@ class AddToDictionaryBlock(Block):
"value": "new_value",
},
{"key": "first_key", "value": "first_value"},
{
"dictionary": {"existing_key": "existing_value"},
"entries": {"new_key": "new_value", "first_key": "first_value"},
},
],
test_output=[
(
@@ -352,38 +368,49 @@ class AddToDictionaryBlock(Block):
{"existing_key": "existing_value", "new_key": "new_value"},
),
("updated_dictionary", {"first_key": "first_value"}),
(
"updated_dictionary",
{
"existing_key": "existing_value",
"new_key": "new_value",
"first_key": "first_value",
},
),
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
# If no dictionary is provided, create a new one
if input_data.dictionary is None:
updated_dict = {}
else:
# Create a copy of the input dictionary to avoid modifying the original
updated_dict = input_data.dictionary.copy()
updated_dict = input_data.dictionary.copy()
# Add the new key-value pair
updated_dict[input_data.key] = input_data.value
if input_data.value is not None and input_data.key:
updated_dict[input_data.key] = input_data.value
for key, value in input_data.entries.items():
updated_dict[key] = value
yield "updated_dictionary", updated_dict
class AddToListBlock(Block):
class Input(BlockSchema):
list: List[Any] | None = SchemaField(
default=None,
list: List[Any] = SchemaField(
default=[],
advanced=False,
description="The list to add the entry to. If not provided, a new list will be created.",
placeholder='[1, "string", {"key": "value"}]',
)
entry: Any = SchemaField(
description="The entry to add to the list. Can be of any type (string, int, dict, etc.).",
placeholder='{"new_key": "new_value"}',
advanced=False,
default=None,
)
entries: List[Any] = SchemaField(
default=[],
description="The entries to add to the list. This is the batch version of the `entry` field.",
advanced=True,
)
position: int | None = SchemaField(
default=None,
description="The position to insert the new entry. If not provided, the entry will be appended to the end of the list.",
placeholder="0",
)
class Output(BlockSchema):
@@ -407,6 +434,12 @@ class AddToListBlock(Block):
},
{"entry": "first_entry"},
{"list": ["a", "b", "c"], "entry": "d"},
{
"entry": "e",
"entries": ["f", "g"],
"list": ["a", "b"],
"position": 1,
},
],
test_output=[
(
@@ -420,22 +453,20 @@ class AddToListBlock(Block):
),
("updated_list", ["first_entry"]),
("updated_list", ["a", "b", "c", "d"]),
("updated_list", ["a", "f", "g", "e", "b"]),
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
# If no list is provided, create a new one
if input_data.list is None:
updated_list = []
else:
# Create a copy of the input list to avoid modifying the original
updated_list = input_data.list.copy()
entries_added = input_data.entries.copy()
if input_data.entry:
entries_added.append(input_data.entry)
# Add the new entry
if input_data.position is None:
updated_list.append(input_data.entry)
updated_list = input_data.list.copy()
if (pos := input_data.position) is not None:
updated_list = updated_list[:pos] + entries_added + updated_list[pos:]
else:
updated_list.insert(input_data.position, input_data.entry)
updated_list += entries_added
yield "updated_list", updated_list

View File

@@ -75,11 +75,17 @@ class ConditionBlock(Block):
value1 = input_data.value1
if isinstance(value1, str):
value1 = float(value1.strip())
try:
value1 = float(value1.strip())
except ValueError:
value1 = value1.strip()
value2 = input_data.value2
if isinstance(value2, str):
value2 = float(value2.strip())
try:
value2 = float(value2.strip())
except ValueError:
value2 = value2.strip()
yes_value = input_data.yes_value if input_data.yes_value is not None else value1
no_value = input_data.no_value if input_data.no_value is not None else value2
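
The change above stops ConditionBlock from crashing on non-numeric strings: each operand is coerced to a float when it parses as a number and otherwise compared as a stripped string. A minimal helper capturing the same coercion (the function name is illustrative):

def coerce_operand(value):
    """Return a float when a string parses as a number, else the stripped string."""
    if isinstance(value, str):
        try:
            return float(value.strip())
        except ValueError:
            return value.strip()
    return value

assert coerce_operand(" 3.5 ") == 3.5
assert coerce_operand("apple") == "apple"
assert coerce_operand(7) == 7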

View File

@@ -0,0 +1,43 @@
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
class WordCharacterCountBlock(Block):
class Input(BlockSchema):
text: str = SchemaField(
description="Input text to count words and characters",
placeholder="Enter your text here",
advanced=False,
)
class Output(BlockSchema):
word_count: int = SchemaField(description="Number of words in the input text")
character_count: int = SchemaField(
description="Number of characters in the input text"
)
error: str = SchemaField(
description="Error message if the counting operation failed"
)
def __init__(self):
super().__init__(
id="ab2a782d-22cf-4587-8a70-55b59b3f9f90",
description="Counts the number of words and characters in a given text.",
categories={BlockCategory.TEXT},
input_schema=WordCharacterCountBlock.Input,
output_schema=WordCharacterCountBlock.Output,
test_input={"text": "Hello, how are you?"},
test_output=[("word_count", 4), ("character_count", 19)],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
text = input_data.text
word_count = len(text.split())
character_count = len(text)
yield "word_count", word_count
yield "character_count", character_count
except Exception as e:
yield "error", str(e)

View File

@@ -3,11 +3,15 @@ from typing import Literal
import aiohttp
import discord
from autogpt_libs.supabase_integration_credentials_store.types import APIKeyCredentials
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import CredentialsField, CredentialsMetaInput, SchemaField
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
DiscordCredentials = CredentialsMetaInput[Literal["discord"], Literal["api_key"]]

View File

@@ -0,0 +1,36 @@
from typing import Literal
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials, CredentialsField, CredentialsMetaInput
FalCredentials = APIKeyCredentials
FalCredentialsInput = CredentialsMetaInput[
Literal["fal"],
Literal["api_key"],
]
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="fal",
api_key=SecretStr("mock-fal-api-key"),
title="Mock FAL API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
def FalCredentialsField() -> FalCredentialsInput:
"""
Creates a FAL credentials input on a block.
"""
return CredentialsField(
provider="fal",
supported_credential_types={"api_key"},
description="The FAL integration can be used with an API Key.",
)
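
A hypothetical block input schema using this helper (the Input class and prompt field are illustrative; the imports mirror the module above):

from backend.blocks.fal._auth import FalCredentialsField, FalCredentialsInput
from backend.data.block import BlockSchema
from backend.data.model import SchemaField

class Input(BlockSchema):
    prompt: str = SchemaField(description="What to generate.")
    credentials: FalCredentialsInput = FalCredentialsField()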

View File

@@ -0,0 +1,199 @@
import logging
import time
from enum import Enum
from typing import Any, Dict
import httpx
from backend.blocks.fal._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
FalCredentials,
FalCredentialsField,
FalCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
logger = logging.getLogger(__name__)
class FalModel(str, Enum):
MOCHI = "fal-ai/mochi-v1"
LUMA = "fal-ai/luma-dream-machine"
class AIVideoGeneratorBlock(Block):
class Input(BlockSchema):
prompt: str = SchemaField(
description="Description of the video to generate.",
placeholder="A dog running in a field.",
)
model: FalModel = SchemaField(
title="FAL Model",
default=FalModel.MOCHI,
description="The FAL model to use for video generation.",
)
credentials: FalCredentialsInput = FalCredentialsField()
class Output(BlockSchema):
video_url: str = SchemaField(description="The URL of the generated video.")
error: str = SchemaField(
description="Error message if video generation failed."
)
logs: list[str] = SchemaField(
description="Generation progress logs.", optional=True
)
def __init__(self):
super().__init__(
id="530cf046-2ce0-4854-ae2c-659db17c7a46",
description="Generate videos using FAL AI models.",
categories={BlockCategory.AI},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"prompt": "A dog running in a field.",
"model": FalModel.MOCHI,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("video_url", "https://fal.media/files/example/video.mp4")],
test_mock={
"generate_video": lambda *args, **kwargs: "https://fal.media/files/example/video.mp4"
},
)
def _get_headers(self, api_key: str) -> Dict[str, str]:
"""Get headers for FAL API requests."""
return {
"Authorization": f"Key {api_key}",
"Content-Type": "application/json",
}
def _submit_request(
self, url: str, headers: Dict[str, str], data: Dict[str, Any]
) -> Dict[str, Any]:
"""Submit a request to the FAL API."""
try:
response = httpx.post(url, headers=headers, json=data)
response.raise_for_status()
return response.json()
except httpx.HTTPError as e:
logger.error(f"FAL API request failed: {str(e)}")
raise RuntimeError(f"Failed to submit request: {str(e)}")
def _poll_status(self, status_url: str, headers: Dict[str, str]) -> Dict[str, Any]:
"""Poll the status endpoint until completion or failure."""
try:
response = httpx.get(status_url, headers=headers)
response.raise_for_status()
return response.json()
except httpx.HTTPError as e:
logger.error(f"Failed to get status: {str(e)}")
raise RuntimeError(f"Failed to get status: {str(e)}")
def generate_video(self, input_data: Input, credentials: FalCredentials) -> str:
"""Generate video using the specified FAL model."""
base_url = "https://queue.fal.run"
api_key = credentials.api_key.get_secret_value()
headers = self._get_headers(api_key)
# Submit generation request
submit_url = f"{base_url}/{input_data.model.value}"
submit_data = {"prompt": input_data.prompt}
seen_logs = set()
try:
# Submit request to queue
submit_response = httpx.post(submit_url, headers=headers, json=submit_data)
submit_response.raise_for_status()
request_data = submit_response.json()
# Get request_id and urls from initial response
request_id = request_data.get("request_id")
status_url = request_data.get("status_url")
result_url = request_data.get("response_url")
if not all([request_id, status_url, result_url]):
raise ValueError("Missing required data in submission response")
# Poll for status with exponential backoff
max_attempts = 30
attempt = 0
base_wait_time = 5
while attempt < max_attempts:
status_response = httpx.get(f"{status_url}?logs=1", headers=headers)
status_response.raise_for_status()
status_data = status_response.json()
# Process new logs only
logs = status_data.get("logs", [])
if logs and isinstance(logs, list):
for log in logs:
if isinstance(log, dict):
# Create a unique key for this log entry
log_key = (
f"{log.get('timestamp', '')}-{log.get('message', '')}"
)
if log_key not in seen_logs:
seen_logs.add(log_key)
message = log.get("message", "")
if message:
logger.debug(
f"[FAL Generation] [{log.get('level', 'INFO')}] [{log.get('source', '')}] [{log.get('timestamp', '')}] {message}"
)
status = status_data.get("status")
if status == "COMPLETED":
# Get the final result
result_response = httpx.get(result_url, headers=headers)
result_response.raise_for_status()
result_data = result_response.json()
if "video" not in result_data or not isinstance(
result_data["video"], dict
):
raise ValueError("Invalid response format - missing video data")
video_url = result_data["video"].get("url")
if not video_url:
raise ValueError("No video URL in response")
return video_url
elif status == "FAILED":
error_msg = status_data.get("error", "No error details provided")
raise RuntimeError(f"Video generation failed: {error_msg}")
elif status == "IN_QUEUE":
position = status_data.get("queue_position", "unknown")
logger.debug(
f"[FAL Generation] Status: In queue, position: {position}"
)
elif status == "IN_PROGRESS":
logger.debug(
"[FAL Generation] Status: Request is being processed..."
)
else:
logger.info(f"[FAL Generation] Status: Unknown status: {status}")
wait_time = min(base_wait_time * (2**attempt), 60) # Cap at 60 seconds
time.sleep(wait_time)
attempt += 1
raise RuntimeError("Maximum polling attempts reached")
except httpx.HTTPError as e:
raise RuntimeError(f"API request failed: {str(e)}")
def run(
self, input_data: Input, *, credentials: FalCredentials, **kwargs
) -> BlockOutput:
try:
video_url = self.generate_video(input_data, credentials)
yield "video_url", video_url
except Exception as e:
error_message = str(e)
yield "error", error_message

View File

@@ -0,0 +1,43 @@
from urllib.parse import urlparse
from backend.blocks.github._auth import GithubCredentials
from backend.util.request import Requests
def _convert_to_api_url(url: str) -> str:
"""
Converts a standard GitHub URL to the corresponding GitHub API URL.
Handles repository URLs, issue URLs, pull request URLs, and more.
"""
parsed_url = urlparse(url)
path_parts = parsed_url.path.strip("/").split("/")
if len(path_parts) >= 2:
owner, repo = path_parts[0], path_parts[1]
api_base = f"https://api.github.com/repos/{owner}/{repo}"
if len(path_parts) > 2:
additional_path = "/".join(path_parts[2:])
api_url = f"{api_base}/{additional_path}"
else:
# Repository base URL
api_url = api_base
else:
raise ValueError("Invalid GitHub URL format.")
return api_url
def _get_headers(credentials: GithubCredentials) -> dict[str, str]:
return {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
def get_api(credentials: GithubCredentials, convert_urls: bool = True) -> Requests:
return Requests(
trusted_origins=["https://api.github.com", "https://github.com"],
extra_url_validator=_convert_to_api_url if convert_urls else None,
extra_headers=_get_headers(credentials),
)
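
With convert_urls left at its default, every request URL passes through _convert_to_api_url first; a sketch of the conversions it performs (assuming the module is importable as backend.blocks.github._api):

from backend.blocks.github._api import _convert_to_api_url

assert (
    _convert_to_api_url("https://github.com/owner/repo")
    == "https://api.github.com/repos/owner/repo"
)
assert (
    _convert_to_api_url("https://github.com/owner/repo/issues/1")
    == "https://api.github.com/repos/owner/repo/issues/1"
)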

View File

@@ -1,12 +1,13 @@
from typing import Literal
from autogpt_libs.supabase_integration_credentials_store.types import (
APIKeyCredentials,
OAuth2Credentials,
)
from pydantic import SecretStr
from backend.data.model import CredentialsField, CredentialsMetaInput
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
OAuth2Credentials,
)
from backend.util.settings import Secrets
secrets = Secrets()

View File

@@ -0,0 +1,700 @@
{
"action": "synchronize",
"number": 8358,
"pull_request": {
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/pulls/8358",
"id": 2128918491,
"node_id": "PR_kwDOJKSTjM5-5Lfb",
"html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/8358",
"diff_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/8358.diff",
"patch_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/8358.patch",
"issue_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/8358",
"number": 8358,
"state": "open",
"locked": false,
"title": "feat(platform, blocks): Webhook-triggered blocks",
"user": {
"login": "Pwuts",
"id": 12185583,
"node_id": "MDQ6VXNlcjEyMTg1NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/12185583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pwuts",
"html_url": "https://github.com/Pwuts",
"followers_url": "https://api.github.com/users/Pwuts/followers",
"following_url": "https://api.github.com/users/Pwuts/following{/other_user}",
"gists_url": "https://api.github.com/users/Pwuts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pwuts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pwuts/subscriptions",
"organizations_url": "https://api.github.com/users/Pwuts/orgs",
"repos_url": "https://api.github.com/users/Pwuts/repos",
"events_url": "https://api.github.com/users/Pwuts/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pwuts/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
},
"body": "- Resolves #8352\r\n\r\n## Changes 🏗️\r\n\r\n- feat(blocks): Add GitHub Pull Request Trigger block\r\n\r\n### feat(platform): Add support for Webhook-triggered blocks\r\n- ⚠️ Add `PLATFORM_BASE_URL` setting\r\n\r\n- Add webhook config option and `BlockType.WEBHOOK` to `Block`\r\n - Add check to `Block.__init__` to enforce type and shape of webhook event filter\r\n - Add check to `Block.__init__` to enforce `payload` input on webhook blocks\r\n\r\n- Add `Webhook` model + CRUD functions in `backend.data.integrations` to represent webhooks created by our system\r\n - Add `IntegrationWebhook` to DB schema + reference `AgentGraphNode.webhook_id`\r\n - Add `set_node_webhook(..)` in `backend.data.graph`\r\n\r\n- Add webhook-related endpoints:\r\n - `POST /integrations/{provider}/webhooks/{webhook_id}/ingress` endpoint, to receive webhook payloads, and for all associated nodes create graph executions\r\n - Add `Node.is_triggered_by_event_type(..)` helper method\r\n - `POST /integrations/{provider}/webhooks/{webhook_id}/ping` endpoint, to allow testing a webhook\r\n - Add `WebhookEvent` + pub/sub functions in `backend.data.integrations`\r\n\r\n- Add `backend.integrations.webhooks` module, including:\r\n - `graph_lifecycle_hooks`, e.g. `on_graph_activate(..)`, to handle corresponding webhook creation etc.\r\n - Add calls to these hooks in the graph create/update endpoints\r\n - `BaseWebhooksManager` + `GithubWebhooksManager` to handle creating + registering, removing + deregistering, and retrieving existing webhooks, and validating incoming payloads\r\n\r\n### Other improvements\r\n- fix(blocks): Allow having an input and output pin with the same name\r\n- feat(blocks): Allow hiding inputs (e.g. `payload`) with `SchemaField(hidden=True)`\r\n- feat(backend/data): Add `graph_id`, `graph_version` to `Node`; `user_id` to `GraphMeta`\r\n - Add `Creatable` versions of `Node`, `GraphMeta` and `Graph` without these properties\r\n - Add `graph_from_creatable(..)` helper function in `backend.data.graph`\r\n- refactor(backend/data): Make `RedisEventQueue` generic\r\n- refactor(frontend): Deduplicate & clean up code for different block types in `generateInputHandles(..)` in `CustomNode`\r\n- refactor(backend): Remove unused subgraph functionality\r\n\r\n## How it works\r\n- When a graph is created, the `on_graph_activate` and `on_node_activate` hooks are called on the graph and its nodes\r\n- If a webhook-triggered node has presets for all the relevant inputs, `on_node_activate` will get/create a suitable webhook and link it by setting `AgentGraphNode.webhook_id`\r\n - `on_node_activate` uses `webhook_manager.get_suitable_webhook(..)`, which tries to find a suitable webhook (with matching requirements) or creates it if none exists yet\r\n- When a graph is deactivated (in favor of a newer/other version) or deleted, `on_graph_deactivate` and `on_node_deactivate` are called on the graph and its nodes to clean up webhooks that are no longer in use\r\n- When a valid webhook payload is received, two things happen:\r\n 1. It is broadcast on the Redis channel `webhooks/{webhook_id}/{event_type}`\r\n 2. 
Graph executions are initiated for all nodes triggered by this webhook\r\n\r\n## TODO\r\n- [ ] #8537\r\n- [x] #8538\r\n- [ ] #8357\r\n- [ ] ~~#8554~~ can be done in a follow-up PR\r\n- [ ] Test test test!\r\n- [ ] Add note on `repo` input of webhook blocks that the credentials used must have the right permissions for the given organization/repo\r\n- [x] Implement proper detection and graceful handling of webhook creation failing due to insufficient permissions. This should give a clear message to the user to e.g. \"give the app access to this organization in your settings\".\r\n- [ ] Nice-to-have: make a button on webhook blocks to trigger a ping and check its result. The API endpoints for this is already implemented.",
"created_at": "2024-10-16T22:13:47Z",
"updated_at": "2024-11-11T18:34:54Z",
"closed_at": null,
"merged_at": null,
"merge_commit_sha": "cbfd0cdd8db52cdd5a3b7ce088fc0ab4617a652e",
"assignee": {
"login": "Pwuts",
"id": 12185583,
"node_id": "MDQ6VXNlcjEyMTg1NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/12185583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pwuts",
"html_url": "https://github.com/Pwuts",
"followers_url": "https://api.github.com/users/Pwuts/followers",
"following_url": "https://api.github.com/users/Pwuts/following{/other_user}",
"gists_url": "https://api.github.com/users/Pwuts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pwuts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pwuts/subscriptions",
"organizations_url": "https://api.github.com/users/Pwuts/orgs",
"repos_url": "https://api.github.com/users/Pwuts/repos",
"events_url": "https://api.github.com/users/Pwuts/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pwuts/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
},
"assignees": [
{
"login": "Pwuts",
"id": 12185583,
"node_id": "MDQ6VXNlcjEyMTg1NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/12185583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pwuts",
"html_url": "https://github.com/Pwuts",
"followers_url": "https://api.github.com/users/Pwuts/followers",
"following_url": "https://api.github.com/users/Pwuts/following{/other_user}",
"gists_url": "https://api.github.com/users/Pwuts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pwuts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pwuts/subscriptions",
"organizations_url": "https://api.github.com/users/Pwuts/orgs",
"repos_url": "https://api.github.com/users/Pwuts/repos",
"events_url": "https://api.github.com/users/Pwuts/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pwuts/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
],
"requested_reviewers": [
{
"login": "kcze",
"id": 34861343,
"node_id": "MDQ6VXNlcjM0ODYxMzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/34861343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kcze",
"html_url": "https://github.com/kcze",
"followers_url": "https://api.github.com/users/kcze/followers",
"following_url": "https://api.github.com/users/kcze/following{/other_user}",
"gists_url": "https://api.github.com/users/kcze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kcze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kcze/subscriptions",
"organizations_url": "https://api.github.com/users/kcze/orgs",
"repos_url": "https://api.github.com/users/kcze/repos",
"events_url": "https://api.github.com/users/kcze/events{/privacy}",
"received_events_url": "https://api.github.com/users/kcze/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
],
"requested_teams": [
{
"name": "DevOps",
"id": 9547361,
"node_id": "T_kwDOB8roIc4Aka5h",
"slug": "devops",
"description": "",
"privacy": "closed",
"notification_setting": "notifications_enabled",
"url": "https://api.github.com/organizations/130738209/team/9547361",
"html_url": "https://github.com/orgs/Significant-Gravitas/teams/devops",
"members_url": "https://api.github.com/organizations/130738209/team/9547361/members{/member}",
"repositories_url": "https://api.github.com/organizations/130738209/team/9547361/repos",
"permission": "pull",
"parent": null
}
],
"labels": [
{
"id": 5272676214,
"node_id": "LA_kwDOJKSTjM8AAAABOkandg",
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5410633769,
"node_id": "LA_kwDOJKSTjM8AAAABQn-4KQ",
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/labels/size/xl",
"name": "size/xl",
"color": "E751DD",
"default": false,
"description": ""
},
{
"id": 6892322271,
"node_id": "LA_kwDOJKSTjM8AAAABmtB93w",
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/labels/Review%20effort%20[1-5]:%204",
"name": "Review effort [1-5]: 4",
"color": "d1bcf9",
"default": false,
"description": null
},
{
"id": 7218433025,
"node_id": "LA_kwDOJKSTjM8AAAABrkCMAQ",
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/labels/platform/frontend",
"name": "platform/frontend",
"color": "033C07",
"default": false,
"description": "AutoGPT Platform - Front end"
},
{
"id": 7219356193,
"node_id": "LA_kwDOJKSTjM8AAAABrk6iIQ",
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/labels/platform/backend",
"name": "platform/backend",
"color": "ededed",
"default": false,
"description": "AutoGPT Platform - Back end"
},
{
"id": 7515330106,
"node_id": "LA_kwDOJKSTjM8AAAABv_LWOg",
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/labels/platform/blocks",
"name": "platform/blocks",
"color": "eb5757",
"default": false,
"description": null
}
],
"milestone": null,
"draft": false,
"commits_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/pulls/8358/commits",
"review_comments_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/pulls/8358/comments",
"review_comment_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/pulls/comments{/number}",
"comments_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/8358/comments",
"statuses_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/statuses/8f708a2b60463eec10747d8f45dead35b5a45bd0",
"head": {
"label": "Significant-Gravitas:reinier/open-1961-implement-github-on-pull-request-block",
"ref": "reinier/open-1961-implement-github-on-pull-request-block",
"sha": "8f708a2b60463eec10747d8f45dead35b5a45bd0",
"user": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Significant-Gravitas",
"html_url": "https://github.com/Significant-Gravitas",
"followers_url": "https://api.github.com/users/Significant-Gravitas/followers",
"following_url": "https://api.github.com/users/Significant-Gravitas/following{/other_user}",
"gists_url": "https://api.github.com/users/Significant-Gravitas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Significant-Gravitas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Significant-Gravitas/subscriptions",
"organizations_url": "https://api.github.com/users/Significant-Gravitas/orgs",
"repos_url": "https://api.github.com/users/Significant-Gravitas/repos",
"events_url": "https://api.github.com/users/Significant-Gravitas/events{/privacy}",
"received_events_url": "https://api.github.com/users/Significant-Gravitas/received_events",
"type": "Organization",
"user_view_type": "public",
"site_admin": false
},
"repo": {
"id": 614765452,
"node_id": "R_kgDOJKSTjA",
"name": "AutoGPT",
"full_name": "Significant-Gravitas/AutoGPT",
"private": false,
"owner": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Significant-Gravitas",
"html_url": "https://github.com/Significant-Gravitas",
"followers_url": "https://api.github.com/users/Significant-Gravitas/followers",
"following_url": "https://api.github.com/users/Significant-Gravitas/following{/other_user}",
"gists_url": "https://api.github.com/users/Significant-Gravitas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Significant-Gravitas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Significant-Gravitas/subscriptions",
"organizations_url": "https://api.github.com/users/Significant-Gravitas/orgs",
"repos_url": "https://api.github.com/users/Significant-Gravitas/repos",
"events_url": "https://api.github.com/users/Significant-Gravitas/events{/privacy}",
"received_events_url": "https://api.github.com/users/Significant-Gravitas/received_events",
"type": "Organization",
"user_view_type": "public",
"site_admin": false
},
"html_url": "https://github.com/Significant-Gravitas/AutoGPT",
"description": "AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.",
"fork": false,
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT",
"forks_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/forks",
"keys_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/keys{/key_id}",
"collaborators_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/collaborators{/collaborator}",
"teams_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/teams",
"hooks_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/hooks",
"issue_events_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/events{/number}",
"events_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/events",
"assignees_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/assignees{/user}",
"branches_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/branches{/branch}",
"tags_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/tags",
"blobs_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/blobs{/sha}",
"git_tags_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/tags{/sha}",
"git_refs_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/refs{/sha}",
"trees_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/trees{/sha}",
"statuses_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/statuses/{sha}",
"languages_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/languages",
"stargazers_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/stargazers",
"contributors_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/contributors",
"subscribers_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/subscribers",
"subscription_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/subscription",
"commits_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/commits{/sha}",
"git_commits_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/commits{/sha}",
"comments_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/comments{/number}",
"issue_comment_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/comments{/number}",
"contents_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/contents/{+path}",
"compare_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/compare/{base}...{head}",
"merges_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/merges",
"archive_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/{archive_format}{/ref}",
"downloads_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/downloads",
"issues_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues{/number}",
"pulls_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/pulls{/number}",
"milestones_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/milestones{/number}",
"notifications_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/notifications{?since,all,participating}",
"labels_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/labels{/name}",
"releases_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/releases{/id}",
"deployments_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/deployments",
"created_at": "2023-03-16T09:21:07Z",
"updated_at": "2024-11-11T18:16:29Z",
"pushed_at": "2024-11-11T18:34:52Z",
"git_url": "git://github.com/Significant-Gravitas/AutoGPT.git",
"ssh_url": "git@github.com:Significant-Gravitas/AutoGPT.git",
"clone_url": "https://github.com/Significant-Gravitas/AutoGPT.git",
"svn_url": "https://github.com/Significant-Gravitas/AutoGPT",
"homepage": "https://agpt.co",
"size": 181894,
"stargazers_count": 168203,
"watchers_count": 168203,
"language": "Python",
"has_issues": true,
"has_projects": true,
"has_downloads": true,
"has_wiki": true,
"has_pages": false,
"has_discussions": true,
"forks_count": 44376,
"mirror_url": null,
"archived": false,
"disabled": false,
"open_issues_count": 189,
"license": {
"key": "other",
"name": "Other",
"spdx_id": "NOASSERTION",
"url": null,
"node_id": "MDc6TGljZW5zZTA="
},
"allow_forking": true,
"is_template": false,
"web_commit_signoff_required": false,
"topics": [
"ai",
"artificial-intelligence",
"autonomous-agents",
"gpt-4",
"openai",
"python"
],
"visibility": "public",
"forks": 44376,
"open_issues": 189,
"watchers": 168203,
"default_branch": "master",
"allow_squash_merge": true,
"allow_merge_commit": false,
"allow_rebase_merge": false,
"allow_auto_merge": true,
"delete_branch_on_merge": true,
"allow_update_branch": true,
"use_squash_pr_title_as_default": true,
"squash_merge_commit_message": "COMMIT_MESSAGES",
"squash_merge_commit_title": "PR_TITLE",
"merge_commit_message": "BLANK",
"merge_commit_title": "PR_TITLE"
}
},
"base": {
"label": "Significant-Gravitas:dev",
"ref": "dev",
"sha": "0b5b95eff5e18c1e162d2b30b66a7be2bed1cbc2",
"user": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Significant-Gravitas",
"html_url": "https://github.com/Significant-Gravitas",
"followers_url": "https://api.github.com/users/Significant-Gravitas/followers",
"following_url": "https://api.github.com/users/Significant-Gravitas/following{/other_user}",
"gists_url": "https://api.github.com/users/Significant-Gravitas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Significant-Gravitas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Significant-Gravitas/subscriptions",
"organizations_url": "https://api.github.com/users/Significant-Gravitas/orgs",
"repos_url": "https://api.github.com/users/Significant-Gravitas/repos",
"events_url": "https://api.github.com/users/Significant-Gravitas/events{/privacy}",
"received_events_url": "https://api.github.com/users/Significant-Gravitas/received_events",
"type": "Organization",
"user_view_type": "public",
"site_admin": false
},
"repo": {
"id": 614765452,
"node_id": "R_kgDOJKSTjA",
"name": "AutoGPT",
"full_name": "Significant-Gravitas/AutoGPT",
"private": false,
"owner": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Significant-Gravitas",
"html_url": "https://github.com/Significant-Gravitas",
"followers_url": "https://api.github.com/users/Significant-Gravitas/followers",
"following_url": "https://api.github.com/users/Significant-Gravitas/following{/other_user}",
"gists_url": "https://api.github.com/users/Significant-Gravitas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Significant-Gravitas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Significant-Gravitas/subscriptions",
"organizations_url": "https://api.github.com/users/Significant-Gravitas/orgs",
"repos_url": "https://api.github.com/users/Significant-Gravitas/repos",
"events_url": "https://api.github.com/users/Significant-Gravitas/events{/privacy}",
"received_events_url": "https://api.github.com/users/Significant-Gravitas/received_events",
"type": "Organization",
"user_view_type": "public",
"site_admin": false
},
"html_url": "https://github.com/Significant-Gravitas/AutoGPT",
"description": "AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.",
"fork": false,
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT",
"forks_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/forks",
"keys_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/keys{/key_id}",
"collaborators_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/collaborators{/collaborator}",
"teams_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/teams",
"hooks_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/hooks",
"issue_events_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/events{/number}",
"events_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/events",
"assignees_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/assignees{/user}",
"branches_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/branches{/branch}",
"tags_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/tags",
"blobs_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/blobs{/sha}",
"git_tags_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/tags{/sha}",
"git_refs_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/refs{/sha}",
"trees_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/trees{/sha}",
"statuses_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/statuses/{sha}",
"languages_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/languages",
"stargazers_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/stargazers",
"contributors_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/contributors",
"subscribers_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/subscribers",
"subscription_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/subscription",
"commits_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/commits{/sha}",
"git_commits_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/commits{/sha}",
"comments_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/comments{/number}",
"issue_comment_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/comments{/number}",
"contents_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/contents/{+path}",
"compare_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/compare/{base}...{head}",
"merges_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/merges",
"archive_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/{archive_format}{/ref}",
"downloads_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/downloads",
"issues_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues{/number}",
"pulls_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/pulls{/number}",
"milestones_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/milestones{/number}",
"notifications_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/notifications{?since,all,participating}",
"labels_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/labels{/name}",
"releases_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/releases{/id}",
"deployments_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/deployments",
"created_at": "2023-03-16T09:21:07Z",
"updated_at": "2024-11-11T18:16:29Z",
"pushed_at": "2024-11-11T18:34:52Z",
"git_url": "git://github.com/Significant-Gravitas/AutoGPT.git",
"ssh_url": "git@github.com:Significant-Gravitas/AutoGPT.git",
"clone_url": "https://github.com/Significant-Gravitas/AutoGPT.git",
"svn_url": "https://github.com/Significant-Gravitas/AutoGPT",
"homepage": "https://agpt.co",
"size": 181894,
"stargazers_count": 168203,
"watchers_count": 168203,
"language": "Python",
"has_issues": true,
"has_projects": true,
"has_downloads": true,
"has_wiki": true,
"has_pages": false,
"has_discussions": true,
"forks_count": 44376,
"mirror_url": null,
"archived": false,
"disabled": false,
"open_issues_count": 189,
"license": {
"key": "other",
"name": "Other",
"spdx_id": "NOASSERTION",
"url": null,
"node_id": "MDc6TGljZW5zZTA="
},
"allow_forking": true,
"is_template": false,
"web_commit_signoff_required": false,
"topics": [
"ai",
"artificial-intelligence",
"autonomous-agents",
"gpt-4",
"openai",
"python"
],
"visibility": "public",
"forks": 44376,
"open_issues": 189,
"watchers": 168203,
"default_branch": "master",
"allow_squash_merge": true,
"allow_merge_commit": false,
"allow_rebase_merge": false,
"allow_auto_merge": true,
"delete_branch_on_merge": true,
"allow_update_branch": true,
"use_squash_pr_title_as_default": true,
"squash_merge_commit_message": "COMMIT_MESSAGES",
"squash_merge_commit_title": "PR_TITLE",
"merge_commit_message": "BLANK",
"merge_commit_title": "PR_TITLE"
}
},
"_links": {
"self": {
"href": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/pulls/8358"
},
"html": {
"href": "https://github.com/Significant-Gravitas/AutoGPT/pull/8358"
},
"issue": {
"href": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/8358"
},
"comments": {
"href": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/8358/comments"
},
"review_comments": {
"href": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/pulls/8358/comments"
},
"review_comment": {
"href": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/pulls/comments{/number}"
},
"commits": {
"href": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/pulls/8358/commits"
},
"statuses": {
"href": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/statuses/8f708a2b60463eec10747d8f45dead35b5a45bd0"
}
},
"author_association": "MEMBER",
"auto_merge": null,
"active_lock_reason": null,
"merged": false,
"mergeable": null,
"rebaseable": null,
"mergeable_state": "unknown",
"merged_by": null,
"comments": 12,
"review_comments": 29,
"maintainer_can_modify": false,
"commits": 62,
"additions": 1674,
"deletions": 331,
"changed_files": 36
},
"before": "f40aef87672203f47bbbd53f83fae0964c5624da",
"after": "8f708a2b60463eec10747d8f45dead35b5a45bd0",
"repository": {
"id": 614765452,
"node_id": "R_kgDOJKSTjA",
"name": "AutoGPT",
"full_name": "Significant-Gravitas/AutoGPT",
"private": false,
"owner": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Significant-Gravitas",
"html_url": "https://github.com/Significant-Gravitas",
"followers_url": "https://api.github.com/users/Significant-Gravitas/followers",
"following_url": "https://api.github.com/users/Significant-Gravitas/following{/other_user}",
"gists_url": "https://api.github.com/users/Significant-Gravitas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Significant-Gravitas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Significant-Gravitas/subscriptions",
"organizations_url": "https://api.github.com/users/Significant-Gravitas/orgs",
"repos_url": "https://api.github.com/users/Significant-Gravitas/repos",
"events_url": "https://api.github.com/users/Significant-Gravitas/events{/privacy}",
"received_events_url": "https://api.github.com/users/Significant-Gravitas/received_events",
"type": "Organization",
"user_view_type": "public",
"site_admin": false
},
"html_url": "https://github.com/Significant-Gravitas/AutoGPT",
"description": "AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.",
"fork": false,
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT",
"forks_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/forks",
"keys_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/keys{/key_id}",
"collaborators_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/collaborators{/collaborator}",
"teams_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/teams",
"hooks_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/hooks",
"issue_events_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/events{/number}",
"events_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/events",
"assignees_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/assignees{/user}",
"branches_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/branches{/branch}",
"tags_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/tags",
"blobs_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/blobs{/sha}",
"git_tags_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/tags{/sha}",
"git_refs_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/refs{/sha}",
"trees_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/trees{/sha}",
"statuses_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/statuses/{sha}",
"languages_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/languages",
"stargazers_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/stargazers",
"contributors_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/contributors",
"subscribers_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/subscribers",
"subscription_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/subscription",
"commits_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/commits{/sha}",
"git_commits_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/git/commits{/sha}",
"comments_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/comments{/number}",
"issue_comment_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/comments{/number}",
"contents_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/contents/{+path}",
"compare_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/compare/{base}...{head}",
"merges_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/merges",
"archive_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/{archive_format}{/ref}",
"downloads_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/downloads",
"issues_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues{/number}",
"pulls_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/pulls{/number}",
"milestones_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/milestones{/number}",
"notifications_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/notifications{?since,all,participating}",
"labels_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/labels{/name}",
"releases_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/releases{/id}",
"deployments_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/deployments",
"created_at": "2023-03-16T09:21:07Z",
"updated_at": "2024-11-11T18:16:29Z",
"pushed_at": "2024-11-11T18:34:52Z",
"git_url": "git://github.com/Significant-Gravitas/AutoGPT.git",
"ssh_url": "git@github.com:Significant-Gravitas/AutoGPT.git",
"clone_url": "https://github.com/Significant-Gravitas/AutoGPT.git",
"svn_url": "https://github.com/Significant-Gravitas/AutoGPT",
"homepage": "https://agpt.co",
"size": 181894,
"stargazers_count": 168203,
"watchers_count": 168203,
"language": "Python",
"has_issues": true,
"has_projects": true,
"has_downloads": true,
"has_wiki": true,
"has_pages": false,
"has_discussions": true,
"forks_count": 44376,
"mirror_url": null,
"archived": false,
"disabled": false,
"open_issues_count": 189,
"license": {
"key": "other",
"name": "Other",
"spdx_id": "NOASSERTION",
"url": null,
"node_id": "MDc6TGljZW5zZTA="
},
"allow_forking": true,
"is_template": false,
"web_commit_signoff_required": false,
"topics": [
"ai",
"artificial-intelligence",
"autonomous-agents",
"gpt-4",
"openai",
"python"
],
"visibility": "public",
"forks": 44376,
"open_issues": 189,
"watchers": 168203,
"default_branch": "master",
"custom_properties": {
}
},
"organization": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"url": "https://api.github.com/orgs/Significant-Gravitas",
"repos_url": "https://api.github.com/orgs/Significant-Gravitas/repos",
"events_url": "https://api.github.com/orgs/Significant-Gravitas/events",
"hooks_url": "https://api.github.com/orgs/Significant-Gravitas/hooks",
"issues_url": "https://api.github.com/orgs/Significant-Gravitas/issues",
"members_url": "https://api.github.com/orgs/Significant-Gravitas/members{/member}",
"public_members_url": "https://api.github.com/orgs/Significant-Gravitas/public_members{/member}",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"description": ""
},
"enterprise": {
"id": 149607,
"slug": "significant-gravitas",
"name": "Significant Gravitas",
"node_id": "E_kgDOAAJIZw",
"avatar_url": "https://avatars.githubusercontent.com/b/149607?v=4",
"description": "The creators of AutoGPT",
"website_url": "discord.gg/autogpt",
"html_url": "https://github.com/enterprises/significant-gravitas",
"created_at": "2024-04-18T17:43:53Z",
"updated_at": "2024-10-23T16:59:55Z"
},
"sender": {
"login": "Pwuts",
"id": 12185583,
"node_id": "MDQ6VXNlcjEyMTg1NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/12185583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pwuts",
"html_url": "https://github.com/Pwuts",
"followers_url": "https://api.github.com/users/Pwuts/followers",
"following_url": "https://api.github.com/users/Pwuts/following{/other_user}",
"gists_url": "https://api.github.com/users/Pwuts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pwuts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pwuts/subscriptions",
"organizations_url": "https://api.github.com/users/Pwuts/orgs",
"repos_url": "https://api.github.com/users/Pwuts/repos",
"events_url": "https://api.github.com/users/Pwuts/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pwuts/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
}
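
This file is a recorded pull_request "synchronize" webhook payload, presumably used as a test fixture for the webhook ingress path. A hedged sketch of loading it in a test (the file name is illustrative):

import json
from pathlib import Path

# Illustrative only: load the recorded payload and sanity-check a few fields.
payload = json.loads(Path("pull_request_synchronize_payload.json").read_text())
assert payload["action"] == "synchronize"
assert payload["pull_request"]["number"] == 8358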

View File

@@ -1,9 +1,11 @@
import requests
from urllib.parse import urlparse
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from ._api import get_api
from ._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
@@ -13,6 +15,10 @@ from ._auth import (
)
def is_github_url(url: str) -> bool:
return urlparse(url).netloc == "github.com"
# --8<-- [start:GithubCommentBlockExample]
class GithubCommentBlock(Block):
class Input(BlockSchema):
@@ -40,15 +46,27 @@ class GithubCommentBlock(Block):
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubCommentBlock.Input,
output_schema=GithubCommentBlock.Output,
test_input={
"issue_url": "https://github.com/owner/repo/issues/1",
"comment": "This is a test comment.",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_input=[
{
"issue_url": "https://github.com/owner/repo/issues/1",
"comment": "This is a test comment.",
"credentials": TEST_CREDENTIALS_INPUT,
},
{
"issue_url": "https://github.com/owner/repo/pull/1",
"comment": "This is a test comment.",
"credentials": TEST_CREDENTIALS_INPUT,
},
],
test_credentials=TEST_CREDENTIALS,
test_output=[
("id", 1337),
("url", "https://github.com/owner/repo/issues/1#issuecomment-1337"),
("id", 1337),
(
"url",
"https://github.com/owner/repo/issues/1#issuecomment-1337",
),
],
test_mock={
"post_comment": lambda *args, **kwargs: (
@@ -62,27 +80,12 @@ class GithubCommentBlock(Block):
def post_comment(
credentials: GithubCredentials, issue_url: str, body_text: str
) -> tuple[int, str]:
if "/pull/" in issue_url:
api_url = (
issue_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/issues/"
)
+ "/comments"
)
else:
api_url = (
issue_url.replace("github.com", "api.github.com/repos") + "/comments"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
api = get_api(credentials)
data = {"body": body_text}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
if "pull" in issue_url:
issue_url = issue_url.replace("pull", "issues")
comments_url = issue_url + "/comments"
response = api.post(comments_url, json=data)
comment = response.json()
return comment["id"], comment["html_url"]
@@ -156,16 +159,10 @@ class GithubMakeIssueBlock(Block):
def create_issue(
credentials: GithubCredentials, repo_url: str, title: str, body: str
) -> tuple[int, str]:
api_url = repo_url.replace("github.com", "api.github.com/repos") + "/issues"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
api = get_api(credentials)
data = {"title": title, "body": body}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
issues_url = repo_url + "/issues"
response = api.post(issues_url, json=data)
issue = response.json()
return issue["number"], issue["html_url"]
@@ -232,21 +229,12 @@ class GithubReadIssueBlock(Block):
def read_issue(
credentials: GithubCredentials, issue_url: str
) -> tuple[str, str, str]:
api_url = issue_url.replace("github.com", "api.github.com/repos")
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
response = api.get(issue_url)
data = response.json()
title = data.get("title", "No title found")
body = data.get("body", "No body content found")
user = data.get("user", {}).get("login", "No user found")
return title, body, user
def run(
@@ -260,9 +248,12 @@ class GithubReadIssueBlock(Block):
credentials,
input_data.issue_url,
)
yield "title", title
yield "body", body
yield "user", user
if title:
yield "title", title
if body:
yield "body", body
if user:
yield "user", user
class GithubListIssuesBlock(Block):
@@ -318,20 +309,13 @@ class GithubListIssuesBlock(Block):
def list_issues(
credentials: GithubCredentials, repo_url: str
) -> list[Output.IssueItem]:
api_url = repo_url.replace("github.com", "api.github.com/repos") + "/issues"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
issues_url = repo_url + "/issues"
response = api.get(issues_url)
data = response.json()
issues: list[GithubListIssuesBlock.Output.IssueItem] = [
{"title": issue["title"], "url": issue["html_url"]} for issue in data
]
return issues
def run(
@@ -385,28 +369,10 @@ class GithubAddLabelBlock(Block):
@staticmethod
def add_label(credentials: GithubCredentials, issue_url: str, label: str) -> str:
# Convert the provided GitHub URL to the API URL
if "/pull/" in issue_url:
api_url = (
issue_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/issues/"
)
+ "/labels"
)
else:
api_url = (
issue_url.replace("github.com", "api.github.com/repos") + "/labels"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
api = get_api(credentials)
data = {"labels": [label]}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
labels_url = issue_url + "/labels"
api.post(labels_url, json=data)
return "Label added successfully"
def run(
@@ -463,31 +429,9 @@ class GithubRemoveLabelBlock(Block):
@staticmethod
def remove_label(credentials: GithubCredentials, issue_url: str, label: str) -> str:
# Convert the provided GitHub URL to the API URL
if "/pull/" in issue_url:
api_url = (
issue_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/issues/"
)
+ f"/labels/{label}"
)
else:
api_url = (
issue_url.replace("github.com", "api.github.com/repos")
+ f"/labels/{label}"
)
# Log the constructed API URL for debugging
print(f"Constructed API URL: {api_url}")
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.delete(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
label_url = issue_url + f"/labels/{label}"
api.delete(label_url)
return "Label removed successfully"
def run(
@@ -550,23 +494,10 @@ class GithubAssignIssueBlock(Block):
issue_url: str,
assignee: str,
) -> str:
# Extracting repo path and issue number from the issue URL
repo_path, issue_number = issue_url.replace("https://github.com/", "").split(
"/issues/"
)
api_url = (
f"https://api.github.com/repos/{repo_path}/issues/{issue_number}/assignees"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
api = get_api(credentials)
assignees_url = issue_url + "/assignees"
data = {"assignees": [assignee]}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
api.post(assignees_url, json=data)
return "Issue assigned successfully"
def run(
@@ -629,23 +560,10 @@ class GithubUnassignIssueBlock(Block):
issue_url: str,
assignee: str,
) -> str:
# Extracting repo path and issue number from the issue URL
repo_path, issue_number = issue_url.replace("https://github.com/", "").split(
"/issues/"
)
api_url = (
f"https://api.github.com/repos/{repo_path}/issues/{issue_number}/assignees"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
api = get_api(credentials)
assignees_url = issue_url + "/assignees"
data = {"assignees": [assignee]}
response = requests.delete(api_url, headers=headers, json=data)
response.raise_for_status()
api.delete(assignees_url, json=data)
return "Issue unassigned successfully"
def run(
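The recurring change in this file is the same in every hunk: the hand-rolled requests calls, each with its own api_url rewriting and Authorization header, are replaced by a shared get_api(credentials) client that accepts plain github.com URLs. The implementation of get_api lives in ._api and is not part of this diff; the following is a minimal sketch of what such a wrapper could look like, where the _GithubApiSession class and the URL-rewriting rule are assumptions rather than the actual code:

import requests


class _GithubApiSession(requests.Session):
    """Hypothetical sketch of the ._api client, not the actual implementation."""

    def __init__(self, bearer_token: str):
        super().__init__()
        self.headers.update({
            "Authorization": bearer_token,
            "Accept": "application/vnd.github.v3+json",
        })

    def request(self, method, url, **kwargs):
        # Map e.g. https://github.com/owner/repo/issues/1
        #     to   https://api.github.com/repos/owner/repo/issues/1,
        # leaving absolute API URLs (like the GraphQL endpoint) untouched.
        api_url = url.replace("https://github.com/", "https://api.github.com/repos/")
        response = super().request(method, api_url, **kwargs)
        # Centralizes the response.raise_for_status() calls removed from the blocks.
        response.raise_for_status()
        return response


def get_api(credentials) -> _GithubApiSession:
    return _GithubApiSession(credentials.bearer())

Under this reading, the blocks no longer need their own raise_for_status() calls, which is consistent with those lines disappearing throughout the diff.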

View File

@@ -1,9 +1,9 @@
import requests
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from ._api import get_api
from ._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
@@ -64,20 +64,13 @@ class GithubListPullRequestsBlock(Block):
@staticmethod
def list_prs(credentials: GithubCredentials, repo_url: str) -> list[Output.PRItem]:
api_url = repo_url.replace("github.com", "api.github.com/repos") + "/pulls"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
pulls_url = repo_url + "/pulls"
response = api.get(pulls_url)
data = response.json()
pull_requests: list[GithubListPullRequestsBlock.Output.PRItem] = [
{"title": pr["title"], "url": pr["html_url"]} for pr in data
]
return pull_requests
def run(
@@ -110,7 +103,11 @@ class GithubMakePullRequestBlock(Block):
placeholder="Enter the pull request body",
)
head: str = SchemaField(
description="The name of the branch where your changes are implemented. For cross-repository pull requests in the same network, namespace head with a user like this: username:branch.",
description=(
"The name of the branch where your changes are implemented. "
"For cross-repository pull requests in the same network, "
"namespace head with a user like this: username:branch."
),
placeholder="Enter the head branch",
)
base: str = SchemaField(
@@ -162,17 +159,10 @@ class GithubMakePullRequestBlock(Block):
head: str,
base: str,
) -> tuple[int, str]:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/pulls"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
api = get_api(credentials)
pulls_url = repo_url + "/pulls"
data = {"title": title, "body": body, "head": head, "base": base}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
response = api.post(pulls_url, json=data)
pr_data = response.json()
return pr_data["number"], pr_data["html_url"]
@@ -194,13 +184,8 @@ class GithubMakePullRequestBlock(Block):
)
yield "number", number
yield "url", url
except requests.exceptions.HTTPError as http_err:
if http_err.response.status_code == 422:
error_details = http_err.response.json()
error_message = error_details.get("message", "Unknown error")
else:
error_message = str(http_err)
raise RuntimeError(f"Failed to create pull request: {error_message}")
except Exception as e:
yield "error", str(e)
class GithubReadPullRequestBlock(Block):
@@ -255,42 +240,21 @@ class GithubReadPullRequestBlock(Block):
@staticmethod
def read_pr(credentials: GithubCredentials, pr_url: str) -> tuple[str, str, str]:
api_url = pr_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/issues/"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
# Adjust the URL to access the issue endpoint for PR metadata
issue_url = pr_url.replace("/pull/", "/issues/")
response = api.get(issue_url)
data = response.json()
title = data.get("title", "No title found")
body = data.get("body", "No body content found")
author = data.get("user", {}).get("login", "No user found")
return title, body, author
@staticmethod
def read_pr_changes(credentials: GithubCredentials, pr_url: str) -> str:
api_url = (
pr_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/pulls/"
)
+ "/files"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
files_url = pr_url + "/files"
response = api.get(files_url)
files = response.json()
changes = []
for file in files:
@@ -298,7 +262,6 @@ class GithubReadPullRequestBlock(Block):
patch = file.get("patch")
if filename and patch:
changes.append(f"File: {filename}\n{patch}")
return "\n\n".join(changes)
def run(
@@ -367,23 +330,10 @@ class GithubAssignPRReviewerBlock(Block):
def assign_reviewer(
credentials: GithubCredentials, pr_url: str, reviewer: str
) -> str:
# Convert the PR URL to the appropriate API endpoint
api_url = (
pr_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/pulls/"
)
+ "/requested_reviewers"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
api = get_api(credentials)
reviewers_url = pr_url + "/requested_reviewers"
data = {"reviewers": [reviewer]}
response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()
api.post(reviewers_url, json=data)
return "Reviewer assigned successfully"
def run(
@@ -400,17 +350,8 @@ class GithubAssignPRReviewerBlock(Block):
input_data.reviewer,
)
yield "status", status
except requests.exceptions.HTTPError as http_err:
if http_err.response.status_code == 422:
error_msg = (
"Failed to assign reviewer: "
f"The reviewer '{input_data.reviewer}' may not have permission "
"or the pull request is not in a valid state. "
f"Detailed error: {http_err.response.text}"
)
else:
error_msg = f"HTTP error: {http_err} - {http_err.response.text}"
raise RuntimeError(error_msg)
except Exception as e:
yield "error", str(e)
class GithubUnassignPRReviewerBlock(Block):
@@ -456,21 +397,10 @@ class GithubUnassignPRReviewerBlock(Block):
def unassign_reviewer(
credentials: GithubCredentials, pr_url: str, reviewer: str
) -> str:
api_url = (
pr_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/pulls/"
)
+ "/requested_reviewers"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
api = get_api(credentials)
reviewers_url = pr_url + "/requested_reviewers"
data = {"reviewers": [reviewer]}
response = requests.delete(api_url, headers=headers, json=data)
response.raise_for_status()
api.delete(reviewers_url, json=data)
return "Reviewer unassigned successfully"
def run(
@@ -480,12 +410,15 @@ class GithubUnassignPRReviewerBlock(Block):
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
status = self.unassign_reviewer(
credentials,
input_data.pr_url,
input_data.reviewer,
)
yield "status", status
try:
status = self.unassign_reviewer(
credentials,
input_data.pr_url,
input_data.reviewer,
)
yield "status", status
except Exception as e:
yield "error", str(e)
class GithubListPRReviewersBlock(Block):
@@ -544,26 +477,14 @@ class GithubListPRReviewersBlock(Block):
def list_reviewers(
credentials: GithubCredentials, pr_url: str
) -> list[Output.ReviewerItem]:
api_url = (
pr_url.replace("github.com", "api.github.com/repos").replace(
"/pull/", "/pulls/"
)
+ "/requested_reviewers"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
reviewers_url = pr_url + "/requested_reviewers"
response = api.get(reviewers_url)
data = response.json()
reviewers: list[GithubListPRReviewersBlock.Output.ReviewerItem] = [
{"username": reviewer["login"], "url": reviewer["html_url"]}
for reviewer in data.get("users", [])
]
return reviewers
def run(

View File

@@ -1,11 +1,11 @@
import base64
import requests
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from ._api import get_api
from ._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
@@ -68,17 +68,11 @@ class GithubListTagsBlock(Block):
def list_tags(
credentials: GithubCredentials, repo_url: str
) -> list[Output.TagItem]:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/tags"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
tags_url = repo_url + "/tags"
response = api.get(tags_url)
data = response.json()
repo_path = repo_url.replace("https://github.com/", "")
tags: list[GithubListTagsBlock.Output.TagItem] = [
{
"name": tag["name"],
@@ -86,7 +80,6 @@ class GithubListTagsBlock(Block):
}
for tag in data
]
return tags
def run(
@@ -157,20 +150,18 @@ class GithubListBranchesBlock(Block):
def list_branches(
credentials: GithubCredentials, repo_url: str
) -> list[Output.BranchItem]:
api_url = repo_url.replace("github.com", "api.github.com/repos") + "/branches"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
branches_url = repo_url + "/branches"
response = api.get(branches_url)
data = response.json()
repo_path = repo_url.replace("https://github.com/", "")
branches: list[GithubListBranchesBlock.Output.BranchItem] = [
{"name": branch["name"], "url": branch["commit"]["url"]} for branch in data
{
"name": branch["name"],
"url": f"https://github.com/{repo_path}/tree/{branch['name']}",
}
for branch in data
]
return branches
def run(
@@ -246,6 +237,8 @@ class GithubListDiscussionsBlock(Block):
def list_discussions(
credentials: GithubCredentials, repo_url: str, num_discussions: int
) -> list[Output.DiscussionItem]:
api = get_api(credentials)
# GitHub GraphQL API endpoint is different; we'll use api.post with custom URL
repo_path = repo_url.replace("https://github.com/", "")
owner, repo = repo_path.split("/")
query = """
@@ -261,24 +254,15 @@ class GithubListDiscussionsBlock(Block):
}
"""
variables = {"owner": owner, "repo": repo, "num": num_discussions}
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.post(
response = api.post(
"https://api.github.com/graphql",
json={"query": query, "variables": variables},
headers=headers,
)
response.raise_for_status()
data = response.json()
discussions: list[GithubListDiscussionsBlock.Output.DiscussionItem] = [
{"title": discussion["title"], "url": discussion["url"]}
for discussion in data["data"]["repository"]["discussions"]["nodes"]
]
return discussions
def run(
@@ -348,21 +332,13 @@ class GithubListReleasesBlock(Block):
def list_releases(
credentials: GithubCredentials, repo_url: str
) -> list[Output.ReleaseItem]:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/releases"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
releases_url = repo_url + "/releases"
response = api.get(releases_url)
data = response.json()
releases: list[GithubListReleasesBlock.Output.ReleaseItem] = [
{"name": release["name"], "url": release["html_url"]} for release in data
]
return releases
def run(
@@ -432,16 +408,9 @@ class GithubReadFileBlock(Block):
def read_file(
credentials: GithubCredentials, repo_url: str, file_path: str, branch: str
) -> tuple[str, int]:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/contents/{file_path}?ref={branch}"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
content_url = repo_url + f"/contents/{file_path}?ref={branch}"
response = api.get(content_url)
content = response.json()
if isinstance(content, list):
@@ -549,46 +518,33 @@ class GithubReadFolderBlock(Block):
def read_folder(
credentials: GithubCredentials, repo_url: str, folder_path: str, branch: str
) -> tuple[list[Output.FileEntry], list[Output.DirEntry]]:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/contents/{folder_path}?ref={branch}"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
contents_url = repo_url + f"/contents/{folder_path}?ref={branch}"
response = api.get(contents_url)
content = response.json()
if isinstance(content, list):
# Multiple entries of different types exist at this path
if not (dir := next((d for d in content if d["type"] == "dir"), None)):
raise TypeError("Not a folder")
content = dir
if content["type"] != "dir":
if not isinstance(content, list):
raise TypeError("Not a folder")
return (
[
GithubReadFolderBlock.Output.FileEntry(
name=entry["name"],
path=entry["path"],
size=entry["size"],
)
for entry in content["entries"]
if entry["type"] == "file"
],
[
GithubReadFolderBlock.Output.DirEntry(
name=entry["name"],
path=entry["path"],
)
for entry in content["entries"]
if entry["type"] == "dir"
],
)
files = [
GithubReadFolderBlock.Output.FileEntry(
name=entry["name"],
path=entry["path"],
size=entry["size"],
)
for entry in content
if entry["type"] == "file"
]
dirs = [
GithubReadFolderBlock.Output.DirEntry(
name=entry["name"],
path=entry["path"],
)
for entry in content
if entry["type"] == "dir"
]
return files, dirs
def run(
self,
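The rewritten read_folder assumes the GitHub contents endpoint returns a JSON array for a directory, each entry carrying type, name, path, and (for files) size, and raises TypeError otherwise. An abridged sample of the shape it expects, with illustrative values:

content = [
    {"type": "file", "name": "README.md", "path": "docs/README.md", "size": 1234},
    {"type": "dir", "name": "images", "path": "docs/images"},
]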
@@ -656,26 +612,16 @@ class GithubMakeBranchBlock(Block):
new_branch: str,
source_branch: str,
) -> str:
repo_path = repo_url.replace("https://github.com/", "")
ref_api_url = (
f"https://api.github.com/repos/{repo_path}/git/refs/heads/{source_branch}"
)
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.get(ref_api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
# Get the SHA of the source branch
ref_url = repo_url + f"/git/refs/heads/{source_branch}"
response = api.get(ref_url)
sha = response.json()["object"]["sha"]
create_branch_api_url = f"https://api.github.com/repos/{repo_path}/git/refs"
# Create the new branch
create_ref_url = repo_url + "/git/refs"
data = {"ref": f"refs/heads/{new_branch}", "sha": sha}
response = requests.post(create_branch_api_url, headers=headers, json=data)
response.raise_for_status()
response = api.post(create_ref_url, json=data)
return "Branch created successfully"
def run(
@@ -735,16 +681,9 @@ class GithubDeleteBranchBlock(Block):
def delete_branch(
credentials: GithubCredentials, repo_url: str, branch: str
) -> str:
repo_path = repo_url.replace("https://github.com/", "")
api_url = f"https://api.github.com/repos/{repo_path}/git/refs/heads/{branch}"
headers = {
"Authorization": credentials.bearer(),
"Accept": "application/vnd.github.v3+json",
}
response = requests.delete(api_url, headers=headers)
response.raise_for_status()
api = get_api(credentials)
ref_url = repo_url + f"/git/refs/heads/{branch}"
api.delete(ref_url)
return "Branch deleted successfully"
def run(

View File

@@ -0,0 +1,156 @@
import json
import logging
from pathlib import Path
from pydantic import BaseModel
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockWebhookConfig,
)
from backend.data.model import SchemaField
from ._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
GithubCredentialsField,
GithubCredentialsInput,
)
logger = logging.getLogger(__name__)
# --8<-- [start:GithubTriggerExample]
class GitHubTriggerBase:
class Input(BlockSchema):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description=(
"Repository to subscribe to.\n\n"
"**Note:** Make sure your GitHub credentials have permissions "
"to create webhooks on this repo."
),
placeholder="{owner}/{repo}",
)
# --8<-- [start:example-payload-field]
payload: dict = SchemaField(hidden=True, default={})
# --8<-- [end:example-payload-field]
class Output(BlockSchema):
payload: dict = SchemaField(
description="The complete webhook payload that was received from GitHub. "
"Includes information about the affected resource (e.g. pull request), "
"the event, and the user who triggered the event."
)
triggered_by_user: dict = SchemaField(
description="Object representing the GitHub user who triggered the event"
)
error: str = SchemaField(
description="Error message if the payload could not be processed"
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "payload", input_data.payload
yield "triggered_by_user", input_data.payload["sender"]
class GithubPullRequestTriggerBlock(GitHubTriggerBase, Block):
EXAMPLE_PAYLOAD_FILE = (
Path(__file__).parent / "example_payloads" / "pull_request.synchronize.json"
)
# --8<-- [start:example-event-filter]
class Input(GitHubTriggerBase.Input):
class EventsFilter(BaseModel):
"""
https://docs.github.com/en/webhooks/webhook-events-and-payloads#pull_request
"""
opened: bool = False
edited: bool = False
closed: bool = False
reopened: bool = False
synchronize: bool = False
assigned: bool = False
unassigned: bool = False
labeled: bool = False
unlabeled: bool = False
converted_to_draft: bool = False
locked: bool = False
unlocked: bool = False
enqueued: bool = False
dequeued: bool = False
milestoned: bool = False
demilestoned: bool = False
ready_for_review: bool = False
review_requested: bool = False
review_request_removed: bool = False
auto_merge_enabled: bool = False
auto_merge_disabled: bool = False
events: EventsFilter = SchemaField(
title="Events", description="The events to subscribe to"
)
# --8<-- [end:example-event-filter]
class Output(GitHubTriggerBase.Output):
event: str = SchemaField(
description="The PR event that triggered the webhook (e.g. 'opened')"
)
number: int = SchemaField(description="The number of the affected pull request")
pull_request: dict = SchemaField(
description="Object representing the affected pull request"
)
pull_request_url: str = SchemaField(
description="The URL of the affected pull request"
)
def __init__(self):
from backend.integrations.webhooks.github import GithubWebhookType
example_payload = json.loads(self.EXAMPLE_PAYLOAD_FILE.read_text())
super().__init__(
id="6c60ec01-8128-419e-988f-96a063ee2fea",
description="This block triggers on pull request events and outputs the event type and payload.",
categories={BlockCategory.DEVELOPER_TOOLS, BlockCategory.INPUT},
input_schema=GithubPullRequestTriggerBlock.Input,
output_schema=GithubPullRequestTriggerBlock.Output,
# --8<-- [start:example-webhook_config]
webhook_config=BlockWebhookConfig(
provider="github",
webhook_type=GithubWebhookType.REPO,
resource_format="{repo}",
event_filter_input="events",
event_format="pull_request.{event}",
),
# --8<-- [end:example-webhook_config]
test_input={
"repo": "Significant-Gravitas/AutoGPT",
"events": {"opened": True, "synchronize": True},
"credentials": TEST_CREDENTIALS_INPUT,
"payload": example_payload,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("payload", example_payload),
("triggered_by_user", example_payload["sender"]),
("event", example_payload["action"]),
("number", example_payload["number"]),
("pull_request", example_payload["pull_request"]),
("pull_request_url", example_payload["pull_request"]["html_url"]),
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput: # type: ignore
yield from super().run(input_data, **kwargs)
yield "event", input_data.payload["action"]
yield "number", input_data.payload["number"]
yield "pull_request", input_data.payload["pull_request"]
yield "pull_request_url", input_data.payload["pull_request"]["html_url"]
# --8<-- [end:GithubTriggerExample]
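The webhook_config above declares event_filter_input="events" and event_format="pull_request.{event}", so the webhook manager (not shown in this diff) presumably expands the enabled booleans of the EventsFilter into concrete GitHub event names. A rough sketch of that expansion, assuming the filter arrives as a plain dict:

def expand_event_filter(events_filter: dict[str, bool], event_format: str) -> list[str]:
    # {"opened": True, "synchronize": True} with "pull_request.{event}"
    # -> ["pull_request.opened", "pull_request.synchronize"]
    return [
        event_format.format(event=name)
        for name, enabled in events_filter.items()
        if enabled
    ]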

View File

@@ -1,9 +1,8 @@
from typing import Literal
from autogpt_libs.supabase_integration_credentials_store.types import OAuth2Credentials
from pydantic import SecretStr
from backend.data.model import CredentialsField, CredentialsMetaInput
from backend.data.model import CredentialsField, CredentialsMetaInput, OAuth2Credentials
from backend.util.settings import Secrets
# --8<-- [start:GoogleOAuthIsConfigured]

View File

@@ -1,11 +1,15 @@
from typing import Literal
import googlemaps
from autogpt_libs.supabase_integration_credentials_store.types import APIKeyCredentials
from pydantic import BaseModel, SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import CredentialsField, CredentialsMetaInput, SchemaField
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",

View File

@@ -1,6 +1,6 @@
from typing import Any, Optional
import requests
from backend.util.request import requests
class GetRequest:
@@ -11,5 +11,4 @@ class GetRequest:
if headers is None:
headers = {}
response = requests.get(url, headers=headers)
response.raise_for_status()
return response.json() if json else response.text

View File

@@ -1,10 +1,10 @@
import json
from enum import Enum
import requests
from typing import Any
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
class HttpMethod(Enum):
@@ -31,9 +31,14 @@ class SendWebRequestBlock(Block):
description="The headers to include in the request",
default={},
)
body: object = SchemaField(
json_format: bool = SchemaField(
title="JSON format",
description="Whether to send and receive body as JSON",
default=True,
)
body: Any = SchemaField(
description="The body of the request",
default={},
default=None,
)
class Output(BlockSchema):
@@ -58,13 +63,16 @@ class SendWebRequestBlock(Block):
input_data.method.value,
input_data.url,
headers=input_data.headers,
json=input_data.body,
json=input_data.body if input_data.json_format else None,
data=input_data.body if not input_data.json_format else None,
)
result = response.json() if input_data.json_format else response.text
if response.status_code // 100 == 2:
yield "response", response.json()
yield "response", result
elif response.status_code // 100 == 4:
yield "client_error", response.json()
yield "client_error", result
elif response.status_code // 100 == 5:
yield "server_error", response.json()
yield "server_error", result
else:
raise ValueError(f"Unexpected status code: {response.status_code}")

View File

@@ -0,0 +1,36 @@
from typing import Literal
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials, CredentialsField, CredentialsMetaInput
HubSpotCredentials = APIKeyCredentials
HubSpotCredentialsInput = CredentialsMetaInput[
Literal["hubspot"],
Literal["api_key"],
]
def HubSpotCredentialsField() -> HubSpotCredentialsInput:
"""Creates a HubSpot credentials input on a block."""
return CredentialsField(
provider="hubspot",
supported_credential_types={"api_key"},
description="The HubSpot integration requires an API Key.",
)
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="hubspot",
api_key=SecretStr("mock-hubspot-api-key"),
title="Mock HubSpot API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}

View File

@@ -0,0 +1,106 @@
from backend.blocks.hubspot._auth import (
HubSpotCredentials,
HubSpotCredentialsField,
HubSpotCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
class HubSpotCompanyBlock(Block):
class Input(BlockSchema):
credentials: HubSpotCredentialsInput = HubSpotCredentialsField()
operation: str = SchemaField(
description="Operation to perform (create, update, get)", default="get"
)
company_data: dict = SchemaField(
description="Company data for create/update operations", default={}
)
domain: str = SchemaField(
description="Company domain for get/update operations", default=""
)
class Output(BlockSchema):
company: dict = SchemaField(description="Company information")
status: str = SchemaField(description="Operation status")
def __init__(self):
super().__init__(
id="3ae02219-d540-47cd-9c78-3ad6c7d9820a",
description="Manages HubSpot companies - create, update, and retrieve company information",
categories={BlockCategory.CRM},
input_schema=HubSpotCompanyBlock.Input,
output_schema=HubSpotCompanyBlock.Output,
)
def run(
self, input_data: Input, *, credentials: HubSpotCredentials, **kwargs
) -> BlockOutput:
base_url = "https://api.hubapi.com/crm/v3/objects/companies"
headers = {
"Authorization": f"Bearer {credentials.api_key.get_secret_value()}",
"Content-Type": "application/json",
}
if input_data.operation == "create":
response = requests.post(
base_url, headers=headers, json={"properties": input_data.company_data}
)
result = response.json()
yield "company", result
yield "status", "created"
elif input_data.operation == "get":
search_url = f"{base_url}/search"
search_data = {
"filterGroups": [
{
"filters": [
{
"propertyName": "domain",
"operator": "EQ",
"value": input_data.domain,
}
]
}
]
}
response = requests.post(search_url, headers=headers, json=search_data)
result = response.json()
yield "company", result.get("results", [{}])[0]
yield "status", "retrieved"
elif input_data.operation == "update":
# First get company ID by domain
search_response = requests.post(
f"{base_url}/search",
headers=headers,
json={
"filterGroups": [
{
"filters": [
{
"propertyName": "domain",
"operator": "EQ",
"value": input_data.domain,
}
]
}
]
},
)
company_id = search_response.json().get("results", [{}])[0].get("id")
if company_id:
response = requests.patch(
f"{base_url}/{company_id}",
headers=headers,
json={"properties": input_data.company_data},
)
result = response.json()
yield "company", result
yield "status", "updated"
else:
yield "company", {}
yield "status", "company_not_found"

View File

@@ -0,0 +1,106 @@
from backend.blocks.hubspot._auth import (
HubSpotCredentials,
HubSpotCredentialsField,
HubSpotCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
class HubSpotContactBlock(Block):
class Input(BlockSchema):
credentials: HubSpotCredentialsInput = HubSpotCredentialsField()
operation: str = SchemaField(
description="Operation to perform (create, update, get)", default="get"
)
contact_data: dict = SchemaField(
description="Contact data for create/update operations", default={}
)
email: str = SchemaField(
description="Email address for get/update operations", default=""
)
class Output(BlockSchema):
contact: dict = SchemaField(description="Contact information")
status: str = SchemaField(description="Operation status")
def __init__(self):
super().__init__(
id="5267326e-c4c1-4016-9f54-4e72ad02f813",
description="Manages HubSpot contacts - create, update, and retrieve contact information",
categories={BlockCategory.CRM},
input_schema=HubSpotContactBlock.Input,
output_schema=HubSpotContactBlock.Output,
)
def run(
self, input_data: Input, *, credentials: HubSpotCredentials, **kwargs
) -> BlockOutput:
base_url = "https://api.hubapi.com/crm/v3/objects/contacts"
headers = {
"Authorization": f"Bearer {credentials.api_key.get_secret_value()}",
"Content-Type": "application/json",
}
if input_data.operation == "create":
response = requests.post(
base_url, headers=headers, json={"properties": input_data.contact_data}
)
result = response.json()
yield "contact", result
yield "status", "created"
elif input_data.operation == "get":
# Search for contact by email
search_url = f"{base_url}/search"
search_data = {
"filterGroups": [
{
"filters": [
{
"propertyName": "email",
"operator": "EQ",
"value": input_data.email,
}
]
}
]
}
response = requests.post(search_url, headers=headers, json=search_data)
result = response.json()
yield "contact", result.get("results", [{}])[0]
yield "status", "retrieved"
elif input_data.operation == "update":
search_response = requests.post(
f"{base_url}/search",
headers=headers,
json={
"filterGroups": [
{
"filters": [
{
"propertyName": "email",
"operator": "EQ",
"value": input_data.email,
}
]
}
]
},
)
contact_id = search_response.json().get("results", [{}])[0].get("id")
if contact_id:
response = requests.patch(
f"{base_url}/{contact_id}",
headers=headers,
json={"properties": input_data.contact_data},
)
result = response.json()
yield "contact", result
yield "status", "updated"
else:
yield "contact", {}
yield "status", "contact_not_found"

View File

@@ -0,0 +1,121 @@
from datetime import datetime, timedelta
from backend.blocks.hubspot._auth import (
HubSpotCredentials,
HubSpotCredentialsField,
HubSpotCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
class HubSpotEngagementBlock(Block):
class Input(BlockSchema):
credentials: HubSpotCredentialsInput = HubSpotCredentialsField()
operation: str = SchemaField(
description="Operation to perform (send_email, track_engagement)",
default="send_email",
)
email_data: dict = SchemaField(
description="Email data including recipient, subject, content",
default={},
)
contact_id: str = SchemaField(
description="Contact ID for engagement tracking", default=""
)
timeframe_days: int = SchemaField(
description="Number of days to look back for engagement",
default=30,
optional=True,
)
class Output(BlockSchema):
result: dict = SchemaField(description="Operation result")
status: str = SchemaField(description="Operation status")
def __init__(self):
super().__init__(
id="c6524385-7d87-49d6-a470-248bd29ca765",
description="Manages HubSpot engagements - sends emails and tracks engagement metrics",
categories={BlockCategory.CRM, BlockCategory.COMMUNICATION},
input_schema=HubSpotEngagementBlock.Input,
output_schema=HubSpotEngagementBlock.Output,
)
def run(
self, input_data: Input, *, credentials: HubSpotCredentials, **kwargs
) -> BlockOutput:
base_url = "https://api.hubapi.com"
headers = {
"Authorization": f"Bearer {credentials.api_key.get_secret_value()}",
"Content-Type": "application/json",
}
if input_data.operation == "send_email":
# Using the email send API
email_url = f"{base_url}/crm/v3/objects/emails"
email_data = {
"properties": {
"hs_timestamp": datetime.now().isoformat(),
"hubspot_owner_id": "1", # This should be configurable
"hs_email_direction": "OUTBOUND",
"hs_email_status": "SEND",
"hs_email_subject": input_data.email_data.get("subject"),
"hs_email_text": input_data.email_data.get("content"),
"hs_email_to_email": input_data.email_data.get("recipient"),
}
}
response = requests.post(email_url, headers=headers, json=email_data)
result = response.json()
yield "result", result
yield "status", "email_sent"
elif input_data.operation == "track_engagement":
# Get engagement events for the contact
from_date = datetime.now() - timedelta(days=input_data.timeframe_days)
engagement_url = (
f"{base_url}/crm/v3/objects/contacts/{input_data.contact_id}/engagement"
)
params = {"limit": 100, "after": from_date.isoformat()}
response = requests.get(engagement_url, headers=headers, params=params)
engagements = response.json()
# Process engagement metrics
metrics = {
"email_opens": 0,
"email_clicks": 0,
"email_replies": 0,
"last_engagement": None,
"engagement_score": 0,
}
for engagement in engagements.get("results", []):
eng_type = engagement.get("properties", {}).get("hs_engagement_type")
if eng_type == "EMAIL":
metrics["email_opens"] += 1
elif eng_type == "EMAIL_CLICK":
metrics["email_clicks"] += 1
elif eng_type == "EMAIL_REPLY":
metrics["email_replies"] += 1
# Update last engagement time
eng_time = engagement.get("properties", {}).get("hs_timestamp")
if eng_time and (
not metrics["last_engagement"]
or eng_time > metrics["last_engagement"]
):
metrics["last_engagement"] = eng_time
# Calculate simple engagement score
metrics["engagement_score"] = (
metrics["email_opens"]
+ metrics["email_clicks"] * 2
+ metrics["email_replies"] * 3
)
yield "result", metrics
yield "status", "engagement_tracked"

View File

@@ -1,12 +1,17 @@
from enum import Enum
from typing import Any, Dict, Literal, Optional
import requests
from autogpt_libs.supabase_integration_credentials_store.types import APIKeyCredentials
from pydantic import SecretStr
from requests.exceptions import RequestException
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import CredentialsField, CredentialsMetaInput, SchemaField
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
from backend.util.request import requests
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
@@ -242,9 +247,8 @@ class IdeogramModelBlock(Block):
try:
response = requests.post(url, json=data, headers=headers)
response.raise_for_status()
return response.json()["data"][0]["url"]
except requests.exceptions.RequestException as e:
except RequestException as e:
raise Exception(f"Failed to fetch image: {str(e)}")
def upscale_image(self, api_key: SecretStr, image_url: str):
@@ -256,7 +260,6 @@ class IdeogramModelBlock(Block):
try:
# Step 1: Download the image from the provided URL
image_response = requests.get(image_url)
image_response.raise_for_status()
# Step 2: Send the downloaded image to the upscale API
files = {
@@ -272,8 +275,7 @@ class IdeogramModelBlock(Block):
files=files,
)
response.raise_for_status()
return response.json()["data"][0]["url"]
except requests.exceptions.RequestException as e:
except RequestException as e:
raise Exception(f"Failed to upscale image: {str(e)}")

View File

@@ -2,13 +2,28 @@ from typing import Any
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.json import json
class StepThroughItemsBlock(Block):
class Input(BlockSchema):
items: list | dict = SchemaField(
items: list = SchemaField(
advanced=False,
description="The list or dictionary of items to iterate over",
placeholder="[1, 2, 3, 4, 5] or {'key1': 'value1', 'key2': 'value2'}",
default=[],
)
items_object: dict = SchemaField(
advanced=False,
description="The list or dictionary of items to iterate over",
placeholder="[1, 2, 3, 4, 5] or {'key1': 'value1', 'key2': 'value2'}",
default={},
)
items_str: str = SchemaField(
advanced=False,
description="The list or dictionary of items to iterate over",
placeholder="[1, 2, 3, 4, 5] or {'key1': 'value1', 'key2': 'value2'}",
default="",
)
class Output(BlockSchema):
@@ -39,14 +54,20 @@ class StepThroughItemsBlock(Block):
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
items = input_data.items
if isinstance(items, dict):
# If items is a dictionary, iterate over its values
for item in items.values():
yield "item", item
yield "key", item
else:
# If items is a list, iterate over the list
for index, item in enumerate(items):
yield "item", item
yield "key", index
for data in [input_data.items, input_data.items_object, input_data.items_str]:
if not data:
continue
if isinstance(data, str):
items = json.loads(data)
else:
items = data
if isinstance(items, dict):
# If items is a dictionary, iterate over its key-value pairs
for key, item in items.items():
yield "item", item
yield "key", key
else:
# If items is a list, iterate over the list
for index, item in enumerate(items):
yield "item", item
yield "key", index

View File

@@ -1,9 +1,8 @@
from typing import Literal
from autogpt_libs.supabase_integration_credentials_store.types import APIKeyCredentials
from pydantic import SecretStr
from backend.data.model import CredentialsField, CredentialsMetaInput
from backend.data.model import APIKeyCredentials, CredentialsField, CredentialsMetaInput
JinaCredentials = APIKeyCredentials
JinaCredentialsInput = CredentialsMetaInput[

View File

@@ -1,5 +1,3 @@
import requests
from backend.blocks.jina._auth import (
JinaCredentials,
JinaCredentialsField,
@@ -7,6 +5,7 @@ from backend.blocks.jina._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
class JinaChunkingBlock(Block):
@@ -57,7 +56,6 @@ class JinaChunkingBlock(Block):
}
response = requests.post(url, headers=headers, json=data)
response.raise_for_status()
result = response.json()
all_chunks.extend(result.get("chunks", []))

View File

@@ -1,5 +1,3 @@
import requests
from backend.blocks.jina._auth import (
JinaCredentials,
JinaCredentialsField,
@@ -7,6 +5,7 @@ from backend.blocks.jina._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
class JinaEmbeddingBlock(Block):

View File

@@ -55,3 +55,53 @@ class SearchTheWebBlock(Block, GetRequest):
# Output the search results
yield "results", results
class ExtractWebsiteContentBlock(Block, GetRequest):
class Input(BlockSchema):
credentials: JinaCredentialsInput = JinaCredentialsField()
url: str = SchemaField(description="The URL to scrape the content from")
raw_content: bool = SchemaField(
default=False,
title="Raw Content",
description="Whether to do a raw scrape of the content or use Jina-ai Reader to scrape the content",
advanced=True,
)
class Output(BlockSchema):
content: str = SchemaField(description="The scraped content from the given URL")
error: str = SchemaField(
description="Error message if the content cannot be retrieved"
)
def __init__(self):
super().__init__(
id="436c3984-57fd-4b85-8e9a-459b356883bd",
description="This block scrapes the content from the given web URL.",
categories={BlockCategory.SEARCH},
input_schema=ExtractWebsiteContentBlock.Input,
output_schema=ExtractWebsiteContentBlock.Output,
test_input={
"url": "https://en.wikipedia.org/wiki/Artificial_intelligence",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=("content", "scraped content"),
test_mock={"get_request": lambda *args, **kwargs: "scraped content"},
)
def run(
self, input_data: Input, *, credentials: JinaCredentials, **kwargs
) -> BlockOutput:
if input_data.raw_content:
url = input_data.url
headers = {}
else:
url = f"https://r.jina.ai/{input_data.url}"
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {credentials.api_key.get_secret_value()}",
}
content = self.get_request(url, json=False, headers=headers)
yield "content", content

View File

@@ -5,7 +5,6 @@ from json import JSONDecodeError
from types import MappingProxyType
from typing import TYPE_CHECKING, Any, List, Literal, NamedTuple
from autogpt_libs.supabase_integration_credentials_store.types import APIKeyCredentials
from pydantic import SecretStr
if TYPE_CHECKING:
@@ -17,24 +16,23 @@ import openai
from groq import Groq
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import CredentialsField, CredentialsMetaInput, SchemaField
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
from backend.util import json
from backend.util.settings import BehaveAs, Settings
logger = logging.getLogger(__name__)
# LlmApiKeys = {
# "openai": BlockSecret("openai_api_key"),
# "anthropic": BlockSecret("anthropic_api_key"),
# "groq": BlockSecret("groq_api_key"),
# "ollama": BlockSecret(value=""),
# }
AICredentials = CredentialsMetaInput[Literal["llm"], Literal["api_key"]]
LLMProviderName = Literal["anthropic", "groq", "openai", "ollama", "open_router"]
AICredentials = CredentialsMetaInput[LLMProviderName, Literal["api_key"]]
TEST_CREDENTIALS = APIKeyCredentials(
id="ed55ac19-356e-4243-a6cb-bc599e9b716f",
provider="llm",
provider="openai",
api_key=SecretStr("mock-openai-api-key"),
title="Mock OpenAI API key",
expires_at=None,
@@ -50,15 +48,18 @@ TEST_CREDENTIALS_INPUT = {
def AICredentialsField() -> AICredentials:
return CredentialsField(
description="API key for the LLM provider.",
provider="llm",
provider=["anthropic", "groq", "openai", "ollama", "open_router"],
supported_credential_types={"api_key"},
discriminator="model",
discriminator_mapping={
model.value: model.metadata.provider for model in LlmModel
},
)
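The new discriminator/discriminator_mapping arguments tie the credentials input to the block's model field: the selected model's value is looked up in the mapping to decide which provider's credentials to request. The mapping is derived from each model's metadata, e.g.:

mapping = {model.value: model.metadata.provider for model in LlmModel}
assert mapping["llama3"] == "ollama"  # OLLAMA_LLAMA3_8B is served by Ollama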
class ModelMetadata(NamedTuple):
provider: str
context_window: int
cost_factor: int
class LlmModelMeta(EnumMeta):
@@ -104,6 +105,18 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
# Ollama models
OLLAMA_LLAMA3_8B = "llama3"
OLLAMA_LLAMA3_405B = "llama3.1:405b"
# OpenRouter models
GEMINI_FLASH_1_5_8B = "google/gemini-flash-1.5"
GEMINI_FLASH_1_5_EXP = "google/gemini-flash-1.5-exp"
GROK_BETA = "x-ai/grok-beta"
MISTRAL_NEMO = "mistralai/mistral-nemo"
COHERE_COMMAND_R_08_2024 = "cohere/command-r-08-2024"
COHERE_COMMAND_R_PLUS_08_2024 = "cohere/command-r-plus-08-2024"
EVA_QWEN_2_5_32B = "eva-unit-01/eva-qwen-2.5-32b"
DEEPSEEK_CHAT = "deepseek/deepseek-chat"
PERPLEXITY_LLAMA_3_1_SONAR_LARGE_128K_ONLINE = (
"perplexity/llama-3.1-sonar-large-128k-online"
)
@property
def metadata(self) -> ModelMetadata:
@@ -117,31 +130,38 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
def context_window(self) -> int:
return self.metadata.context_window
@property
def cost_factor(self) -> int:
return self.metadata.cost_factor
MODEL_METADATA = {
LlmModel.O1_PREVIEW: ModelMetadata("openai", 32000, cost_factor=16),
LlmModel.O1_MINI: ModelMetadata("openai", 62000, cost_factor=4),
LlmModel.GPT4O_MINI: ModelMetadata("openai", 128000, cost_factor=1),
LlmModel.GPT4O: ModelMetadata("openai", 128000, cost_factor=3),
LlmModel.GPT4_TURBO: ModelMetadata("openai", 128000, cost_factor=10),
LlmModel.GPT3_5_TURBO: ModelMetadata("openai", 16385, cost_factor=1),
LlmModel.CLAUDE_3_5_SONNET: ModelMetadata("anthropic", 200000, cost_factor=4),
LlmModel.CLAUDE_3_HAIKU: ModelMetadata("anthropic", 200000, cost_factor=1),
LlmModel.LLAMA3_8B: ModelMetadata("groq", 8192, cost_factor=1),
LlmModel.LLAMA3_70B: ModelMetadata("groq", 8192, cost_factor=1),
LlmModel.MIXTRAL_8X7B: ModelMetadata("groq", 32768, cost_factor=1),
LlmModel.GEMMA_7B: ModelMetadata("groq", 8192, cost_factor=1),
LlmModel.GEMMA2_9B: ModelMetadata("groq", 8192, cost_factor=1),
LlmModel.LLAMA3_1_405B: ModelMetadata("groq", 8192, cost_factor=1),
LlmModel.O1_PREVIEW: ModelMetadata("openai", 32000),
LlmModel.O1_MINI: ModelMetadata("openai", 62000),
LlmModel.GPT4O_MINI: ModelMetadata("openai", 128000),
LlmModel.GPT4O: ModelMetadata("openai", 128000),
LlmModel.GPT4_TURBO: ModelMetadata("openai", 128000),
LlmModel.GPT3_5_TURBO: ModelMetadata("openai", 16385),
LlmModel.CLAUDE_3_5_SONNET: ModelMetadata("anthropic", 200000),
LlmModel.CLAUDE_3_HAIKU: ModelMetadata("anthropic", 200000),
LlmModel.LLAMA3_8B: ModelMetadata("groq", 8192),
LlmModel.LLAMA3_70B: ModelMetadata("groq", 8192),
LlmModel.MIXTRAL_8X7B: ModelMetadata("groq", 32768),
LlmModel.GEMMA_7B: ModelMetadata("groq", 8192),
LlmModel.GEMMA2_9B: ModelMetadata("groq", 8192),
LlmModel.LLAMA3_1_405B: ModelMetadata("groq", 8192),
# Limited to 16k during preview
LlmModel.LLAMA3_1_70B: ModelMetadata("groq", 131072, cost_factor=1),
LlmModel.LLAMA3_1_8B: ModelMetadata("groq", 131072, cost_factor=1),
LlmModel.OLLAMA_LLAMA3_8B: ModelMetadata("ollama", 8192, cost_factor=1),
LlmModel.OLLAMA_LLAMA3_405B: ModelMetadata("ollama", 8192, cost_factor=1),
LlmModel.LLAMA3_1_70B: ModelMetadata("groq", 131072),
LlmModel.LLAMA3_1_8B: ModelMetadata("groq", 131072),
LlmModel.OLLAMA_LLAMA3_8B: ModelMetadata("ollama", 8192),
LlmModel.OLLAMA_LLAMA3_405B: ModelMetadata("ollama", 8192),
LlmModel.GEMINI_FLASH_1_5_8B: ModelMetadata("open_router", 8192),
LlmModel.GEMINI_FLASH_1_5_EXP: ModelMetadata("open_router", 8192),
LlmModel.GROK_BETA: ModelMetadata("open_router", 8192),
LlmModel.MISTRAL_NEMO: ModelMetadata("open_router", 4000),
LlmModel.COHERE_COMMAND_R_08_2024: ModelMetadata("open_router", 4000),
LlmModel.COHERE_COMMAND_R_PLUS_08_2024: ModelMetadata("open_router", 4000),
LlmModel.EVA_QWEN_2_5_32B: ModelMetadata("open_router", 4000),
LlmModel.DEEPSEEK_CHAT: ModelMetadata("open_router", 8192),
LlmModel.PERPLEXITY_LLAMA_3_1_SONAR_LARGE_128K_ONLINE: ModelMetadata(
"open_router", 8192
),
}
for model in LlmModel:
@@ -354,6 +374,34 @@ class AIStructuredResponseGeneratorBlock(Block):
response.get("prompt_eval_count") or 0,
response.get("eval_count") or 0,
)
elif provider == "open_router":
client = openai.OpenAI(
base_url="https://openrouter.ai/api/v1",
api_key=credentials.api_key.get_secret_value(),
)
response = client.chat.completions.create(
extra_headers={
"HTTP-Referer": "https://agpt.co",
"X-Title": "AutoGPT",
},
model=llm_model.value,
messages=prompt, # type: ignore
max_tokens=max_tokens,
)
# If there's no response, raise an error
if not response.choices:
if response:
raise ValueError(f"OpenRouter error: {response}")
else:
raise ValueError("No response from OpenRouter.")
return (
response.choices[0].message.content or "",
response.usage.prompt_tokens if response.usage else 0,
response.usage.completion_tokens if response.usage else 0,
)
else:
raise ValueError(f"Unsupported LLM provider: {provider}")
@@ -475,7 +523,7 @@ class AIStructuredResponseGeneratorBlock(Block):
class AITextGeneratorBlock(Block):
class Input(BlockSchema):
prompt: str = SchemaField(
description="The prompt to send to the language model.",
description="The prompt to send to the language model. You can use any of the {keys} from Prompt Values to fill in the prompt with values from the prompt values dictionary by putting them in curly braces.",
placeholder="Enter your prompt here...",
)
model: LlmModel = SchemaField(

View File

@@ -1,18 +1,18 @@
from enum import Enum
from typing import List, Literal
import requests
from autogpt_libs.supabase_integration_credentials_store.types import APIKeyCredentials
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import (
APIKeyCredentials,
BlockSecret,
CredentialsField,
CredentialsMetaInput,
SchemaField,
SecretField,
)
from backend.util.request import requests
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",

View File

@@ -1,10 +1,15 @@
from typing import Literal
import uuid
from typing import Any, Literal
from autogpt_libs.supabase_integration_credentials_store import APIKeyCredentials
from pinecone import Pinecone, ServerlessSpec
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import CredentialsField, CredentialsMetaInput, SchemaField
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
PineconeCredentials = APIKeyCredentials
PineconeCredentialsInput = CredentialsMetaInput[
@@ -98,10 +103,14 @@ class PineconeQueryBlock(Block):
include_metadata: bool = SchemaField(
description="Whether to include metadata in the response", default=True
)
host: str = SchemaField(description="Host for pinecone")
host: str = SchemaField(description="Host for pinecone", default="")
idx_name: str = SchemaField(description="Index name for pinecone")
class Output(BlockSchema):
results: dict = SchemaField(description="Query results from Pinecone")
results: Any = SchemaField(description="Query results from Pinecone")
combined_results: Any = SchemaField(
description="Combined results from Pinecone"
)
def __init__(self):
super().__init__(
@@ -119,13 +128,105 @@ class PineconeQueryBlock(Block):
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
pc = Pinecone(api_key=credentials.api_key.get_secret_value())
idx = pc.Index(host=input_data.host)
results = idx.query(
namespace=input_data.namespace,
vector=input_data.query_vector,
top_k=input_data.top_k,
include_values=input_data.include_values,
include_metadata=input_data.include_metadata,
try:
# Create a new client instance
pc = Pinecone(api_key=credentials.api_key.get_secret_value())
# Get the index
idx = pc.Index(input_data.idx_name)
# Ensure query_vector is in correct format
query_vector = input_data.query_vector
if isinstance(query_vector, list) and len(query_vector) > 0:
if isinstance(query_vector[0], list):
query_vector = query_vector[0]
results = idx.query(
namespace=input_data.namespace,
vector=query_vector,
top_k=input_data.top_k,
include_values=input_data.include_values,
include_metadata=input_data.include_metadata,
).to_dict()
combined_text = ""
if results["matches"]:
texts = [
match["metadata"]["text"]
for match in results["matches"]
if match.get("metadata", {}).get("text")
]
combined_text = "\n\n".join(texts)
# Return both the raw matches and combined text
yield "results", {
"matches": results["matches"],
"combined_text": combined_text,
}
yield "combined_results", combined_text
except Exception as e:
error_msg = f"Error querying Pinecone: {str(e)}"
raise RuntimeError(error_msg) from e
class PineconeInsertBlock(Block):
class Input(BlockSchema):
credentials: PineconeCredentialsInput = PineconeCredentialsField()
index: str = SchemaField(description="Initialized Pinecone index")
chunks: list = SchemaField(description="List of text chunks to ingest")
embeddings: list = SchemaField(
description="List of embeddings corresponding to the chunks"
)
yield "results", results
namespace: str = SchemaField(
description="Namespace to use in Pinecone", default=""
)
metadata: dict = SchemaField(
description="Additional metadata to store with each vector", default={}
)
class Output(BlockSchema):
upsert_response: str = SchemaField(
description="Response from Pinecone upsert operation"
)
def __init__(self):
super().__init__(
id="477f2168-cd91-475a-8146-9499a5982434",
description="Upload data to a Pinecone index",
categories={BlockCategory.LOGIC},
input_schema=PineconeInsertBlock.Input,
output_schema=PineconeInsertBlock.Output,
)
def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
**kwargs,
) -> BlockOutput:
try:
# Create a new client instance
pc = Pinecone(api_key=credentials.api_key.get_secret_value())
# Get the index
idx = pc.Index(input_data.index)
vectors = []
for chunk, embedding in zip(input_data.chunks, input_data.embeddings):
vector_metadata = input_data.metadata.copy()
vector_metadata["text"] = chunk
vectors.append(
{
"id": str(uuid.uuid4()),
"values": embedding,
"metadata": vector_metadata,
}
)
idx.upsert(vectors=vectors, namespace=input_data.namespace)
yield "upsert_response", "successfully upserted"
except Exception as e:
error_msg = f"Error uploading to Pinecone: {str(e)}"
raise RuntimeError(error_msg) from e
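Note the coupling between the two Pinecone blocks: PineconeInsertBlock stores each chunk under the text metadata key, which is exactly the key PineconeQueryBlock reads when assembling combined_text. Each upserted vector has this shape, with illustrative values:

vector = {
    "id": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d",  # str(uuid.uuid4())
    "values": [0.12, -0.07, 0.33],                 # the chunk's embedding
    "metadata": {"text": "the chunk's text", "source": "user-supplied metadata"},
}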

View File

@@ -3,12 +3,16 @@ from enum import Enum
from typing import Literal
import replicate
from autogpt_libs.supabase_integration_credentials_store.types import APIKeyCredentials
from pydantic import SecretStr
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import CredentialsField, CredentialsMetaInput, SchemaField
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",

View File

@@ -1,12 +1,16 @@
from typing import Literal
from urllib.parse import quote
from autogpt_libs.supabase_integration_credentials_store.types import APIKeyCredentials
from pydantic import SecretStr
from backend.blocks.helpers.http import GetRequest
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import CredentialsField, CredentialsMetaInput, SchemaField
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
class GetWikipediaSummaryBlock(Block, GetRequest):
@@ -40,44 +44,6 @@ class GetWikipediaSummaryBlock(Block, GetRequest):
yield "summary", response["extract"]
class ExtractWebsiteContentBlock(Block, GetRequest):
class Input(BlockSchema):
url: str = SchemaField(description="The URL to scrape the content from")
raw_content: bool = SchemaField(
default=False,
title="Raw Content",
description="Whether to do a raw scrape of the content or use Jina-ai Reader to scrape the content",
advanced=True,
)
class Output(BlockSchema):
content: str = SchemaField(description="The scraped content from the given URL")
error: str = SchemaField(
description="Error message if the content cannot be retrieved"
)
def __init__(self):
super().__init__(
id="436c3984-57fd-4b85-8e9a-459b356883bd",
description="This block scrapes the content from the given web URL.",
categories={BlockCategory.SEARCH},
input_schema=ExtractWebsiteContentBlock.Input,
output_schema=ExtractWebsiteContentBlock.Output,
test_input={"url": "https://en.wikipedia.org/wiki/Artificial_intelligence"},
test_output=("content", "scraped content"),
test_mock={"get_request": lambda url, json: "scraped content"},
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
if input_data.raw_content:
url = input_data.url
else:
url = f"https://r.jina.ai/{input_data.url}"
content = self.get_request(url, json=False)
yield "content", content
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="openweathermap",

View File

@@ -1,12 +1,16 @@
import time
from typing import Literal
import requests
from autogpt_libs.supabase_integration_credentials_store.types import APIKeyCredentials
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import CredentialsField, CredentialsMetaInput, SchemaField
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
from backend.util.request import requests
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
@@ -118,7 +122,6 @@ class CreateTalkingAvatarVideoBlock(Block):
"authorization": f"Basic {api_key.get_secret_value()}",
}
response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()
return response.json()
def get_clip_status(self, api_key: SecretStr, clip_id: str) -> dict:
@@ -128,7 +131,6 @@ class CreateTalkingAvatarVideoBlock(Block):
"authorization": f"Basic {api_key.get_secret_value()}",
}
response = requests.get(url, headers=headers)
response.raise_for_status()
return response.json()
def run(

View File

@@ -1,11 +1,15 @@
from typing import Any, Literal
import requests
from autogpt_libs.supabase_integration_credentials_store.types import APIKeyCredentials
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import CredentialsField, CredentialsMetaInput, SchemaField
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
from backend.util.request import requests
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
@@ -86,7 +90,6 @@ class UnrealTextToSpeechBlock(Block):
}
response = requests.post(url, headers=headers, json=data)
response.raise_for_status()
return response.json()
def run(

View File

@@ -140,20 +140,25 @@ class GetCurrentDateAndTimeBlock(Block):
class CountdownTimerBlock(Block):
class Input(BlockSchema):
input_message: Any = SchemaField(
advanced=False,
description="Message to output after the timer finishes",
default="timer finished",
)
seconds: Union[int, str] = SchemaField(
description="Duration in seconds", default=0
advanced=False, description="Duration in seconds", default=0
)
minutes: Union[int, str] = SchemaField(
description="Duration in minutes", default=0
advanced=False, description="Duration in minutes", default=0
)
hours: Union[int, str] = SchemaField(
advanced=False, description="Duration in hours", default=0
)
days: Union[int, str] = SchemaField(
advanced=False, description="Duration in days", default=0
)
hours: Union[int, str] = SchemaField(description="Duration in hours", default=0)
days: Union[int, str] = SchemaField(description="Duration in days", default=0)
class Output(BlockSchema):
output_message: str = SchemaField(
output_message: Any = SchemaField(
description="Message after the timer finishes"
)
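
The `Union[int, str]` durations suggest the block coerces its inputs before sleeping. A sketch of the arithmetic under that assumption (the function name is illustrative, not from the repo):

```python
def total_seconds(seconds="0", minutes=0, hours=0, days=0) -> int:
    # str values arrive from the UI as text, so coerce everything to int
    return int(seconds) + int(minutes) * 60 + int(hours) * 3600 + int(days) * 86400

assert total_seconds(seconds="30", minutes=1) == 90
assert total_seconds(days=1) == 86400
```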

View File

@@ -62,7 +62,22 @@ class TranscribeYoutubeVideoBlock(Block):
@staticmethod
def get_transcript(video_id: str):
return YouTubeTranscriptApi.get_transcript(video_id)
try:
transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
if not transcript_list:
raise ValueError(f"No transcripts found for the video: {video_id}")
for transcript in transcript_list:
first_transcript = transcript_list.find_transcript(
[transcript.language_code]
)
return YouTubeTranscriptApi.get_transcript(
video_id, languages=[first_transcript.language_code]
)
except Exception:
raise ValueError(f"No transcripts found for the video: {video_id}")
def run(self, input_data: Input, **kwargs) -> BlockOutput:
video_id = self.extract_video_id(input_data.youtube_url)
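
The new `get_transcript` boils down to "fetch in the first language that actually exists". A slightly simplified standalone equivalent using the same `youtube_transcript_api` calls:

```python
from youtube_transcript_api import YouTubeTranscriptApi

def get_transcript(video_id: str):
    try:
        transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
        first = next(iter(transcript_list))  # first available language
        return YouTubeTranscriptApi.get_transcript(
            video_id, languages=[first.language_code]
        )
    except Exception:
        raise ValueError(f"No transcripts found for the video: {video_id}")

# get_transcript("9bZkp7q19f0")  # -> list of {"text", "start", "duration"} dicts
```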

View File

@@ -93,6 +93,16 @@ def stop():
print("Server Stopped")
@main.command()
def gen_encrypt_key():
"""
Generate a new encryption key
"""
from cryptography.fernet import Fernet
print(Fernet.generate_key().decode())
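
For context, the key printed by `gen_encrypt_key` is a standard Fernet key; a minimal round-trip showing what such a key is used for:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()         # same call the CLI command wraps
f = Fernet(key)
token = f.encrypt(b"secret value")  # URL-safe base64 ciphertext
assert f.decrypt(token) == b"secret value"
```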
@click.group()
def test():
"""

View File

@@ -0,0 +1,325 @@
import logging
import uuid
from datetime import datetime, timezone
from typing import List, Optional
from autogpt_libs.api_key.key_manager import APIKeyManager
from prisma.enums import APIKeyPermission, APIKeyStatus
from prisma.errors import PrismaError
from prisma.models import APIKey as PrismaAPIKey
from prisma.types import (
APIKeyCreateInput,
APIKeyUpdateInput,
APIKeyWhereInput,
APIKeyWhereUniqueInput,
)
from pydantic import BaseModel
from backend.data.db import BaseDbModel
logger = logging.getLogger(__name__)
# Some basic exceptions
class APIKeyError(Exception):
"""Base exception for API key operations"""
pass
class APIKeyNotFoundError(APIKeyError):
"""Raised when an API key is not found"""
pass
class APIKeyPermissionError(APIKeyError):
"""Raised when there are permission issues with API key operations"""
pass
class APIKeyValidationError(APIKeyError):
"""Raised when API key validation fails"""
pass
class APIKey(BaseDbModel):
name: str
prefix: str
key: str
status: APIKeyStatus = APIKeyStatus.ACTIVE
permissions: List[APIKeyPermission]
postfix: str
created_at: datetime
last_used_at: Optional[datetime] = None
revoked_at: Optional[datetime] = None
description: Optional[str] = None
user_id: str
@staticmethod
def from_db(api_key: PrismaAPIKey):
try:
return APIKey(
id=api_key.id,
name=api_key.name,
prefix=api_key.prefix,
postfix=api_key.postfix,
key=api_key.key,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
except Exception as e:
logger.error(f"Error creating APIKey from db: {str(e)}")
raise APIKeyError(f"Failed to create API key object: {str(e)}")
class APIKeyWithoutHash(BaseModel):
id: str
name: str
prefix: str
postfix: str
status: APIKeyStatus
permissions: List[APIKeyPermission]
created_at: datetime
last_used_at: Optional[datetime]
revoked_at: Optional[datetime]
description: Optional[str]
user_id: str
@staticmethod
def from_db(api_key: PrismaAPIKey):
try:
return APIKeyWithoutHash(
id=api_key.id,
name=api_key.name,
prefix=api_key.prefix,
postfix=api_key.postfix,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
description=api_key.description,
user_id=api_key.userId,
)
except Exception as e:
logger.error(f"Error creating APIKeyWithoutHash from db: {str(e)}")
raise APIKeyError(f"Failed to create API key object: {str(e)}")
async def generate_api_key(
name: str,
user_id: str,
permissions: List[APIKeyPermission],
description: Optional[str] = None,
) -> tuple[APIKeyWithoutHash, str]:
"""
Generate a new API key and store it in the database.
Returns the API key object (without hash) and the plain text key.
"""
try:
api_manager = APIKeyManager()
key = api_manager.generate_api_key()
api_key = await PrismaAPIKey.prisma().create(
data=APIKeyCreateInput(
id=str(uuid.uuid4()),
name=name,
prefix=key.prefix,
postfix=key.postfix,
key=key.hash,
permissions=[p for p in permissions],
description=description,
userId=user_id,
)
)
api_key_without_hash = APIKeyWithoutHash.from_db(api_key)
return api_key_without_hash, key.raw
except PrismaError as e:
logger.error(f"Database error while generating API key: {str(e)}")
raise APIKeyError(f"Failed to generate API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while generating API key: {str(e)}")
raise APIKeyError(f"Failed to generate API key: {str(e)}")
async def validate_api_key(plain_text_key: str) -> Optional[APIKey]:
"""
Validate an API key and return the API key object if valid.
"""
try:
if not plain_text_key.startswith(APIKeyManager.PREFIX):
logger.warning("Invalid API key format")
return None
prefix = plain_text_key[: APIKeyManager.PREFIX_LENGTH]
api_manager = APIKeyManager()
api_key = await PrismaAPIKey.prisma().find_first(
where=APIKeyWhereInput(prefix=prefix, status=(APIKeyStatus.ACTIVE))
)
if not api_key:
logger.warning(f"No active API key found with prefix {prefix}")
return None
is_valid = api_manager.verify_api_key(plain_text_key, api_key.key)
if not is_valid:
logger.warning("API key verification failed")
return None
return APIKey.from_db(api_key)
except Exception as e:
logger.error(f"Error validating API key: {str(e)}")
raise APIKeyValidationError(f"Failed to validate API key: {str(e)}")
async def revoke_api_key(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if not api_key:
raise APIKeyNotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to revoke this API key."
)
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(
status=APIKeyStatus.REVOKED, revokedAt=datetime.now(timezone.utc)
),
)
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while revoking API key: {str(e)}")
raise APIKeyError(f"Failed to revoke API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while revoking API key: {str(e)}")
raise APIKeyError(f"Failed to revoke API key: {str(e)}")
async def list_user_api_keys(user_id: str) -> List[APIKeyWithoutHash]:
try:
where_clause: APIKeyWhereInput = {"userId": user_id}
api_keys = await PrismaAPIKey.prisma().find_many(
where=where_clause, order={"createdAt": "desc"}
)
return [APIKeyWithoutHash.from_db(key) for key in api_keys]
except PrismaError as e:
logger.error(f"Database error while listing API keys: {str(e)}")
raise APIKeyError(f"Failed to list API keys: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while listing API keys: {str(e)}")
raise APIKeyError(f"Failed to list API keys: {str(e)}")
async def suspend_api_key(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if not api_key:
raise APIKeyNotFoundError(f"API key with id {key_id} not found")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to suspend this API key."
)
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(status=APIKeyStatus.SUSPENDED),
)
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while suspending API key: {str(e)}")
raise APIKeyError(f"Failed to suspend API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while suspending API key: {str(e)}")
raise APIKeyError(f"Failed to suspend API key: {str(e)}")
def has_permission(api_key: APIKey, required_permission: APIKeyPermission) -> bool:
try:
return required_permission in api_key.permissions
except Exception as e:
logger.error(f"Error checking API key permissions: {str(e)}")
return False
async def get_api_key_by_id(key_id: str, user_id: str) -> Optional[APIKeyWithoutHash]:
try:
api_key = await PrismaAPIKey.prisma().find_first(
where=APIKeyWhereInput(id=key_id, userId=user_id)
)
if not api_key:
return None
return APIKeyWithoutHash.from_db(api_key)
except PrismaError as e:
logger.error(f"Database error while getting API key: {str(e)}")
raise APIKeyError(f"Failed to get API key: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while getting API key: {str(e)}")
raise APIKeyError(f"Failed to get API key: {str(e)}")
async def update_api_key_permissions(
key_id: str, user_id: str, permissions: List[APIKeyPermission]
) -> Optional[APIKeyWithoutHash]:
"""
Update the permissions of an API key.
"""
try:
api_key = await PrismaAPIKey.prisma().find_unique(where={"id": key_id})
if api_key is None:
raise APIKeyNotFoundError("No such API key found.")
if api_key.userId != user_id:
raise APIKeyPermissionError(
"You do not have permission to update this API key."
)
where_clause: APIKeyWhereUniqueInput = {"id": key_id}
updated_api_key = await PrismaAPIKey.prisma().update(
where=where_clause,
data=APIKeyUpdateInput(permissions=permissions),
)
if updated_api_key:
return APIKeyWithoutHash.from_db(updated_api_key)
return None
except (APIKeyNotFoundError, APIKeyPermissionError) as e:
raise e
except PrismaError as e:
logger.error(f"Database error while updating API key permissions: {str(e)}")
raise APIKeyError(f"Failed to update API key permissions: {str(e)}")
except Exception as e:
logger.error(f"Unexpected error while updating API key permissions: {str(e)}")
raise APIKeyError(f"Failed to update API key permissions: {str(e)}")
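
The module above stores only a hash plus a displayable prefix/postfix. `APIKeyManager`'s internals aren't shown in this diff, so the following is a generic sketch of that pattern with an assumed prefix and hash function:

```python
import hashlib
import secrets

PREFIX = "agpt_"  # assumed; the real prefix lives in APIKeyManager

def generate() -> tuple[str, str, str, str]:
    raw = PREFIX + secrets.token_urlsafe(32)
    digest = hashlib.sha256(raw.encode()).hexdigest()  # what gets persisted
    return raw, digest, raw[:8], raw[-4:]              # raw key, hash, prefix, postfix

def verify(plain: str, stored_hash: str) -> bool:
    candidate = hashlib.sha256(plain.encode()).hexdigest()
    return secrets.compare_digest(candidate, stored_hash)

raw, digest, prefix, postfix = generate()
assert verify(raw, digest) and not verify(raw + "x", digest)
```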

View File

@@ -15,13 +15,20 @@ from typing import (
import jsonref
import jsonschema
from autogpt_libs.supabase_integration_credentials_store.types import Credentials
from prisma.models import AgentBlock
from pydantic import BaseModel
from backend.util import json
from backend.util.settings import Config
from .model import CREDENTIALS_FIELD_NAME, ContributorDetails, CredentialsMetaInput
from .model import (
CREDENTIALS_FIELD_NAME,
ContributorDetails,
Credentials,
CredentialsMetaInput,
)
app_config = Config()
BlockData = tuple[str, Any] # Input & Output data should be a tuple of (name, data).
BlockInput = dict[str, Any] # Input: 1 input pin consumes 1 data.
@@ -34,6 +41,8 @@ class BlockType(Enum):
INPUT = "Input"
OUTPUT = "Output"
NOTE = "Note"
WEBHOOK = "Webhook"
AGENT = "Agent"
class BlockCategory(Enum):
@@ -48,6 +57,8 @@ class BlockCategory(Enum):
COMMUNICATION = "Block that interacts with communication platforms."
DEVELOPER_TOOLS = "Developer tools such as GitHub blocks."
DATA = "Block that interacts with structured data."
AGENT = "Block that interacts with other agents."
CRM = "Block that interacts with CRM services."
def dict(self) -> dict[str, str]:
return {"category": self.name, "description": self.value}
@@ -92,15 +103,7 @@ class BlockSchema(BaseModel):
@classmethod
def validate_data(cls, data: BlockInput) -> str | None:
"""
Validate the data against the schema.
Returns the validation error message if the data does not match the schema.
"""
try:
jsonschema.validate(data, cls.jsonschema())
return None
except jsonschema.ValidationError as e:
return str(e)
return json.validate_with_jsonschema(schema=cls.jsonschema(), data=data)
@classmethod
def validate_field(cls, field_name: str, data: BlockInput) -> str | None:
@@ -183,6 +186,41 @@ class EmptySchema(BlockSchema):
pass
# --8<-- [start:BlockWebhookConfig]
class BlockWebhookConfig(BaseModel):
provider: str
"""The service provider that the webhook connects to"""
webhook_type: str
"""
Identifier for the webhook type. E.g. GitHub has repo and organization level hooks.
Only for use in the corresponding `WebhooksManager`.
"""
resource_format: str
"""
Template string for the resource that a block instance subscribes to.
Fields will be filled from the block's inputs (except `payload`).
Example: `f"{repo}/pull_requests"` (note: not how it's actually implemented)
Only for use in the corresponding `WebhooksManager`.
"""
event_filter_input: str
"""Name of the block's event filter input."""
event_format: str = "{event}"
"""
Template string for the event(s) that a block instance subscribes to.
Applied individually to each event selected in the event filter input.
Example: `"pull_request.{event}"` -> `"pull_request.opened"`
"""
# --8<-- [end:BlockWebhookConfig]
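
A toy illustration of how `resource_format` and `event_format` expand against a node's inputs (all values hypothetical):

```python
resource_format = "{repo}/pull_requests"
event_format = "pull_request.{event}"

node_inputs = {"repo": "owner/repo"}
event_filter = {"opened": True, "closed": False}

resource = resource_format.format(**node_inputs)
subscribed = [event_format.format(event=e) for e, on in event_filter.items() if on]
print(resource)    # owner/repo/pull_requests
print(subscribed)  # ['pull_request.opened']
```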
class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
def __init__(
self,
@@ -199,6 +237,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
disabled: bool = False,
static_output: bool = False,
block_type: BlockType = BlockType.STANDARD,
webhook_config: Optional[BlockWebhookConfig] = None,
):
"""
Initialize the block with the given schema.
@@ -229,9 +268,38 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
self.contributors = contributors or set()
self.disabled = disabled
self.static_output = static_output
self.block_type = block_type
self.block_type = block_type if not webhook_config else BlockType.WEBHOOK
self.webhook_config = webhook_config
self.execution_stats = {}
if self.webhook_config:
# Enforce shape of webhook event filter
event_filter_field = self.input_schema.model_fields[
self.webhook_config.event_filter_input
]
if not (
isinstance(event_filter_field.annotation, type)
and issubclass(event_filter_field.annotation, BaseModel)
and all(
field.annotation is bool
for field in event_filter_field.annotation.model_fields.values()
)
):
raise NotImplementedError(
f"{self.name} has an invalid webhook event selector: "
"field must be a BaseModel and all its fields must be boolean"
)
# Enforce presence of 'payload' input
if "payload" not in self.input_schema.model_fields:
raise TypeError(
f"{self.name} is webhook-triggered but has no 'payload' input"
)
# Disable webhook-triggered block if webhook functionality not available
if not app_config.platform_base_url:
self.disabled = True
@classmethod
def create(cls: Type["Block"]) -> "Block":
return cls()
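
An event filter that satisfies the constraint enforced in `__init__` above, i.e. a `BaseModel` whose fields are all booleans (names illustrative):

```python
from pydantic import BaseModel

class PullRequestEvents(BaseModel):
    opened: bool = False
    closed: bool = False
    synchronize: bool = False

# Mirrors the check performed in Block.__init__
assert all(
    f.annotation is bool for f in PullRequestEvents.model_fields.values()
)
```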
@@ -299,7 +367,9 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
):
if output_name == "error":
raise RuntimeError(output_data)
if error := self.output_schema.validate_field(output_name, output_data):
if self.block_type == BlockType.STANDARD and (
error := self.output_schema.validate_field(output_name, output_data)
):
raise ValueError(f"Block produced an invalid output data: {error}")
yield output_name, output_data

View File

@@ -0,0 +1,256 @@
from typing import Type
from backend.blocks.ai_music_generator import AIMusicGeneratorBlock
from backend.blocks.ai_shortform_video_block import AIShortformVideoCreatorBlock
from backend.blocks.ideogram import IdeogramModelBlock
from backend.blocks.jina.embeddings import JinaEmbeddingBlock
from backend.blocks.jina.search import ExtractWebsiteContentBlock, SearchTheWebBlock
from backend.blocks.llm import (
MODEL_METADATA,
AIConversationBlock,
AIListGeneratorBlock,
AIStructuredResponseGeneratorBlock,
AITextGeneratorBlock,
AITextSummarizerBlock,
LlmModel,
)
from backend.blocks.replicate_flux_advanced import ReplicateFluxAdvancedModelBlock
from backend.blocks.talking_head import CreateTalkingAvatarVideoBlock
from backend.blocks.text_to_speech_block import UnrealTextToSpeechBlock
from backend.data.block import Block
from backend.data.cost import BlockCost, BlockCostType
from backend.integrations.credentials_store import (
anthropic_credentials,
did_credentials,
groq_credentials,
ideogram_credentials,
jina_credentials,
open_router_credentials,
openai_credentials,
replicate_credentials,
revid_credentials,
unreal_credentials,
)
# =============== Configure the cost for each LLM Model call =============== #
MODEL_COST: dict[LlmModel, int] = {
LlmModel.O1_PREVIEW: 16,
LlmModel.O1_MINI: 4,
LlmModel.GPT4O_MINI: 1,
LlmModel.GPT4O: 3,
LlmModel.GPT4_TURBO: 10,
LlmModel.GPT3_5_TURBO: 1,
LlmModel.CLAUDE_3_5_SONNET: 4,
LlmModel.CLAUDE_3_HAIKU: 1,
LlmModel.LLAMA3_8B: 1,
LlmModel.LLAMA3_70B: 1,
LlmModel.MIXTRAL_8X7B: 1,
LlmModel.GEMMA_7B: 1,
LlmModel.GEMMA2_9B: 1,
LlmModel.LLAMA3_1_405B: 1,
LlmModel.LLAMA3_1_70B: 1,
LlmModel.LLAMA3_1_8B: 1,
LlmModel.OLLAMA_LLAMA3_8B: 1,
LlmModel.OLLAMA_LLAMA3_405B: 1,
LlmModel.GEMINI_FLASH_1_5_8B: 1,
LlmModel.GEMINI_FLASH_1_5_EXP: 1,
LlmModel.GROK_BETA: 5,
LlmModel.MISTRAL_NEMO: 1,
LlmModel.COHERE_COMMAND_R_08_2024: 1,
LlmModel.COHERE_COMMAND_R_PLUS_08_2024: 3,
LlmModel.EVA_QWEN_2_5_32B: 1,
LlmModel.DEEPSEEK_CHAT: 2,
LlmModel.PERPLEXITY_LLAMA_3_1_SONAR_LARGE_128K_ONLINE: 1,
}
for model in LlmModel:
if model not in MODEL_COST:
raise ValueError(f"Missing MODEL_COST for model: {model}")
LLM_COST = (
# Anthropic Models
[
BlockCost(
cost_type=BlockCostType.RUN,
cost_filter={
"model": model,
"credentials": {
"id": anthropic_credentials.id,
"provider": anthropic_credentials.provider,
"type": anthropic_credentials.type,
},
},
cost_amount=cost,
)
for model, cost in MODEL_COST.items()
if MODEL_METADATA[model].provider == "anthropic"
]
# OpenAI Models
+ [
BlockCost(
cost_type=BlockCostType.RUN,
cost_filter={
"model": model,
"credentials": {
"id": openai_credentials.id,
"provider": openai_credentials.provider,
"type": openai_credentials.type,
},
},
cost_amount=cost,
)
for model, cost in MODEL_COST.items()
if MODEL_METADATA[model].provider == "openai"
]
# Groq Models
+ [
BlockCost(
cost_type=BlockCostType.RUN,
cost_filter={
"model": model,
"credentials": {"id": groq_credentials.id},
},
cost_amount=cost,
)
for model, cost in MODEL_COST.items()
if MODEL_METADATA[model].provider == "groq"
]
# Open Router Models
+ [
BlockCost(
cost_type=BlockCostType.RUN,
cost_filter={
"model": model,
"credentials": {
"id": open_router_credentials.id,
"provider": open_router_credentials.provider,
"type": open_router_credentials.type,
},
},
cost_amount=cost,
)
for model, cost in MODEL_COST.items()
if MODEL_METADATA[model].provider == "open_router"
]
)
# =============== This is the exhaustive list of costs for each Block =============== #
BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = {
AIConversationBlock: LLM_COST,
AITextGeneratorBlock: LLM_COST,
AIStructuredResponseGeneratorBlock: LLM_COST,
AITextSummarizerBlock: LLM_COST,
AIListGeneratorBlock: LLM_COST,
CreateTalkingAvatarVideoBlock: [
BlockCost(
cost_amount=15,
cost_filter={
"credentials": {
"id": did_credentials.id,
"provider": did_credentials.provider,
"type": did_credentials.type,
}
},
)
],
SearchTheWebBlock: [
BlockCost(
cost_amount=1,
cost_filter={
"credentials": {
"id": jina_credentials.id,
"provider": jina_credentials.provider,
"type": jina_credentials.type,
}
},
)
],
ExtractWebsiteContentBlock: [
BlockCost(
cost_amount=1,
cost_filter={
"raw_content": False,
"credentials": {
"id": jina_credentials.id,
"provider": jina_credentials.provider,
"type": jina_credentials.type,
},
},
)
],
IdeogramModelBlock: [
BlockCost(
cost_amount=16,
cost_filter={
"credentials": {
"id": ideogram_credentials.id,
"provider": ideogram_credentials.provider,
"type": ideogram_credentials.type,
}
},
)
],
AIShortformVideoCreatorBlock: [
BlockCost(
cost_amount=50,
cost_filter={
"credentials": {
"id": revid_credentials.id,
"provider": revid_credentials.provider,
"type": revid_credentials.type,
}
},
)
],
ReplicateFluxAdvancedModelBlock: [
BlockCost(
cost_amount=10,
cost_filter={
"credentials": {
"id": replicate_credentials.id,
"provider": replicate_credentials.provider,
"type": replicate_credentials.type,
}
},
)
],
AIMusicGeneratorBlock: [
BlockCost(
cost_amount=11,
cost_filter={
"credentials": {
"id": replicate_credentials.id,
"provider": replicate_credentials.provider,
"type": replicate_credentials.type,
}
},
)
],
JinaEmbeddingBlock: [
BlockCost(
cost_amount=12,
cost_filter={
"credentials": {
"id": jina_credentials.id,
"provider": jina_credentials.provider,
"type": jina_credentials.type,
}
},
)
],
UnrealTextToSpeechBlock: [
BlockCost(
cost_amount=5,
cost_filter={
"credentials": {
"id": unreal_credentials.id,
"provider": unreal_credentials.provider,
"type": unreal_credentials.type,
}
},
)
],
}

View File

@@ -0,0 +1,32 @@
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel
from backend.data.block import BlockInput
class BlockCostType(str, Enum):
RUN = "run" # cost X credits per run
BYTE = "byte" # cost X credits per byte
SECOND = "second" # cost X credits per second
class BlockCost(BaseModel):
cost_amount: int
cost_filter: BlockInput
cost_type: BlockCostType
def __init__(
self,
cost_amount: int,
cost_type: BlockCostType = BlockCostType.RUN,
cost_filter: Optional[BlockInput] = None,
**data: Any,
) -> None:
super().__init__(
cost_amount=cost_amount,
cost_filter=cost_filter or {},
cost_type=cost_type,
**data,
)
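
The custom `__init__` exists so costs can be declared tersely. Assuming the module path `backend.data.cost` used elsewhere in this changeset, both forms below are equivalent:

```python
from backend.data.cost import BlockCost, BlockCostType

a = BlockCost(5)  # positional amount; RUN type and empty filter by default
b = BlockCost(cost_amount=5, cost_type=BlockCostType.RUN, cost_filter={})
assert a == b  # pydantic models compare by field values
```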

View File

@@ -1,190 +1,17 @@
from abc import ABC, abstractmethod
from datetime import datetime, timezone
from enum import Enum
from typing import Any, Optional, Type
import prisma.errors
from autogpt_libs.supabase_integration_credentials_store.store import (
anthropic_credentials,
did_credentials,
groq_credentials,
ideogram_credentials,
openai_credentials,
replicate_credentials,
revid_credentials,
)
from prisma import Json
from prisma.enums import UserBlockCreditType
from prisma.errors import UniqueViolationError
from prisma.models import UserBlockCredit
from pydantic import BaseModel
from backend.blocks.ai_shortform_video_block import AIShortformVideoCreatorBlock
from backend.blocks.ideogram import IdeogramModelBlock
from backend.blocks.jina.search import SearchTheWebBlock
from backend.blocks.llm import (
MODEL_METADATA,
AIConversationBlock,
AIStructuredResponseGeneratorBlock,
AITextGeneratorBlock,
AITextSummarizerBlock,
LlmModel,
)
from backend.blocks.replicate_flux_advanced import ReplicateFluxAdvancedModelBlock
from backend.blocks.search import ExtractWebsiteContentBlock
from backend.blocks.talking_head import CreateTalkingAvatarVideoBlock
from backend.data.block import Block, BlockInput, get_block
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.cost import BlockCost, BlockCostType
from backend.util.settings import Config
class BlockCostType(str, Enum):
RUN = "run" # cost X credits per run
BYTE = "byte" # cost X credits per byte
SECOND = "second" # cost X credits per second
class BlockCost(BaseModel):
cost_amount: int
cost_filter: BlockInput
cost_type: BlockCostType
def __init__(
self,
cost_amount: int,
cost_type: BlockCostType = BlockCostType.RUN,
cost_filter: Optional[BlockInput] = None,
**data: Any,
) -> None:
super().__init__(
cost_amount=cost_amount,
cost_filter=cost_filter or {},
cost_type=cost_type,
**data,
)
llm_cost = (
[
BlockCost(
cost_type=BlockCostType.RUN,
cost_filter={
"model": model,
"api_key": None, # Running LLM with user own API key is free.
},
cost_amount=metadata.cost_factor,
)
for model, metadata in MODEL_METADATA.items()
]
+ [
BlockCost(
cost_type=BlockCostType.RUN,
cost_filter={
"model": model,
"credentials": {
"id": anthropic_credentials.id,
"provider": anthropic_credentials.provider,
"type": anthropic_credentials.type,
},
},
cost_amount=metadata.cost_factor,
)
for model, metadata in MODEL_METADATA.items()
if metadata.provider == "anthropic"
]
+ [
BlockCost(
cost_type=BlockCostType.RUN,
cost_filter={
"model": model,
"credentials": {
"id": openai_credentials.id,
"provider": openai_credentials.provider,
"type": openai_credentials.type,
},
},
cost_amount=metadata.cost_factor,
)
for model, metadata in MODEL_METADATA.items()
if metadata.provider == "openai"
]
+ [
BlockCost(
cost_type=BlockCostType.RUN,
cost_filter={
"model": model,
"credentials": {"id": groq_credentials.id},
},
cost_amount=metadata.cost_factor,
)
for model, metadata in MODEL_METADATA.items()
if metadata.provider == "groq"
]
+ [
BlockCost(
# Default cost is running LlmModel.GPT4O.
cost_amount=MODEL_METADATA[LlmModel.GPT4O].cost_factor,
cost_filter={"api_key": None},
),
]
)
BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = {
AIConversationBlock: llm_cost,
AITextGeneratorBlock: llm_cost,
AIStructuredResponseGeneratorBlock: llm_cost,
AITextSummarizerBlock: llm_cost,
CreateTalkingAvatarVideoBlock: [
BlockCost(
cost_amount=15,
cost_filter={
"credentials": {
"id": did_credentials.id,
"provider": did_credentials.provider,
"type": did_credentials.type,
}
},
)
],
SearchTheWebBlock: [BlockCost(cost_amount=1)],
ExtractWebsiteContentBlock: [
BlockCost(cost_amount=1, cost_filter={"raw_content": False})
],
IdeogramModelBlock: [
BlockCost(
cost_amount=1,
cost_filter={
"credentials": {
"id": ideogram_credentials.id,
"provider": ideogram_credentials.provider,
"type": ideogram_credentials.type,
}
},
)
],
AIShortformVideoCreatorBlock: [
BlockCost(
cost_amount=10,
cost_filter={
"credentials": {
"id": revid_credentials.id,
"provider": revid_credentials.provider,
"type": revid_credentials.type,
}
},
)
],
ReplicateFluxAdvancedModelBlock: [
BlockCost(
cost_amount=10,
cost_filter={
"credentials": {
"id": replicate_credentials.id,
"provider": replicate_credentials.provider,
"type": replicate_credentials.type,
}
},
)
],
}
config = Config()
class UserCreditBase(ABC):
@@ -243,7 +70,11 @@ class UserCredit(UserCreditBase):
async def get_or_refill_credit(self, user_id: str) -> int:
cur_time = self.time_now()
cur_month = cur_time.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
nxt_month = cur_month.replace(month=cur_month.month + 1)
nxt_month = (
cur_month.replace(month=cur_month.month + 1)
if cur_month.month < 12
else cur_month.replace(year=cur_month.year + 1, month=1)
)
user_credit = await UserBlockCredit.prisma().group_by(
by=["userId"],
@@ -271,7 +102,7 @@ class UserCredit(UserCreditBase):
"createdAt": self.time_now(),
}
)
except prisma.errors.UniqueViolationError:
except UniqueViolationError:
pass # Already refilled this month
return self.num_user_credits_refill
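
The December hotfix in isolation: `datetime.replace(month=13)` raises, so the year has to roll over explicitly.

```python
from datetime import datetime, timezone

cur_month = datetime(2024, 12, 1, tzinfo=timezone.utc)

# Old behaviour: month 13 is invalid.
try:
    cur_month.replace(month=cur_month.month + 1)
except ValueError as e:
    print(e)  # month must be in 1..12

# Fixed behaviour: roll over to January of the next year.
nxt_month = (
    cur_month.replace(month=cur_month.month + 1)
    if cur_month.month < 12
    else cur_month.replace(year=cur_month.year + 1, month=1)
)
assert nxt_month == datetime(2025, 1, 1, tzinfo=timezone.utc)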
@@ -280,8 +111,8 @@ class UserCredit(UserCreditBase):
def time_now():
return datetime.now(timezone.utc)
@staticmethod
def _block_usage_cost(
self,
block: Block,
input_data: BlockInput,
data_size: float,
@@ -292,28 +123,44 @@ class UserCredit(UserCreditBase):
return 0, {}
for block_cost in block_costs:
if all(
# None, [], {}, "", are considered the same value.
input_data.get(k) == b or (not input_data.get(k) and not b)
for k, b in block_cost.cost_filter.items()
):
if block_cost.cost_type == BlockCostType.RUN:
return block_cost.cost_amount, block_cost.cost_filter
if not self._is_cost_filter_match(block_cost.cost_filter, input_data):
continue
if block_cost.cost_type == BlockCostType.SECOND:
return (
int(run_time * block_cost.cost_amount),
block_cost.cost_filter,
)
if block_cost.cost_type == BlockCostType.RUN:
return block_cost.cost_amount, block_cost.cost_filter
if block_cost.cost_type == BlockCostType.BYTE:
return (
int(data_size * block_cost.cost_amount),
block_cost.cost_filter,
)
if block_cost.cost_type == BlockCostType.SECOND:
return (
int(run_time * block_cost.cost_amount),
block_cost.cost_filter,
)
if block_cost.cost_type == BlockCostType.BYTE:
return (
int(data_size * block_cost.cost_amount),
block_cost.cost_filter,
)
return 0, {}
def _is_cost_filter_match(
self, cost_filter: BlockInput, input_data: BlockInput
) -> bool:
"""
Filter rules:
- If costFilter is an object, check whether costFilter is a subset of inputValues.
- Otherwise, check whether costFilter is equal to inputValues.
- Undefined, null, and empty string are considered equal.
"""
if not isinstance(cost_filter, dict) or not isinstance(input_data, dict):
return cost_filter == input_data
return all(
(not input_data.get(k) and not v)
or (input_data.get(k) and self._is_cost_filter_match(v, input_data[k]))
for k, v in cost_filter.items()
)
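
A standalone copy of the matching rules for illustration: a filter matches when it is a recursive subset of the input, with falsy values treated as interchangeable.

```python
def is_match(cost_filter, input_data) -> bool:
    if not isinstance(cost_filter, dict) or not isinstance(input_data, dict):
        return cost_filter == input_data
    return all(
        (not input_data.get(k) and not v)
        or (input_data.get(k) and is_match(v, input_data[k]))
        for k, v in cost_filter.items()
    )

assert is_match({"model": "gpt-4o"}, {"model": "gpt-4o", "prompt": "hi"})
assert is_match({"api_key": None}, {"api_key": ""})   # falsy values match each other
assert not is_match({"model": "gpt-4o"}, {"model": "o1"})
```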
async def spend_credits(
self,
user_id: str,
@@ -377,7 +224,6 @@ class DisabledUserCredit(UserCreditBase):
def get_user_credit_model() -> UserCreditBase:
config = Config()
if config.enable_credit.lower() == "true":
return UserCredit(config.num_user_credits_refill)
else:

View File

@@ -23,15 +23,23 @@ logger = logging.getLogger(__name__)
async def connect():
if prisma.is_connected():
return
await prisma.connect()
if not prisma.is_connected():
raise ConnectionError("Failed to connect to Prisma.")
@conn_retry("Prisma", "Releasing connection")
async def disconnect():
if not prisma.is_connected():
return
await prisma.disconnect()
if prisma.is_connected():
raise ConnectionError("Failed to disconnect from Prisma.")
@asynccontextmanager
async def transaction():

View File

@@ -1,7 +1,7 @@
from collections import defaultdict
from datetime import datetime, timezone
from multiprocessing import Manager
from typing import Any, Generic, TypeVar
from typing import Any, AsyncGenerator, Generator, Generic, TypeVar
from prisma.enums import AgentExecutionStatus
from prisma.models import (
@@ -14,7 +14,9 @@ from pydantic import BaseModel
from backend.data.block import BlockData, BlockInput, CompletedBlockOutput
from backend.data.includes import EXECUTION_RESULT_INCLUDE, GRAPH_EXECUTION_INCLUDE
from backend.data.queue import AsyncRedisEventBus, RedisEventBus
from backend.util import json, mock
from backend.util.settings import Config
class GraphExecution(BaseModel):
@@ -64,6 +66,7 @@ class ExecutionResult(BaseModel):
graph_exec_id: str
node_exec_id: str
node_id: str
block_id: str
status: ExecutionStatus
input_data: BlockInput
output_data: CompletedBlockOutput
@@ -72,11 +75,31 @@ class ExecutionResult(BaseModel):
start_time: datetime | None
end_time: datetime | None
@staticmethod
def from_graph(graph: AgentGraphExecution):
return ExecutionResult(
graph_id=graph.agentGraphId,
graph_version=graph.agentGraphVersion,
graph_exec_id=graph.id,
node_exec_id="",
node_id="",
block_id="",
status=graph.executionStatus,
# TODO: Populate input_data & output_data from AgentNodeExecutions
# Input & Output comes AgentInputBlock & AgentOutputBlock.
input_data={},
output_data={},
add_time=graph.createdAt,
queue_time=graph.createdAt,
start_time=graph.startedAt,
end_time=graph.updatedAt,
)
@staticmethod
def from_db(execution: AgentNodeExecution):
if execution.executionData:
# An execution that has been queued will have its input data persisted.
input_data = json.loads(execution.executionData)
input_data = json.loads(execution.executionData, target_type=dict[str, Any])
else:
# For an incomplete execution, executionData will not yet be available.
input_data: BlockInput = defaultdict()
@@ -93,9 +116,10 @@ class ExecutionResult(BaseModel):
graph_id=graph_execution.agentGraphId if graph_execution else "",
graph_version=graph_execution.agentGraphVersion if graph_execution else 0,
graph_exec_id=execution.agentGraphExecutionId,
block_id=execution.AgentNode.agentBlockId if execution.AgentNode else "",
node_exec_id=execution.id,
node_id=execution.agentNodeId,
status=ExecutionStatus(execution.executionStatus),
status=execution.executionStatus,
input_data=input_data,
output_data=output_data,
add_time=execution.addedTime,
@@ -248,15 +272,19 @@ async def update_graph_execution_start_time(graph_exec_id: str):
async def update_graph_execution_stats(
graph_exec_id: str,
stats: dict[str, Any],
):
) -> ExecutionResult:
status = ExecutionStatus.FAILED if stats.get("error") else ExecutionStatus.COMPLETED
await AgentGraphExecution.prisma().update(
res = await AgentGraphExecution.prisma().update(
where={"id": graph_exec_id},
data={
"executionStatus": status,
"stats": json.dumps(stats),
},
)
if not res:
raise ValueError(f"Execution {graph_exec_id} not found.")
return ExecutionResult.from_graph(res)
async def update_node_execution_stats(node_exec_id: str, stats: dict[str, Any]):
@@ -444,3 +472,42 @@ async def get_incomplete_executions(
include=EXECUTION_RESULT_INCLUDE,
)
return [ExecutionResult.from_db(execution) for execution in executions]
# --------------------- Event Bus --------------------- #
config = Config()
class RedisExecutionEventBus(RedisEventBus[ExecutionResult]):
Model = ExecutionResult
@property
def event_bus_name(self) -> str:
return config.execution_event_bus_name
def publish(self, res: ExecutionResult):
self.publish_event(res, f"{res.graph_id}/{res.graph_exec_id}")
def listen(
self, graph_id: str = "*", graph_exec_id: str = "*"
) -> Generator[ExecutionResult, None, None]:
for execution_result in self.listen_events(f"{graph_id}/{graph_exec_id}"):
yield execution_result
class AsyncRedisExecutionEventBus(AsyncRedisEventBus[ExecutionResult]):
Model = ExecutionResult
@property
def event_bus_name(self) -> str:
return config.execution_event_bus_name
async def publish(self, res: ExecutionResult):
await self.publish_event(res, f"{res.graph_id}/{res.graph_exec_id}")
async def listen(
self, graph_id: str = "*", graph_exec_id: str = "*"
) -> AsyncGenerator[ExecutionResult, None]:
async for execution_result in self.listen_events(f"{graph_id}/{graph_exec_id}"):
yield execution_result
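
A hypothetical consumer of the synchronous bus, assuming this module is importable as `backend.data.execution` and Redis is reachable (`"*"` patterns subscribe to all executions):

```python
from backend.data.execution import RedisExecutionEventBus

bus = RedisExecutionEventBus()
for result in bus.listen(graph_id="my-graph-id"):  # graph_exec_id defaults to "*"
    print(result.node_exec_id, result.status)
```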

View File

@@ -3,19 +3,23 @@ import logging
import uuid
from collections import defaultdict
from datetime import datetime, timezone
from typing import Any, Literal, Type
from typing import Any, Literal, Optional, Type
import prisma
from prisma.models import AgentGraph, AgentGraphExecution, AgentNode, AgentNodeLink
from prisma.types import AgentGraphWhereInput
from pydantic.fields import computed_field
from backend.blocks.agent import AgentExecutorBlock
from backend.blocks.basic import AgentInputBlock, AgentOutputBlock
from backend.data.block import BlockInput, BlockType, get_block, get_blocks
from backend.data.db import BaseDbModel, transaction
from backend.data.execution import ExecutionStatus
from backend.data.includes import AGENT_GRAPH_INCLUDE, AGENT_NODE_INCLUDE
from backend.util import json
from .block import BlockInput, BlockType, get_block, get_blocks
from .db import BaseDbModel, transaction
from .execution import ExecutionStatus
from .includes import AGENT_GRAPH_INCLUDE, AGENT_NODE_INCLUDE
from .integrations import Webhook
logger = logging.getLogger(__name__)
@@ -48,20 +52,51 @@ class Node(BaseDbModel):
input_links: list[Link] = []
output_links: list[Link] = []
webhook_id: Optional[str] = None
class NodeModel(Node):
graph_id: str
graph_version: int
webhook: Optional[Webhook] = None
@staticmethod
def from_db(node: AgentNode):
if not node.AgentBlock:
raise ValueError(f"Invalid node {node.id}, invalid AgentBlock.")
obj = Node(
obj = NodeModel(
id=node.id,
block_id=node.AgentBlock.id,
input_default=json.loads(node.constantInput),
metadata=json.loads(node.metadata),
input_default=json.loads(node.constantInput, target_type=dict[str, Any]),
metadata=json.loads(node.metadata, target_type=dict[str, Any]),
graph_id=node.agentGraphId,
graph_version=node.agentGraphVersion,
webhook_id=node.webhookId,
webhook=Webhook.from_db(node.Webhook) if node.Webhook else None,
)
obj.input_links = [Link.from_db(link) for link in node.Input or []]
obj.output_links = [Link.from_db(link) for link in node.Output or []]
return obj
def is_triggered_by_event_type(self, event_type: str) -> bool:
if not (block := get_block(self.block_id)):
raise ValueError(f"Block #{self.block_id} not found for node #{self.id}")
if not block.webhook_config:
raise TypeError("This method can't be used on non-webhook blocks")
event_filter = self.input_default.get(block.webhook_config.event_filter_input)
if not event_filter:
raise ValueError(f"Event filter is not configured on node #{self.id}")
return event_type in [
block.webhook_config.event_format.format(event=k)
for k in event_filter
if event_filter[k] is True
]
# Fix 2-way reference Node <-> Webhook
Webhook.model_rebuild()
class GraphExecution(BaseDbModel):
execution_id: str
@@ -79,10 +114,13 @@ class GraphExecution(BaseDbModel):
duration = (end_time - start_time).total_seconds()
total_run_time = duration
if execution.stats:
stats = json.loads(execution.stats)
duration = stats.get("walltime", duration)
total_run_time = stats.get("nodes_walltime", total_run_time)
try:
stats = json.loads(execution.stats or "{}", target_type=dict[str, Any])
except ValueError:
stats = {}
duration = stats.get("walltime", duration)
total_run_time = stats.get("nodes_walltime", total_run_time)
return GraphExecution(
id=execution.id,
@@ -105,33 +143,6 @@ class Graph(BaseDbModel):
nodes: list[Node] = []
links: list[Link] = []
@staticmethod
def _generate_schema(
type_class: Type[AgentInputBlock.Input] | Type[AgentOutputBlock.Input],
data: list[dict],
) -> dict[str, Any]:
props = []
for p in data:
try:
props.append(type_class(**p))
except Exception as e:
logger.warning(f"Invalid {type_class}: {p}, {e}")
return {
"type": "object",
"properties": {
p.name: {
"secret": p.secret,
"advanced": p.advanced,
"title": p.title or p.name,
**({"description": p.description} if p.description else {}),
**({"default": p.value} if p.value is not None else {}),
}
for p in props
},
"required": [p.name for p in props if p.value is None],
}
@computed_field
@property
def input_schema(self) -> dict[str, Any]:
@@ -160,6 +171,38 @@ class Graph(BaseDbModel):
],
)
@staticmethod
def _generate_schema(
type_class: Type[AgentInputBlock.Input] | Type[AgentOutputBlock.Input],
data: list[dict],
) -> dict[str, Any]:
props = []
for p in data:
try:
props.append(type_class(**p))
except Exception as e:
logger.warning(f"Invalid {type_class}: {p}, {e}")
return {
"type": "object",
"properties": {
p.name: {
"secret": p.secret,
"advanced": p.advanced,
"title": p.title or p.name,
**({"description": p.description} if p.description else {}),
**({"default": p.value} if p.value is not None else {}),
}
for p in props
},
"required": [p.name for p in props if p.value is None],
}
class GraphModel(Graph):
user_id: str
nodes: list[NodeModel] = [] # type: ignore
@property
def starting_nodes(self) -> list[Node]:
outbound_nodes = {link.sink_id for link in self.links}
@@ -174,24 +217,35 @@ class Graph(BaseDbModel):
if node.id not in outbound_nodes or node.id in input_nodes
]
def reassign_ids(self, reassign_graph_id: bool = False):
def reassign_ids(self, user_id: str, reassign_graph_id: bool = False):
"""
Reassigns all IDs in the graph to new UUIDs.
This method can be used before storing a new graph to the database.
"""
self.validate_graph()
# Reassign Graph ID
id_map = {node.id: str(uuid.uuid4()) for node in self.nodes}
if reassign_graph_id:
self.id = str(uuid.uuid4())
# Reassign Node IDs
for node in self.nodes:
node.id = id_map[node.id]
# Reassign Link IDs
for link in self.links:
link.source_id = id_map[link.source_id]
link.sink_id = id_map[link.sink_id]
# Reassign User IDs for agent blocks
for node in self.nodes:
if node.block_id != AgentExecutorBlock().id:
continue
node.input_default["user_id"] = user_id
node.input_default.setdefault("data", {})
self.validate_graph()
def validate_graph(self, for_run: bool = False):
def sanitize(name):
return name.split("_#_")[0].split("_@_")[0].split("_$_")[0]
@@ -200,7 +254,7 @@ class Graph(BaseDbModel):
for link in self.links:
input_links[link.sink_id].append(link)
# Nodes: required fields are filled or connected
# Nodes: required fields are filled or connected and dependencies are satisfied
for node in self.nodes:
block = get_block(node.block_id)
if block is None:
@@ -215,11 +269,44 @@ class Graph(BaseDbModel):
for_run # Skip input completion validation, unless when executing.
or block.block_type == BlockType.INPUT
or block.block_type == BlockType.OUTPUT
or block.block_type == BlockType.AGENT
):
raise ValueError(
f"Node {block.name} #{node.id} required input missing: `{name}`"
)
# Get input schema properties and check dependencies
input_schema = block.input_schema.model_fields
required_fields = block.input_schema.get_required_fields()
def has_value(name):
return (
node is not None
and name in node.input_default
and node.input_default[name] is not None
and str(node.input_default[name]).strip() != ""
) or (name in input_schema and input_schema[name].default is not None)
# Validate dependencies between fields
for field_name, field_info in input_schema.items():
# Only validate input dependencies when executing and the field declares depends_on
json_schema_extra = field_info.json_schema_extra or {}
dependencies = json_schema_extra.get("depends_on", [])
if not for_run or not dependencies:
continue
# Check if dependent field has value in input_default
field_has_value = has_value(field_name)
field_is_required = field_name in required_fields
# Check for missing dependencies when dependent field is present
missing_deps = [dep for dep in dependencies if not has_value(dep)]
if missing_deps and (field_has_value or field_is_required):
raise ValueError(
f"Node {block.name} #{node.id}: Field `{field_name}` requires [{', '.join(missing_deps)}] to be set"
)
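
What a `depends_on` declaration looks like on the schema side, using the `SchemaField` parameter added in this changeset (field names hypothetical):

```python
from backend.data.block import BlockSchema
from backend.data.model import SchemaField

class Input(BlockSchema):
    host: str = SchemaField(description="Server host", default="")
    # Validation on run fails if `port` is set (or required) while `host` is empty.
    port: int = SchemaField(description="Server port", default=0, depends_on=["host"])
```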
node_map = {v.id: v for v in self.nodes}
def is_static_output_block(nid: str) -> bool:
@@ -248,52 +335,65 @@ class Graph(BaseDbModel):
)
sanitized_name = sanitize(name)
vals = node.input_default
if i == 0:
fields = f"Valid output fields: {block.output_schema.get_fields()}"
fields = (
block.output_schema.get_fields()
if block.block_type != BlockType.AGENT
else vals.get("output_schema", {}).get("properties", {}).keys()
)
else:
fields = f"Valid input fields: {block.input_schema.get_fields()}"
fields = (
block.input_schema.get_fields()
if block.block_type != BlockType.AGENT
else vals.get("input_schema", {}).get("properties", {}).keys()
)
if sanitized_name not in fields:
raise ValueError(f"{suffix}, `{name}` invalid, {fields}")
fields_msg = f"Allowed fields: {fields}"
raise ValueError(f"{suffix}, `{name}` invalid, {fields_msg}")
if is_static_output_block(link.source_id):
link.is_static = True # Each value block output should be static.
# TODO: Add type compatibility check here.
@staticmethod
def from_db(graph: AgentGraph, hide_credentials: bool = False):
executions = [
GraphExecution.from_db(execution)
for execution in graph.AgentGraphExecution or []
]
nodes = graph.AgentNodes or []
return Graph(
return GraphModel(
id=graph.id,
user_id=graph.userId,
version=graph.version,
is_active=graph.isActive,
is_template=graph.isTemplate,
name=graph.name or "",
description=graph.description or "",
executions=executions,
nodes=[Graph._process_node(node, hide_credentials) for node in nodes],
nodes=[
GraphModel._process_node(node, hide_credentials)
for node in graph.AgentNodes or []
],
links=list(
{
Link.from_db(link)
for node in nodes
for node in graph.AgentNodes or []
for link in (node.Input or []) + (node.Output or [])
}
),
)
@staticmethod
def _process_node(node: AgentNode, hide_credentials: bool) -> Node:
node_dict = node.model_dump()
def _process_node(node: AgentNode, hide_credentials: bool) -> NodeModel:
node_dict = {field: getattr(node, field) for field in node.model_fields}
if hide_credentials and "constantInput" in node_dict:
constant_input = json.loads(node_dict["constantInput"])
constant_input = Graph._hide_credentials_in_input(constant_input)
constant_input = json.loads(
node_dict["constantInput"], target_type=dict[str, Any]
)
constant_input = GraphModel._hide_credentials_in_input(constant_input)
node_dict["constantInput"] = json.dumps(constant_input)
return Node.from_db(AgentNode(**node_dict))
return NodeModel.from_db(AgentNode(**node_dict))
@staticmethod
def _hide_credentials_in_input(input_data: dict[str, Any]) -> dict[str, Any]:
@@ -301,7 +401,7 @@ class Graph(BaseDbModel):
result = {}
for key, value in input_data.items():
if isinstance(value, dict):
result[key] = Graph._hide_credentials_in_input(value)
result[key] = GraphModel._hide_credentials_in_input(value)
elif isinstance(value, str) and any(
sensitive_key in key.lower() for sensitive_key in sensitive_keys
):
@@ -312,22 +412,37 @@ class Graph(BaseDbModel):
return result
# --------------------- Model functions --------------------- #
# --------------------- CRUD functions --------------------- #
async def get_node(node_id: str) -> Node:
async def get_node(node_id: str) -> NodeModel:
node = await AgentNode.prisma().find_unique_or_raise(
where={"id": node_id},
include=AGENT_NODE_INCLUDE,
)
return Node.from_db(node)
return NodeModel.from_db(node)
async def set_node_webhook(node_id: str, webhook_id: str | None) -> NodeModel:
node = await AgentNode.prisma().update(
where={"id": node_id},
data=(
{"Webhook": {"connect": {"id": webhook_id}}}
if webhook_id
else {"Webhook": {"disconnect": True}}
),
include=AGENT_NODE_INCLUDE,
)
if not node:
raise ValueError(f"Node #{node_id} not found")
return NodeModel.from_db(node)
async def get_graphs(
user_id: str,
include_executions: bool = False,
filter_by: Literal["active", "template"] | None = "active",
) -> list[Graph]:
) -> list[GraphModel]:
"""
Retrieves graph metadata objects.
Default behaviour is to get all currently active graphs.
@@ -338,7 +453,7 @@ async def get_graphs(
user_id: The ID of the user that owns the graph.
Returns:
list[Graph]: A list of objects representing the retrieved graph metadata.
list[GraphModel]: A list of objects representing the retrieved graphs.
"""
where_clause: AgentGraphWhereInput = {}
@@ -359,7 +474,7 @@ async def get_graphs(
include=graph_include,
)
return [Graph.from_db(graph) for graph in graphs]
return [GraphModel.from_db(graph) for graph in graphs]
async def get_graph(
@@ -368,7 +483,7 @@ async def get_graph(
template: bool = False,
user_id: str | None = None,
hide_credentials: bool = False,
) -> Graph | None:
) -> GraphModel | None:
"""
Retrieves a graph from the DB.
Defaults to the version with `is_active` if `version` is not passed,
@@ -393,38 +508,35 @@ async def get_graph(
include=AGENT_GRAPH_INCLUDE,
order={"version": "desc"},
)
return Graph.from_db(graph, hide_credentials) if graph else None
return GraphModel.from_db(graph, hide_credentials) if graph else None
async def set_graph_active_version(graph_id: str, version: int, user_id: str) -> None:
# Check if the graph belongs to the user
graph = await AgentGraph.prisma().find_first(
# Activate the requested version if it exists and is owned by the user.
updated_count = await AgentGraph.prisma().update_many(
data={"isActive": True},
where={
"id": graph_id,
"version": version,
"userId": user_id,
}
)
if not graph:
raise Exception(f"Graph #{graph_id} v{version} not found or not owned by user")
updated_graph = await AgentGraph.prisma().update(
data={"isActive": True},
where={
"graphVersionId": {"id": graph_id, "version": version},
},
)
if not updated_graph:
raise Exception(f"Graph #{graph_id} v{version} not found")
if updated_count == 0:
raise Exception(f"Graph #{graph_id} v{version} not found or not owned by user")
# Deactivate all other versions
# Deactivate all other versions.
await AgentGraph.prisma().update_many(
data={"isActive": False},
where={"id": graph_id, "version": {"not": version}, "userId": user_id},
where={
"id": graph_id,
"version": {"not": version},
"userId": user_id,
"isActive": True,
},
)
async def get_graph_all_versions(graph_id: str, user_id: str) -> list[Graph]:
async def get_graph_all_versions(graph_id: str, user_id: str) -> list[GraphModel]:
graph_versions = await AgentGraph.prisma().find_many(
where={"id": graph_id, "userId": user_id},
order={"version": "desc"},
@@ -434,7 +546,7 @@ async def get_graph_all_versions(graph_id: str, user_id: str) -> list[Graph]:
if not graph_versions:
return []
return [Graph.from_db(graph) for graph in graph_versions]
return [GraphModel.from_db(graph) for graph in graph_versions]
async def delete_graph(graph_id: str, user_id: str) -> int:
@@ -446,7 +558,7 @@ async def delete_graph(graph_id: str, user_id: str) -> int:
return entries_count
async def create_graph(graph: Graph, user_id: str) -> Graph:
async def create_graph(graph: Graph, user_id: str) -> GraphModel:
async with transaction() as tx:
await __create_graph(tx, graph, user_id)
@@ -502,3 +614,105 @@ async def __create_graph(tx, graph: Graph, user_id: str):
for link in graph.links
]
)
# ------------------------ UTILITIES ------------------------ #
def make_graph_model(creatable_graph: Graph, user_id: str) -> GraphModel:
"""
Convert a Graph to a GraphModel, setting graph_id and graph_version on all nodes.
Args:
creatable_graph (Graph): The creatable graph to convert.
user_id (str): The ID of the user creating the graph.
Returns:
GraphModel: The converted Graph object.
"""
# Create a new Graph object, inheriting properties from CreatableGraph
return GraphModel(
**creatable_graph.model_dump(exclude={"nodes"}),
user_id=user_id,
nodes=[
NodeModel(
**creatable_node.model_dump(),
graph_id=creatable_graph.id,
graph_version=creatable_graph.version,
)
for creatable_node in creatable_graph.nodes
],
)
async def fix_llm_provider_credentials():
"""Fix node credentials with provider `llm`"""
from backend.integrations.credentials_store import IntegrationCredentialsStore
from .user import get_user_integrations
store = IntegrationCredentialsStore()
broken_nodes = await prisma.get_client().query_raw(
"""
SELECT graph."userId" user_id,
node.id node_id,
node."constantInput" node_preset_input
FROM platform."AgentNode" node
LEFT JOIN platform."AgentGraph" graph
ON node."agentGraphId" = graph.id
WHERE node."constantInput"::jsonb->'credentials'->>'provider' = 'llm'
ORDER BY graph."userId";
"""
)
logger.info(f"Fixing LLM credential inputs on {len(broken_nodes)} nodes")
user_id: str = ""
user_integrations = None
for node in broken_nodes:
if node["user_id"] != user_id:
# Save queries by only fetching once per user
user_id = node["user_id"]
user_integrations = await get_user_integrations(user_id)
elif not user_integrations:
raise RuntimeError(f"Impossible state while processing node {node}")
node_id: str = node["node_id"]
node_preset_input: dict = json.loads(node["node_preset_input"])
credentials_meta: dict = node_preset_input["credentials"]
credentials = next(
(
c
for c in user_integrations.credentials
if c.id == credentials_meta["id"]
),
None,
)
if not credentials:
continue
if credentials.type != "api_key":
logger.warning(
f"User {user_id} credentials {credentials.id} with provider 'llm' "
f"has invalid type '{credentials.type}'"
)
continue
api_key = credentials.api_key.get_secret_value()
if api_key.startswith("sk-ant-api03-"):
credentials.provider = credentials_meta["provider"] = "anthropic"
elif api_key.startswith("sk-"):
credentials.provider = credentials_meta["provider"] = "openai"
elif api_key.startswith("gsk_"):
credentials.provider = credentials_meta["provider"] = "groq"
else:
logger.warning(
f"Could not identify provider from key prefix {api_key[:13]}*****"
)
continue
store.update_creds(user_id, credentials)
await AgentNode.prisma().update(
where={"id": node_id},
data={"constantInput": json.dumps(node_preset_input)},
)

View File

@@ -3,6 +3,7 @@ import prisma
AGENT_NODE_INCLUDE: prisma.types.AgentNodeInclude = {
"Input": True,
"Output": True,
"Webhook": True,
"AgentBlock": True,
}
@@ -27,3 +28,7 @@ GRAPH_EXECUTION_INCLUDE: prisma.types.AgentGraphExecutionInclude = {
}
}
}
INTEGRATION_WEBHOOK_INCLUDE: prisma.types.IntegrationWebhookInclude = {
"AgentNodes": {"include": AGENT_NODE_INCLUDE} # type: ignore
}

View File

@@ -0,0 +1,168 @@
import logging
from typing import TYPE_CHECKING, AsyncGenerator, Optional
from prisma import Json
from prisma.models import IntegrationWebhook
from pydantic import Field
from backend.data.includes import INTEGRATION_WEBHOOK_INCLUDE
from backend.data.queue import AsyncRedisEventBus
from .db import BaseDbModel
if TYPE_CHECKING:
from .graph import NodeModel
logger = logging.getLogger(__name__)
class Webhook(BaseDbModel):
user_id: str
provider: str
credentials_id: str
webhook_type: str
resource: str
events: list[str]
config: dict = Field(default_factory=dict)
secret: str
provider_webhook_id: str
attached_nodes: Optional[list["NodeModel"]] = None
@staticmethod
def from_db(webhook: IntegrationWebhook):
from .graph import NodeModel
return Webhook(
id=webhook.id,
user_id=webhook.userId,
provider=webhook.provider,
credentials_id=webhook.credentialsId,
webhook_type=webhook.webhookType,
resource=webhook.resource,
events=webhook.events,
config=dict(webhook.config),
secret=webhook.secret,
provider_webhook_id=webhook.providerWebhookId,
attached_nodes=(
[NodeModel.from_db(node) for node in webhook.AgentNodes]
if webhook.AgentNodes is not None
else None
),
)
# --------------------- CRUD functions --------------------- #
async def create_webhook(webhook: Webhook) -> Webhook:
created_webhook = await IntegrationWebhook.prisma().create(
data={
"id": webhook.id,
"userId": webhook.user_id,
"provider": webhook.provider,
"credentialsId": webhook.credentials_id,
"webhookType": webhook.webhook_type,
"resource": webhook.resource,
"events": webhook.events,
"config": Json(webhook.config),
"secret": webhook.secret,
"providerWebhookId": webhook.provider_webhook_id,
}
)
return Webhook.from_db(created_webhook)
async def get_webhook(webhook_id: str) -> Webhook:
"""⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
webhook = await IntegrationWebhook.prisma().find_unique_or_raise(
where={"id": webhook_id},
include=INTEGRATION_WEBHOOK_INCLUDE,
)
return Webhook.from_db(webhook)
async def get_all_webhooks(credentials_id: str) -> list[Webhook]:
"""⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
webhooks = await IntegrationWebhook.prisma().find_many(
where={"credentialsId": credentials_id},
include=INTEGRATION_WEBHOOK_INCLUDE,
)
return [Webhook.from_db(webhook) for webhook in webhooks]
async def find_webhook(
credentials_id: str, webhook_type: str, resource: str, events: list[str]
) -> Webhook | None:
"""⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
webhook = await IntegrationWebhook.prisma().find_first(
where={
"credentialsId": credentials_id,
"webhookType": webhook_type,
"resource": resource,
"events": {"has_every": events},
},
include=INTEGRATION_WEBHOOK_INCLUDE,
)
return Webhook.from_db(webhook) if webhook else None
async def update_webhook_config(webhook_id: str, updated_config: dict) -> Webhook:
"""⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
_updated_webhook = await IntegrationWebhook.prisma().update(
where={"id": webhook_id},
data={"config": Json(updated_config)},
include=INTEGRATION_WEBHOOK_INCLUDE,
)
if _updated_webhook is None:
raise ValueError(f"Webhook #{webhook_id} not found")
return Webhook.from_db(_updated_webhook)
async def delete_webhook(webhook_id: str) -> None:
"""⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
deleted = await IntegrationWebhook.prisma().delete(where={"id": webhook_id})
if not deleted:
raise ValueError(f"Webhook #{webhook_id} not found")
# --------------------- WEBHOOK EVENTS --------------------- #
class WebhookEvent(BaseDbModel):
provider: str
webhook_id: str
event_type: str
payload: dict
class WebhookEventBus(AsyncRedisEventBus[WebhookEvent]):
Model = WebhookEvent
@property
def event_bus_name(self) -> str:
return "webhooks"
async def publish(self, event: WebhookEvent):
await self.publish_event(event, f"{event.webhook_id}/{event.event_type}")
async def listen(
self, webhook_id: str, event_type: Optional[str] = None
) -> AsyncGenerator[WebhookEvent, None]:
async for event in self.listen_events(f"{webhook_id}/{event_type or '*'}"):
yield event
event_bus = WebhookEventBus()
async def publish_webhook_event(event: WebhookEvent):
await event_bus.publish(event)
async def listen_for_webhook_event(
webhook_id: str, event_type: Optional[str] = None
) -> WebhookEvent | None:
async for event in event_bus.listen(webhook_id, event_type):
return event # Only one event is expected
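A hedged usage sketch of the event-bus helpers above, assuming a reachable Redis instance and that BaseDbModel supplies a default id:

import asyncio

async def demo(webhook_id: str) -> None:
    # Subscribe before publishing: Redis pub/sub has no replay, so a
    # listener only sees events published after its subscription exists.
    listener = asyncio.create_task(
        listen_for_webhook_event(webhook_id, event_type="ping")
    )
    await asyncio.sleep(0.1)  # give the subscription time to register
    await publish_webhook_event(
        WebhookEvent(
            provider="github",
            webhook_id=webhook_id,
            event_type="ping",
            payload={"zen": "Keep it logically awesome."},
        )
    )
    event = await listener
    if event:
        print(f"Received '{event.event_type}' event from {event.provider}")

# asyncio.run(demo("<webhook-id>"))  # requires a running Redis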

View File

@@ -1,10 +1,20 @@
from __future__ import annotations
import logging
from typing import Any, Callable, ClassVar, Generic, Optional, TypeVar
from typing import (
Annotated,
Any,
Callable,
ClassVar,
Generic,
Literal,
Optional,
TypedDict,
TypeVar,
)
from uuid import uuid4
from autogpt_libs.supabase_integration_credentials_store.types import CredentialsType
from pydantic import BaseModel, Field, GetCoreSchemaHandler
from pydantic import BaseModel, Field, GetCoreSchemaHandler, SecretStr, field_serializer
from pydantic_core import (
CoreSchema,
PydanticUndefined,
@@ -113,6 +123,8 @@ def SchemaField(
advanced: Optional[bool] = None,
secret: bool = False,
exclude: bool = False,
hidden: Optional[bool] = None,
depends_on: list[str] | None = None,
**kwargs,
) -> T:
json_extra = {
@@ -121,6 +133,8 @@ def SchemaField(
"placeholder": placeholder,
"secret": secret,
"advanced": advanced,
"hidden": hidden,
"depends_on": depends_on,
}.items()
if v is not None
}
@@ -137,6 +151,77 @@ def SchemaField(
)
class _BaseCredentials(BaseModel):
id: str = Field(default_factory=lambda: str(uuid4()))
provider: str
title: Optional[str]
@field_serializer("*")
def dump_secret_strings(value: Any, _info):
if isinstance(value, SecretStr):
return value.get_secret_value()
return value
class OAuth2Credentials(_BaseCredentials):
type: Literal["oauth2"] = "oauth2"
username: Optional[str]
"""Username of the third-party service user that these credentials belong to"""
access_token: SecretStr
access_token_expires_at: Optional[int]
"""Unix timestamp (seconds) indicating when the access token expires (if at all)"""
refresh_token: Optional[SecretStr]
refresh_token_expires_at: Optional[int]
"""Unix timestamp (seconds) indicating when the refresh token expires (if at all)"""
scopes: list[str]
metadata: dict[str, Any] = Field(default_factory=dict)
def bearer(self) -> str:
return f"Bearer {self.access_token.get_secret_value()}"
class APIKeyCredentials(_BaseCredentials):
type: Literal["api_key"] = "api_key"
api_key: SecretStr
expires_at: Optional[int]
"""Unix timestamp (seconds) indicating when the API key expires (if at all)"""
def bearer(self) -> str:
return f"Bearer {self.api_key.get_secret_value()}"
Credentials = Annotated[
OAuth2Credentials | APIKeyCredentials,
Field(discriminator="type"),
]
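Because the union is discriminated on the type field, raw credential data validates straight to the concrete class. A quick sketch using pydantic v2's TypeAdapter (values invented):

from pydantic import TypeAdapter

raw = {
    "id": "11111111-1111-1111-1111-111111111111",
    "provider": "openai",
    "title": None,
    "type": "api_key",  # discriminator selects APIKeyCredentials
    "api_key": "sk-test",
    "expires_at": None,
}
creds = TypeAdapter(Credentials).validate_python(raw)
assert isinstance(creds, APIKeyCredentials)
assert creds.api_key.get_secret_value() == "sk-test"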
CredentialsType = Literal["api_key", "oauth2"]
class OAuthState(BaseModel):
token: str
provider: str
expires_at: int
"""Unix timestamp (seconds) indicating when this OAuth state expires"""
scopes: list[str]
class UserMetadata(BaseModel):
integration_credentials: list[Credentials] = Field(default_factory=list)
integration_oauth_states: list[OAuthState] = Field(default_factory=list)
class UserMetadataRaw(TypedDict, total=False):
integration_credentials: list[dict]
integration_oauth_states: list[dict]
class UserIntegrations(BaseModel):
credentials: list[Credentials] = Field(default_factory=list)
oauth_states: list[OAuthState] = Field(default_factory=list)
CP = TypeVar("CP", bound=str)
CT = TypeVar("CT", bound=CredentialsType)
@@ -151,11 +236,22 @@ class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
type: CT
class CredentialsFieldSchemaExtra(BaseModel, Generic[CP, CT]):
# TODO: move discrimination mechanism out of CredentialsField (frontend + backend)
credentials_provider: list[CP]
credentials_scopes: Optional[list[str]]
credentials_types: list[CT]
discriminator: Optional[str] = None
discriminator_mapping: Optional[dict[str, CP]] = None
def CredentialsField(
provider: CP,
provider: CP | list[CP],
supported_credential_types: set[CT],
required_scopes: set[str] = set(),
*,
discriminator: Optional[str] = None,
discriminator_mapping: Optional[dict[str, Any]] = None,
title: Optional[str] = None,
description: Optional[str] = None,
**kwargs,
@@ -164,20 +260,21 @@ def CredentialsField(
`CredentialsField` must and can only be used on fields named `credentials`.
This is enforced by the `BlockSchema` base class.
"""
json_extra = {
k: v
for k, v in {
"credentials_provider": provider,
"credentials_scopes": list(required_scopes) or None, # omit if empty
"credentials_types": list(supported_credential_types),
}.items()
if v is not None
}
if not isinstance(provider, str) and len(provider) > 1 and not discriminator:
raise TypeError("Multi-provider CredentialsField requires discriminator!")
field_schema_extra = CredentialsFieldSchemaExtra[CP, CT](
credentials_provider=[provider] if isinstance(provider, str) else provider,
credentials_scopes=list(required_scopes) or None, # omit if empty
credentials_types=list(supported_credential_types),
discriminator=discriminator,
discriminator_mapping=discriminator_mapping,
)
return Field(
title=title,
description=description,
json_schema_extra=json_extra,
json_schema_extra=field_schema_extra.model_dump(exclude_none=True),
**kwargs,
)
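The new depends_on parameter is forwarded into the field's json_schema_extra, where the frontend can require a field once the fields it depends on are filled in. A minimal sketch with invented field names (in the codebase this would live on a BlockSchema, but SchemaField works on any pydantic model):

from pydantic import BaseModel

class ExampleInput(BaseModel):
    base_url: str = SchemaField(
        default="",
        description="Custom API endpoint (optional)",
        advanced=True,
    )
    api_version: str = SchemaField(
        default="",
        description="Required whenever base_url is set",
        depends_on=["base_url"],
    )

# The dependency shows up in the generated JSON schema for the frontend:
print(ExampleInput.model_json_schema()["properties"]["api_version"]["depends_on"])
# -> ['base_url']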

View File

@@ -9,11 +9,8 @@ from redis.asyncio.client import PubSub as AsyncPubSub
from redis.client import PubSub
from backend.data import redis
from backend.data.execution import ExecutionResult
from backend.util.settings import Config
logger = logging.getLogger(__name__)
config = Config()
class DateTimeEncoder(json.JSONEncoder):
@@ -36,7 +33,7 @@ class BaseRedisEventBus(Generic[M], ABC):
def _serialize_message(self, item: M, channel_key: str) -> tuple[str, str]:
message = json.dumps(item.model_dump(), cls=DateTimeEncoder)
channel_name = f"{self.event_bus_name}-{channel_key}"
channel_name = f"{self.event_bus_name}/{channel_key}"
logger.info(f"[{channel_name}] Publishing an event to Redis {message}")
return message, channel_name
@@ -54,7 +51,7 @@ class BaseRedisEventBus(Generic[M], ABC):
def _subscribe(
self, connection: redis.Redis | redis.AsyncRedis, channel_key: str
) -> tuple[PubSub | AsyncPubSub, str]:
channel_name = f"{self.event_bus_name}-{channel_key}"
channel_name = f"{self.event_bus_name}/{channel_key}"
pubsub = connection.pubsub()
return pubsub, channel_name
@@ -108,37 +105,3 @@ class AsyncRedisEventBus(BaseRedisEventBus[M], ABC):
async for message in pubsub.listen():
if event := self._deserialize_message(message, channel_key):
yield event
class RedisExecutionEventBus(RedisEventBus[ExecutionResult]):
Model = ExecutionResult
@property
def event_bus_name(self) -> str:
return config.execution_event_bus_name
def publish(self, res: ExecutionResult):
self.publish_event(res, f"{res.graph_id}-{res.graph_exec_id}")
def listen(
self, graph_id: str = "*", graph_exec_id: str = "*"
) -> Generator[ExecutionResult, None, None]:
for execution_result in self.listen_events(f"{graph_id}-{graph_exec_id}"):
yield execution_result
class AsyncRedisExecutionEventBus(AsyncRedisEventBus[ExecutionResult]):
Model = ExecutionResult
@property
def event_bus_name(self) -> str:
return config.execution_event_bus_name
async def publish(self, res: ExecutionResult):
await self.publish_event(res, f"{res.graph_id}-{res.graph_exec_id}")
async def listen(
self, graph_id: str = "*", graph_exec_id: str = "*"
) -> AsyncGenerator[ExecutionResult, None]:
async for execution_result in self.listen_events(f"{graph_id}-{graph_exec_id}"):
yield execution_result
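The separator change from '-' to '/' in channel names likely matters because the IDs themselves contain hyphens (UUIDs), so '/' keeps the bus name, key segments, and wildcards unambiguous. A standalone redis-py sketch of the pattern subscription this enables, assuming a local Redis server:

import redis

r = redis.Redis()
pubsub = r.pubsub()
pubsub.psubscribe("webhooks/*")  # every event on the "webhooks" bus

r.publish("webhooks/wh-123/ping", '{"event": "ping"}')

for message in pubsub.listen():
    # Skip the subscription confirmation; print the first real event.
    if message["type"] == "pmessage":
        print(message["channel"], message["data"])
        break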

View File

@@ -1,81 +0,0 @@
from datetime import datetime
from typing import Optional
from prisma.models import AgentGraphExecutionSchedule
from backend.data.block import BlockInput
from backend.data.db import BaseDbModel
from backend.util import json
class ExecutionSchedule(BaseDbModel):
graph_id: str
user_id: str
graph_version: int
schedule: str
is_enabled: bool
input_data: BlockInput
last_updated: Optional[datetime] = None
def __init__(self, is_enabled: Optional[bool] = None, **kwargs):
kwargs["is_enabled"] = (is_enabled is None) or is_enabled
super().__init__(**kwargs)
@staticmethod
def from_db(schedule: AgentGraphExecutionSchedule):
return ExecutionSchedule(
id=schedule.id,
graph_id=schedule.agentGraphId,
user_id=schedule.userId,
graph_version=schedule.agentGraphVersion,
schedule=schedule.schedule,
is_enabled=schedule.isEnabled,
last_updated=schedule.lastUpdated.replace(tzinfo=None),
input_data=json.loads(schedule.inputData),
)
async def get_active_schedules(last_fetch_time: datetime) -> list[ExecutionSchedule]:
query = AgentGraphExecutionSchedule.prisma().find_many(
where={"isEnabled": True, "lastUpdated": {"gt": last_fetch_time}},
order={"lastUpdated": "asc"},
)
return [ExecutionSchedule.from_db(schedule) for schedule in await query]
async def disable_schedule(schedule_id: str):
await AgentGraphExecutionSchedule.prisma().update(
where={"id": schedule_id}, data={"isEnabled": False}
)
async def get_schedules(graph_id: str, user_id: str) -> list[ExecutionSchedule]:
query = AgentGraphExecutionSchedule.prisma().find_many(
where={
"isEnabled": True,
"agentGraphId": graph_id,
"userId": user_id,
},
)
return [ExecutionSchedule.from_db(schedule) for schedule in await query]
async def add_schedule(schedule: ExecutionSchedule) -> ExecutionSchedule:
obj = await AgentGraphExecutionSchedule.prisma().create(
data={
"id": schedule.id,
"userId": schedule.user_id,
"agentGraphId": schedule.graph_id,
"agentGraphVersion": schedule.graph_version,
"schedule": schedule.schedule,
"isEnabled": schedule.is_enabled,
"inputData": json.dumps(schedule.input_data),
}
)
return ExecutionSchedule.from_db(obj)
async def update_schedule(schedule_id: str, is_enabled: bool, user_id: str):
await AgentGraphExecutionSchedule.prisma().update(
where={"id": schedule_id}, data={"isEnabled": is_enabled}
)

View File

@@ -1,23 +1,17 @@
import logging
from typing import Optional, cast
from autogpt_libs.supabase_integration_credentials_store.types import (
UserIntegrations,
UserMetadata,
UserMetadataRaw,
)
from autogpt_libs.auth.models import DEFAULT_USER_ID
from fastapi import HTTPException
from prisma import Json
from prisma.models import User
from backend.data.db import prisma
from backend.data.model import UserIntegrations, UserMetadata, UserMetadataRaw
from backend.util.encryption import JSONCryptor
logger = logging.getLogger(__name__)
DEFAULT_USER_ID = "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
DEFAULT_EMAIL = "default@example.com"
async def get_or_create_user(user_data: dict) -> User:
user_id = user_data.get("sub")

View File

@@ -4,6 +4,7 @@ from typing import Any, Callable, Concatenate, Coroutine, ParamSpec, TypeVar, ca
from backend.data.credit import get_user_credit_model
from backend.data.execution import (
ExecutionResult,
RedisExecutionEventBus,
create_graph_execution,
get_execution_results,
get_incomplete_executions,
@@ -15,18 +16,18 @@ from backend.data.execution import (
upsert_execution_output,
)
from backend.data.graph import get_graph, get_node
from backend.data.queue import RedisExecutionEventBus
from backend.data.user import (
get_user_integrations,
get_user_metadata,
update_user_integrations,
update_user_metadata,
)
from backend.util.service import AppService, expose
from backend.util.service import AppService, expose, register_pydantic_serializers
from backend.util.settings import Config
P = ParamSpec("P")
R = TypeVar("R")
config = Config()
class DatabaseManager(AppService):
@@ -38,11 +39,11 @@ class DatabaseManager(AppService):
@classmethod
def get_port(cls) -> int:
return Config().database_api_port
return config.database_api_port
@expose
def send_execution_update(self, execution_result_dict: dict[Any, Any]):
self.event_queue.publish(ExecutionResult(**execution_result_dict))
def send_execution_update(self, execution_result: ExecutionResult):
self.event_queue.publish(execution_result)
@staticmethod
def exposed_run_and_wait(
@@ -55,6 +56,9 @@ class DatabaseManager(AppService):
res = self.run_and_wait(coroutine)
return res
# Register serializers for annotations on the bare function
register_pydantic_serializers(f)
return wrapper
# Executions

View File

@@ -18,6 +18,7 @@ if TYPE_CHECKING:
from autogpt_libs.utils.cache import thread_cached
from backend.blocks.agent import AgentExecutorBlock
from backend.data import redis
from backend.data.block import Block, BlockData, BlockInput, BlockType, get_block
from backend.data.execution import (
@@ -29,7 +30,7 @@ from backend.data.execution import (
merge_execution_input,
parse_execution_output,
)
from backend.data.graph import Graph, Link, Node
from backend.data.graph import GraphModel, Link, Node
from backend.data.model import CREDENTIALS_FIELD_NAME, CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.util import json
@@ -125,7 +126,7 @@ def execute_node(
def update_execution(status: ExecutionStatus) -> ExecutionResult:
exec_update = db_client.update_execution_status(node_exec_id, status)
db_client.send_execution_update(exec_update.model_dump())
db_client.send_execution_update(exec_update)
return exec_update
node = db_client.get_node(node_id)
@@ -135,7 +136,6 @@ def execute_node(
logger.error(f"Block {node.block_id} not found.")
return
# Sanity check: validate the execution input.
log_metadata = LogMetadata(
user_id=user_id,
graph_eid=graph_exec_id,
@@ -144,11 +144,20 @@ def execute_node(
node_id=node_id,
block_name=node_block.name,
)
# Sanity check: validate the execution input.
input_data, error = validate_exec(node, data.data, resolve_input=False)
if input_data is None:
log_metadata.error(f"Skip execution, input validation error: {error}")
db_client.upsert_execution_output(node_exec_id, "error", error)
update_execution(ExecutionStatus.FAILED)
return
# Re-shape the input data for the agent block.
# AgentExecutorBlock handles the node's input_data and its input_default separately.
if isinstance(node_block, AgentExecutorBlock):
input_data = {**node.input_default, "data": input_data}
# Execute the node
input_data_str = json.dumps(input_data)
input_size = len(input_data_str)
@@ -177,7 +186,7 @@ def execute_node(
input_data, **extra_exec_kwargs
):
output_size += len(json.dumps(output_data))
log_metadata.info("Node produced output", output_name=output_data)
log_metadata.info("Node produced output", **{output_name: output_data})
db_client.upsert_execution_output(node_exec_id, output_name, output_data)
for execution in _enqueue_next_nodes(
@@ -244,14 +253,13 @@ def _enqueue_next_nodes(
graph_id: str,
log_metadata: LogMetadata,
) -> list[NodeExecution]:
def add_enqueued_execution(
node_exec_id: str, node_id: str, data: BlockInput
) -> NodeExecution:
exec_update = db_client.update_execution_status(
node_exec_id, ExecutionStatus.QUEUED, data
)
db_client.send_execution_update(exec_update.model_dump())
db_client.send_execution_update(exec_update)
return NodeExecution(
user_id=user_id,
graph_exec_id=graph_exec_id,
@@ -376,31 +384,46 @@ def validate_exec(
if not node_block:
return None, f"Block for {node.block_id} not found."
error_prefix = f"Input data missing for {node_block.name}:"
if isinstance(node_block, AgentExecutorBlock):
# Validate the execution metadata for the agent executor block.
try:
exec_data = AgentExecutorBlock.Input(**node.input_default)
except Exception as e:
return None, f"Input data doesn't match {node_block.name}: {str(e)}"
# Gather the schema, required fields, and defaults used for validation
input_schema = exec_data.input_schema
required_fields = set(input_schema["required"])
input_default = exec_data.data
else:
# Convert non-matching data types to the expected input schema.
for name, data_type in node_block.input_schema.__annotations__.items():
if (value := data.get(name)) and (type(value) is not data_type):
data[name] = convert(value, data_type)
# Gather the schema, required fields, and defaults used for validation
input_schema = node_block.input_schema.jsonschema()
required_fields = node_block.input_schema.get_required_fields()
input_default = node.input_default
# Input data (without default values) should contain all required fields.
error_prefix = f"Input data missing or mismatch for `{node_block.name}`:"
input_fields_from_nodes = {link.sink_name for link in node.input_links}
if not input_fields_from_nodes.issubset(data):
return None, f"{error_prefix} {input_fields_from_nodes - set(data)}"
# Merge input data with default values and resolve dynamic dict/list/object pins.
data = {**node.input_default, **data}
data = {**input_default, **data}
if resolve_input:
data = merge_execution_input(data)
# Input data post-merge should contain all required fields from the schema.
input_fields_from_schema = node_block.input_schema.get_required_fields()
if not input_fields_from_schema.issubset(data):
return None, f"{error_prefix} {input_fields_from_schema - set(data)}"
# Convert non-matching data types to the expected input schema.
for name, data_type in node_block.input_schema.__annotations__.items():
if (value := data.get(name)) and (type(value) is not data_type):
data[name] = convert(value, data_type)
if not required_fields.issubset(data):
return None, f"{error_prefix} {required_fields - set(data)}"
# Last validation: Validate the input values against the schema.
if error := node_block.input_schema.validate_data(data):
error_message = f"Input data doesn't match {node_block.name}: {error}"
if error := json.validate_with_jsonschema(schema=input_schema, data=data):
error_message = f"{error_prefix} {error}"
logger.error(error_message)
return None, error_message
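validate_exec now merges the node's defaults into the incoming data, checks the required fields, and only then validates the merged payload against the JSON schema. A self-contained sketch of that order of operations, using the jsonschema package directly in place of the project's json.validate_with_jsonschema wrapper (schema and values invented):

import jsonschema

schema = {
    "type": "object",
    "properties": {
        "prompt": {"type": "string"},
        "max_tokens": {"type": "integer"},
    },
    "required": ["prompt"],
}

def validate_input(data: dict, defaults: dict) -> tuple[dict | None, str | None]:
    merged = {**defaults, **data}  # defaults first, incoming inputs win
    if missing := set(schema["required"]) - set(merged):
        return None, f"Input data missing: {missing}"
    try:
        jsonschema.validate(instance=merged, schema=schema)
    except jsonschema.ValidationError as e:
        return None, e.message
    return merged, None

print(validate_input({"prompt": "hi"}, {"max_tokens": 256}))  # passes
print(validate_input({}, {"max_tokens": 256}))  # missing 'prompt'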
@@ -572,10 +595,11 @@ class Executor:
exec_stats["walltime"] = timing_info.wall_time
exec_stats["cputime"] = timing_info.cpu_time
exec_stats["error"] = str(error) if error else None
cls.db_client.update_graph_execution_stats(
result = cls.db_client.update_graph_execution_stats(
graph_exec_id=graph_exec.graph_exec_id,
stats=exec_stats,
)
cls.db_client.send_execution_update(result)
@classmethod
@time_measured
@@ -688,7 +712,6 @@ class Executor:
class ExecutionManager(AppService):
def __init__(self):
super().__init__()
self.use_redis = True
@@ -702,13 +725,9 @@ class ExecutionManager(AppService):
return settings.config.execution_manager_port
def run_service(self):
from autogpt_libs.supabase_integration_credentials_store import (
SupabaseIntegrationCredentialsStore,
)
from backend.integrations.credentials_store import IntegrationCredentialsStore
self.credentials_store = SupabaseIntegrationCredentialsStore(
redis=redis.get_redis()
)
self.credentials_store = IntegrationCredentialsStore()
self.executor = ProcessPoolExecutor(
max_workers=self.pool_size,
initializer=Executor.on_graph_executor_start,
@@ -729,7 +748,7 @@ class ExecutionManager(AppService):
)
self.active_graph_runs[graph_exec_id] = (future, cancel_event)
future.add_done_callback(
lambda _: self.active_graph_runs.pop(graph_exec_id)
lambda _: self.active_graph_runs.pop(graph_exec_id, None)
)
def cleanup(self):
@@ -744,11 +763,17 @@ class ExecutionManager(AppService):
@expose
def add_execution(
self, graph_id: str, data: BlockInput, user_id: str
) -> dict[str, Any]:
graph: Graph | None = self.db_client.get_graph(graph_id, user_id=user_id)
self,
graph_id: str,
data: BlockInput,
user_id: str,
graph_version: int | None = None,
) -> GraphExecution:
graph: GraphModel | None = self.db_client.get_graph(
graph_id=graph_id, user_id=user_id, version=graph_version
)
if not graph:
raise Exception(f"Graph #{graph_id} not found.")
raise ValueError(f"Graph #{graph_id} not found.")
graph.validate_graph(for_run=True)
self._validate_node_input_credentials(graph, user_id)
@@ -768,9 +793,18 @@ class ExecutionManager(AppService):
if name and name in data:
input_data = {"value": data[name]}
# Extract the webhook payload and assign it to the input pin
webhook_payload_key = f"webhook_{node.webhook_id}_payload"
if (
block.block_type == BlockType.WEBHOOK
and node.webhook_id
and webhook_payload_key in data
):
input_data = {"payload": data[webhook_payload_key]}
input_data, error = validate_exec(node, input_data)
if input_data is None:
raise Exception(error)
raise ValueError(error)
else:
nodes_input.append((node.id, input_data))
@@ -796,7 +830,7 @@ class ExecutionManager(AppService):
exec_update = self.db_client.update_execution_status(
node_exec.node_exec_id, ExecutionStatus.QUEUED, node_exec.input_data
)
self.db_client.send_execution_update(exec_update.model_dump())
self.db_client.send_execution_update(exec_update)
graph_exec = GraphExecution(
user_id=user_id,
@@ -806,7 +840,7 @@ class ExecutionManager(AppService):
)
self.queue.add(graph_exec)
return graph_exec.model_dump()
return graph_exec
@expose
def cancel_execution(self, graph_exec_id: str) -> None:
@@ -843,9 +877,9 @@ class ExecutionManager(AppService):
exec_update = self.db_client.update_execution_status(
node_exec.node_exec_id, ExecutionStatus.FAILED
)
self.db_client.send_execution_update(exec_update.model_dump())
self.db_client.send_execution_update(exec_update)
def _validate_node_input_credentials(self, graph: Graph, user_id: str):
def _validate_node_input_credentials(self, graph: GraphModel, user_id: str):
"""Checks all credentials for all nodes of the graph"""
for node in graph.nodes:

View File

@@ -1,41 +1,103 @@
import logging
import time
from datetime import datetime
import os
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_EXECUTED
from apscheduler.job import Job as JobObj
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.triggers.cron import CronTrigger
from autogpt_libs.utils.cache import thread_cached
from dotenv import load_dotenv
from pydantic import BaseModel
from sqlalchemy import MetaData, create_engine
from backend.data.block import BlockInput
from backend.data.schedule import (
ExecutionSchedule,
add_schedule,
get_active_schedules,
get_schedules,
update_schedule,
)
from backend.executor.manager import ExecutionManager
from backend.util.service import AppService, expose, get_service_client
from backend.util.settings import Config
def _extract_schema_from_url(database_url) -> tuple[str, str]:
"""
Extracts the schema from the DATABASE_URL and returns the schema and cleaned URL.
"""
parsed_url = urlparse(database_url)
query_params = parse_qs(parsed_url.query)
# Extract the 'schema' parameter
schema_list = query_params.pop("schema", None)
schema = schema_list[0] if schema_list else "public"
# Reconstruct the query string without the 'schema' parameter
new_query = urlencode(query_params, doseq=True)
new_parsed_url = parsed_url._replace(query=new_query)
database_url_clean = str(urlunparse(new_parsed_url))
return schema, database_url_clean
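A quick usage check of the helper with a Prisma-style URL (values invented): the schema parameter is split off while the rest of the query string is preserved.

schema, url = _extract_schema_from_url(
    "postgresql://agpt:pass@localhost:5432/agpt?schema=platform&connect_timeout=60"
)
assert schema == "platform"
assert url == "postgresql://agpt:pass@localhost:5432/agpt?connect_timeout=60"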
logger = logging.getLogger(__name__)
config = Config()
def log(msg, **kwargs):
logger.warning("[ExecutionScheduler] " + msg, **kwargs)
logger.info("[ExecutionScheduler] " + msg, **kwargs)
def job_listener(event):
"""Logs job execution outcomes for better monitoring."""
if event.exception:
log(f"Job {event.job_id} failed.")
else:
log(f"Job {event.job_id} completed successfully.")
@thread_cached
def get_execution_client() -> ExecutionManager:
return get_service_client(ExecutionManager)
def execute_graph(**kwargs):
args = JobArgs(**kwargs)
try:
log(f"Executing recurring job for graph #{args.graph_id}")
get_execution_client().add_execution(
args.graph_id, args.input_data, args.user_id
)
except Exception as e:
logger.exception(f"Error executing graph {args.graph_id}: {e}")
class JobArgs(BaseModel):
graph_id: str
input_data: BlockInput
user_id: str
graph_version: int
cron: str
class JobInfo(JobArgs):
id: str
name: str
next_run_time: str
@staticmethod
def from_db(job_args: JobArgs, job_obj: JobObj) -> "JobInfo":
return JobInfo(
id=job_obj.id,
name=job_obj.name,
next_run_time=job_obj.next_run_time.isoformat(),
**job_args.model_dump(),
)
class ExecutionScheduler(AppService):
def __init__(self, refresh_interval=10):
super().__init__()
self.use_db = True
self.last_check = datetime.min
self.refresh_interval = refresh_interval
scheduler: BlockingScheduler
@classmethod
def get_port(cls) -> int:
return Config().execution_scheduler_port
return config.execution_scheduler_port
@property
@thread_cached
@@ -43,43 +105,18 @@ class ExecutionScheduler(AppService):
return get_service_client(ExecutionManager)
def run_service(self):
scheduler = BackgroundScheduler()
scheduler.start()
while True:
self.__refresh_jobs_from_db(scheduler)
time.sleep(self.refresh_interval)
def __refresh_jobs_from_db(self, scheduler: BackgroundScheduler):
schedules = self.run_and_wait(get_active_schedules(self.last_check))
for schedule in schedules:
if schedule.last_updated:
self.last_check = max(self.last_check, schedule.last_updated)
if not schedule.is_enabled:
log(f"Removing recurring job {schedule.id}: {schedule.schedule}")
scheduler.remove_job(schedule.id)
continue
log(f"Adding recurring job {schedule.id}: {schedule.schedule}")
scheduler.add_job(
self.__execute_graph,
CronTrigger.from_crontab(schedule.schedule),
id=schedule.id,
args=[schedule.graph_id, schedule.input_data, schedule.user_id],
replace_existing=True,
)
def __execute_graph(self, graph_id: str, input_data: dict, user_id: str):
try:
log(f"Executing recurring job for graph #{graph_id}")
self.execution_client.add_execution(graph_id, input_data, user_id)
except Exception as e:
logger.exception(f"Error executing graph {graph_id}: {e}")
@expose
def update_schedule(self, schedule_id: str, is_enabled: bool, user_id: str) -> str:
self.run_and_wait(update_schedule(schedule_id, is_enabled, user_id))
return schedule_id
load_dotenv()
db_schema, db_url = _extract_schema_from_url(os.getenv("DATABASE_URL"))
self.scheduler = BlockingScheduler(
jobstores={
"default": SQLAlchemyJobStore(
engine=create_engine(db_url),
metadata=MetaData(schema=db_schema),
)
}
)
self.scheduler.add_listener(job_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
self.scheduler.start()
@expose
def add_execution_schedule(
@@ -89,17 +126,50 @@ class ExecutionScheduler(AppService):
cron: str,
input_data: BlockInput,
user_id: str,
) -> str:
schedule = ExecutionSchedule(
) -> JobInfo:
job_args = JobArgs(
graph_id=graph_id,
input_data=input_data,
user_id=user_id,
graph_version=graph_version,
schedule=cron,
input_data=input_data,
cron=cron,
)
return self.run_and_wait(add_schedule(schedule)).id
job = self.scheduler.add_job(
execute_graph,
CronTrigger.from_crontab(cron),
kwargs=job_args.model_dump(),
replace_existing=True,
)
log(f"Added job {job.id} with cron schedule '{cron}' input data: {input_data}")
return JobInfo.from_db(job_args, job)
@expose
def get_execution_schedules(self, graph_id: str, user_id: str) -> dict[str, str]:
schedules = self.run_and_wait(get_schedules(graph_id, user_id=user_id))
return {v.id: v.schedule for v in schedules}
def delete_schedule(self, schedule_id: str, user_id: str) -> JobInfo:
job = self.scheduler.get_job(schedule_id)
if not job:
log(f"Job {schedule_id} not found.")
raise ValueError(f"Job #{schedule_id} not found.")
job_args = JobArgs(**job.kwargs)
if job_args.user_id != user_id:
raise ValueError("User ID does not match the job's user ID.")
log(f"Deleting job {schedule_id}")
job.remove()
return JobInfo.from_db(job_args, job)
@expose
def get_execution_schedules(
self, graph_id: str | None = None, user_id: str | None = None
) -> list[JobInfo]:
schedules = []
for job in self.scheduler.get_jobs():
job_args = JobArgs(**job.kwargs)
if (
job.next_run_time is not None
and (graph_id is None or job_args.graph_id == graph_id)
and (user_id is None or job_args.user_id == user_id)
):
schedules.append(JobInfo.from_db(job_args, job))
return schedules
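The scheduler now persists jobs through APScheduler's SQLAlchemyJobStore instead of polling a custom schedule table, so schedules survive restarts without a refresh loop. A minimal standalone sketch of that setup (the SQLite URL and job body are placeholders; the service above points the store at Postgres):

from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.triggers.cron import CronTrigger

def tick(graph_id: str) -> None:
    print(f"Would execute graph #{graph_id}")

scheduler = BlockingScheduler(
    jobstores={"default": SQLAlchemyJobStore(url="sqlite:///jobs.sqlite")}
)
scheduler.add_job(
    tick,
    CronTrigger.from_crontab("*/5 * * * *"),  # every five minutes
    id="demo-schedule",
    kwargs={"graph_id": "demo"},
    replace_existing=True,
)
# scheduler.start()  # blocks; jobs are reloaded from the store on restart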

View File

@@ -5,20 +5,18 @@ from typing import TYPE_CHECKING
from pydantic import SecretStr
if TYPE_CHECKING:
from redis import Redis
from backend.executor.database import DatabaseManager
from autogpt_libs.utils.cache import thread_cached
from autogpt_libs.utils.synchronize import RedisKeyedMutex
from .types import (
from backend.data.model import (
APIKeyCredentials,
Credentials,
OAuth2Credentials,
OAuthState,
UserIntegrations,
)
from backend.util.settings import Settings
settings = Settings()
@@ -46,21 +44,21 @@ replicate_credentials = APIKeyCredentials(
)
openai_credentials = APIKeyCredentials(
id="53c25cb8-e3ee-465c-a4d1-e75a4c899c2a",
provider="llm",
provider="openai",
api_key=SecretStr(settings.secrets.openai_api_key),
title="Use Credits for OpenAI",
expires_at=None,
)
anthropic_credentials = APIKeyCredentials(
id="24e5d942-d9e3-4798-8151-90143ee55629",
provider="llm",
provider="anthropic",
api_key=SecretStr(settings.secrets.anthropic_api_key),
title="Use Credits for Anthropic",
expires_at=None,
)
groq_credentials = APIKeyCredentials(
id="4ec22295-8f97-4dd1-b42b-2c6957a02545",
provider="llm",
provider="groq",
api_key=SecretStr(settings.secrets.groq_api_key),
title="Use Credits for Groq",
expires_at=None,
@@ -72,6 +70,28 @@ did_credentials = APIKeyCredentials(
title="Use Credits for D-ID",
expires_at=None,
)
jina_credentials = APIKeyCredentials(
id="7f26de70-ba0d-494e-ba76-238e65e7b45f",
provider="jina",
api_key=SecretStr(settings.secrets.jina_api_key),
title="Use Credits for Jina",
expires_at=None,
)
unreal_credentials = APIKeyCredentials(
id="66f20754-1b81-48e4-91d0-f4f0dd82145f",
provider="unreal",
api_key=SecretStr(settings.secrets.unreal_speech_api_key),
title="Use Credits for Unreal",
expires_at=None,
)
open_router_credentials = APIKeyCredentials(
id="b5a0e27d-0c98-4df3-a4b9-10193e1f3c40",
provider="open_router",
api_key=SecretStr(settings.secrets.open_router_api_key),
title="Use Credits for Open Router",
expires_at=None,
)
DEFAULT_CREDENTIALS = [
revid_credentials,
@@ -81,12 +101,17 @@ DEFAULT_CREDENTIALS = [
anthropic_credentials,
groq_credentials,
did_credentials,
jina_credentials,
unreal_credentials,
open_router_credentials,
]
class SupabaseIntegrationCredentialsStore:
def __init__(self, redis: "Redis"):
self.locks = RedisKeyedMutex(redis)
class IntegrationCredentialsStore:
def __init__(self):
from backend.data.redis import get_redis
self.locks = RedisKeyedMutex(get_redis())
@property
@thread_cached
@@ -124,6 +149,12 @@ class SupabaseIntegrationCredentialsStore:
all_credentials.append(anthropic_credentials)
if settings.secrets.did_api_key:
all_credentials.append(did_credentials)
if settings.secrets.jina_api_key:
all_credentials.append(jina_credentials)
if settings.secrets.unreal_speech_api_key:
all_credentials.append(unreal_credentials)
if settings.secrets.open_router_api_key:
all_credentials.append(open_router_credentials)
return all_credentials
def get_creds_by_id(self, user_id: str, credentials_id: str) -> Credentials | None:
@@ -274,5 +305,5 @@ class SupabaseIntegrationCredentialsStore:
return integrations
def locked_user_integrations(self, user_id: str):
key = (self.db_manager, f"user:{user_id}", "integrations")
key = (f"user:{user_id}", "integrations")
return self.locks.locked(key)

View File

@@ -2,15 +2,14 @@ import logging
from contextlib import contextmanager
from datetime import datetime
from autogpt_libs.supabase_integration_credentials_store import (
Credentials,
SupabaseIntegrationCredentialsStore,
)
from autogpt_libs.utils.synchronize import RedisKeyedMutex
from redis.lock import Lock as RedisLock
from backend.data import redis
from backend.data.model import Credentials
from backend.integrations.credentials_store import IntegrationCredentialsStore
from backend.integrations.oauth import HANDLERS_BY_NAME, BaseOAuthHandler
from backend.util.exceptions import MissingConfigError
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
@@ -52,7 +51,7 @@ class IntegrationCredentialsManager:
def __init__(self):
redis_conn = redis.get_redis()
self._locks = RedisKeyedMutex(redis_conn)
self.store = SupabaseIntegrationCredentialsStore(redis=redis_conn)
self.store = IntegrationCredentialsStore()
def create(self, user_id: str, credentials: Credentials) -> None:
return self.store.add_creds(user_id, credentials)
@@ -129,7 +128,6 @@ class IntegrationCredentialsManager:
def _acquire_lock(self, user_id: str, credentials_id: str, *args: str) -> RedisLock:
key = (
self.store.db_manager,
f"user:{user_id}",
f"credentials:{credentials_id}",
*args,
@@ -157,12 +155,14 @@ def _get_provider_oauth_handler(provider_name: str) -> BaseOAuthHandler:
client_id = getattr(settings.secrets, f"{provider_name}_client_id")
client_secret = getattr(settings.secrets, f"{provider_name}_client_secret")
if not (client_id and client_secret):
raise Exception( # TODO: ConfigError
raise MissingConfigError(
f"Integration with provider '{provider_name}' is not configured",
)
handler_class = HANDLERS_BY_NAME[provider_name]
frontend_base_url = settings.config.frontend_base_url
frontend_base_url = (
settings.config.frontend_base_url or settings.config.platform_base_url
)
return handler_class(
client_id=client_id,
client_secret=client_secret,

View File

@@ -3,7 +3,7 @@ import time
from abc import ABC, abstractmethod
from typing import ClassVar
from autogpt_libs.supabase_integration_credentials_store import OAuth2Credentials
from backend.data.model import OAuth2Credentials
logger = logging.getLogger(__name__)

View File

@@ -2,8 +2,8 @@ import time
from typing import Optional
from urllib.parse import urlencode
import requests
from autogpt_libs.supabase_integration_credentials_store import OAuth2Credentials
from backend.data.model import OAuth2Credentials
from backend.util.request import requests
from .base import BaseOAuthHandler
@@ -56,13 +56,12 @@ class GitHubOAuthHandler(BaseOAuthHandler):
"X-GitHub-Api-Version": "2022-11-28",
}
response = requests.delete(
requests.delete(
url=self.revoke_url.format(client_id=self.client_id),
auth=(self.client_id, self.client_secret),
headers=headers,
json={"access_token": credentials.access_token.get_secret_value()},
)
response.raise_for_status()
return True
def _refresh_tokens(self, credentials: OAuth2Credentials) -> OAuth2Credentials:
@@ -88,7 +87,6 @@ class GitHubOAuthHandler(BaseOAuthHandler):
}
headers = {"Accept": "application/json"}
response = requests.post(self.token_url, data=request_body, headers=headers)
response.raise_for_status()
token_data: dict = response.json()
username = self._request_username(token_data["access_token"])

View File

@@ -1,6 +1,5 @@
import logging
from autogpt_libs.supabase_integration_credentials_store import OAuth2Credentials
from google.auth.external_account_authorized_user import (
Credentials as ExternalAccountCredentials,
)
@@ -9,6 +8,8 @@ from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import Flow
from pydantic import SecretStr
from backend.data.model import OAuth2Credentials
from .base import BaseOAuthHandler
logger = logging.getLogger(__name__)
@@ -103,12 +104,11 @@ class GoogleOAuthHandler(BaseOAuthHandler):
def revoke_tokens(self, credentials: OAuth2Credentials) -> bool:
session = AuthorizedSession(credentials)
response = session.post(
session.post(
self.revoke_uri,
params={"token": credentials.access_token.get_secret_value()},
headers={"content-type": "application/x-www-form-urlencoded"},
)
response.raise_for_status()
return True
def _request_email(

Some files were not shown because too many files have changed in this diff.