Compare commits


61 Commits

Author SHA1 Message Date
Bentlybro
076f526c0c change logging type to debug 2025-04-16 20:17:06 +01:00
Zamil Majdy
44e3770003 fix(backend): Fix execution manager message consuming pattern (#9829)
We have seen instances where the executor gets stuck in a failing
message-consuming loop due to the upstream RabbitMQ being down. The
current message-consuming pattern is not optimal for handling this.

### Changes 🏗️

* Add a retry limit to the execution loop.
* Use `basic_consume` instead of `basic_get` for message consumption (sketched below).
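
A minimal sketch (not the repository's actual executor code; the queue name and retry limit are assumptions) of what a bounded, push-based consumer looks like with pika:

```python
import time

import pika

MAX_RETRIES = 5                    # assumed limit; the real value lives in config
QUEUE_NAME = "graph_executions"    # hypothetical queue name


def handle_message(channel, method, properties, body):
    # Process one execution request, then acknowledge it.
    print("received:", body)
    channel.basic_ack(delivery_tag=method.delivery_tag)


def consume_with_retry_limit():
    retries = 0
    while retries < MAX_RETRIES:
        try:
            connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
            channel = connection.channel()
            channel.queue_declare(queue=QUEUE_NAME, durable=True)
            # Push-based consumption: the broker delivers messages to the callback
            # instead of the executor polling with basic_get.
            channel.basic_consume(queue=QUEUE_NAME, on_message_callback=handle_message)
            retries = 0                           # reset after a successful (re)connect
            channel.start_consuming()
        except pika.exceptions.AMQPConnectionError:
            retries += 1
            time.sleep(min(2 ** retries, 30))     # back off before reconnecting
    raise RuntimeError("RabbitMQ unreachable after repeated retries")
```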

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run agents and cancel them
2025-04-16 22:54:26 +07:00
Zamil Majdy
c0ee71fb27 fix(frontend/builder): Fix key-value pair input for any non-string types (#9826)
- Resolves #9823 

Key-value pair inputs, like those used in CreateDictionaryBlock, are
assumed to be either numeric or string.
When the type is `any`, the value was arbitrarily assumed to be numeric.

### Changes 🏗️

Only convert a key-value pair input value to a number when the input is
explicitly defined as numeric.
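
A language-agnostic sketch of the rule being applied (the actual fix lives in the frontend key-value input component; the function and type names here are illustrative):

```python
def coerce_value(value: str, schema_type: str):
    """Coerce a key-value entry only when the schema says it is numeric."""
    if schema_type in ("number", "integer"):
        return float(value) if "." in value else int(value)
    return value  # "any", "string", etc. stay untouched


assert coerce_value("42", "integer") == 42
assert coerce_value("42", "any") == "42"  # previously this could be coerced to 42
```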

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tried two different key-value pair inputs: AiTextGenerator &
CreateDictionary
2025-04-16 11:10:50 +00:00
Zamil Majdy
71cdc18674 fix(backend): Fix cancel_execution can only work once (#9825)
### Changes 🏗️

The recent execution-cancelation fix turns out to only work on the first
request.
This PR fixes it by reworking how `thread_cached` works on async
functions.
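
A minimal sketch, not the project's actual `thread_cached` implementation, of why async functions need special handling: caching the bare coroutine object makes it awaitable only once, so the awaited result has to be cached instead.

```python
import asyncio
import threading
from functools import wraps

_local = threading.local()


def thread_cached(func):
    def _cache() -> dict:
        if not hasattr(_local, "cache"):
            _local.cache = {}
        return _local.cache

    if asyncio.iscoroutinefunction(func):

        @wraps(func)
        async def async_wrapper(*args, **kwargs):
            # Assumes hashable arguments.
            key = (func.__qualname__, args, tuple(sorted(kwargs.items())))
            cache = _cache()
            if key not in cache:
                # Store the awaited result, not the coroutine object itself.
                cache[key] = await func(*args, **kwargs)
            return cache[key]

        return async_wrapper

    @wraps(func)
    def wrapper(*args, **kwargs):
        key = (func.__qualname__, args, tuple(sorted(kwargs.items())))
        cache = _cache()
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return wrapper


@thread_cached
async def get_client() -> str:
    return "client"  # stand-in for expensive client construction


async def main():
    # Works on repeated calls; caching the raw coroutine would fail the second await.
    assert await get_client() == await get_client()


asyncio.run(main())
```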

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Cancel agent executions multiple times
2025-04-16 10:33:49 +00:00
Zamil Majdy
dc9348ec26 fix(frontend): Fix Input value mixup on Library page (#9821)
### Changes 🏗️

Fixes this broken behavior:
input data mix-up caused by running two different executions of the same
agent with the same input.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run agent with old user
- [x] Running two different executions of the same agent with the same
input.
2025-04-16 09:31:07 +00:00
Zamil Majdy
3ccbc31705 Revert: fix(frontend): Fix Input value mixup on Library page & broken marketplace on no onboarding data 2025-04-15 21:28:43 +02:00
Zamil Majdy
7cf0c6fe46 fix(frontend): Fix Input value mixup on Library page & broken marketplace on no onboarding data 2025-04-15 21:25:25 +02:00
Zamil Majdy
c69faa2a94 fix(frontend): Fix Input value mixup on Library page & broken marketplace on no onboarding data 2025-04-15 21:24:39 +02:00
Nicholas Tindle
0c9dbbbe24 Merge branch 'master' into dev 2025-04-15 12:00:02 -05:00
Nicholas Tindle
3e0742f9c5 Spike/infra pooling (#9812)
<!-- Clearly explain the need for these changes: -->
Switch to pooled Supabase connections rather than depending on a fixed
number of maximum open connections

### Changes 🏗️
Adds a direct connection URL to be used throughout the system
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test thoroughly all of the endpoints in the dev env with switched
infra matching pr
  - [x] Follow the new release plan tests
  - [x] Follow the old release plan tests

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>configuration changes</summary>

- Change how we connect to the database: use the direct URL when configured
and the database URL when not
  - Update Prisma for this
  - Have the direct URL default match the database URL
</details>
2025-04-15 15:40:15 +00:00
Krzysztof Czerwinski
d791cdea76 feat(platform): Onboarding Phase 2 (#9736)
### Changes 🏗️

- Update onboarding to give user rewards for completing steps
- Remove `canvas-confetti` lib and add `party-js` instead; the former
didn't allow playing confetti from a component
- Add onboarding videos in `frontend/public/onboarding/`
- Remove Balance (`CreditsCard.tsx`) and add an openable `Wallet.tsx` (with an
accompanying `WalletTaskGroup.tsx`) that displays grouped
onboarding tasks with descriptions and short instructional videos
- Further relevant updates to `useOnboarding`, `types.ts`
- Implement onboarding rewards
- Add an `onboarding_reward` function in `credit.py` that safely rewards the
user for finished onboarding tasks; the transaction key is deterministic,
so the same user won't be rewarded twice for the same step (see the sketch
after this list).
  - Add `reward_user` in `onboarding.py`
- Update `UserOnboarding` model and add a migration
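
A minimal sketch of the deterministic-key idea; the `db` handle, table layout, and SQL below are assumptions for illustration, not the project's actual `credit.py` code:

```python
import hashlib


def onboarding_reward_key(user_id: str, step: str) -> str:
    # The same (user, step) pair always yields the same transaction key.
    return hashlib.sha256(f"onboarding:{user_id}:{step}".encode()).hexdigest()


def reward_user(db, user_id: str, step: str, amount: int) -> None:
    key = onboarding_reward_key(user_id, step)
    # `db` is a hypothetical handle; a unique index on transaction_key turns a
    # second reward for the same step into a no-op instead of a double payout.
    db.execute(
        "INSERT INTO credit_transaction (transaction_key, user_id, amount) "
        "VALUES (?, ?, ?) ON CONFLICT (transaction_key) DO NOTHING",
        (key, user_id, amount),
    )
```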

<img width="464" alt="Screenshot 2025-04-05 at 6 06 29 PM"
src="https://github.com/user-attachments/assets/fca8d09e-0139-466b-b679-d24117ad01f0"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding works
  - [x] Tasks can be completed
  - [x] Rewards are added correctly for all completed tasks
2025-04-12 10:56:59 +00:00
Zamil Majdy
bb92226f5d feat(backend): Remove RPC service from Agent Executor (#9804)
Currently the execution task is not properly distributed between
executors because we need to send the execution request to the execution
server.

The execution manager now accepts execution requests from the message
queue. Thus, we can remove the synchronous RPC system from this service,
let it focus on executing agents, and stop dedicating processes to the
HTTP API interface.

This also reduces the risk of the execution service being too busy to
accept new execution requests.

### Changes 🏗️

* Remove the RPC system in Agent Executor
* Allow cancellation of an execution that is still waiting in the
queue (by preventing it from being executed).
* Make a unified helper for adding an execution request to the system
and move other execution-related helper functions into
`executor/utils.py`.
* Remove non-db connections (redis / rabbitmq) from the Database Manager and
let clients manage these themselves.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Existing CI, some agent runs
2025-04-11 19:03:47 +00:00
Zamil Majdy
f7ca5ac1ba feat(backend/executor): Move execution queue + cancel mechanism to RabbitMQ (#9759)
The graph execution queue is not disk-persisted; when the executor dies,
the executions are lost.

The scope of this issue is migrating the execution queue from an
inter-process queue to a RabbitMQ message queue. A sync client should be
used for this.

- Resolves #9746
- Resolves #9714

### Changes 🏗️

Move the execution queue from `multiprocessing.Queue` to persisted
RabbitMQ.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Execute agents.

<details>
  <summary>Example test plan</summary>
  
  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>
2025-04-11 14:15:39 +00:00
Abhimanyu Yadav
4621a95bf3 fix(marketplace): Fix small UI bugs (#9800)
Resolving the bugs listed below
- #9796 
- #9797 
- #9798 
- #8998 
- #9799 

### Changes I have made 
- Removed border and set border-radius to `24px` in FeaturedCard
- Removed `white` background from breadcrumbs
- Changed distance between featured section arrow from `28px` to `12px`
- Added `1.5rem` spacing and changed color to `gray-200` on the
creator’s page separator
- Removed focus ring from the Search Library input
- And some small UI changes on marketplace

### Screenshots

<img width="658" alt="Screenshot 2025-04-10 at 3 26 56 PM"
src="https://github.com/user-attachments/assets/22bef6f0-19b9-42a6-8227-fedca33141ba"
/>

<img width="505" alt="Screenshot 2025-04-10 at 3 27 07 PM"
src="https://github.com/user-attachments/assets/2a5409a1-94c6-4d15-a35d-e4ed9b075055"
/>

<img width="1373" alt="Screenshot 2025-04-10 at 3 28 39 PM"
src="https://github.com/user-attachments/assets/046ea726-2a98-4000-abc8-9139fffe80dc"
/>

<img width="368" alt="Screenshot 2025-04-10 at 3 29 07 PM"
src="https://github.com/user-attachments/assets/4e0510ad-f535-4760-a703-651766ff522b"
/>
2025-04-11 13:09:35 +00:00
Abhimanyu Yadav
8d8a6e450f fix(marketplace): Render newline in marketplace description text (#9808)
- fix #9177 

Add `whitespace-pre-line` tailwind property to allow newline rendering
in marketplace description text

### Before

![Screenshot 2025-04-11 at 10 32
23 AM](https://github.com/user-attachments/assets/b07f58b6-218e-4b33-a018-93757e59cd8d)

### After

![Screenshot 2025-04-11 at 10 32
59 AM](https://github.com/user-attachments/assets/f1086ee4-aef3-491a-ba81-cf681086f67b)
2025-04-11 10:50:32 +00:00
Reinier van der Leer
8ea3bfabc4 fix(backend/db): Fix unchecked Prisma statements (#9805) 2025-04-10 23:04:42 +02:00
Nicholas Tindle
cda07e81d1 feat(frontend, backend): track sentry environment on frontend + sentry init in app services (#9773)
<!-- Clearly explain the need for these changes: -->
We want to be able to filter errors according to where they occur in
sentry so we need to track and include that data. We also are not
logging everything from app services correctly so fix that up

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Adds env tracking for frontend
- adds sentry init in app service spawn

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Tested by running and making sure all events + logs are inserted
into sentry correctly
2025-04-10 16:28:07 +01:00
Abhimanyu Yadav
6156fbb731 fix(marketplace): Fixing margins between headers, divider and content (#9757)
- fix #9003 
- fix - #8969 
- fix #8970 

Adds correct margins between headers, dividers, and content.

### Changes made

- Remove any vertical padding or margin from the section.
- Add top and bottom margins to the separator, so the spacing between
sections is handled only by the separator.
- Also, add a size prop in AvatarFallback because its size is currently
broken. It’s not able to extract the size properly from the className.
2025-04-10 16:28:01 +01:00
Nicholas Tindle
2ca18d77a4 feat(frontend, backend): track sentry environment on frontend + sentry init in app services (#9773)
<!-- Clearly explain the need for these changes: -->
We want to be able to filter errors according to where they occur in
sentry so we need to track and include that data. We also are not
logging everything from app services correctly so fix that up

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Adds env tracking for frontend
- adds sentry init in app service spawn

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Tested by running and making sure all events + logs are inserted
into sentry correctly
2025-04-10 14:34:26 +00:00
Abhimanyu Yadav
3e6d9bf963 fix(marketplace): Fixing margins between headers, divider and content (#9757)
- fix #9003 
- fix - #8969 
- fix #8970 

Adds correct margins between headers, dividers, and content.

### Changes made

- Remove any vertical padding or margin from the section.
- Add top and bottom margins to the separator, so the spacing between
sections is handled only by the separator.
- Also, add a size prop in AvatarFallback because its size is currently
broken. It’s not able to extract the size properly from the className.
2025-04-10 13:04:16 +00:00
Abhimanyu Yadav
07a09d802c fix(marketplace): Fix store card style (#9769)
- fix #9222 
- fix #9221 
- fix #8966

### Changes made
- Standardized the height of store cards.
- Corrected spacing and responsiveness behavior.
- Removed horizontal margin and max-width from the featured section.
- Fixed the aspect ratio of the agent image in the store card.
- Now, a normal desktop screen displays 3 columns of agents instead of
4.

<img width="1512" alt="Screenshot 2025-04-07 at 7 09 40 AM"
src="https://github.com/user-attachments/assets/50d3b5c9-4e7c-456e-b5f1-7c0093509bd3"
/>
2025-04-10 12:01:42 +01:00
Reinier van der Leer
353396110c refactor(backend): Clean up Library & Store DB schema (#9774)
Distilled from #9541 to reduce the scope of that PR.

- Part of #9307

-  Blocks #9786
  -  Blocks #9541

### Changes 🏗️

- Fix `LibraryAgent` schema (for #9786)
- Fix relationships between `LibraryAgent`, `AgentGraph`, and
`AgentPreset`
  - Impose uniqueness constraint on `LibraryAgent`

- Rename things that are called `agent` that actually refer to a
`graph`/`agentGraph`
- Fix singular/plural forms in DB schema
- Simplify reference names of closely related objects (e.g.
`AgentGraph.AgentGraphExecutions` -> `AgentGraph.Executions`)

- Eliminate use of `# type: ignore` in DB statements
  - Add `typed` and `typed_cast` utilities to `backend.util.type`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI static type checking (with all risky `# type: ignore` removed)
  - [x] Check that column references in views are updated
2025-04-10 10:40:25 +00:00
Abhimanyu Yadav
70890dee43 fix(marketplace): Fix store card style (#9769)
- fix #9222 
- fix #9221 
- fix #8966

### Changes made
- Standardized the height of store cards.
- Corrected spacing and responsiveness behavior.
- Removed horizontal margin and max-width from the featured section.
- Fixed the aspect ratio of the agent image in the store card.
- Now, a normal desktop screen displays 3 columns of agents instead of
4.

<img width="1512" alt="Screenshot 2025-04-07 at 7 09 40 AM"
src="https://github.com/user-attachments/assets/50d3b5c9-4e7c-456e-b5f1-7c0093509bd3"
/>
2025-04-10 10:31:14 +00:00
Nicholas Tindle
62361ccc48 feat: deep copy the schema (#9794)
<!-- Clearly explain the need for these changes: -->
We were duplicating placeholder values across all agents 😨 

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
Deep copies the schema instead
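
A minimal illustration of why a deep copy is needed (plain Python, not the project's actual schema code): a shallow copy still shares nested objects, so a placeholder written into one copy leaks into every other.

```python
import copy

schema = {"properties": {"prompt": {"placeholder": ""}}}

shallow = dict(schema)                      # copies only the outer dict
shallow["properties"]["prompt"]["placeholder"] = "agent A"
assert schema["properties"]["prompt"]["placeholder"] == "agent A"   # leaked

schema["properties"]["prompt"]["placeholder"] = ""
deep = copy.deepcopy(schema)                # independent nested structure
deep["properties"]["prompt"]["placeholder"] = "agent B"
assert schema["properties"]["prompt"]["placeholder"] == ""          # isolated
```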

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test the broken agent in dev
2025-04-09 21:32:50 +00:00
Reinier van der Leer
755a80c87a fix(blocks): Fix block I/O value sharing (#9793)
- Resolves #9792

### Changes 🏗️

- Replace all `default=[]` -> `default_factory=list`
- Replace all `default={}` -> `default_factory=dict`
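
A minimal plain-Python illustration of the shared-mutable-default hazard this change guards against (not the project's exact pydantic code path):

```python
def add_output_bad(value, outputs=[]):     # one list object shared across all calls
    outputs.append(value)
    return outputs


def add_output_good(value, outputs=None):  # fresh list per call, like default_factory
    outputs = [] if outputs is None else outputs
    outputs.append(value)
    return outputs


assert add_output_bad("a") == ["a"]
assert add_output_bad("b") == ["a", "b"]   # state leaked from the first call
assert add_output_good("a") == ["a"]
assert add_output_good("b") == ["b"]       # no leakage
```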

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] CI

---------

Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
2025-04-09 19:15:41 +00:00
Bentlybro
2a6676a5b8 Merge branch 'master' into dev 2025-04-09 14:49:09 +01:00
Reinier van der Leer
5a83b233f8 fix(backend): Add required method cleanup to MainApp
The absence of this method caused type checking errors.
2025-04-09 13:09:21 +02:00
Reinier van der Leer
cb1a3703ad fix(ci): Fix linter exit code on failure (#9777)
The linter currently exits with exit code 0 even if linting fails. This
makes the CI linter permissive which isn't good.

Changes:
- Make linter exit with an error code if a linting step fails
- Fix existing formatting issues
2025-04-09 11:30:04 +02:00
Bently
91f62c47f9 feat(backend): Add new llama 4 maverick & scout models (#9788)
This PR is to add the new [Meta: Llama 4
Maverick](https://openrouter.ai/meta-llama/llama-4-maverick) and [Meta:
Llama 4 Scout](https://openrouter.ai/meta-llama/llama-4-scout) models
via [OpenRouter](https://openrouter.ai/)


### Changes 🏗️

Added the model names to ``llm.py``
```
    META_LLAMA_4_SCOUT = "meta-llama/llama-4-scout"
    META_LLAMA_4_MAVERICK = "meta-llama/llama-4-maverick"
```
and the model metadata
```
    LlmModel.META_LLAMA_4_SCOUT: ModelMetadata("open_router", 131072, 131072),
    LlmModel.META_LLAMA_4_MAVERICK: ModelMetadata("open_router", 1048576, 1000000),
```

and added the model prices to ``block_cost_config.py``
```
    LlmModel.META_LLAMA_4_SCOUT: 1,
    LlmModel.META_LLAMA_4_MAVERICK: 1,
```

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open the build page and place an AI text block, open the model
select, scroll to the bottom, and select either of the 2 models
  - [x] test them with a prompt and wait for a reply!
2025-04-08 21:59:53 +00:00
Zamil Majdy
7fedb5e2fd refactor(backend): Un-share resource initializations from AppService + Remove Pyro (#9750)
This is a prerequisite infra change for
https://github.com/Significant-Gravitas/AutoGPT/issues/9714.

We will need a service where we can maintain our own clients (db, redis,
rabbitmq, async or sync) and configure our own cadence of
initialization and cleanup.

While refactoring service.py, the option to use Pyro as an RPC
protocol is also removed.

### Changes 🏗️

* Decouple resource initialization and cleanup from the parent
AppService logic.
* Removed Pyro.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] CI
2025-04-08 19:47:22 +00:00
Nicholas Tindle
d316ed23d4 [Snyk] Security upgrade next from 14.2.25 to 14.2.26 (#9767)
![snyk-top-banner](https://res.cloudinary.com/snyk/image/upload/r-d/scm-platform/snyk-pull-requests/pr-banner-default.svg)

### Snyk has created this PR to fix 1 vulnerability in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`
- `autogpt_platform/frontend/yarn.lock`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn feature
[zero-installs](https://yarnpkg.com/features/zero-installs) that was
introduced in Yarn V2, note that this PR does not update the
`.yarn/cache/` directory meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project
- you will need to run `yarn` to update the contents of the
`./yarn/cache` directory.
If you are not using zero-install you can ignore this as your flow
should likely be unchanged.




#### Vulnerabilities that will be fixed with an upgrade:

|  | Issue | Score |
|:-------------------------:|:-------------------------|:-------------------------|
| ![medium severity](https://res.cloudinary.com/snyk/image/upload/w_20,h_20/v1561977819/icon/m.png 'medium severity') | Information Exposure<br/>[SNYK-JS-NEXT-9634163](https://snyk.io/vuln/SNYK-JS-NEXT-9634163) | **601** |




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Learn about vulnerability in an interactive lesson of Snyk
Learn.](https://learn.snyk.io/?loc&#x3D;fix-pr)


---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-04-07 17:15:50 +00:00
Nicholas Tindle
3c14861d8e fix(backend): reduce log level for retrying connection (#9765)
<!-- Clearly explain the need for these changes: -->
Now that we are trying to use Sentry more, reclassifying some errors as
warnings is a good idea

### Changes 🏗️
- reduces log level of retry to warning
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] check it comes through in sentry
2025-04-07 17:00:18 +00:00
Nicholas Tindle
074a00ce86 fix(backend): ProviderName behavior when loading secrets (#9764)
<!-- Clearly explain the need for these changes: -->
We got this error in Sentry:
[AUTOGPT-SERVER-33P](https://significant-gravitas.sentry.io/issues/6462614597/events/bb4871d796b04e759ade55197498cff9/)
```
Level: Error
'Secrets' object has no attribute 'ProviderName.GOOGLE_client_id'
```

### Changes 🏗️
- Follows the pattern used when accessing these in
`_get_provider_oauth_handler` in the router (see the sketch below)
<!-- Concisely describe all of the changes made in this pull request:
-->
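
A minimal reproduction of the pitfall; the enum and `Secrets` layout below are assumptions for illustration, not the project's actual definitions. Formatting the enum member itself produces `ProviderName.GOOGLE_...`, while `.value` produces the slug the settings object actually uses.

```python
from enum import Enum


class ProviderName(Enum):
    GOOGLE = "google"


class Secrets:
    google_client_id = "dummy-client-id"  # hypothetical attribute layout


provider = ProviderName.GOOGLE
bad_attr = f"{provider}_client_id"         # -> "ProviderName.GOOGLE_client_id"
good_attr = f"{provider.value}_client_id"  # -> "google_client_id"

assert not hasattr(Secrets, bad_attr)      # the AttributeError seen in Sentry
assert getattr(Secrets, good_attr) == "dummy-client-id"
```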

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test to make sure getting works
2025-04-07 16:59:48 +00:00
Krzysztof Czerwinski
0aeaaa7801 fix(frontend): Fill defaults from schema to hardcodedValues in CustomNode.tsx (#9772)
Fix https://github.com/Significant-Gravitas/AutoGPT/pull/9632

### Changes 🏗️

- Set default values from input schema to `hardcodedValues` in
`CustomNode.tsx`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Default values are correctly applied to newly created node
2025-04-07 16:53:57 +00:00
Abhimanyu Yadav
2e5a770f35 fix(marketplace): Fix typography of heading in marketplace (#9737)
- fix #8956

### Changes:
- Updated line height from 28px to 36px for improved readability.
- Ensured that all section headings (“Featured agents”, “Top agents”,
“Featured creators”, and “Become a creator”) now have a uniform style.
- Verified that font-poppins is correctly set in the Tailwind config
file and layout.tsx.
- Color changed from #282828 to #262626

### Scope:
- This PR only includes typography-related adjustments.

![Screenshot 2025-04-02 at 5 29
05 PM](https://github.com/user-attachments/assets/e27b0d52-d8c7-4921-ae18-e3f75264e74d)
2025-04-07 10:28:28 +00:00
Abhimanyu Yadav
8b2265c996 feat(frontend): Add advanced block search with relevance ranking (#9711)
- fix #9425 

- Enhancing the functionality of searching blocks on the build page

Currently, it only performs exact matching on the block name and
description. I added a scoring mechanism for searching.

- The scoring algorithm works as follows (sketched below):
  - Returns 1 if there is no query (all blocks match equally)
  - Normalizes the query for case-insensitive matching
  - Returns 3 for exact substring matches in the block name (highest priority)
  - Returns 2 when all query words appear in the block name (regardless of order)
  - Returns 1.X for blocks with names similar to the query using Jaro-Winkler distance (X is the similarity score)
  - Returns 0.5 when all query words appear in the block description (lowest priority)
  - Returns 0 for no match

Higher scores will appear first in search results.

> I have used an external library for Jaro-Winkler distance -
[link](https://www.npmjs.com/package/jaro-winkler)
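
An algorithmic sketch in Python of the scoring described above (the actual implementation is frontend TypeScript using the jaro-winkler package; the similarity function and threshold here are stand-ins):

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    # Stand-in for Jaro-Winkler distance; returns a value in [0, 1].
    return SequenceMatcher(None, a, b).ratio()


def block_score(query: str, name: str, description: str) -> float:
    if not query:
        return 1.0                      # no query: all blocks match equally
    q = query.lower().strip()
    n, d = name.lower(), description.lower()
    words = q.split()

    if q in n:
        return 3.0                      # exact substring match in the name
    if all(w in n for w in words):
        return 2.0                      # all query words appear in the name
    sim = similarity(q, n)
    if sim > 0.7:                       # assumed threshold for "similar" names
        return 1.0 + sim                # 1.X, X being the similarity score
    if all(w in d for w in words):
        return 0.5                      # all query words appear in the description
    return 0.0                          # no match


# Higher score sorts first:
blocks = [("Create Dictionary", "Builds a dict"), ("AI Text Generator", "LLM text")]
ranked = sorted(blocks, key=lambda b: block_score("dictionary", *b), reverse=True)
```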

Before
![Screenshot 2025-03-28 at 12 09
24 PM](https://github.com/user-attachments/assets/e135c007-cd9a-4692-88fc-3ad42b097c22)

After
![Screenshot 2025-03-28 at 12 09
17 PM](https://github.com/user-attachments/assets/28cd01c1-0d8e-44fa-8e04-ba9796118ba3)
2025-04-07 08:54:00 +00:00
Krzysztof Czerwinski
73d43312d1 feat(frontend): Use TypeBasedInput for onboarding agent input (#9762)
### Changes 🏗️

- Use the same code as in the Library to display inputs for the onboarding agent
- Fixes a bug that crashed the frontend when showing onboarding inputs
- Remove no longer needed `OnboardingAgentInput` component

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All input types display correctly
  - [x] Onboarding agent runs

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-04-04 16:54:58 +00:00
Zamil Majdy
3771a0924c fix(backend): Update deprecated code caused by upgrades (#9758)
This series of upgrades:
https://github.com/significant-gravitas/autogpt/pull/9727
https://github.com/Significant-Gravitas/AutoGPT/pull/9728
https://github.com/Significant-Gravitas/AutoGPT/pull/9560

caused some code in the repo to become deprecated; this PR addresses that.

### Changes 🏗️

Fix the pydantic config, field usage, use of the proper Prisma
`CreateInput` type, and pytest loop-scope.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI, manual test on running some agents.

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-04-04 16:34:40 +00:00
Nicholas Tindle
4397746a87 feat(backend): baseline sentry logging (#9756)
<!-- Clearly explain the need for these changes: -->
Sentry just released logs, so let's enrich our details there too

### Changes 🏗️
- Adds sentry logging
- Adds dependencies tracking all of our sentry integrations
- Adds environment tracking to sentry
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Tested to make sure events show up in sentry with the correct
environment logging
2025-04-04 15:31:52 +00:00
Nicholas Tindle
2e871b0761 fix(frontend): bad handling on error prompts (#9754)
<!-- Clearly explain the need for these changes: -->

I oopsed and had an extra unneeded parameter (as @majdyz pointed out)
that wasn't respected everywhere it was used.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Remove parameter
- update all the places AuthFeedback is called

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] Test all pages with authfeedback on it

Co-authored-by: Bently <tomnoon9@gmail.com>
2025-04-03 22:16:39 +00:00
Reinier van der Leer
8ceb03ce1a feat(frontend/library): Add "Open in builder" run action (#9755)
- Resolves #9730

### Changes 🏗️

- feat: Add "Open in builder" run action

- refactor: Add `ActionButtonGroup` to replace boilerplate code in
`AgentRunDetailsView`, `AgentRunDraftView`, `AgentScheduleDetailsView`
  - feat: Add link support to `ActionButtonGroup`, `ButtonAction`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to `/library/agents/[id]`
    - [x] "Run again" button works
    - [x] "Open in builder" button-link works
2025-04-03 21:34:33 +00:00
Bently
ce98925d58 update(docs): Remove outdated tutorial video from docs & readme (#9753)
This is to remove the outdated tutorial video from the docs & readme and
add a direct link to the docs in the readme

### Changes 🏗️

Remove the video link from readme.md
Remove the video link from
https://github.com/Significant-Gravitas/AutoGPT/blob/dev/docs/content/platform/getting-started.md
Add a direct link to the docs in readme.md
2025-04-03 19:16:54 +00:00
Reinier van der Leer
1fc984f7fd feat(platform/library): Add real-time "Steps" count to agent run view (#9740)
- Resolves #9731

### Changes 🏗️

- feat: Add "Steps" showing `node_execution_count` to agent run view
  - Add `GraphExecutionMeta.stats.node_exec_count` attribute

- feat(backend/executor): Send graph execution update after *every* node
execution (instead of only I/O node executions)
  - Update graph execution stats after every node execution

- refactor: Move `GraphExecutionMeta` stats into sub-object
(`cost`, `duration`, `total_run_time` -> `stats.cost`, `stats.duration`,
`stats.node_exec_time`)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - View an agent run with 1+ steps on `/library/agents/[id]`
    - [x] "Info" section layout doesn't break
    - [x] Number of steps is shown
  - Initiate a new agent run
    - [x] "Steps" increments in real time during execution
2025-04-03 18:58:21 +00:00
Madura Herath
d0d610720c docs(platform): Add WSL 2 recommendation for Docker on Windows (#9749)
In this pull request, the following changes have been made in response
to Issue #9190:

Documentation Changes:

- Added a note to the AutoGPT documentation regarding Docker
installation on Windows.

- Specifically, the note advises users to opt for WSL2 (Windows
Subsystem for Linux version 2) instead of Hyper-V during Docker setup to
prevent issues with Supabase, such as the "unhealthy" status for
supabase-db.

---------

Co-authored-by: Madura Herath <madurah@verdentra.com>
Co-authored-by: Bently <tomnoon9@gmail.com>
2025-04-03 18:35:07 +00:00
Reinier van der Leer
77a44b1213 fix(platform/library): Fix UX for webhook-triggered runs (#9680)
- Resolves #9679

### Changes 🏗️

Frontend:
- Fix crash on `payload` graph input
- Fix crash on object type agent I/O values
- Hide "+ New run" if `graph.webhook_id` is set

Backend:
- Add computed field `webhook_id` to `GraphModel`
  - Add computed property `webhook_input_node` to `GraphModel`
- Refactor:
  - Move `Node.webhook_id` -> `NodeModel.webhook_id`
  - Move `NodeModel.block` -> `Node.block` (computed property)
  - Replace `get_block(node.block_id)` with `node.block` where sensible

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create and run a simple graph
  - [x] Create a graph with a webhook trigger and ensure it works
- [x] Check out the runs of a webhook-triggered graph and ensure the
page works
2025-04-03 17:31:02 +00:00
Nicholas Tindle
7179f9cea0 feat(backend, libs): Tell uvicorn to use our logger + always log to stdout+stderr (#9742)
<!-- Clearly explain the need for these changes: -->

Uvicorn's logs and ours were ending up in different places; this PR ensures
uvicorn uses our logging config, not its own.

### Changes 🏗️
- Clears uvicorn's loggers for rest and ws
- Always log to stdout/stderr, and additionally log to GCP where appropriate
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test all possible variants of the log cloud vs not and ensure that
uvicorn logs show up in the same place that rest of the system logs do
for all

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-04-03 16:42:20 +00:00
Reinier van der Leer
698af4e16a refactor(frontend): Clean up graph import & export logic (#9717)
- Resolves #9716
- Builds on the work done in #9627

### Changes 🏗️

- Remove `safeCopyGraph`; export directly from backend instead
- Explicitly name sanitization functions for *importing* graphs; move to
`@/lib/autogpt-server-api/utils`
- Amend `BackendAPI.getGraph(..)` to delete `.user_id` if `for_export ==
true`

Out-of-scope improvements:
- Add missing `user_id` to frontend `Graph` types
- Add `UserID` branded type for `User.id` + all `user_id` properties

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create and configure an agent with the Publish To Medium block, a
block that uses credentials, and a webhook trigger
  - Go to `/monitoring` and click the agent you just created
    - [x] -> "Export" button should work
      - [x] -> Exported file contains no credentials or secrets
      - [x] -> Exported file contains no user IDs
      - [x] -> Exported file contains no webhook IDs
2025-04-03 16:17:25 +00:00
Abhimanyu Yadav
7085d88b2c fix(marketplace): Add 58px bottom padding to creator page agents section on large screens (#9738)
- fix #9000 

Currently, we have a 32px bottom padding on the Creator’s page on larger
screens. I have added an extra 58px to make it 90px.
2025-04-03 16:03:50 +00:00
Abhimanyu Yadav
4a82edb0c3 fix(marketplace): Fix margin between divider and section on creators page (#9744)
- fix #8998 

Replace padding with margin top and update UI spacing from 32px to 25px
2025-04-03 16:02:03 +00:00
Abhimanyu Yadav
0fc423fd55 fix(marketplace): Fix margin between arrows and carousel (#9745)
- fix #8958 

Currently, the arrow button and carousel have a 16px margin, and the
button is placed 12px below the top of the container. This makes the
spacing appear to be 28px. Therefore, place the button and indicator at
the top of the container.
2025-04-03 16:01:27 +00:00
Abhimanyu Yadav
adb3263211 fix(marketplace): Reduce margin between search bar and chips to 20px (#9748)
- fix #8955 

Reduce the margin between the search bar and chips from 24px to 20px.
2025-04-03 16:00:28 +00:00
Abhimanyu Yadav
3b5feb2c25 fix(marketplace): Fix store card typography (#9739)
- fix #8965 

### Changes Made:
- **Title**: Increased line height from 20px to 32px.
- **Creator Name:**
   - Changed font to Geist Sans.
   - Updated font size to 20px and leading to 28px.
- **Description**: Applied Geist Sans font.
- **Stats Line:** Applied Geist Sans font.
   - Font Configuration Fix:

> Previously, we were using font-gist, which is not defined in the
tailwind config file, hence it was updated to use font-sans instead.

I have also fixed the height and width of the profile picture in the
creator card in this PR. The issue is linked below:
- #9314

![Screenshot 2025-04-02 at 6 32
10 PM](https://github.com/user-attachments/assets/1c2d9779-0a5e-4269-b3d2-37526a0949d3)

The margin is perfectly set to 24px; only the height and width of the
image need to be changed.

---------

Co-authored-by: Bently <tomnoon9@gmail.com>
2025-04-03 15:55:20 +00:00
Nicholas Tindle
6f3da1b7d0 refactor(backend): move the router files for postmark to not the v2 folder (#9597)
<!-- Clearly explain the need for these changes: -->
One of the pull request review notes from when these were first made is
that they don't belong in the v2 folder. This PR fixes where they live.

### Changes 🏗️
- Moves from v2 to routers for the postmark tooling
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Check that linting and tests pass

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-04-03 15:52:22 +00:00
Nicholas Tindle
6b7c8d5234 fix(backend): handle notification service errors more elegently (#9734)
<!-- Clearly explain the need for these changes: -->
We have logged 272k timeout errors in the past week from the event loop.
Don't raise those as errors.

Also, while diagnosing this, we found that some items were inserted into
batches with incomplete datasets, so handle that too.

### Changes 🏗️
- Handle timeout errors explicitly (see the sketch below)
- Add better messaging for other error types
- Add filtering when queueing bad messages
- Add filtering when reading bad batches
<!-- Concisely describe all of the changes made in this pull request:
-->
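
A minimal sketch of the timeout-handling shape described above (the queue and timeout value are illustrative, not the notification service's actual code):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


async def next_notification(queue: asyncio.Queue, timeout: float = 5.0):
    try:
        return await asyncio.wait_for(queue.get(), timeout=timeout)
    except asyncio.TimeoutError:
        # An idle queue is expected; log a warning instead of raising an error.
        logger.warning("No notification within %.1fs; will retry", timeout)
        return None
```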

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Pull dev db
  - [x] Test new code to check stability + error reduction

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-03 14:57:06 +00:00
Reinier van der Leer
8e912a016f fix(ci/backend): Use Poetry version from lockfile (#9729)
Currently, our CI always uses the latest version of Poetry. This causes
issues with the lockfile check whenever a new Poetry version is
released, especially if that new version has different lockfile
generation behavior.

This new mechanism determines the Poetry version to use as follows:
- Get Poetry version from backend/poetry.lock in the current branch
- Get Poetry version from backend/poetry.lock on the base branch
- Use the newest version out of the two found versions

This way, we don't automatically update to new Poetry versions, but it
is still possible to update to newer versions through pull requests.
2025-04-03 12:43:10 +00:00
Reinier van der Leer
824da5e58c rename autogpt_platform license file 2025-04-03 14:10:44 +02:00
Zamil Majdy
378f49a2d9 fix(frontend): Fix toggle input label & time picker margin 2025-04-03 15:46:51 +04:00
Zamil Majdy
ad303d69d1 fix(frontend): Add border on opened select input-button 2025-04-03 11:44:08 +04:00
Zamil Majdy
200e5814b3 fix(backend): Cleanup service on service closure (#9735)
The cleanup command was only called on SIGTERM, making it possible for
the service to close without being cleaned up, risking connections not
being proactively closed when the service is unused.

### Changes 🏗️

Call the cleanup command in the service's finally block (sketched below).
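
A minimal sketch of the change's shape (the names are illustrative):

```python
def run_service(service):
    try:
        service.run()
    finally:
        service.cleanup()  # runs on any exit path, not only on SIGTERM
```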

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the service, stop it, see the log is printed (locally)
2025-04-02 04:21:40 +00:00
Nicholas Tindle
d879df062e feat(blocks): add a generic webhook block (#9584)
<!-- Clearly explain the need for these changes: -->
I want to be able to insert data into the graph as a webhook from
various services without making a provider-specific webhook for things
like Discord, Slack, uptime bots, etc.

### Changes 🏗️
- Adds a generic webhook block that others can use
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test the endpoint that is generated with a graph, making sure to
pass data and consts to it
2025-04-02 03:22:59 +00:00
dependabot[bot]
6e595e6e28 chore(frontend/deps): bump @sentry/nextjs from 8.54.0 to 9.6.0 in /autogpt_platform/frontend (#9646)
Bumps [@sentry/nextjs](https://github.com/getsentry/sentry-javascript)
from 8.54.0 to 9.6.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/releases"><code>@​sentry/nextjs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>9.6.0</h2>
<h3>Important Changes</h3>
<ul>
<li>
<p><strong>feat(tanstackstart): Add
<code>@sentry/tanstackstart-react</code> package and make
<code>@sentry/tanstackstart</code> package a utility package (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15629">#15629</a>)</strong></p>
<p>Since TanStack Start is supposed to be a generic framework that
supports libraries like React and Solid, the
<code>@sentry/tanstackstart</code> SDK package was renamed to
<code>@sentry/tanstackstart-react</code> to reflect that the SDK is
specifically intended to be used for React TanStack Start applications.
Note that the TanStack Start SDK is still in alpha status and may be
subject to breaking changes in non-major package updates.</p>
</li>
</ul>
<h3>Other Changes</h3>
<ul>
<li>feat(astro): Accept all vite-plugin options (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15638">#15638</a>)</li>
<li>feat(deps): bump <code>@​sentry/webpack-plugin</code> from 3.2.1 to
3.2.2 (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15627">#15627</a>)</li>
<li>feat(tanstackstart): Refine initial API (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15574">#15574</a>)</li>
<li>fix(core): Ensure <code>fill</code> only patches functions (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15632">#15632</a>)</li>
<li>fix(nextjs): Consider <code>pageExtensions</code> when looking for
instrumentation file (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15701">#15701</a>)</li>
<li>fix(remix): Null-check <code>options</code> (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15610">#15610</a>)</li>
<li>fix(sveltekit): Correctly parse angle bracket type assertions for
auto instrumentation (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15578">#15578</a>)</li>
<li>fix(sveltekit): Guard process variable (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15605">#15605</a>)</li>
</ul>
<p>Work in this release was contributed by <a
href="https://github.com/angelikatyborska"><code>@​angelikatyborska</code></a>
and <a
href="https://github.com/nwalters512"><code>@​nwalters512</code></a>.
Thank you for your contributions!</p>
<h2>Bundle size 📦</h2>
<table>
<thead>
<tr>
<th>Path</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>@​sentry/browser</code></td>
<td>23.15 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> - with treeshaking flags</td>
<td>22.94 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing)</td>
<td>36.21 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay)</td>
<td>73.39 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay) - with
treeshaking flags</td>
<td>66.8 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay with
Canvas)</td>
<td>78.01 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay, Feedback)</td>
<td>90.57 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Feedback)</td>
<td>40.3 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. sendFeedback)</td>
<td>27.79 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. FeedbackAsync)</td>
<td>32.58 KB</td>
</tr>
<tr>
<td><code>@​sentry/react</code></td>
<td>24.97 KB</td>
</tr>
<tr>
<td><code>@​sentry/react</code> (incl. Tracing)</td>
<td>38.1 KB</td>
</tr>
<tr>
<td><code>@​sentry/vue</code></td>
<td>27.4 KB</td>
</tr>
<tr>
<td><code>@​sentry/vue</code> (incl. Tracing)</td>
<td>37.9 KB</td>
</tr>
<tr>
<td><code>@​sentry/svelte</code></td>
<td>23.18 KB</td>
</tr>
<tr>
<td>CDN Bundle</td>
<td>24.36 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing)</td>
<td>36.26 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay)</td>
<td>71.27 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay, Feedback)</td>
<td>76.45 KB</td>
</tr>
<tr>
<td>CDN Bundle - uncompressed</td>
<td>71.19 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing) - uncompressed</td>
<td>107.57 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay) - uncompressed</td>
<td>218.84 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed</td>
<td>231.4 KB</td>
</tr>
<tr>
<td><code>@​sentry/nextjs</code> (client)</td>
<td>39.27 KB</td>
</tr>
<tr>
<td><code>@​sentry/sveltekit</code> (client)</td>
<td>36.63 KB</td>
</tr>
</tbody>
</table>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/blob/9.6.0/CHANGELOG.md"><code>@​sentry/nextjs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.6.0</h2>
<h3>Important Changes</h3>
<ul>
<li>
<p><strong>feat(tanstackstart): Add
<code>@sentry/tanstackstart-react</code> package and make
<code>@sentry/tanstackstart</code> package a utility package (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15629">#15629</a>)</strong></p>
<p>Since TanStack Start is supposed to be a generic framework that
supports libraries like React and Solid, the
<code>@sentry/tanstackstart</code> SDK package was renamed to
<code>@sentry/tanstackstart-react</code> to reflect that the SDK is
specifically intended to be used for React TanStack Start applications.
Note that the TanStack Start SDK is still in alpha status and may be
subject to breaking changes in non-major package updates.</p>
</li>
</ul>
<h3>Other Changes</h3>
<ul>
<li>feat(astro): Accept all vite-plugin options (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15638">#15638</a>)</li>
<li>feat(deps): bump <code>@​sentry/webpack-plugin</code> from 3.2.1 to
3.2.2 (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15627">#15627</a>)</li>
<li>feat(tanstackstart): Refine initial API (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15574">#15574</a>)</li>
<li>fix(core): Ensure <code>fill</code> only patches functions (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15632">#15632</a>)</li>
<li>fix(nextjs): Consider <code>pageExtensions</code> when looking for
instrumentation file (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15701">#15701</a>)</li>
<li>fix(remix): Null-check <code>options</code> (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15610">#15610</a>)</li>
<li>fix(sveltekit): Correctly parse angle bracket type assertions for
auto instrumentation (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15578">#15578</a>)</li>
<li>fix(sveltekit): Guard process variable (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15605">#15605</a>)</li>
</ul>
<p>Work in this release was contributed by <a
href="https://github.com/angelikatyborska"><code>@​angelikatyborska</code></a>
and <a
href="https://github.com/nwalters512"><code>@​nwalters512</code></a>.
Thank you for your contributions!</p>
<h2>9.5.0</h2>
<h3>Important Changes</h3>
<p>We found some issues with the new feedback screenshot annotation
where screenshots are not being generated properly. Due to this issue,
we are reverting the feature.</p>
<ul>
<li>Revert &quot;feat(feedback) Allowing annotation via highlighting
&amp; masking (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15484">#15484</a>)&quot;
(<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/15609">#15609</a>)</li>
</ul>
<h3>Other Changes</h3>
<ul>
<li>Add cloudflare adapter detection and path generation (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15603">#15603</a>)</li>
<li>deps(nextjs): Bump rollup to <code>4.34.9</code> (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15589">#15589</a>)</li>
<li>feat(bun): Automatically add performance integrations (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15586">#15586</a>)</li>
<li>feat(replay): Bump rrweb to 2.34.0 (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15580">#15580</a>)</li>
<li>fix(browser): Call original function on early return from patched
history API (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15576">#15576</a>)</li>
<li>fix(nestjs): Copy metadata in custom decorators (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15598">#15598</a>)</li>
<li>fix(react-router): Fix config type import (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15583">#15583</a>)</li>
<li>fix(remix): Use correct types export for
<code>@sentry/remix/cloudflare</code> (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15599">#15599</a>)</li>
<li>fix(vue): Attach Pinia state only once per event (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15588">#15588</a>)</li>
</ul>
<p>Work in this release was contributed by <a
href="https://github.com/msurdi-a8c"><code>@​msurdi-a8c</code></a>, <a
href="https://github.com/namoscato"><code>@​namoscato</code></a>, and <a
href="https://github.com/rileyg98"><code>@​rileyg98</code></a>. Thank
you for your contributions!</p>
<h2>9.4.0</h2>
<ul>
<li>feat(core): Add types for logs protocol and envelope (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15530">#15530</a>)</li>
<li>feat(deps): Bump <code>@sentry/cli</code> from 2.41.1 to 2.42.2 (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15510">#15510</a>)</li>
<li>feat(deps): Bump <code>@sentry/webpack-plugin</code> from 3.1.2 to
3.2.1 (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15512">#15512</a>)</li>
<li>feat(feedback) Allowing annotation via highlighting &amp; masking
(<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15484">#15484</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="6ec4602781"><code>6ec4602</code></a>
release: 9.6.0</li>
<li><a
href="5ba80bc5fd"><code>5ba80bc</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/15703">#15703</a>
from getsentry/prepare-release/9.6.0</li>
<li><a
href="8dc6e50597"><code>8dc6e50</code></a>
Remove unnecessary changelog item</li>
<li><a
href="7889768035"><code>7889768</code></a>
meta(changelog): Update changelog for 9.6.0</li>
<li><a
href="2b5526565c"><code>2b55265</code></a>
fix(nextjs): Consider <code>pageExtensions</code> when looking for
instrumentation file ...</li>
<li><a
href="7d88266a6e"><code>7d88266</code></a>
chore(ci): Remove <code>type</code> from canary failure template (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/15698">#15698</a>)</li>
<li><a
href="48ed271b6d"><code>48ed271</code></a>
chore(deps): bump esbuild from 0.20.0 to 0.25.0 in
/dev-packages/e2e-tests/te...</li>
<li><a
href="e15988c2ad"><code>e15988c</code></a>
chore: Add external contributor to CHANGELOG.md (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/15642">#15642</a>)</li>
<li><a
href="5c4cab7b34"><code>5c4cab7</code></a>
chore(deps): Deduplicate <code>@babel</code> dependencies (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/15639">#15639</a>)</li>
<li><a
href="ce1ced8172"><code>ce1ced8</code></a>
chore: Add external contributor to CHANGELOG.md (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/15640">#15640</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/getsentry/sentry-javascript/compare/8.54.0...9.6.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=@sentry/nextjs&package-manager=npm_and_yarn&previous-version=8.54.0&new-version=9.6.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-02 02:51:36 +00:00
192 changed files with 4139 additions and 2976 deletions

View File

@@ -34,6 +34,7 @@ jobs:
python -m prisma migrate deploy
env:
DATABASE_URL: ${{ secrets.BACKEND_DATABASE_URL }}
DIRECT_URL: ${{ secrets.BACKEND_DATABASE_URL }}
trigger:

View File

@@ -36,6 +36,7 @@ jobs:
python -m prisma migrate deploy
env:
DATABASE_URL: ${{ secrets.BACKEND_DATABASE_URL }}
DIRECT_URL: ${{ secrets.BACKEND_DATABASE_URL }}
trigger:
needs: migrate

View File

@@ -80,18 +80,35 @@ jobs:
- name: Install Poetry (Unix)
run: |
curl -sSL https://install.python-poetry.org | python3 -
# Extract Poetry version from backend/poetry.lock
HEAD_POETRY_VERSION=$(head -n 1 poetry.lock | grep -oP '(?<=Poetry )[0-9]+\.[0-9]+\.[0-9]+')
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
if [ -n "$BASE_REF" ]; then
BASE_BRANCH=${BASE_REF/refs\/heads\//}
BASE_POETRY_VERSION=$((git show "origin/$BASE_BRANCH":./poetry.lock; true) | head -n 1 | grep -oP '(?<=Poetry )[0-9]+\.[0-9]+\.[0-9]+')
echo "Found Poetry version ${BASE_POETRY_VERSION} in backend/poetry.lock on ${BASE_REF}"
POETRY_VERSION=$(printf '%s\n' "$HEAD_POETRY_VERSION" "$BASE_POETRY_VERSION" | sort -V | tail -n1)
else
POETRY_VERSION=$HEAD_POETRY_VERSION
fi
echo "Using Poetry version ${POETRY_VERSION}"
# Install Poetry
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$POETRY_VERSION python3 -
if [ "${{ runner.os }}" = "macOS" ]; then
PATH="$HOME/.local/bin:$PATH"
echo "$HOME/.local/bin" >> $GITHUB_PATH
fi
env:
BASE_REF: ${{ github.base_ref || github.event.merge_group.base_ref }}
- name: Check poetry.lock
run: |
poetry lock
if ! git diff --quiet poetry.lock; then
if ! git diff --quiet --ignore-matching-lines="^# " poetry.lock; then
echo "Error: poetry.lock not up to date."
echo
git diff poetry.lock
@@ -118,6 +135,7 @@ jobs:
run: poetry run prisma migrate dev --name updates
env:
DATABASE_URL: ${{ steps.supabase.outputs.DB_URL }}
DIRECT_URL: ${{ steps.supabase.outputs.DB_URL }}
- id: lint
name: Run Linter
@@ -134,12 +152,13 @@ jobs:
env:
LOG_LEVEL: ${{ runner.debug && 'DEBUG' || 'INFO' }}
DATABASE_URL: ${{ steps.supabase.outputs.DB_URL }}
DIRECT_URL: ${{ steps.supabase.outputs.DB_URL }}
SUPABASE_URL: ${{ steps.supabase.outputs.API_URL }}
SUPABASE_SERVICE_ROLE_KEY: ${{ steps.supabase.outputs.SERVICE_ROLE_KEY }}
SUPABASE_JWT_SECRET: ${{ steps.supabase.outputs.JWT_SECRET }}
REDIS_HOST: 'localhost'
REDIS_PORT: '6379'
REDIS_PASSWORD: 'testpassword'
REDIS_HOST: "localhost"
REDIS_PORT: "6379"
REDIS_PASSWORD: "testpassword"
env:
CI: true
@@ -152,8 +171,8 @@ jobs:
# If you want to replace this, you can do so by making our entire system generate
# new credentials for each local user and update the environment variables in
# the backend service, docker composes, and examples
RABBITMQ_DEFAULT_USER: 'rabbitmq_user_default'
RABBITMQ_DEFAULT_PASS: 'k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7'
RABBITMQ_DEFAULT_USER: "rabbitmq_user_default"
RABBITMQ_DEFAULT_PASS: "k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7"
# - name: Upload coverage reports to Codecov
# uses: codecov/codecov-action@v4
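The Poetry version-selection step earlier in this workflow diff picks whichever of the head- and base-branch lockfile versions is newer via `sort -V | tail -n1`. A minimal sketch of the same selection logic in Python (the function name and the plain `X.Y.Z` version format are assumptions for illustration, not part of the workflow):

```python
# Illustrative sketch of the workflow's version-selection logic; the
# function name is made up and plain X.Y.Z version strings are assumed.
def pick_poetry_version(head_version: str, base_version: str | None) -> str:
    """Return the newer of the two versions, mirroring `sort -V | tail -n1`."""

    def as_tuple(version: str) -> tuple[int, ...]:
        return tuple(int(part) for part in version.split("."))

    if not base_version:
        return head_version
    return max(head_version, base_version, key=as_tuple)


assert pick_poetry_version("2.1.1", "1.8.3") == "2.1.1"
assert pick_poetry_version("1.8.3", None) == "1.8.3"
```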

LICENSE
View File

@@ -1,8 +1,5 @@
All portions of this repository are under one of two licenses.
The all files outside of the autogpt_platform folder are under the MIT License below.
The autogpt_platform folder is under the Polyform Shield License below.
All portions of this repository are under one of two licenses. The majority of the AutoGPT repository is under the MIT License below. The autogpt_platform folder is under the
Polyform Shield License.
MIT License
@@ -30,169 +27,3 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# PolyForm Shield License 1.0.0
<https://polyformproject.org/licenses/shield/1.0.0>
## Acceptance
In order to get any license under these terms, you must agree
to them as both strict obligations and conditions to all
your licenses.
## Copyright License
The licensor grants you a copyright license for the
software to do everything you might do with the software
that would otherwise infringe the licensor's copyright
in it for any permitted purpose. However, you may
only distribute the software according to [Distribution
License](#distribution-license) and make changes or new works
based on the software according to [Changes and New Works
License](#changes-and-new-works-license).
## Distribution License
The licensor grants you an additional copyright license
to distribute copies of the software. Your license
to distribute covers distributing the software with
changes and new works permitted by [Changes and New Works
License](#changes-and-new-works-license).
## Notices
You must ensure that anyone who gets a copy of any part of
the software from you also gets a copy of these terms or the
URL for them above, as well as copies of any plain-text lines
beginning with `Required Notice:` that the licensor provided
with the software. For example:
> Required Notice: Copyright Yoyodyne, Inc. (http://example.com)
## Changes and New Works License
The licensor grants you an additional copyright license to
make changes and new works based on the software for any
permitted purpose.
## Patent License
The licensor grants you a patent license for the software that
covers patent claims the licensor can license, or becomes able
to license, that you would infringe by using the software.
## Noncompete
Any purpose is a permitted purpose, except for providing any
product that competes with the software or any product the
licensor or any of its affiliates provides using the software.
## Competition
Goods and services compete even when they provide functionality
through different kinds of interfaces or for different technical
platforms. Applications can compete with services, libraries
with plugins, frameworks with development tools, and so on,
even if they're written in different programming languages
or for different computer architectures. Goods and services
compete even when provided free of charge. If you market a
product as a practical substitute for the software or another
product, it definitely competes.
## New Products
If you are using the software to provide a product that does
not compete, but the licensor or any of its affiliates brings
your product into competition by providing a new version of
the software or another product using the software, you may
continue using versions of the software available under these
terms beforehand to provide your competing product, but not
any later versions.
## Discontinued Products
You may begin using the software to compete with a product
or service that the licensor or any of its affiliates has
stopped providing, unless the licensor includes a plain-text
line beginning with `Licensor Line of Business:` with the
software that mentions that line of business. For example:
> Licensor Line of Business: YoyodyneCMS Content Management
System (http://example.com/cms)
## Sales of Business
If the licensor or any of its affiliates sells a line of
business developing the software or using the software
to provide a product, the buyer can also enforce
[Noncompete](#noncompete) for that product.
## Fair Use
You may have "fair use" rights for the software under the
law. These terms do not limit them.
## No Other Rights
These terms do not allow you to sublicense or transfer any of
your licenses to anyone else, or prevent the licensor from
granting licenses to anyone else. These terms do not imply
any other licenses.
## Patent Defense
If you make any written claim that the software infringes or
contributes to infringement of any patent, your patent license
for the software granted under these terms ends immediately. If
your company makes such a claim, your patent license ends
immediately for work on behalf of your company.
## Violations
The first time you are notified in writing that you have
violated any of these terms, or done anything with the software
not covered by your licenses, your licenses can nonetheless
continue if you come into full compliance with these terms,
and take practical steps to correct past violations, within
32 days of receiving notice. Otherwise, all your licenses
end immediately.
## No Liability
***As far as the law allows, the software comes as is, without
any warranty or condition, and the licensor will not be liable
to you for any damages arising out of these terms or the use
or nature of the software, under any kind of legal claim.***
## Definitions
The **licensor** is the individual or entity offering these
terms, and the **software** is the software the licensor makes
available under these terms.
A **product** can be a good or service, or a combination
of them.
**You** refers to the individual or entity agreeing to these
terms.
**Your company** is any legal entity, sole proprietorship,
or other kind of organization that you work for, plus all
its affiliates.
**Affiliates** means the other organizations than an
organization has control over, is under the control of, or is
under common control with.
**Control** means ownership of substantially all the assets of
an entity, or the power to direct its management and policies
by vote, contract, or otherwise. Control can be direct or
indirect.
**Your licenses** are all the licenses granted to you for the
software under these terms.
**Use** means anything you do with the software requiring one
of your licenses.

View File

@@ -2,6 +2,7 @@
[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt) &ensp;
[![Twitter Follow](https://img.shields.io/twitter/follow/Auto_GPT?style=social)](https://twitter.com/Auto_GPT) &ensp;
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
**AutoGPT** is a powerful platform that allows you to create, deploy, and manage continuous AI agents that automate complex workflows.
@@ -14,7 +15,11 @@
> Setting up and hosting the AutoGPT Platform yourself is a technical process.
> If you'd rather something that just works, we recommend [joining the waitlist](https://bit.ly/3ZDijAI) for the cloud-hosted beta.
https://github.com/user-attachments/assets/d04273a5-b36a-4a37-818e-f631ce72d603
### Updated Setup Instructions:
We've moved to a fully maintained and regularly updated documentation site.
👉 [Follow the official self-hosting guide here](https://docs.agpt.co/platform/getting-started/)
This tutorial assumes you have Docker, VSCode, git and npm installed.
@@ -79,7 +84,7 @@ Be part of the revolution! **AutoGPT** is here to stay, at the forefront of AI i
**Licensing:**
MIT License: All files outside of autogpt_platform folder are under the MIT License.
MIT License: The majority of the AutoGPT repository is under the MIT License.
Polyform Shield License: This license applies to the autogpt_platform folder.

View File

@@ -2,7 +2,7 @@
**Contributor License Agreement (“Agreement”)**
Thank you for your interest in the AutoGPT project at [https://github.com/Significant-Gravitas/AutoGPT](https://github.com/Significant-Gravitas/AutoGPT) stewarded by Determinist Ltd (“**Determinist**”), with offices at 3rd Floor 1 Ashley Road, Altrincham, Cheshire, WA14 2DT, United Kingdom. The form of license below is a document that clarifies the terms under which You, the person listed below, may contribute software code described below (the “**Contribution**”) to the project. We appreciate your participation in our project, and your help in improving our products, so we want you to understand what will be done with the Contributions. This license is for your protection as well as the protection of Determinist and its licensees; it does not change your rights to use your own Contributions for any other purpose.
Thank you for your interest in the AutoGPT open source project at [https://github.com/Significant-Gravitas/AutoGPT](https://github.com/Significant-Gravitas/AutoGPT) stewarded by Determinist Ltd (“**Determinist**”), with offices at 3rd Floor 1 Ashley Road, Altrincham, Cheshire, WA14 2DT, United Kingdom. The form of license below is a document that clarifies the terms under which You, the person listed below, may contribute software code described below (the “**Contribution**”) to the project. We appreciate your participation in our project, and your help in improving our products, so we want you to understand what will be done with the Contributions. This license is for your protection as well as the protection of Determinist and its licensees; it does not change your rights to use your own Contributions for any other purpose.
By submitting a Pull Request which modifies the content of the “autogpt\_platform” folder at [https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpt\_platform](https://github.com/Significant-Gravitas/AutoGPT/tree/master/autogpt_platform), You hereby agree:

View File

@@ -8,7 +8,7 @@ from pydantic import Field, field_validator
from pydantic_settings import BaseSettings, SettingsConfigDict
from .filters import BelowLevelFilter
from .formatters import AGPTFormatter, StructuredLoggingFormatter
from .formatters import AGPTFormatter
LOG_DIR = Path(__file__).parent.parent.parent.parent / "logs"
LOG_FILE = "activity.log"
@@ -81,9 +81,26 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
"""
config = LoggingConfig()
log_handlers: list[logging.Handler] = []
# Console output handlers
stdout = logging.StreamHandler(stream=sys.stdout)
stdout.setLevel(config.level)
stdout.addFilter(BelowLevelFilter(logging.WARNING))
if config.level == logging.DEBUG:
stdout.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stdout.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
stderr = logging.StreamHandler()
stderr.setLevel(logging.WARNING)
if config.level == logging.DEBUG:
stderr.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stderr.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
log_handlers += [stdout, stderr]
# Cloud logging setup
if config.enable_cloud_logging or force_cloud_logging:
import google.cloud.logging
@@ -97,26 +114,7 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
transport=SyncTransport,
)
cloud_handler.setLevel(config.level)
cloud_handler.setFormatter(StructuredLoggingFormatter())
log_handlers.append(cloud_handler)
else:
# Console output handlers
stdout = logging.StreamHandler(stream=sys.stdout)
stdout.setLevel(config.level)
stdout.addFilter(BelowLevelFilter(logging.WARNING))
if config.level == logging.DEBUG:
stdout.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stdout.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
stderr = logging.StreamHandler()
stderr.setLevel(logging.WARNING)
if config.level == logging.DEBUG:
stderr.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stderr.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
log_handlers += [stdout, stderr]
# File logging setup
if config.enable_file_logging:

View File

@@ -1,7 +1,6 @@
import logging
from colorama import Fore, Style
from google.cloud.logging_v2.handlers import CloudLoggingFilter, StructuredLogHandler
from .utils import remove_color_codes
@@ -80,16 +79,3 @@ class AGPTFormatter(FancyConsoleFormatter):
return remove_color_codes(super().format(record))
else:
return super().format(record)
class StructuredLoggingFormatter(StructuredLogHandler, logging.Formatter):
def __init__(self):
# Set up CloudLoggingFilter to add diagnostic info to the log records
self.cloud_logging_filter = CloudLoggingFilter()
# Init StructuredLogHandler
super().__init__()
def format(self, record: logging.LogRecord) -> str:
self.cloud_logging_filter.filter(record)
return super().format(record)

View File

@@ -2,6 +2,7 @@ import logging
import re
from typing import Any
import uvicorn.config
from colorama import Fore
@@ -25,3 +26,14 @@ def print_attribute(
"color": value_color,
},
)
def generate_uvicorn_config():
"""
Generates a uvicorn logging config that silences uvicorn's default logging and tells it to use the native logging module.
"""
log_config = dict(uvicorn.config.LOGGING_CONFIG)
log_config["loggers"]["uvicorn"] = {"handlers": []}
log_config["loggers"]["uvicorn.error"] = {"handlers": []}
log_config["loggers"]["uvicorn.access"] = {"handlers": []}
return log_config
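
A minimal usage sketch of `generate_uvicorn_config()`: passing the returned dict as `log_config` leaves uvicorn's loggers without handlers, so their records propagate to whatever handlers `configure_logging()` installed on the root logger. The import paths and app reference below are assumptions for illustration, not taken from the diff.

```python
# Hypothetical usage sketch; import paths and the app reference are
# assumptions for illustration, not taken from the diff above.
import uvicorn

from autogpt_libs.logging.config import configure_logging       # assumed path
from autogpt_libs.logging.utils import generate_uvicorn_config  # assumed path

configure_logging()  # install the project's console/cloud/file handlers first
uvicorn.run(
    "backend.server.rest:app",             # assumed ASGI app reference
    host="0.0.0.0",
    port=8000,
    log_config=generate_uvicorn_config(),  # uvicorn's own loggers get no handlers
)
```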

View File

@@ -1,20 +1,59 @@
import inspect
import threading
from typing import Callable, ParamSpec, TypeVar
from typing import Any, Awaitable, Callable, ParamSpec, TypeVar, cast, overload
P = ParamSpec("P")
R = TypeVar("R")
def thread_cached(func: Callable[P, R]) -> Callable[P, R]:
@overload
def thread_cached(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]: ...
@overload
def thread_cached(func: Callable[P, R]) -> Callable[P, R]: ...
def thread_cached(
func: Callable[P, R] | Callable[P, Awaitable[R]],
) -> Callable[P, R] | Callable[P, Awaitable[R]]:
thread_local = threading.local()
def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = (args, tuple(sorted(kwargs.items())))
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
if inspect.iscoroutinefunction(func):
return wrapper
async def async_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = (func, args, tuple(sorted(kwargs.items())))
if key not in cache:
cache[key] = await cast(Callable[P, Awaitable[R]], func)(
*args, **kwargs
)
return cache[key]
return async_wrapper
else:
def sync_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
# Include function in the key to prevent collisions between different functions
key = (func, args, tuple(sorted(kwargs.items())))
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
return sync_wrapper
def clear_thread_cache(func: Callable[..., Any]) -> None:
"""Clear the cache for a thread-cached function."""
thread_local = threading.local()
cache = getattr(thread_local, "cache", None)
if cache is not None:
# Clear all entries that match the function
for key in list(cache.keys()):
if key and len(key) > 0 and key[0] == func:
del cache[key]
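
A minimal usage sketch of the reworked decorator (the decorated functions below are made up; the `autogpt_libs.utils.cache` import path is the one used later in this diff): results are cached per thread and per argument tuple, and the async variant caches the awaited value rather than the coroutine object.

```python
# Illustrative sketch only; the decorated functions are made up.
import asyncio
import threading

from autogpt_libs.utils.cache import thread_cached  # path as imported elsewhere in this diff


@thread_cached
def load_config(name: str) -> dict:
    print(f"loading {name!r} in thread {threading.get_ident()}")
    return {"name": name}


@thread_cached
async def fetch_value(key: str) -> str:
    await asyncio.sleep(0)  # stand-in for real async work
    return key.upper()


async def main() -> None:
    load_config("db")              # prints once
    load_config("db")              # second call is served from the per-thread cache
    print(await fetch_value("x"))  # "X"; the awaited result is cached
    print(await fetch_value("x"))  # cached value, the function body is not awaited again


if __name__ == "__main__":
    asyncio.run(main())
```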

View File

@@ -8,6 +8,7 @@ DB_CONNECT_TIMEOUT=60
DB_POOL_TIMEOUT=300
DB_SCHEMA=platform
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
DIRECT_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?schema=${DB_SCHEMA}&connect_timeout=${DB_CONNECT_TIMEOUT}"
PRISMA_SCHEMA="postgres/schema.prisma"
# EXECUTOR

View File

@@ -73,7 +73,6 @@ FROM server_dependencies AS server
COPY autogpt_platform/backend /app/autogpt_platform/backend
RUN poetry install --no-ansi --only-root
ENV DATABASE_URL=""
ENV PORT=8000
CMD ["poetry", "run", "rest"]

View File

@@ -1,8 +1,6 @@
import logging
from typing import Any
from autogpt_libs.utils.cache import thread_cached
from backend.data.block import (
Block,
BlockCategory,
@@ -19,21 +17,6 @@ from backend.util import json
logger = logging.getLogger(__name__)
@thread_cached
def get_executor_manager_client():
from backend.executor import ExecutionManager
from backend.util.service import get_service_client
return get_service_client(ExecutionManager)
@thread_cached
def get_event_bus():
from backend.data.execution import RedisExecutionEventBus
return RedisExecutionEventBus()
class AgentExecutorBlock(Block):
class Input(BlockSchema):
user_id: str = SchemaField(description="User ID")
@@ -76,11 +59,11 @@ class AgentExecutorBlock(Block):
def run(self, input_data: Input, **kwargs) -> BlockOutput:
from backend.data.execution import ExecutionEventType
from backend.executor import utils as execution_utils
executor_manager = get_executor_manager_client()
event_bus = get_event_bus()
event_bus = execution_utils.get_execution_event_bus()
graph_exec = executor_manager.add_execution(
graph_exec = execution_utils.add_graph_execution(
graph_id=input_data.graph_id,
graph_version=input_data.graph_version,
user_id=input_data.user_id,

View File

@@ -1,7 +1,7 @@
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel
from pydantic import BaseModel, ConfigDict
from backend.data.model import SchemaField
@@ -143,11 +143,12 @@ class ContactEmail(BaseModel):
class EmploymentHistory(BaseModel):
"""An employment history in Apollo"""
class Config:
extra = "allow"
arbitrary_types_allowed = True
from_attributes = True
populate_by_name = True
model_config = ConfigDict(
extra="allow",
arbitrary_types_allowed=True,
from_attributes=True,
populate_by_name=True,
)
_id: Optional[str] = None
created_at: Optional[str] = None
@@ -188,11 +189,12 @@ class TypedCustomField(BaseModel):
class Pagination(BaseModel):
"""Pagination in Apollo"""
class Config:
extra = "allow" # Allow extra fields
arbitrary_types_allowed = True # Allow any type
from_attributes = True # Allow from_orm
populate_by_name = True # Allow field aliases to work both ways
model_config = ConfigDict(
extra="allow",
arbitrary_types_allowed=True,
from_attributes=True,
populate_by_name=True,
)
page: int = 0
per_page: int = 0
@@ -230,11 +232,12 @@ class PhoneNumber(BaseModel):
class Organization(BaseModel):
"""An organization in Apollo"""
class Config:
extra = "allow"
arbitrary_types_allowed = True
from_attributes = True
populate_by_name = True
model_config = ConfigDict(
extra="allow",
arbitrary_types_allowed=True,
from_attributes=True,
populate_by_name=True,
)
id: Optional[str] = "N/A"
name: Optional[str] = "N/A"
@@ -268,11 +271,12 @@ class Organization(BaseModel):
class Contact(BaseModel):
"""A contact in Apollo"""
class Config:
extra = "allow"
arbitrary_types_allowed = True
from_attributes = True
populate_by_name = True
model_config = ConfigDict(
extra="allow",
arbitrary_types_allowed=True,
from_attributes=True,
populate_by_name=True,
)
contact_roles: list[Any] = []
id: Optional[str] = None
@@ -369,14 +373,14 @@ If a company has several office locations, results are still based on the headqu
To exclude companies based on location, use the organization_not_locations parameter.
""",
default=[],
default_factory=list,
)
organizations_not_locations: list[str] = SchemaField(
description="""Exclude companies from search results based on the location of the company headquarters. You can use cities, US states, and countries as locations to exclude.
This parameter is useful for ensuring you do not prospect in an undesirable territory. For example, if you use ireland as a value, no Ireland-based companies will appear in your search results.
""",
default=[],
default_factory=list,
)
q_organization_keyword_tags: list[str] = SchemaField(
description="""Filter search results based on keywords associated with companies. For example, you can enter mining as a value to return only companies that have an association with the mining industry."""
@@ -390,7 +394,7 @@ If the value you enter for this parameter does not match with a company's name,
description="""The Apollo IDs for the companies you want to include in your search results. Each company in the Apollo database is assigned a unique ID.
To find IDs, identify the values for organization_id when you call this endpoint.""",
default=[],
default_factory=list,
)
max_results: int = SchemaField(
description="""The maximum number of results to return. If you don't specify this parameter, the default is 100.""",
@@ -443,14 +447,14 @@ Results also include job titles with the same terms, even if they are not exact
Use this parameter in combination with the person_seniorities[] parameter to find people based on specific job functions and seniority levels.
""",
default=[],
default_factory=list,
placeholder="marketing manager",
)
person_locations: list[str] = SchemaField(
description="""The location where people live. You can search across cities, US states, and countries.
To find people based on the headquarters locations of their current employer, use the organization_locations parameter.""",
default=[],
default_factory=list,
)
person_seniorities: list[SenorityLevels] = SchemaField(
description="""The job seniority that people hold within their current employer. This enables you to find people that currently hold positions at certain reporting levels, such as Director level or senior IC level.
@@ -460,7 +464,7 @@ For a person to be included in search results, they only need to match 1 of the
Searches only return results based on their current job title, so searching for Director-level employees only returns people that currently hold a Director-level title. If someone was previously a Director, but is currently a VP, they would not be included in your search results.
Use this parameter in combination with the person_titles[] parameter to find people based on specific job functions and seniority levels.""",
default=[],
default_factory=list,
)
organization_locations: list[str] = SchemaField(
description="""The location of the company headquarters for a person's current employer. You can search across cities, US states, and countries.
@@ -468,7 +472,7 @@ Use this parameter in combination with the person_titles[] parameter to find peo
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, people that work for the Boston-based company will not appear in your results, even if they match other parameters.
To find people based on their personal location, use the person_locations parameter.""",
default=[],
default_factory=list,
)
q_organization_domains: list[str] = SchemaField(
description="""The domain name for the person's employer. This can be the current employer or a previous employer. Do not include www., the @ symbol, or similar.
@@ -476,23 +480,23 @@ To find people based on their personal location, use the person_locations parame
You can add multiple domains to search across companies.
Examples: apollo.io and microsoft.com""",
default=[],
default_factory=list,
)
contact_email_statuses: list[ContactEmailStatuses] = SchemaField(
description="""The email statuses for the people you want to find. You can add multiple statuses to expand your search.""",
default=[],
default_factory=list,
)
organization_ids: list[str] = SchemaField(
description="""The Apollo IDs for the companies (employers) you want to include in your search results. Each company in the Apollo database is assigned a unique ID.
To find IDs, call the Organization Search endpoint and identify the values for organization_id.""",
default=[],
default_factory=list,
)
organization_num_empoloyees_range: list[int] = SchemaField(
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
default=[],
default_factory=list,
)
q_keywords: str = SchemaField(
description="""A string of words over which we want to filter the results""",
@@ -522,11 +526,12 @@ Use the page parameter to search the different pages of data.""",
class SearchPeopleResponse(BaseModel):
"""Response from Apollo's search people API"""
class Config:
extra = "allow" # Allow extra fields
arbitrary_types_allowed = True # Allow any type
from_attributes = True # Allow from_orm
populate_by_name = True # Allow field aliases to work both ways
model_config = ConfigDict(
extra="allow",
arbitrary_types_allowed=True,
from_attributes=True,
populate_by_name=True,
)
breadcrumbs: list[Breadcrumb] = []
partial_results_only: bool = True
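
The two recurring changes in this file (and in most of the block diffs that follow) are `class Config:` → `model_config = ConfigDict(...)` and mutable `default=[]` / `default={}` → `default_factory`. A minimal sketch of both patterns with a made-up model, using plain pydantic `Field` rather than the project's `SchemaField` wrapper:

```python
# Sketch of the two migration patterns; the Example model is made up and
# plain pydantic Field stands in for the project's SchemaField wrapper.
from pydantic import BaseModel, ConfigDict, Field


class Example(BaseModel):
    # Pydantic v2 style: replaces the nested `class Config:` block.
    model_config = ConfigDict(extra="allow", populate_by_name=True)

    # default_factory builds a fresh container per instance; pydantic already
    # copies mutable defaults, but the factory form makes that intent explicit.
    tags: list[str] = Field(default_factory=list)
    attributes: dict[str, str] = Field(default_factory=dict)


first, second = Example(), Example()
first.tags.append("x")
assert second.tags == []  # each instance gets its own list
```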

View File

@@ -32,18 +32,18 @@ If a company has several office locations, results are still based on the headqu
To exclude companies based on location, use the organization_not_locations parameter.
""",
default=[],
default_factory=list,
)
organizations_not_locations: list[str] = SchemaField(
description="""Exclude companies from search results based on the location of the company headquarters. You can use cities, US states, and countries as locations to exclude.
This parameter is useful for ensuring you do not prospect in an undesirable territory. For example, if you use ireland as a value, no Ireland-based companies will appear in your search results.
""",
default=[],
default_factory=list,
)
q_organization_keyword_tags: list[str] = SchemaField(
description="""Filter search results based on keywords associated with companies. For example, you can enter mining as a value to return only companies that have an association with the mining industry.""",
default=[],
default_factory=list,
)
q_organization_name: str = SchemaField(
description="""Filter search results to include a specific company name.
@@ -56,7 +56,7 @@ If the value you enter for this parameter does not match with a company's name,
description="""The Apollo IDs for the companies you want to include in your search results. Each company in the Apollo database is assigned a unique ID.
To find IDs, identify the values for organization_id when you call this endpoint.""",
default=[],
default_factory=list,
)
max_results: int = SchemaField(
description="""The maximum number of results to return. If you don't specify this parameter, the default is 100.""",
@@ -72,7 +72,7 @@ To find IDs, identify the values for organization_id when you call this endpoint
class Output(BlockSchema):
organizations: list[Organization] = SchemaField(
description="List of organizations found",
default=[],
default_factory=list,
)
organization: Organization = SchemaField(
description="Each found organization, one at a time",

View File

@@ -26,14 +26,14 @@ class SearchPeopleBlock(Block):
Use this parameter in combination with the person_seniorities[] parameter to find people based on specific job functions and seniority levels.
""",
default=[],
default_factory=list,
advanced=False,
)
person_locations: list[str] = SchemaField(
description="""The location where people live. You can search across cities, US states, and countries.
To find people based on the headquarters locations of their current employer, use the organization_locations parameter.""",
default=[],
default_factory=list,
advanced=False,
)
person_seniorities: list[SenorityLevels] = SchemaField(
@@ -44,7 +44,7 @@ class SearchPeopleBlock(Block):
Searches only return results based on their current job title, so searching for Director-level employees only returns people that currently hold a Director-level title. If someone was previously a Director, but is currently a VP, they would not be included in your search results.
Use this parameter in combination with the person_titles[] parameter to find people based on specific job functions and seniority levels.""",
default=[],
default_factory=list,
advanced=False,
)
organization_locations: list[str] = SchemaField(
@@ -53,7 +53,7 @@ class SearchPeopleBlock(Block):
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, people that work for the Boston-based company will not appear in your results, even if they match other parameters.
To find people based on their personal location, use the person_locations parameter.""",
default=[],
default_factory=list,
advanced=False,
)
q_organization_domains: list[str] = SchemaField(
@@ -62,26 +62,26 @@ class SearchPeopleBlock(Block):
You can add multiple domains to search across companies.
Examples: apollo.io and microsoft.com""",
default=[],
default_factory=list,
advanced=False,
)
contact_email_statuses: list[ContactEmailStatuses] = SchemaField(
description="""The email statuses for the people you want to find. You can add multiple statuses to expand your search.""",
default=[],
default_factory=list,
advanced=False,
)
organization_ids: list[str] = SchemaField(
description="""The Apollo IDs for the companies (employers) you want to include in your search results. Each company in the Apollo database is assigned a unique ID.
To find IDs, call the Organization Search endpoint and identify the values for organization_id.""",
default=[],
default_factory=list,
advanced=False,
)
organization_num_empoloyees_range: list[int] = SchemaField(
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
default=[],
default_factory=list,
advanced=False,
)
q_keywords: str = SchemaField(
@@ -104,7 +104,7 @@ class SearchPeopleBlock(Block):
class Output(BlockSchema):
people: list[Contact] = SchemaField(
description="List of people found",
default=[],
default_factory=list,
)
person: Contact = SchemaField(
description="Each found person, one at a time",

View File

@@ -151,7 +151,7 @@ class FindInDictionaryBlock(Block):
class AddToDictionaryBlock(Block):
class Input(BlockSchema):
dictionary: dict[Any, Any] = SchemaField(
default={},
default_factory=dict,
description="The dictionary to add the entry to. If not provided, a new dictionary will be created.",
)
key: str = SchemaField(
@@ -167,7 +167,7 @@ class AddToDictionaryBlock(Block):
advanced=False,
)
entries: dict[Any, Any] = SchemaField(
default={},
default_factory=dict,
description="The entries to add to the dictionary. This is the batch version of the `key` and `value` fields.",
advanced=True,
)
@@ -229,7 +229,7 @@ class AddToDictionaryBlock(Block):
class AddToListBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(
default=[],
default_factory=list,
advanced=False,
description="The list to add the entry to. If not provided, a new list will be created.",
)
@@ -239,7 +239,7 @@ class AddToListBlock(Block):
default=None,
)
entries: List[Any] = SchemaField(
default=[],
default_factory=lambda: list(),
description="The entries to add to the list. This is the batch version of the `entry` field.",
advanced=True,
)

View File

@@ -55,7 +55,7 @@ class CodeExecutionBlock(Block):
"These commands are executed with `sh`, in the foreground."
),
placeholder="pip install cowsay",
default=[],
default_factory=list,
advanced=False,
)
@@ -207,7 +207,7 @@ class InstantiationBlock(Block):
"These commands are executed with `sh`, in the foreground."
),
placeholder="pip install cowsay",
default=[],
default_factory=list,
advanced=False,
)

View File

@@ -34,7 +34,7 @@ class ReadCsvBlock(Block):
)
skip_columns: list[str] = SchemaField(
description="The columns to skip from the start of the row",
default=[],
default_factory=list,
)
class Output(BlockSchema):

View File

@@ -49,7 +49,7 @@ class ExaContentsBlock(Block):
class Output(BlockSchema):
results: list = SchemaField(
description="List of document contents",
default=[],
default_factory=list,
)
error: str = SchemaField(description="Error message if the request failed")

View File

@@ -38,11 +38,11 @@ class ExaSearchBlock(Block):
)
include_domains: List[str] = SchemaField(
description="Domains to include in search",
default=[],
default_factory=list,
)
exclude_domains: List[str] = SchemaField(
description="Domains to exclude from search",
default=[],
default_factory=list,
advanced=True,
)
start_crawl_date: datetime = SchemaField(
@@ -59,12 +59,12 @@ class ExaSearchBlock(Block):
)
include_text: List[str] = SchemaField(
description="Text patterns to include",
default=[],
default_factory=list,
advanced=True,
)
exclude_text: List[str] = SchemaField(
description="Text patterns to exclude",
default=[],
default_factory=list,
advanced=True,
)
contents: ContentSettings = SchemaField(
@@ -76,7 +76,7 @@ class ExaSearchBlock(Block):
class Output(BlockSchema):
results: list = SchemaField(
description="List of search results",
default=[],
default_factory=list,
)
def __init__(self):

View File

@@ -26,12 +26,12 @@ class ExaFindSimilarBlock(Block):
)
include_domains: List[str] = SchemaField(
description="Domains to include in search",
default=[],
default_factory=list,
advanced=True,
)
exclude_domains: List[str] = SchemaField(
description="Domains to exclude from search",
default=[],
default_factory=list,
advanced=True,
)
start_crawl_date: datetime = SchemaField(
@@ -48,12 +48,12 @@ class ExaFindSimilarBlock(Block):
)
include_text: List[str] = SchemaField(
description="Text patterns to include (max 1 string, up to 5 words)",
default=[],
default_factory=list,
advanced=True,
)
exclude_text: List[str] = SchemaField(
description="Text patterns to exclude (max 1 string, up to 5 words)",
default=[],
default_factory=list,
advanced=True,
)
contents: ContentSettings = SchemaField(
@@ -65,7 +65,7 @@ class ExaFindSimilarBlock(Block):
class Output(BlockSchema):
results: List[Any] = SchemaField(
description="List of similar documents with title, URL, published date, author, and score",
default=[],
default_factory=list,
)
def __init__(self):

View File

@@ -42,7 +42,7 @@ class AIVideoGeneratorBlock(Block):
description="Error message if video generation failed."
)
logs: list[str] = SchemaField(
description="Generation progress logs.", optional=True
description="Generation progress logs.",
)
def __init__(self):

View File

@@ -0,0 +1,51 @@
from backend.data.block import (
Block,
BlockCategory,
BlockManualWebhookConfig,
BlockOutput,
BlockSchema,
)
from backend.data.model import SchemaField
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks.generic import GenericWebhookType
class GenericWebhookTriggerBlock(Block):
class Input(BlockSchema):
payload: dict = SchemaField(hidden=True, default_factory=dict)
constants: dict = SchemaField(
description="The constants to be set when the block is put on the graph",
default_factory=dict,
)
class Output(BlockSchema):
payload: dict = SchemaField(
description="The complete webhook payload that was received from the generic webhook."
)
constants: dict = SchemaField(
description="The constants to be set when the block is put on the graph"
)
example_payload = {"message": "Hello, World!"}
def __init__(self):
super().__init__(
id="8fa8c167-2002-47ce-aba8-97572fc5d387",
description="This block will output the contents of the generic input for the webhook.",
categories={BlockCategory.INPUT},
input_schema=GenericWebhookTriggerBlock.Input,
output_schema=GenericWebhookTriggerBlock.Output,
webhook_config=BlockManualWebhookConfig(
provider=ProviderName.GENERIC_WEBHOOK,
webhook_type=GenericWebhookType.PLAIN,
),
test_input={"constants": {"key": "value"}, "payload": self.example_payload},
test_output=[
("constants", {"key": "value"}),
("payload", self.example_payload),
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "constants", input_data.constants
yield "payload", input_data.payload

View File

@@ -37,7 +37,7 @@ class GitHubTriggerBase:
placeholder="{owner}/{repo}",
)
# --8<-- [start:example-payload-field]
payload: dict = SchemaField(hidden=True, default={})
payload: dict = SchemaField(hidden=True, default_factory=dict)
# --8<-- [end:example-payload-field]
class Output(BlockSchema):

View File

@@ -34,7 +34,7 @@ class SendWebRequestBlock(Block):
)
headers: dict[str, str] = SchemaField(
description="The headers to include in the request",
default={},
default_factory=dict,
)
json_format: bool = SchemaField(
title="JSON format",

View File

@@ -15,7 +15,8 @@ class HubSpotCompanyBlock(Block):
description="Operation to perform (create, update, get)", default="get"
)
company_data: dict = SchemaField(
description="Company data for create/update operations", default={}
description="Company data for create/update operations",
default_factory=dict,
)
domain: str = SchemaField(
description="Company domain for get/update operations", default=""

View File

@@ -15,7 +15,8 @@ class HubSpotContactBlock(Block):
description="Operation to perform (create, update, get)", default="get"
)
contact_data: dict = SchemaField(
description="Contact data for create/update operations", default={}
description="Contact data for create/update operations",
default_factory=dict,
)
email: str = SchemaField(
description="Email address for get/update operations", default=""

View File

@@ -19,7 +19,7 @@ class HubSpotEngagementBlock(Block):
)
email_data: dict = SchemaField(
description="Email data including recipient, subject, content",
default={},
default_factory=dict,
)
contact_id: str = SchemaField(
description="Contact ID for engagement tracking", default=""
@@ -27,7 +27,6 @@ class HubSpotEngagementBlock(Block):
timeframe_days: int = SchemaField(
description="Number of days to look back for engagement",
default=30,
optional=True,
)
class Output(BlockSchema):

View File

@@ -1,3 +1,4 @@
import copy
from datetime import date, time
from typing import Any, Optional
@@ -38,7 +39,7 @@ class AgentInputBlock(Block):
)
placeholder_values: list = SchemaField(
description="The placeholder values to be passed as input.",
default=[],
default_factory=list,
advanced=True,
hidden=True,
)
@@ -54,7 +55,7 @@ class AgentInputBlock(Block):
)
def generate_schema(self):
schema = self.get_field_schema("value")
schema = copy.deepcopy(self.get_field_schema("value"))
if possible_values := self.placeholder_values:
schema["enum"] = possible_values
return schema
@@ -467,7 +468,7 @@ class AgentDropdownInputBlock(AgentInputBlock):
)
placeholder_values: list = SchemaField(
description="Possible values for the dropdown.",
default=[],
default_factory=list,
advanced=False,
title="Dropdown Options",
)

View File

@@ -11,13 +11,13 @@ class StepThroughItemsBlock(Block):
advanced=False,
description="The list or dictionary of items to iterate over",
placeholder="[1, 2, 3, 4, 5] or {'key1': 'value1', 'key2': 'value2'}",
default=[],
default_factory=list,
)
items_object: dict = SchemaField(
advanced=False,
description="The list or dictionary of items to iterate over",
placeholder="[1, 2, 3, 4, 5] or {'key1': 'value1', 'key2': 'value2'}",
default={},
default_factory=dict,
)
items_str: str = SchemaField(
advanced=False,

View File

@@ -23,7 +23,7 @@ class JinaChunkingBlock(Block):
class Output(BlockSchema):
chunks: list = SchemaField(description="List of chunked texts")
tokens: list = SchemaField(
description="List of token information for each chunk", optional=True
description="List of token information for each chunk",
)
def __init__(self):

View File

@@ -1,4 +1,4 @@
from groq._utils._utils import quote
from urllib.parse import quote
from backend.blocks.jina._auth import (
TEST_CREDENTIALS,

View File

@@ -28,8 +28,8 @@ class LinearCreateIssueBlock(Block):
priority: int | None = SchemaField(
description="Priority of the issue",
default=None,
minimum=0,
maximum=4,
ge=0,
le=4,
)
project_name: str | None = SchemaField(
description="Name of the project to create the issue on",

View File

@@ -4,30 +4,25 @@ from abc import ABC
from enum import Enum, EnumMeta
from json import JSONDecodeError
from types import MappingProxyType
from typing import TYPE_CHECKING, Any, Iterable, List, Literal, NamedTuple, Optional
from pydantic import BaseModel, SecretStr
from backend.data.model import NodeExecutionStats
from backend.integrations.providers import ProviderName
if TYPE_CHECKING:
from enum import _EnumMemberT
from typing import Any, Iterable, List, Literal, NamedTuple, Optional
import anthropic
import ollama
import openai
from anthropic._types import NotGiven
from anthropic import NotGiven
from anthropic.types import ToolParam
from groq import Groq
from pydantic import BaseModel, SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
NodeExecutionStats,
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util import json
from backend.util.settings import BehaveAs, Settings
from backend.util.text import TextFormatter
@@ -77,12 +72,10 @@ class ModelMetadata(NamedTuple):
class LlmModelMeta(EnumMeta):
@property
def __members__(
self: type["_EnumMemberT"],
) -> MappingProxyType[str, "_EnumMemberT"]:
def __members__(self) -> MappingProxyType:
if Settings().config.behave_as == BehaveAs.LOCAL:
members = super().__members__
return members
return MappingProxyType(members)
else:
removed_providers = ["ollama"]
existing_members = super().__members__
@@ -142,6 +135,8 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
AMAZON_NOVA_PRO_V1 = "amazon/nova-pro-v1"
MICROSOFT_WIZARDLM_2_8X22B = "microsoft/wizardlm-2-8x22b"
GRYPHE_MYTHOMAX_L2_13B = "gryphe/mythomax-l2-13b"
META_LLAMA_4_SCOUT = "meta-llama/llama-4-scout"
META_LLAMA_4_MAVERICK = "meta-llama/llama-4-maverick"
@property
def metadata(self) -> ModelMetadata:
@@ -223,6 +218,8 @@ MODEL_METADATA = {
LlmModel.AMAZON_NOVA_PRO_V1: ModelMetadata("open_router", 300000, 5120),
LlmModel.MICROSOFT_WIZARDLM_2_8X22B: ModelMetadata("open_router", 65536, 4096),
LlmModel.GRYPHE_MYTHOMAX_L2_13B: ModelMetadata("open_router", 4096, 4096),
LlmModel.META_LLAMA_4_SCOUT: ModelMetadata("open_router", 131072, 131072),
LlmModel.META_LLAMA_4_MAVERICK: ModelMetadata("open_router", 1048576, 1000000),
}
for model in LlmModel:
@@ -424,7 +421,7 @@ def llm_call(
response=(
resp.content[0].name
if isinstance(resp.content[0], anthropic.types.ToolUseBlock)
else resp.content[0].text
else getattr(resp.content[0], "text", "")
),
tool_calls=tool_calls,
prompt_tokens=resp.usage.input_tokens,
@@ -528,7 +525,7 @@ def llm_call(
class AIBlockBase(Block, ABC):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.prompt = ""
self.prompt = []
def merge_llm_stats(self, block: "AIBlockBase"):
self.merge_stats(block.execution_stats)
@@ -558,7 +555,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
description="The system prompt to provide additional context to the model.",
)
conversation_history: list[dict] = SchemaField(
default=[],
default_factory=list,
description="The conversation history to provide context for the prompt.",
)
retry: int = SchemaField(
@@ -568,7 +565,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
)
prompt_values: dict[str, str] = SchemaField(
advanced=False,
default={},
default_factory=dict,
description="Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}.",
)
max_tokens: int | None = SchemaField(
@@ -587,7 +584,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
response: dict[str, Any] = SchemaField(
description="The response object generated by the language model."
)
prompt: str = SchemaField(description="The prompt sent to the language model.")
prompt: list = SchemaField(description="The prompt sent to the language model.")
error: str = SchemaField(description="Error message if the API call failed.")
def __init__(self):
@@ -609,7 +606,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
test_credentials=TEST_CREDENTIALS,
test_output=[
("response", {"key1": "key1Value", "key2": "key2Value"}),
("prompt", str),
("prompt", list),
],
test_mock={
"llm_call": lambda *args, **kwargs: LLMResponse(
@@ -642,6 +639,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
Test mocks work only on class functions; this wraps the llm_call function
so that it can be mocked within the block testing framework.
"""
self.prompt = prompt
return llm_call(
credentials=credentials,
llm_model=llm_model,
@@ -796,7 +794,7 @@ class AITextGeneratorBlock(AIBlockBase):
)
prompt_values: dict[str, str] = SchemaField(
advanced=False,
default={},
default_factory=dict,
description="Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}.",
)
ollama_host: str = SchemaField(
@@ -814,7 +812,7 @@ class AITextGeneratorBlock(AIBlockBase):
response: str = SchemaField(
description="The response generated by the language model."
)
prompt: str = SchemaField(description="The prompt sent to the language model.")
prompt: list = SchemaField(description="The prompt sent to the language model.")
error: str = SchemaField(description="Error message if the API call failed.")
def __init__(self):
@@ -831,7 +829,7 @@ class AITextGeneratorBlock(AIBlockBase):
test_credentials=TEST_CREDENTIALS,
test_output=[
("response", "Response text"),
("prompt", str),
("prompt", list),
],
test_mock={"llm_call": lambda *args, **kwargs: "Response text"},
)
@@ -850,7 +848,10 @@ class AITextGeneratorBlock(AIBlockBase):
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
object_input_data = AIStructuredResponseGeneratorBlock.Input(
**{attr: getattr(input_data, attr) for attr in input_data.model_fields},
**{
attr: getattr(input_data, attr)
for attr in AITextGeneratorBlock.Input.model_fields
},
expected_format={},
)
yield "response", self.llm_call(object_input_data, credentials)
@@ -907,7 +908,7 @@ class AITextSummarizerBlock(AIBlockBase):
class Output(BlockSchema):
summary: str = SchemaField(description="The final summary of the text.")
prompt: str = SchemaField(description="The prompt sent to the language model.")
prompt: list = SchemaField(description="The prompt sent to the language model.")
error: str = SchemaField(description="Error message if the API call failed.")
def __init__(self):
@@ -924,7 +925,7 @@ class AITextSummarizerBlock(AIBlockBase):
test_credentials=TEST_CREDENTIALS,
test_output=[
("summary", "Final summary of a long text"),
("prompt", str),
("prompt", list),
],
test_mock={
"llm_call": lambda input_data, credentials: (
@@ -1033,8 +1034,14 @@ class AITextSummarizerBlock(AIBlockBase):
class AIConversationBlock(AIBlockBase):
class Input(BlockSchema):
prompt: str = SchemaField(
description="The prompt to send to the language model.",
placeholder="Enter your prompt here...",
default="",
advanced=False,
)
messages: List[Any] = SchemaField(
description="List of messages in the conversation.", min_length=1
description="List of messages in the conversation.",
)
model: LlmModel = SchemaField(
title="LLM Model",
@@ -1057,7 +1064,7 @@ class AIConversationBlock(AIBlockBase):
response: str = SchemaField(
description="The model's response to the conversation."
)
prompt: str = SchemaField(description="The prompt sent to the language model.")
prompt: list = SchemaField(description="The prompt sent to the language model.")
error: str = SchemaField(description="Error message if the API call failed.")
def __init__(self):
@@ -1086,7 +1093,7 @@ class AIConversationBlock(AIBlockBase):
"response",
"The 2020 World Series was played at Globe Life Field in Arlington, Texas.",
),
("prompt", str),
("prompt", list),
],
test_mock={
"llm_call": lambda *args, **kwargs: "The 2020 World Series was played at Globe Life Field in Arlington, Texas."
@@ -1108,7 +1115,7 @@ class AIConversationBlock(AIBlockBase):
) -> BlockOutput:
response = self.llm_call(
AIStructuredResponseGeneratorBlock.Input(
prompt="",
prompt=input_data.prompt,
credentials=input_data.credentials,
model=input_data.model,
conversation_history=input_data.messages,
@@ -1166,7 +1173,7 @@ class AIListGeneratorBlock(AIBlockBase):
list_item: str = SchemaField(
description="Each individual item in the list.",
)
prompt: str = SchemaField(description="The prompt sent to the language model.")
prompt: list = SchemaField(description="The prompt sent to the language model.")
error: str = SchemaField(
description="Error message if the list generation failed."
)
@@ -1198,7 +1205,7 @@ class AIListGeneratorBlock(AIBlockBase):
"generated_list",
["Zylora Prime", "Kharon-9", "Vortexia", "Oceara", "Draknos"],
),
("prompt", str),
("prompt", list),
("list_item", "Zylora Prime"),
("list_item", "Kharon-9"),
("list_item", "Vortexia"),

View File

@@ -65,7 +65,7 @@ class AddMemoryBlock(Block, Mem0Base):
default=Content(discriminator="content", content="I'm a vegetarian"),
)
metadata: dict[str, Any] = SchemaField(
description="Optional metadata for the memory", default={}
description="Optional metadata for the memory", default_factory=dict
)
limit_memory_to_run: bool = SchemaField(
@@ -173,7 +173,7 @@ class SearchMemoryBlock(Block, Mem0Base):
)
categories_filter: list[str] = SchemaField(
description="Categories to filter by",
default=[],
default_factory=list,
advanced=True,
)
limit_memory_to_run: bool = SchemaField(

View File

@@ -177,7 +177,8 @@ class PineconeInsertBlock(Block):
description="Namespace to use in Pinecone", default=""
)
metadata: dict = SchemaField(
description="Additional metadata to store with each vector", default={}
description="Additional metadata to store with each vector",
default_factory=dict,
)
class Output(BlockSchema):

View File

@@ -26,7 +26,7 @@ class Slant3DTriggerBase:
class Input(BlockSchema):
credentials: Slant3DCredentialsInput = Slant3DCredentialsField()
# Webhook URL is handled by the webhook system
payload: dict = SchemaField(hidden=True, default={})
payload: dict = SchemaField(hidden=True, default_factory=dict)
class Output(BlockSchema):
payload: dict = SchemaField(

View File

@@ -14,7 +14,6 @@ from backend.data.block import (
BlockOutput,
BlockSchema,
BlockType,
get_block,
)
from backend.data.model import SchemaField
from backend.util import json
@@ -155,7 +154,7 @@ class SmartDecisionMakerBlock(Block):
description="The system prompt to provide additional context to the model.",
)
conversation_history: list[dict] = SchemaField(
default=[],
default_factory=list,
description="The conversation history to provide context for the prompt.",
)
last_tool_output: Any = SchemaField(
@@ -169,7 +168,7 @@ class SmartDecisionMakerBlock(Block):
)
prompt_values: dict[str, str] = SchemaField(
advanced=False,
default={},
default_factory=dict,
description="Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}.",
)
max_tokens: int | None = SchemaField(
@@ -264,9 +263,7 @@ class SmartDecisionMakerBlock(Block):
Raises:
ValueError: If the block specified by sink_node.block_id is not found.
"""
block = get_block(sink_node.block_id)
if not block:
raise ValueError(f"Block not found: {sink_node.block_id}")
block = sink_node.block
tool_function: dict[str, Any] = {
"name": re.sub(r"[^a-zA-Z0-9_-]", "_", block.name).lower(),

View File

@@ -112,7 +112,7 @@ class AddLeadToCampaignBlock(Block):
lead_list: list[LeadInput] = SchemaField(
description="An array of JSON objects, each representing a lead's details. Can hold max 100 leads.",
max_length=100,
default=[],
default_factory=list,
advanced=False,
)
settings: LeadUploadSettings = SchemaField(
@@ -248,7 +248,7 @@ class SaveCampaignSequencesBlock(Block):
)
sequences: list[Sequence] = SchemaField(
description="The sequences to save",
default=[],
default_factory=list,
advanced=False,
)
credentials: SmartLeadCredentialsInput = SchemaField(

View File

@@ -39,7 +39,7 @@ class LeadCustomFields(BaseModel):
fields: dict[str, str] = SchemaField(
description="Custom fields for a lead (max 20 fields)",
max_length=20,
default={},
default_factory=dict,
)
@@ -85,7 +85,7 @@ class AddLeadsRequest(BaseModel):
lead_list: list[LeadInput] = SchemaField(
description="List of leads to add to the campaign",
max_length=100,
default=[],
default_factory=list,
)
settings: LeadUploadSettings
campaign_id: int

View File

@@ -156,7 +156,7 @@
# participant_ids: list[str] = SchemaField(
# description="Array of User IDs to create conversation with (max 50)",
# placeholder="Enter participant user IDs",
# default=[],
# default_factory=list,
# advanced=False
# )

View File

@@ -39,7 +39,6 @@ class TwitterGetListBlock(Block):
list_id: str = SchemaField(
description="The ID of the List to lookup",
placeholder="Enter list ID",
required=True,
)
class Output(BlockSchema):
@@ -184,7 +183,6 @@ class TwitterGetOwnedListsBlock(Block):
user_id: str = SchemaField(
description="The user ID whose owned Lists to retrieve",
placeholder="Enter user ID",
required=True,
)
max_results: int | None = SchemaField(

View File

@@ -45,13 +45,11 @@ class TwitterRemoveListMemberBlock(Block):
list_id: str = SchemaField(
description="The ID of the List to remove the member from",
placeholder="Enter list ID",
required=True,
)
user_id: str = SchemaField(
description="The ID of the user to remove from the List",
placeholder="Enter user ID to remove",
required=True,
)
class Output(BlockSchema):
@@ -120,13 +118,11 @@ class TwitterAddListMemberBlock(Block):
list_id: str = SchemaField(
description="The ID of the List to add the member to",
placeholder="Enter list ID",
required=True,
)
user_id: str = SchemaField(
description="The ID of the user to add to the List",
placeholder="Enter user ID to add",
required=True,
)
class Output(BlockSchema):
@@ -195,7 +191,6 @@ class TwitterGetListMembersBlock(Block):
list_id: str = SchemaField(
description="The ID of the List to get members from",
placeholder="Enter list ID",
required=True,
)
max_results: int | None = SchemaField(
@@ -376,7 +371,6 @@ class TwitterGetListMembershipsBlock(Block):
user_id: str = SchemaField(
description="The ID of the user whose List memberships to retrieve",
placeholder="Enter user ID",
required=True,
)
max_results: int | None = SchemaField(

View File

@@ -42,7 +42,6 @@ class TwitterGetListTweetsBlock(Block):
list_id: str = SchemaField(
description="The ID of the List whose Tweets you would like to retrieve",
placeholder="Enter list ID",
required=True,
)
max_results: int | None = SchemaField(

View File

@@ -28,7 +28,6 @@ class TwitterDeleteListBlock(Block):
list_id: str = SchemaField(
description="The ID of the List to be deleted",
placeholder="Enter list ID",
required=True,
)
class Output(BlockSchema):

View File

@@ -39,7 +39,6 @@ class TwitterUnpinListBlock(Block):
list_id: str = SchemaField(
description="The ID of the List to unpin",
placeholder="Enter list ID",
required=True,
)
class Output(BlockSchema):
@@ -103,7 +102,6 @@ class TwitterPinListBlock(Block):
list_id: str = SchemaField(
description="The ID of the List to pin",
placeholder="Enter list ID",
required=True,
)
class Output(BlockSchema):

View File

@@ -44,7 +44,7 @@ class SpaceList(BaseModel):
space_ids: list[str] = SchemaField(
description="List of Space IDs to lookup (up to 100)",
placeholder="Enter Space IDs",
default=[],
default_factory=list,
advanced=False,
)
@@ -54,7 +54,7 @@ class UserList(BaseModel):
user_ids: list[str] = SchemaField(
description="List of user IDs to lookup their Spaces (up to 100)",
placeholder="Enter user IDs",
default=[],
default_factory=list,
advanced=False,
)
@@ -227,7 +227,6 @@ class TwitterGetSpaceByIdBlock(Block):
space_id: str = SchemaField(
description="Space ID to lookup",
placeholder="Enter Space ID",
required=True,
)
class Output(BlockSchema):
@@ -389,7 +388,6 @@ class TwitterGetSpaceBuyersBlock(Block):
space_id: str = SchemaField(
description="Space ID to lookup buyers for",
placeholder="Enter Space ID",
required=True,
)
class Output(BlockSchema):
@@ -517,7 +515,6 @@ class TwitterGetSpaceTweetsBlock(Block):
space_id: str = SchemaField(
description="Space ID to lookup tweets for",
placeholder="Enter Space ID",
required=True,
)
class Output(BlockSchema):

View File

@@ -200,7 +200,7 @@ class UserIdList(BaseModel):
user_ids: list[str] = SchemaField(
description="List of user IDs to lookup (max 100)",
placeholder="Enter user IDs",
default=[],
default_factory=list,
advanced=False,
)
@@ -210,7 +210,7 @@ class UsernameList(BaseModel):
usernames: list[str] = SchemaField(
description="List of Twitter usernames/handles to lookup (max 100)",
placeholder="Enter usernames",
default=[],
default_factory=list,
advanced=False,
)

View File

@@ -8,7 +8,6 @@ import pathlib
import click
import psutil
from backend import app
from backend.util.process import AppProcess
@@ -42,8 +41,13 @@ def write_pid(pid: int):
class MainApp(AppProcess):
def run(self):
from backend import app
app.main(silent=True)
def cleanup(self):
pass
@click.group()
def main():

View File

@@ -12,12 +12,12 @@ async def log_raw_analytics(
data_index: str,
):
details = await prisma.models.AnalyticsDetails.prisma().create(
data={
"userId": user_id,
"type": type,
"data": prisma.Json(data),
"dataIndex": data_index,
}
data=prisma.types.AnalyticsDetailsCreateInput(
userId=user_id,
type=type,
data=prisma.Json(data),
dataIndex=data_index,
)
)
return details
@@ -32,12 +32,12 @@ async def log_raw_metric(
raise ValueError("metric_value must be non-negative")
result = await prisma.models.AnalyticsMetrics.prisma().create(
data={
"value": metric_value,
"analyticMetric": metric_name,
"userId": user_id,
"dataString": data_string,
},
data=prisma.types.AnalyticsMetricsCreateInput(
value=metric_value,
analyticMetric=metric_name,
userId=user_id,
dataString=data_string,
)
)
return result
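This hunk (and the later ones touching blocks, credit, execution, graph, integrations, notifications, and user data) swaps raw dict literals for the generated *CreateInput types from prisma.types. Those generated types are TypedDicts, so keyword construction is checked statically while the runtime value stays an ordinary dict. A self-contained sketch of the same idea; the field set mirrors the hunk above but should be read as illustrative rather than the exact generated schema:

from typing import TypedDict

class AnalyticsDetailsCreateInput(TypedDict):
    userId: str
    type: str
    data: dict
    dataIndex: str

payload = AnalyticsDetailsCreateInput(
    userId="user-123",
    type="page_view",
    data={"path": "/"},
    dataIndex="page_view:/",
)
assert isinstance(payload, dict)  # still a plain dict at runtime
# A misspelled key (e.g. dataIdx=...) is flagged by mypy/pyright, which a
# hand-written dict literal would not catch.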

View File

@@ -17,6 +17,7 @@ from typing import (
import jsonref
import jsonschema
from prisma.models import AgentBlock
from prisma.types import AgentBlockCreateInput
from pydantic import BaseModel
from backend.data.model import NodeExecutionStats
@@ -480,12 +481,12 @@ async def initialize_blocks() -> None:
)
if not existing_block:
await AgentBlock.prisma().create(
data={
"id": block.id,
"name": block.name,
"inputSchema": json.dumps(block.input_schema.jsonschema()),
"outputSchema": json.dumps(block.output_schema.jsonschema()),
}
data=AgentBlockCreateInput(
id=block.id,
name=block.name,
inputSchema=json.dumps(block.input_schema.jsonschema()),
outputSchema=json.dumps(block.output_schema.jsonschema()),
)
)
continue

View File

@@ -75,6 +75,8 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.AMAZON_NOVA_PRO_V1: 1,
LlmModel.MICROSOFT_WIZARDLM_2_8X22B: 1,
LlmModel.GRYPHE_MYTHOMAX_L2_13B: 1,
LlmModel.META_LLAMA_4_SCOUT: 1,
LlmModel.META_LLAMA_4_MAVERICK: 1,
}
for model in LlmModel:

View File

@@ -11,10 +11,15 @@ from prisma.enums import (
CreditRefundRequestStatus,
CreditTransactionType,
NotificationType,
OnboardingStep,
)
from prisma.errors import UniqueViolationError
from prisma.models import CreditRefundRequest, CreditTransaction, User
from prisma.types import CreditTransactionCreateInput, CreditTransactionWhereInput
from prisma.types import (
CreditRefundRequestCreateInput,
CreditTransactionCreateInput,
CreditTransactionWhereInput,
)
from tenacity import retry, stop_after_attempt, wait_exponential
from backend.data import db
@@ -117,6 +122,18 @@ class UserCreditBase(ABC):
"""
pass
@abstractmethod
async def onboarding_reward(self, user_id: str, credits: int, step: OnboardingStep):
"""
Reward the user with credits for completing an onboarding step.
Won't reward if the user has already received credits for the step.
Args:
user_id (str): The user ID.
step (OnboardingStep): The onboarding step.
"""
pass
@abstractmethod
async def top_up_intent(self, user_id: str, amount: int) -> str:
"""
@@ -209,7 +226,7 @@ class UserCreditBase(ABC):
"userId": user_id,
"createdAt": {"lte": top_time},
"isActive": True,
"runningBalance": {"not": None}, # type: ignore
"NOT": [{"runningBalance": None}],
},
order={"createdAt": "desc"},
)
@@ -331,15 +348,15 @@ class UserCreditBase(ABC):
amount = min(-user_balance, 0)
# Create the transaction
transaction_data: CreditTransactionCreateInput = {
"userId": user_id,
"amount": amount,
"runningBalance": user_balance + amount,
"type": transaction_type,
"metadata": metadata,
"isActive": is_active,
"createdAt": self.time_now(),
}
transaction_data = CreditTransactionCreateInput(
userId=user_id,
amount=amount,
runningBalance=user_balance + amount,
type=transaction_type,
metadata=metadata,
isActive=is_active,
createdAt=self.time_now(),
)
if transaction_key:
transaction_data["transactionKey"] = transaction_key
tx = await CreditTransaction.prisma().create(data=transaction_data)
@@ -404,6 +421,24 @@ class UserCredit(UserCreditBase):
async def top_up_credits(self, user_id: str, amount: int):
await self._top_up_credits(user_id, amount)
async def onboarding_reward(self, user_id: str, credits: int, step: OnboardingStep):
key = f"REWARD-{user_id}-{step.value}"
if not await CreditTransaction.prisma().find_first(
where={
"userId": user_id,
"transactionKey": key,
}
):
await self._add_transaction(
user_id=user_id,
amount=credits,
transaction_type=CreditTransactionType.GRANT,
transaction_key=key,
metadata=Json(
{"reason": f"Reward for completing {step.value} onboarding step."}
),
)
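The deterministic REWARD-{user_id}-{step} transaction key above makes the grant idempotent: a repeated call finds the existing transaction and skips the top-up. A toy in-memory sketch of that idempotency-key pattern (a stand-in for the Prisma lookup, not the real persistence layer):

granted: set[str] = set()

def reward_once(user_id: str, step: str, credits: int) -> int:
    key = f"REWARD-{user_id}-{step}"
    if key in granted:  # stands in for find_first(transactionKey=key)
        return 0        # already rewarded; grant nothing
    granted.add(key)
    return credits

assert reward_once("u1", "GET_RESULTS", 300) == 300
assert reward_once("u1", "GET_RESULTS", 300) == 0  # second call is a no-op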
async def top_up_refund(
self, user_id: str, transaction_key: str, metadata: dict[str, str]
) -> int:
@@ -422,15 +457,15 @@ class UserCredit(UserCreditBase):
try:
refund_request = await CreditRefundRequest.prisma().create(
data={
"id": refund_key,
"transactionKey": transaction_key,
"userId": user_id,
"amount": amount,
"reason": metadata.get("reason", ""),
"status": CreditRefundRequestStatus.PENDING,
"result": "The refund request is under review.",
}
data=CreditRefundRequestCreateInput(
id=refund_key,
transactionKey=transaction_key,
userId=user_id,
amount=amount,
reason=metadata.get("reason", ""),
status=CreditRefundRequestStatus.PENDING,
result="The refund request is under review.",
)
)
except UniqueViolationError:
raise ValueError(
@@ -891,6 +926,9 @@ class DisabledUserCredit(UserCreditBase):
async def top_up_credits(self, *args, **kwargs):
pass
async def onboarding_reward(self, *args, **kwargs):
pass
async def top_up_intent(self, *args, **kwargs) -> str:
return ""

View File

@@ -62,10 +62,10 @@ async def connect():
# A connection acquired from a pool like Supabase can still let the db client
# obtain a connection yet reject queries on it afterward.
try:
await prisma.execute_raw("SELECT 1")
except Exception as e:
raise ConnectionError("Failed to connect to Prisma.") from e
# try:
# await prisma.execute_raw("SELECT 1")
# except Exception as e:
# raise ConnectionError("Failed to connect to Prisma.") from e
@conn_retry("Prisma", "Releasing connection")
@@ -89,7 +89,7 @@ async def transaction():
async def locked_transaction(key: str):
lock_key = zlib.crc32(key.encode("utf-8"))
async with transaction() as tx:
await tx.execute_raw(f"SELECT pg_advisory_xact_lock({lock_key})")
await tx.execute_raw("SELECT pg_advisory_xact_lock($1)", lock_key)
yield tx
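The change above stops interpolating the lock key into the SQL text and instead binds it as a query parameter; the key itself is a CRC32 hash of the caller-supplied string so it fits pg_advisory_xact_lock's integer argument. A small sketch of the key derivation (pure Python, runnable on its own):

import zlib

def advisory_lock_key(key: str) -> int:
    # Map an arbitrary string key onto a stable 32-bit integer.
    return zlib.crc32(key.encode("utf-8"))

assert advisory_lock_key("usr_trx_42-reward") == advisory_lock_key("usr_trx_42-reward")
# Used as in the hunk above:
#   await tx.execute_raw("SELECT pg_advisory_xact_lock($1)", lock_key)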

View File

@@ -23,7 +23,10 @@ from prisma.models import (
AgentNodeExecutionInputOutput,
)
from prisma.types import (
AgentGraphExecutionCreateInput,
AgentGraphExecutionWhereInput,
AgentNodeExecutionCreateInput,
AgentNodeExecutionInputOutputCreateInput,
AgentNodeExecutionUpdateInput,
AgentNodeExecutionWhereInput,
)
@@ -31,11 +34,10 @@ from pydantic import BaseModel
from pydantic.fields import Field
from backend.server.v2.store.exceptions import DatabaseError
from backend.util import mock
from backend.util import type as type_utils
from backend.util.settings import Config
from .block import BlockData, BlockInput, BlockType, CompletedBlockOutput, get_block
from .block import BlockInput, BlockType, CompletedBlockOutput, get_block
from .db import BaseDbModel
from .includes import (
EXECUTION_RESULT_INCLUDE,
@@ -59,23 +61,27 @@ ExecutionStatus = AgentExecutionStatus
class GraphExecutionMeta(BaseDbModel):
user_id: str
started_at: datetime
ended_at: datetime
cost: Optional[int] = Field(..., description="Execution cost in credits")
duration: float = Field(..., description="Seconds from start to end of run")
total_run_time: float = Field(..., description="Seconds of node runtime")
status: ExecutionStatus
graph_id: str
graph_version: int
preset_id: Optional[str] = None
status: ExecutionStatus
started_at: datetime
ended_at: datetime
class Stats(BaseModel):
cost: int = Field(..., description="Execution cost (cents)")
duration: float = Field(..., description="Seconds from start to end of run")
node_exec_time: float = Field(..., description="Seconds of total node runtime")
node_exec_count: int = Field(..., description="Number of node executions")
stats: Stats | None
@staticmethod
def from_db(_graph_exec: AgentGraphExecution):
now = datetime.now(timezone.utc)
# TODO: make started_at and ended_at optional
start_time = _graph_exec.startedAt or _graph_exec.createdAt
end_time = _graph_exec.updatedAt or now
duration = (end_time - start_time).total_seconds()
total_run_time = duration
try:
stats = GraphExecutionStats.model_validate(_graph_exec.stats)
@@ -87,21 +93,25 @@ class GraphExecutionMeta(BaseDbModel):
)
stats = None
duration = stats.walltime if stats else duration
total_run_time = stats.nodes_walltime if stats else total_run_time
return GraphExecutionMeta(
id=_graph_exec.id,
user_id=_graph_exec.userId,
started_at=start_time,
ended_at=end_time,
cost=stats.cost if stats else None,
duration=duration,
total_run_time=total_run_time,
status=ExecutionStatus(_graph_exec.executionStatus),
graph_id=_graph_exec.agentGraphId,
graph_version=_graph_exec.agentGraphVersion,
preset_id=_graph_exec.agentPresetId,
status=ExecutionStatus(_graph_exec.executionStatus),
started_at=start_time,
ended_at=end_time,
stats=(
GraphExecutionMeta.Stats(
cost=stats.cost,
duration=stats.walltime,
node_exec_time=stats.nodes_walltime,
node_exec_count=stats.node_count,
)
if stats
else None
),
)
@@ -111,15 +121,16 @@ class GraphExecution(GraphExecutionMeta):
@staticmethod
def from_db(_graph_exec: AgentGraphExecution):
if _graph_exec.AgentNodeExecutions is None:
if _graph_exec.NodeExecutions is None:
raise ValueError("Node executions must be included in query")
graph_exec = GraphExecutionMeta.from_db(_graph_exec)
node_executions = sorted(
complete_node_executions = sorted(
[
NodeExecutionResult.from_db(ne, _graph_exec.userId)
for ne in _graph_exec.AgentNodeExecutions
for ne in _graph_exec.NodeExecutions
if ne.executionStatus != ExecutionStatus.INCOMPLETE
],
key=lambda ne: (ne.queue_time is None, ne.queue_time or ne.add_time),
)
@@ -128,7 +139,7 @@ class GraphExecution(GraphExecutionMeta):
**{
# inputs from Agent Input Blocks
exec.input_data["name"]: exec.input_data.get("value")
for exec in node_executions
for exec in complete_node_executions
if (
(block := get_block(exec.block_id))
and block.block_type == BlockType.INPUT
@@ -137,7 +148,7 @@ class GraphExecution(GraphExecutionMeta):
**{
# input from webhook-triggered block
"payload": exec.input_data["payload"]
for exec in node_executions
for exec in complete_node_executions
if (
(block := get_block(exec.block_id))
and block.block_type
@@ -147,7 +158,7 @@ class GraphExecution(GraphExecutionMeta):
}
outputs: CompletedBlockOutput = defaultdict(list)
for exec in node_executions:
for exec in complete_node_executions:
if (
block := get_block(exec.block_id)
) and block.block_type == BlockType.OUTPUT:
@@ -158,7 +169,7 @@ class GraphExecution(GraphExecutionMeta):
return GraphExecution(
**{
field_name: getattr(graph_exec, field_name)
for field_name in graph_exec.model_fields
for field_name in GraphExecutionMeta.model_fields
},
inputs=inputs,
outputs=outputs,
@@ -170,7 +181,7 @@ class GraphExecutionWithNodes(GraphExecution):
@staticmethod
def from_db(_graph_exec: AgentGraphExecution):
if _graph_exec.AgentNodeExecutions is None:
if _graph_exec.NodeExecutions is None:
raise ValueError("Node executions must be included in query")
graph_exec_with_io = GraphExecution.from_db(_graph_exec)
@@ -178,7 +189,7 @@ class GraphExecutionWithNodes(GraphExecution):
node_executions = sorted(
[
NodeExecutionResult.from_db(ne, _graph_exec.userId)
for ne in _graph_exec.AgentNodeExecutions
for ne in _graph_exec.NodeExecutions
],
key=lambda ne: (ne.queue_time is None, ne.queue_time or ne.add_time),
)
@@ -186,11 +197,31 @@ class GraphExecutionWithNodes(GraphExecution):
return GraphExecutionWithNodes(
**{
field_name: getattr(graph_exec_with_io, field_name)
for field_name in graph_exec_with_io.model_fields
for field_name in GraphExecution.model_fields
},
node_executions=node_executions,
)
def to_graph_execution_entry(self):
return GraphExecutionEntry(
user_id=self.user_id,
graph_id=self.graph_id,
graph_version=self.graph_version or 0,
graph_exec_id=self.id,
start_node_execs=[
NodeExecutionEntry(
user_id=self.user_id,
graph_exec_id=node_exec.graph_exec_id,
graph_id=node_exec.graph_id,
node_exec_id=node_exec.node_exec_id,
node_id=node_exec.node_id,
block_id=node_exec.block_id,
data=node_exec.input_data,
)
for node_exec in self.node_executions
],
)
class NodeExecutionResult(BaseModel):
user_id: str
@@ -209,21 +240,21 @@ class NodeExecutionResult(BaseModel):
end_time: datetime | None
@staticmethod
def from_db(execution: AgentNodeExecution, user_id: Optional[str] = None):
if execution.executionData:
def from_db(_node_exec: AgentNodeExecution, user_id: Optional[str] = None):
if _node_exec.executionData:
# An execution that has been queued will have its data persisted.
input_data = type_utils.convert(execution.executionData, dict[str, Any])
input_data = type_utils.convert(_node_exec.executionData, dict[str, Any])
else:
# For an incomplete execution, executionData will not yet be available.
input_data: BlockInput = defaultdict()
for data in execution.Input or []:
for data in _node_exec.Input or []:
input_data[data.name] = type_utils.convert(data.data, type[Any])
output_data: CompletedBlockOutput = defaultdict(list)
for data in execution.Output or []:
for data in _node_exec.Output or []:
output_data[data.name].append(type_utils.convert(data.data, type[Any]))
graph_execution: AgentGraphExecution | None = execution.AgentGraphExecution
graph_execution: AgentGraphExecution | None = _node_exec.GraphExecution
if graph_execution:
user_id = graph_execution.userId
elif not user_id:
@@ -235,17 +266,17 @@ class NodeExecutionResult(BaseModel):
user_id=user_id,
graph_id=graph_execution.agentGraphId if graph_execution else "",
graph_version=graph_execution.agentGraphVersion if graph_execution else 0,
graph_exec_id=execution.agentGraphExecutionId,
block_id=execution.AgentNode.agentBlockId if execution.AgentNode else "",
node_exec_id=execution.id,
node_id=execution.agentNodeId,
status=execution.executionStatus,
graph_exec_id=_node_exec.agentGraphExecutionId,
block_id=_node_exec.Node.agentBlockId if _node_exec.Node else "",
node_exec_id=_node_exec.id,
node_id=_node_exec.agentNodeId,
status=_node_exec.executionStatus,
input_data=input_data,
output_data=output_data,
add_time=execution.addedTime,
queue_time=execution.queuedTime,
start_time=execution.startedTime,
end_time=execution.endedTime,
add_time=_node_exec.addedTime,
queue_time=_node_exec.queuedTime,
start_time=_node_exec.startedTime,
end_time=_node_exec.endedTime,
)
@@ -340,29 +371,29 @@ async def create_graph_execution(
The id of the AgentGraphExecution and the list of ExecutionResult for each node.
"""
result = await AgentGraphExecution.prisma().create(
data={
"agentGraphId": graph_id,
"agentGraphVersion": graph_version,
"executionStatus": ExecutionStatus.QUEUED,
"AgentNodeExecutions": {
"create": [ # type: ignore
{
"agentNodeId": node_id,
"executionStatus": ExecutionStatus.QUEUED,
"queuedTime": datetime.now(tz=timezone.utc),
"Input": {
data=AgentGraphExecutionCreateInput(
agentGraphId=graph_id,
agentGraphVersion=graph_version,
executionStatus=ExecutionStatus.QUEUED,
NodeExecutions={
"create": [
AgentNodeExecutionCreateInput(
agentNodeId=node_id,
executionStatus=ExecutionStatus.QUEUED,
queuedTime=datetime.now(tz=timezone.utc),
Input={
"create": [
{"name": name, "data": Json(data)}
for name, data in node_input.items()
]
},
}
)
for node_id, node_input in nodes_input
]
},
"userId": user_id,
"agentPresetId": preset_id,
},
userId=user_id,
agentPresetId=preset_id,
),
include=GRAPH_EXECUTION_INCLUDE_WITH_NODES,
)
@@ -409,11 +440,11 @@ async def upsert_execution_input(
if existing_execution:
await AgentNodeExecutionInputOutput.prisma().create(
data={
"name": input_name,
"data": json_input_data,
"referencedByInputExecId": existing_execution.id,
}
data=AgentNodeExecutionInputOutputCreateInput(
name=input_name,
data=json_input_data,
referencedByInputExecId=existing_execution.id,
)
)
return existing_execution.id, {
**{
@@ -425,12 +456,12 @@ async def upsert_execution_input(
elif not node_exec_id:
result = await AgentNodeExecution.prisma().create(
data={
"agentNodeId": node_id,
"agentGraphExecutionId": graph_exec_id,
"executionStatus": ExecutionStatus.INCOMPLETE,
"Input": {"create": {"name": input_name, "data": json_input_data}},
}
data=AgentNodeExecutionCreateInput(
agentNodeId=node_id,
agentGraphExecutionId=graph_exec_id,
executionStatus=ExecutionStatus.INCOMPLETE,
Input={"create": {"name": input_name, "data": json_input_data}},
)
)
return result.id, {input_name: input_data}
@@ -449,27 +480,35 @@ async def upsert_execution_output(
Insert an AgentNodeExecutionInputOutput record as one of AgentNodeExecution.Output.
"""
await AgentNodeExecutionInputOutput.prisma().create(
data={
"name": output_name,
"data": Json(output_data),
"referencedByOutputExecId": node_exec_id,
}
data=AgentNodeExecutionInputOutputCreateInput(
name=output_name,
data=Json(output_data),
referencedByOutputExecId=node_exec_id,
)
)
async def update_graph_execution_start_time(graph_exec_id: str) -> GraphExecution:
res = await AgentGraphExecution.prisma().update(
where={"id": graph_exec_id},
async def update_graph_execution_start_time(
graph_exec_id: str,
) -> GraphExecution | None:
count = await AgentGraphExecution.prisma().update_many(
where={
"id": graph_exec_id,
"executionStatus": ExecutionStatus.QUEUED,
},
data={
"executionStatus": ExecutionStatus.RUNNING,
"startedAt": datetime.now(tz=timezone.utc),
},
)
if count == 0:
return None
res = await AgentGraphExecution.prisma().find_unique(
where={"id": graph_exec_id},
include=GRAPH_EXECUTION_INCLUDE,
)
if not res:
raise ValueError(f"Graph execution #{graph_exec_id} not found")
return GraphExecution.from_db(res)
return GraphExecution.from_db(res) if res else None
async def update_graph_execution_stats(
@@ -480,7 +519,8 @@ async def update_graph_execution_stats(
data = stats.model_dump() if stats else {}
if isinstance(data.get("error"), Exception):
data["error"] = str(data["error"])
res = await AgentGraphExecution.prisma().update(
updated_count = await AgentGraphExecution.prisma().update_many(
where={
"id": graph_exec_id,
"OR": [
@@ -492,10 +532,15 @@ async def update_graph_execution_stats(
"executionStatus": status,
"stats": Json(data),
},
)
if updated_count == 0:
return None
graph_exec = await AgentGraphExecution.prisma().find_unique_or_raise(
where={"id": graph_exec_id},
include=GRAPH_EXECUTION_INCLUDE,
)
return GraphExecution.from_db(res) if res else None
return GraphExecution.from_db(graph_exec)
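Both update_graph_execution_start_time and update_graph_execution_stats now use update_many with the expected current status in the WHERE clause, which turns the update into a compare-and-set: if another worker already moved the execution out of the expected state, zero rows match and the caller backs off. A toy in-memory stand-in for that guarded transition (not the Prisma call itself):

from dataclasses import dataclass
from typing import Optional

@dataclass
class ExecRow:
    id: str
    status: str  # "QUEUED" | "RUNNING" | ...

STORE: dict[str, ExecRow] = {"g1": ExecRow("g1", "QUEUED")}

def mark_running(exec_id: str) -> Optional[ExecRow]:
    row = STORE.get(exec_id)
    if row is None or row.status != "QUEUED":  # WHERE id = $1 AND status = 'QUEUED'
        return None                            # 0 rows updated, so the caller skips the run
    row.status = "RUNNING"
    return row

assert mark_running("g1") is not None  # first transition wins
assert mark_running("g1") is None      # repeat attempt no longer matches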
async def update_node_execution_stats(node_exec_id: str, stats: NodeExecutionStats):
@@ -589,7 +634,7 @@ async def get_node_execution_results(
"agentGraphExecutionId": graph_exec_id,
}
if block_ids:
where_clause["AgentNode"] = {"is": {"agentBlockId": {"in": block_ids}}}
where_clause["Node"] = {"is": {"agentBlockId": {"in": block_ids}}}
if statuses:
where_clause["OR"] = [{"executionStatus": status} for status in statuses]
@@ -631,7 +676,7 @@ async def get_latest_node_execution(
where={
"agentNodeId": node_id,
"agentGraphExecutionId": graph_eid,
"executionStatus": {"not": ExecutionStatus.INCOMPLETE}, # type: ignore
"NOT": [{"executionStatus": ExecutionStatus.INCOMPLETE}],
},
order=[
{"queuedTime": "desc"},
@@ -699,144 +744,6 @@ class ExecutionQueue(Generic[T]):
return self.queue.empty()
# ------------------- Execution Utilities -------------------- #
LIST_SPLIT = "_$_"
DICT_SPLIT = "_#_"
OBJC_SPLIT = "_@_"
def parse_execution_output(output: BlockData, name: str) -> Any | None:
"""
Extracts partial output data by name from a given BlockData.
The function supports extracting data from lists, dictionaries, and objects
using specific naming conventions:
- For lists: <output_name>_$_<index>
- For dictionaries: <output_name>_#_<key>
- For objects: <output_name>_@_<attribute>
Args:
output (BlockData): A tuple containing the output name and data.
name (str): The name used to extract specific data from the output.
Returns:
Any | None: The extracted data if found, otherwise None.
Examples:
>>> output = ("result", [10, 20, 30])
>>> parse_execution_output(output, "result_$_1")
20
>>> output = ("config", {"key1": "value1", "key2": "value2"})
>>> parse_execution_output(output, "config_#_key1")
'value1'
>>> class Sample:
... attr1 = "value1"
... attr2 = "value2"
>>> output = ("object", Sample())
>>> parse_execution_output(output, "object_@_attr1")
'value1'
"""
output_name, output_data = output
if name == output_name:
return output_data
if name.startswith(f"{output_name}{LIST_SPLIT}"):
index = int(name.split(LIST_SPLIT)[1])
if not isinstance(output_data, list) or len(output_data) <= index:
return None
return output_data[int(name.split(LIST_SPLIT)[1])]
if name.startswith(f"{output_name}{DICT_SPLIT}"):
index = name.split(DICT_SPLIT)[1]
if not isinstance(output_data, dict) or index not in output_data:
return None
return output_data[index]
if name.startswith(f"{output_name}{OBJC_SPLIT}"):
index = name.split(OBJC_SPLIT)[1]
if isinstance(output_data, object) and hasattr(output_data, index):
return getattr(output_data, index)
return None
return None
def merge_execution_input(data: BlockInput) -> BlockInput:
"""
Merges dynamic input pins into a single list, dictionary, or object based on naming patterns.
This function processes input keys that follow specific patterns to merge them into a unified structure:
- `<input_name>_$_<index>` for list inputs.
- `<input_name>_#_<index>` for dictionary inputs.
- `<input_name>_@_<index>` for object inputs.
Args:
data (BlockInput): A dictionary containing input keys and their corresponding values.
Returns:
BlockInput: A dictionary with merged inputs.
Raises:
ValueError: If a list index is not an integer.
Examples:
>>> data = {
... "list_$_0": "a",
... "list_$_1": "b",
... "dict_#_key1": "value1",
... "dict_#_key2": "value2",
... "object_@_attr1": "value1",
... "object_@_attr2": "value2"
... }
>>> merge_execution_input(data)
{
"list": ["a", "b"],
"dict": {"key1": "value1", "key2": "value2"},
"object": <MockObject attr1="value1" attr2="value2">
}
"""
# Merge all input with <input_name>_$_<index> into a single list.
items = list(data.items())
for key, value in items:
if LIST_SPLIT not in key:
continue
name, index = key.split(LIST_SPLIT)
if not index.isdigit():
raise ValueError(f"Invalid key: {key}, #{index} index must be an integer.")
data[name] = data.get(name, [])
if int(index) >= len(data[name]):
# Pad list with empty string on missing indices.
data[name].extend([""] * (int(index) - len(data[name]) + 1))
data[name][int(index)] = value
# Merge all input with <input_name>_#_<index> into a single dict.
for key, value in items:
if DICT_SPLIT not in key:
continue
name, index = key.split(DICT_SPLIT)
data[name] = data.get(name, {})
data[name][index] = value
# Merge all input with <input_name>_@_<index> into a single object.
for key, value in items:
if OBJC_SPLIT not in key:
continue
name, index = key.split(OBJC_SPLIT)
if name not in data or not isinstance(data[name], object):
data[name] = mock.MockObject()
setattr(data[name], index, value)
return data
# --------------------- Event Bus --------------------- #

View File

@@ -1,13 +1,18 @@
import logging
import uuid
from collections import defaultdict
from typing import Any, Literal, Optional, Type
from typing import Any, Literal, Optional, Type, cast
import prisma
from prisma import Json
from prisma.enums import SubmissionStatus
from prisma.models import AgentGraph, AgentNode, AgentNodeLink, StoreListingVersion
from prisma.types import AgentGraphWhereInput
from prisma.types import (
AgentGraphCreateInput,
AgentGraphWhereInput,
AgentNodeCreateInput,
AgentNodeLinkCreateInput,
)
from pydantic.fields import computed_field
from backend.blocks.agent import AgentExecutorBlock
@@ -53,22 +58,23 @@ class Node(BaseDbModel):
input_links: list[Link] = []
output_links: list[Link] = []
webhook_id: Optional[str] = None
@property
def block(self) -> Block[BlockSchema, BlockSchema]:
block = get_block(self.block_id)
if not block:
raise ValueError(
f"Block #{self.block_id} does not exist -> Node #{self.id} is invalid"
)
return block
class NodeModel(Node):
graph_id: str
graph_version: int
webhook_id: Optional[str] = None
webhook: Optional[Webhook] = None
@property
def block(self) -> Block[BlockSchema, BlockSchema]:
block = get_block(self.block_id)
if not block:
raise ValueError(f"Block #{self.block_id} does not exist")
return block
@staticmethod
def from_db(node: AgentNode, for_export: bool = False) -> "NodeModel":
obj = NodeModel(
@@ -88,8 +94,7 @@ class NodeModel(Node):
return obj
def is_triggered_by_event_type(self, event_type: str) -> bool:
if not (block := get_block(self.block_id)):
raise ValueError(f"Block #{self.block_id} not found for node #{self.id}")
block = self.block
if not block.webhook_config:
raise TypeError("This method can't be used on non-webhook blocks")
if not block.webhook_config.event_filter_input:
@@ -166,11 +171,10 @@ class BaseGraph(BaseDbModel):
def input_schema(self) -> dict[str, Any]:
return self._generate_schema(
*(
(b.input_schema, node.input_default)
(block.input_schema, node.input_default)
for node in self.nodes
if (b := get_block(node.block_id))
and b.block_type == BlockType.INPUT
and issubclass(b.input_schema, AgentInputBlock.Input)
if (block := node.block).block_type == BlockType.INPUT
and issubclass(block.input_schema, AgentInputBlock.Input)
)
)
@@ -179,11 +183,10 @@ class BaseGraph(BaseDbModel):
def output_schema(self) -> dict[str, Any]:
return self._generate_schema(
*(
(b.input_schema, node.input_default)
(block.input_schema, node.input_default)
for node in self.nodes
if (b := get_block(node.block_id))
and b.block_type == BlockType.OUTPUT
and issubclass(b.input_schema, AgentOutputBlock.Input)
if (block := node.block).block_type == BlockType.OUTPUT
and issubclass(block.input_schema, AgentOutputBlock.Input)
)
)
@@ -228,13 +231,16 @@ class GraphModel(Graph):
user_id: str
nodes: list[NodeModel] = [] # type: ignore
@computed_field
@property
def starting_nodes(self) -> list[Node]:
def has_webhook_trigger(self) -> bool:
return self.webhook_input_node is not None
@property
def starting_nodes(self) -> list[NodeModel]:
outbound_nodes = {link.sink_id for link in self.links}
input_nodes = {
v.id
for v in self.nodes
if (b := get_block(v.block_id)) and b.block_type == BlockType.INPUT
node.id for node in self.nodes if node.block.block_type == BlockType.INPUT
}
return [
node
@@ -242,6 +248,18 @@ class GraphModel(Graph):
if node.id not in outbound_nodes or node.id in input_nodes
]
@property
def webhook_input_node(self) -> NodeModel | None:
return next(
(
node
for node in self.nodes
if node.block.block_type
in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL)
),
None,
)
def reassign_ids(self, user_id: str, reassign_graph_id: bool = False):
"""
Reassigns all IDs in the graph to new UUIDs.
@@ -391,9 +409,7 @@ class GraphModel(Graph):
node_map = {v.id: v for v in graph.nodes}
def is_static_output_block(nid: str) -> bool:
bid = node_map[nid].block_id
b = get_block(bid)
return b.static_output if b else False
return node_map[nid].block.static_output
# Links: links are connected and the connected pin data type are compatible.
for link in graph.links:
@@ -449,13 +465,11 @@ class GraphModel(Graph):
is_active=graph.isActive,
name=graph.name or "",
description=graph.description or "",
nodes=[
NodeModel.from_db(node, for_export) for node in graph.AgentNodes or []
],
nodes=[NodeModel.from_db(node, for_export) for node in graph.Nodes or []],
links=list(
{
Link.from_db(link)
for node in graph.AgentNodes or []
for node in graph.Nodes or []
for link in (node.Input or []) + (node.Output or [])
}
),
@@ -586,8 +600,8 @@ async def get_graph(
and not (
await StoreListingVersion.prisma().find_first(
where={
"agentId": graph_id,
"agentVersion": version or graph.version,
"agentGraphId": graph_id,
"agentGraphVersion": version or graph.version,
"isDeleted": False,
"submissionStatus": SubmissionStatus.APPROVED,
}
@@ -621,12 +635,16 @@ async def get_sub_graphs(graph: AgentGraph) -> list[AgentGraph]:
sub_graph_ids = [
(graph_id, graph_version)
for graph in search_graphs
for node in graph.AgentNodes or []
for node in graph.Nodes or []
if (
node.AgentBlock
and node.AgentBlock.id == agent_block_id
and (graph_id := dict(node.constantInput).get("graph_id"))
and (graph_version := dict(node.constantInput).get("graph_version"))
and (graph_id := cast(str, dict(node.constantInput).get("graph_id")))
and (
graph_version := cast(
int, dict(node.constantInput).get("graph_version")
)
)
)
]
if not sub_graph_ids:
@@ -641,7 +659,7 @@ async def get_sub_graphs(graph: AgentGraph) -> list[AgentGraph]:
"userId": graph.userId, # Ensure the sub-graph is owned by the same user
}
for graph_id, graph_version in sub_graph_ids
] # type: ignore
]
},
include=AGENT_GRAPH_INCLUDE,
)
@@ -655,7 +673,7 @@ async def get_sub_graphs(graph: AgentGraph) -> list[AgentGraph]:
async def get_connected_output_nodes(node_id: str) -> list[tuple[Link, Node]]:
links = await AgentNodeLink.prisma().find_many(
where={"agentNodeSourceId": node_id},
include={"AgentNodeSink": {"include": AGENT_NODE_INCLUDE}}, # type: ignore
include={"AgentNodeSink": {"include": AGENT_NODE_INCLUDE}},
)
return [
(Link.from_db(link), NodeModel.from_db(link.AgentNodeSink))
@@ -726,29 +744,28 @@ async def __create_graph(tx, graph: Graph, user_id: str):
await AgentGraph.prisma(tx).create_many(
data=[
{
"id": graph.id,
"version": graph.version,
"name": graph.name,
"description": graph.description,
"isActive": graph.is_active,
"userId": user_id,
}
AgentGraphCreateInput(
id=graph.id,
version=graph.version,
name=graph.name,
description=graph.description,
isActive=graph.is_active,
userId=user_id,
)
for graph in graphs
]
)
await AgentNode.prisma(tx).create_many(
data=[
{
"id": node.id,
"agentGraphId": graph.id,
"agentGraphVersion": graph.version,
"agentBlockId": node.block_id,
"constantInput": Json(node.input_default),
"metadata": Json(node.metadata),
"webhookId": node.webhook_id,
}
AgentNodeCreateInput(
id=node.id,
agentGraphId=graph.id,
agentGraphVersion=graph.version,
agentBlockId=node.block_id,
constantInput=Json(node.input_default),
metadata=Json(node.metadata),
)
for graph in graphs
for node in graph.nodes
]
@@ -756,14 +773,14 @@ async def __create_graph(tx, graph: Graph, user_id: str):
await AgentNodeLink.prisma(tx).create_many(
data=[
{
"id": str(uuid.uuid4()),
"sourceName": link.source_name,
"sinkName": link.sink_name,
"agentNodeSourceId": link.source_id,
"agentNodeSinkId": link.sink_id,
"isStatic": link.is_static,
}
AgentNodeLinkCreateInput(
id=str(uuid.uuid4()),
sourceName=link.source_name,
sinkName=link.sink_name,
agentNodeSourceId=link.source_id,
agentNodeSinkId=link.sink_id,
isStatic=link.is_static,
)
for graph in graphs
for link in graph.links
]
@@ -814,12 +831,12 @@ async def fix_llm_provider_credentials():
SELECT graph."userId" user_id,
node.id node_id,
node."constantInput" node_preset_input
FROM platform."AgentNode" node
LEFT JOIN platform."AgentGraph" graph
ON node."agentGraphId" = graph.id
WHERE node."constantInput"::jsonb->'credentials'->>'provider' = 'llm'
ORDER BY graph."userId";
"""
FROM platform."AgentNode" node
LEFT JOIN platform."AgentGraph" graph
ON node."agentGraphId" = graph.id
WHERE node."constantInput"::jsonb->'credentials'->>'provider' = 'llm'
ORDER BY graph."userId";
"""
)
logger.info(f"Fixing LLM credential inputs on {len(broken_nodes)} nodes")
except Exception as e:
@@ -902,12 +919,19 @@ async def migrate_llm_models(migrate_to: LlmModel):
# Convert enum values to a list of strings for the SQL query
enum_values = [v.value for v in LlmModel.__members__.values()]
escaped_enum_values = repr(tuple(enum_values)) # hack but works
query = f"""
UPDATE "AgentNode"
SET "constantInput" = jsonb_set("constantInput", '{{{path}}}', '"{migrate_to.value}"', true)
WHERE "agentBlockId" = '{id}'
AND "constantInput" ? '{path}'
AND "constantInput"->>'{path}' NOT IN ({','.join(f"'{value}'" for value in enum_values)})
SET "constantInput" = jsonb_set("constantInput", $1, $2, true)
WHERE "agentBlockId" = $3
AND "constantInput" ? $4
AND "constantInput"->>$4 NOT IN {escaped_enum_values}
"""
await db.execute_raw(query)
await db.execute_raw(
query, # type: ignore - is supposed to be LiteralString
"{" + path + "}",
f'"{migrate_to.value}"',
id,
path,
)

View File

@@ -1,4 +1,7 @@
import prisma
from typing import cast
import prisma.enums
import prisma.types
from backend.blocks.io import IO_BLOCK_IDs
@@ -10,25 +13,25 @@ AGENT_NODE_INCLUDE: prisma.types.AgentNodeInclude = {
}
AGENT_GRAPH_INCLUDE: prisma.types.AgentGraphInclude = {
"AgentNodes": {"include": AGENT_NODE_INCLUDE} # type: ignore
"Nodes": {"include": AGENT_NODE_INCLUDE}
}
EXECUTION_RESULT_INCLUDE: prisma.types.AgentNodeExecutionInclude = {
"Input": True,
"Output": True,
"AgentNode": True,
"AgentGraphExecution": True,
"Node": True,
"GraphExecution": True,
}
MAX_NODE_EXECUTIONS_FETCH = 1000
GRAPH_EXECUTION_INCLUDE_WITH_NODES: prisma.types.AgentGraphExecutionInclude = {
"AgentNodeExecutions": {
"NodeExecutions": {
"include": {
"Input": True,
"Output": True,
"AgentNode": True,
"AgentGraphExecution": True,
"Node": True,
"GraphExecution": True,
},
"order_by": [
{"queuedTime": "desc"},
@@ -40,28 +43,30 @@ GRAPH_EXECUTION_INCLUDE_WITH_NODES: prisma.types.AgentGraphExecutionInclude = {
}
GRAPH_EXECUTION_INCLUDE: prisma.types.AgentGraphExecutionInclude = {
"AgentNodeExecutions": {
**GRAPH_EXECUTION_INCLUDE_WITH_NODES["AgentNodeExecutions"], # type: ignore
"NodeExecutions": {
**cast(
prisma.types.FindManyAgentNodeExecutionArgsFromAgentGraphExecution,
GRAPH_EXECUTION_INCLUDE_WITH_NODES["NodeExecutions"],
),
"where": {
"AgentNode": {
"AgentBlock": {"id": {"in": IO_BLOCK_IDs}}, # type: ignore
},
"Node": {"is": {"AgentBlock": {"is": {"id": {"in": IO_BLOCK_IDs}}}}},
"NOT": [{"executionStatus": prisma.enums.AgentExecutionStatus.INCOMPLETE}],
},
}
}
INTEGRATION_WEBHOOK_INCLUDE: prisma.types.IntegrationWebhookInclude = {
"AgentNodes": {"include": AGENT_NODE_INCLUDE} # type: ignore
"AgentNodes": {"include": AGENT_NODE_INCLUDE}
}
def library_agent_include(user_id: str) -> prisma.types.LibraryAgentInclude:
return {
"Agent": {
"AgentGraph": {
"include": {
**AGENT_GRAPH_INCLUDE,
"AgentGraphExecution": {"where": {"userId": user_id}},
"Executions": {"where": {"userId": user_id}},
}
},
"Creator": True,

View File

@@ -3,6 +3,7 @@ from typing import TYPE_CHECKING, AsyncGenerator, Optional
from prisma import Json
from prisma.models import IntegrationWebhook
from prisma.types import IntegrationWebhookCreateInput
from pydantic import Field, computed_field
from backend.data.includes import INTEGRATION_WEBHOOK_INCLUDE
@@ -66,18 +67,18 @@ class Webhook(BaseDbModel):
async def create_webhook(webhook: Webhook) -> Webhook:
created_webhook = await IntegrationWebhook.prisma().create(
data={
"id": webhook.id,
"userId": webhook.user_id,
"provider": webhook.provider.value,
"credentialsId": webhook.credentials_id,
"webhookType": webhook.webhook_type,
"resource": webhook.resource,
"events": webhook.events,
"config": Json(webhook.config),
"secret": webhook.secret,
"providerWebhookId": webhook.provider_webhook_id,
}
data=IntegrationWebhookCreateInput(
id=webhook.id,
userId=webhook.user_id,
provider=webhook.provider.value,
credentialsId=webhook.credentials_id,
webhookType=webhook.webhook_type,
resource=webhook.resource,
events=webhook.events,
config=Json(webhook.config),
secret=webhook.secret,
providerWebhookId=webhook.provider_webhook_id,
)
)
return Webhook.from_db(created_webhook)

View File

@@ -142,8 +142,12 @@ def SchemaField(
exclude: bool = False,
hidden: Optional[bool] = None,
depends_on: Optional[list[str]] = None,
ge: Optional[float] = None,
le: Optional[float] = None,
min_length: Optional[int] = None,
max_length: Optional[int] = None,
discriminator: Optional[str] = None,
json_schema_extra: Optional[dict[str, Any]] = None,
**kwargs,
) -> T:
if default is PydanticUndefined and default_factory is None:
advanced = False
@@ -170,8 +174,12 @@ def SchemaField(
title=title,
description=description,
exclude=exclude,
ge=ge,
le=le,
min_length=min_length,
max_length=max_length,
discriminator=discriminator,
json_schema_extra=json_schema_extra,
**kwargs,
) # type: ignore
@@ -405,9 +413,10 @@ class RefundRequest(BaseModel):
class NodeExecutionStats(BaseModel):
"""Execution statistics for a node execution."""
class Config:
arbitrary_types_allowed = True
extra = "allow"
model_config = ConfigDict(
extra="allow",
arbitrary_types_allowed=True,
)
error: Optional[Exception | str] = None
walltime: float = 0
@@ -423,9 +432,10 @@ class NodeExecutionStats(BaseModel):
class GraphExecutionStats(BaseModel):
"""Execution statistics for a graph execution."""
class Config:
arbitrary_types_allowed = True
extra = "allow"
model_config = ConfigDict(
extra="allow",
arbitrary_types_allowed=True,
)
error: Optional[Exception | str] = None
walltime: float = Field(

View File

@@ -6,10 +6,14 @@ from typing import Annotated, Any, Generic, Optional, TypeVar, Union
from prisma import Json
from prisma.enums import NotificationType
from prisma.models import NotificationEvent, UserNotificationBatch
from prisma.types import UserNotificationBatchWhereInput
from prisma.types import (
NotificationEventCreateInput,
UserNotificationBatchCreateInput,
UserNotificationBatchWhereInput,
)
# from backend.notifications.models import NotificationEvent
from pydantic import BaseModel, EmailStr, Field, field_validator
from pydantic import BaseModel, ConfigDict, EmailStr, Field, field_validator
from backend.server.v2.store.exceptions import DatabaseError
@@ -35,8 +39,7 @@ class QueueType(Enum):
class BaseNotificationData(BaseModel):
class Config:
extra = "allow"
model_config = ConfigDict(extra="allow")
class AgentRunData(BaseNotificationData):
@@ -398,6 +401,8 @@ async def create_or_add_to_user_notification_batch(
logger.info(
f"Creating or adding to notification batch for {user_id} with type {notification_type} and data {notification_data}"
)
if not notification_data.data:
raise ValueError("Notification data must be provided")
# Serialize the data
json_data: Json = Json(notification_data.data.model_dump())
@@ -416,30 +421,30 @@ async def create_or_add_to_user_notification_batch(
if not existing_batch:
async with transaction() as tx:
notification_event = await tx.notificationevent.create(
data={
"type": notification_type,
"data": json_data,
}
data=NotificationEventCreateInput(
type=notification_type,
data=json_data,
)
)
# Create new batch
resp = await tx.usernotificationbatch.create(
data={
"userId": user_id,
"type": notification_type,
"Notifications": {"connect": [{"id": notification_event.id}]},
},
data=UserNotificationBatchCreateInput(
userId=user_id,
type=notification_type,
Notifications={"connect": [{"id": notification_event.id}]},
),
include={"Notifications": True},
)
return UserNotificationBatchDTO.from_db(resp)
else:
async with transaction() as tx:
notification_event = await tx.notificationevent.create(
data={
"type": notification_type,
"data": json_data,
"UserNotificationBatch": {"connect": {"id": existing_batch.id}},
}
data=NotificationEventCreateInput(
type=notification_type,
data=json_data,
UserNotificationBatch={"connect": {"id": existing_batch.id}},
)
)
# Add to existing batch
resp = await tx.usernotificationbatch.update(

View File

@@ -6,9 +6,11 @@ import pydantic
from prisma import Json
from prisma.enums import OnboardingStep
from prisma.models import UserOnboarding
from prisma.types import UserOnboardingUpdateInput
from prisma.types import UserOnboardingCreateInput, UserOnboardingUpdateInput
from backend.data import db
from backend.data.block import get_blocks
from backend.data.credit import get_user_credit_model
from backend.data.graph import GraphModel
from backend.data.model import CredentialsMetaInput
from backend.server.v2.store.model import StoreAgentDetails
@@ -24,21 +26,26 @@ REASON_MAPPING: dict[str, list[str]] = {
POINTS_AGENT_COUNT = 50 # Number of agents to calculate points for
MIN_AGENT_COUNT = 2 # Minimum number of marketplace agents to enable onboarding
user_credit = get_user_credit_model()
class UserOnboardingUpdate(pydantic.BaseModel):
completedSteps: Optional[list[OnboardingStep]] = None
notificationDot: Optional[bool] = None
notified: Optional[list[OnboardingStep]] = None
usageReason: Optional[str] = None
integrations: Optional[list[str]] = None
otherIntegrations: Optional[str] = None
selectedStoreListingVersionId: Optional[str] = None
agentInput: Optional[dict[str, Any]] = None
onboardingAgentExecutionId: Optional[str] = None
async def get_user_onboarding(user_id: str):
return await UserOnboarding.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id}, # type: ignore
"create": UserOnboardingCreateInput(userId=user_id),
"update": {},
},
)
@@ -48,6 +55,20 @@ async def update_user_onboarding(user_id: str, data: UserOnboardingUpdate):
update: UserOnboardingUpdateInput = {}
if data.completedSteps is not None:
update["completedSteps"] = list(set(data.completedSteps))
for step in (
OnboardingStep.AGENT_NEW_RUN,
OnboardingStep.GET_RESULTS,
OnboardingStep.MARKETPLACE_ADD_AGENT,
OnboardingStep.MARKETPLACE_RUN_AGENT,
OnboardingStep.BUILDER_SAVE_AGENT,
OnboardingStep.BUILDER_RUN_AGENT,
):
if step in data.completedSteps:
await reward_user(user_id, step)
if data.notificationDot is not None:
update["notificationDot"] = data.notificationDot
if data.notified is not None:
update["notified"] = list(set(data.notified))
if data.usageReason is not None:
update["usageReason"] = data.usageReason
if data.integrations is not None:
@@ -58,16 +79,57 @@ async def update_user_onboarding(user_id: str, data: UserOnboardingUpdate):
update["selectedStoreListingVersionId"] = data.selectedStoreListingVersionId
if data.agentInput is not None:
update["agentInput"] = Json(data.agentInput)
if data.onboardingAgentExecutionId is not None:
update["onboardingAgentExecutionId"] = data.onboardingAgentExecutionId
return await UserOnboarding.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, **update}, # type: ignore
"create": {"userId": user_id, **update},
"update": update,
},
)
async def reward_user(user_id: str, step: OnboardingStep):
async with db.locked_transaction(f"usr_trx_{user_id}-reward"):
reward = 0
match step:
# Reward the user when they click New Run during onboarding,
# because they need credits before scheduling a run (the next step)
case OnboardingStep.AGENT_NEW_RUN:
reward = 300
case OnboardingStep.GET_RESULTS:
reward = 300
case OnboardingStep.MARKETPLACE_ADD_AGENT:
reward = 100
case OnboardingStep.MARKETPLACE_RUN_AGENT:
reward = 100
case OnboardingStep.BUILDER_SAVE_AGENT:
reward = 100
case OnboardingStep.BUILDER_RUN_AGENT:
reward = 100
if reward == 0:
return
onboarding = await get_user_onboarding(user_id)
# Skip if already rewarded
if step in onboarding.rewardedFor:
return
onboarding.rewardedFor.append(step)
await user_credit.onboarding_reward(user_id, reward, step)
await UserOnboarding.prisma().update(
where={"userId": user_id},
data={
"completedSteps": list(set(onboarding.completedSteps + [step])),
"rewardedFor": onboarding.rewardedFor,
},
)
def clean_and_split(text: str) -> list[str]:
"""
Removes all special characters from a string, truncates it to 100 characters,
@@ -186,11 +248,11 @@ async def get_recommended_agents(user_id: str) -> list[StoreAgentDetails]:
where={
"id": {"in": [agent.storeListingVersionId for agent in storeAgents]},
},
include={"Agent": True},
include={"AgentGraph": True},
)
for listing in agentListings:
agent = listing.Agent
agent = listing.AgentGraph
if agent is None:
continue
graph = GraphModel.from_db(agent)

View File

@@ -11,7 +11,7 @@ from fastapi import HTTPException
from prisma import Json
from prisma.enums import NotificationType
from prisma.models import User
from prisma.types import UserUpdateInput
from prisma.types import JsonFilter, UserCreateInput, UserUpdateInput
from backend.data.db import prisma
from backend.data.model import UserIntegrations, UserMetadata, UserMetadataRaw
@@ -36,11 +36,11 @@ async def get_or_create_user(user_data: dict) -> User:
user = await prisma.user.find_unique(where={"id": user_id})
if not user:
user = await prisma.user.create(
data={
"id": user_id,
"email": user_email,
"name": user_data.get("user_metadata", {}).get("name"),
}
data=UserCreateInput(
id=user_id,
email=user_email,
name=user_data.get("user_metadata", {}).get("name"),
)
)
return User.model_validate(user)
@@ -84,11 +84,11 @@ async def create_default_user() -> Optional[User]:
user = await prisma.user.find_unique(where={"id": DEFAULT_USER_ID})
if not user:
user = await prisma.user.create(
data={
"id": DEFAULT_USER_ID,
"email": "default@example.com",
"name": "Default User",
}
data=UserCreateInput(
id=DEFAULT_USER_ID,
email="default@example.com",
name="Default User",
)
)
return User.model_validate(user)
@@ -135,16 +135,21 @@ async def migrate_and_encrypt_user_integrations():
"""Migrate integration credentials and OAuth states from metadata to integrations column."""
users = await User.prisma().find_many(
where={
"metadata": {
"path": ["integration_credentials"],
"not": Json({"a": "yolo"}), # bogus value works to check if key exists
} # type: ignore
"metadata": cast(
JsonFilter,
{
"path": ["integration_credentials"],
"not": Json(
{"a": "yolo"}
), # bogus value works to check if key exists
},
)
}
)
logger.info(f"Migrating integration credentials for {len(users)} users")
for user in users:
raw_metadata = cast(UserMetadataRaw, user.metadata)
raw_metadata = cast(dict, user.metadata)
metadata = UserMetadata.model_validate(raw_metadata)
# Get existing integrations data
@@ -160,7 +165,6 @@ async def migrate_and_encrypt_user_integrations():
await update_user_integrations(user_id=user.id, data=integrations)
# Remove from metadata
raw_metadata = dict(raw_metadata)
raw_metadata.pop("integration_credentials", None)
raw_metadata.pop("integration_oauth_states", None)

View File

@@ -1,8 +1,8 @@
import logging
from backend.data import db
from backend.data.credit import UsageTransactionMetadata, get_user_credit_model
from backend.data.execution import (
GraphExecution,
NodeExecutionResult,
RedisExecutionEventBus,
create_graph_execution,
get_graph_execution,
get_incomplete_node_executions,
@@ -39,11 +39,12 @@ from backend.data.user import (
update_user_integrations,
update_user_metadata,
)
from backend.util.service import AppService, expose, exposed_run_and_wait
from backend.util.service import AppService, exposed_run_and_wait
from backend.util.settings import Config
config = Config()
_user_credit_model = get_user_credit_model()
logger = logging.getLogger(__name__)
async def _spend_credits(
@@ -53,22 +54,21 @@ async def _spend_credits(
class DatabaseManager(AppService):
def __init__(self):
super().__init__()
self.use_db = True
self.use_redis = True
self.execution_event_bus = RedisExecutionEventBus()
def run_service(self) -> None:
logger.info(f"[{self.service_name}] ⏳ Connecting to Database...")
self.run_and_wait(db.connect())
super().run_service()
def cleanup(self):
super().cleanup()
logger.info(f"[{self.service_name}] ⏳ Disconnecting Database...")
self.run_and_wait(db.disconnect())
@classmethod
def get_port(cls) -> int:
return config.database_api_port
@expose
def send_execution_update(
self, execution_result: GraphExecution | NodeExecutionResult
):
self.execution_event_bus.publish(execution_result)
# Executions
get_graph_execution = exposed_run_and_wait(get_graph_execution)
create_graph_execution = exposed_run_and_wait(create_graph_execution)

View File

@@ -5,11 +5,14 @@ import os
import signal
import sys
import threading
import time
from concurrent.futures import Future, ProcessPoolExecutor
from contextlib import contextmanager
from multiprocessing.pool import AsyncResult, Pool
from typing import TYPE_CHECKING, Any, Generator, Optional, TypeVar, cast
from typing import TYPE_CHECKING, Any, Generator, TypeVar, cast
from pika.adapters.blocking_connection import BlockingChannel
from pika.spec import Basic, BasicProperties
from redis.lock import Lock as RedisLock
from backend.blocks.io import AgentOutputBlock
@@ -30,43 +33,36 @@ from autogpt_libs.utils.cache import thread_cached
from backend.blocks.agent import AgentExecutorBlock
from backend.data import redis
from backend.data.block import (
Block,
BlockData,
BlockInput,
BlockSchema,
BlockType,
get_block,
)
from backend.data.block import BlockData, BlockInput, BlockSchema, get_block
from backend.data.execution import (
ExecutionQueue,
ExecutionStatus,
GraphExecution,
GraphExecutionEntry,
NodeExecutionEntry,
NodeExecutionResult,
merge_execution_input,
parse_execution_output,
)
from backend.data.graph import GraphModel, Link, Node
from backend.data.graph import Link, Node
from backend.executor.utils import (
GRAPH_EXECUTION_CANCEL_QUEUE_NAME,
GRAPH_EXECUTION_QUEUE_NAME,
CancelExecutionEvent,
UsageTransactionMetadata,
block_usage_cost,
execution_usage_cost,
get_execution_event_bus,
get_execution_queue,
parse_execution_output,
validate_exec,
)
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.util import json
from backend.util.decorator import error_logged, time_measured
from backend.util.file import clean_exec_files
from backend.util.logging import configure_logging
from backend.util.process import set_service_name
from backend.util.service import (
AppService,
close_service_client,
expose,
get_service_client,
)
from backend.util.process import AppProcess, set_service_name
from backend.util.service import close_service_client, get_service_client
from backend.util.settings import Settings
from backend.util.type import convert
logger = logging.getLogger(__name__)
settings = Settings()
@@ -152,23 +148,16 @@ def execute_node(
def update_execution_status(status: ExecutionStatus) -> NodeExecutionResult:
"""Sets status and fetches+broadcasts the latest state of the node execution"""
exec_update = db_client.update_node_execution_status(node_exec_id, status)
db_client.send_execution_update(exec_update)
send_execution_update(exec_update)
return exec_update
node = db_client.get_node(node_id)
node_block = get_block(node.block_id)
if not node_block:
logger.error(f"Block {node.block_id} not found.")
return
node_block = node.block
def push_output(output_name: str, output_data: Any) -> None:
_push_node_execution_output(
db_client=db_client,
user_id=user_id,
graph_exec_id=graph_exec_id,
db_client.upsert_execution_output(
node_exec_id=node_exec_id,
block_id=node_block.id,
output_name=output_name,
output_data=output_data,
)
@@ -199,7 +188,7 @@ def execute_node(
# Execute the node
input_data_str = json.dumps(input_data)
input_size = len(input_data_str)
log_metadata.info("Executed node with input", input=input_data_str)
log_metadata.debug("Executed node with input", input=input_data_str)
update_execution_status(ExecutionStatus.RUNNING)
# Inject extra execution arguments for the blocks via kwargs
@@ -230,7 +219,7 @@ def execute_node(
):
output_data = json.convert_pydantic_to_json(output_data)
output_size += len(json.dumps(output_data))
log_metadata.info("Node produced output", **{output_name: output_data})
log_metadata.debug("Node produced output", **{output_name: output_data})
push_output(output_name, output_data)
outputs[output_name] = output_data
for execution in _enqueue_next_nodes(
@@ -280,35 +269,6 @@ def execute_node(
execution_stats.output_size = output_size
def _push_node_execution_output(
db_client: "DatabaseManager",
user_id: str,
graph_exec_id: str,
node_exec_id: str,
block_id: str,
output_name: str,
output_data: Any,
):
from backend.blocks.io import IO_BLOCK_IDs
db_client.upsert_execution_output(
node_exec_id=node_exec_id,
output_name=output_name,
output_data=output_data,
)
# Automatically push execution updates for all agent I/O
if block_id in IO_BLOCK_IDs:
graph_exec = db_client.get_graph_execution(
user_id=user_id, execution_id=graph_exec_id
)
if not graph_exec:
raise ValueError(
f"Graph execution #{graph_exec_id} for user #{user_id} not found"
)
db_client.send_execution_update(graph_exec)
def _enqueue_next_nodes(
db_client: "DatabaseManager",
node: Node,
@@ -324,7 +284,7 @@ def _enqueue_next_nodes(
exec_update = db_client.update_node_execution_status(
node_exec_id, ExecutionStatus.QUEUED, data
)
db_client.send_execution_update(exec_update)
send_execution_update(exec_update)
return NodeExecutionEntry(
user_id=user_id,
graph_exec_id=graph_exec_id,
@@ -436,60 +396,6 @@ def _enqueue_next_nodes(
]
def validate_exec(
node: Node,
data: BlockInput,
resolve_input: bool = True,
) -> tuple[BlockInput | None, str]:
"""
Validate the input data for a node execution.
Args:
node: The node to execute.
data: The input data for the node execution.
resolve_input: Whether to resolve dynamic pins into dict/list/object.
Returns:
A tuple of the validated data and the block name.
If the data is invalid, the first element will be None, and the second element
will be an error message.
If the data is valid, the first element will be the resolved input data, and
the second element will be the block name.
"""
node_block: Block | None = get_block(node.block_id)
if not node_block:
return None, f"Block for {node.block_id} not found."
schema = node_block.input_schema
# Convert non-matching data types to the expected input schema.
for name, data_type in schema.__annotations__.items():
if (value := data.get(name)) and (type(value) is not data_type):
data[name] = convert(value, data_type)
# Input data (without default values) should contain all required fields.
error_prefix = f"Input data missing or mismatch for `{node_block.name}`:"
if missing_links := schema.get_missing_links(data, node.input_links):
return None, f"{error_prefix} unpopulated links {missing_links}"
# Merge input data with default values and resolve dynamic dict/list/object pins.
input_default = schema.get_input_defaults(node.input_default)
data = {**input_default, **data}
if resolve_input:
data = merge_execution_input(data)
# Input data post-merge should contain all required fields from the schema.
if missing_input := schema.get_missing_input(data):
return None, f"{error_prefix} missing input {missing_input}"
# Last validation: Validate the input values against the schema.
if error := schema.get_mismatch_error(data):
error_message = f"{error_prefix} {error}"
logger.error(error_message)
return None, error_message
return data, node_block.name
class Executor:
"""
This class contains event handlers for the process pool executor events.
@@ -669,7 +575,13 @@ class Executor:
exec_meta = cls.db_client.update_graph_execution_start_time(
graph_exec.graph_exec_id
)
cls.db_client.send_execution_update(exec_meta)
if exec_meta is None:
logger.warning(
f"Skipped graph execution {graph_exec.graph_exec_id}, the graph execution is not found or not currently in the QUEUED state."
)
return
send_execution_update(exec_meta)
timing_info, (exec_stats, status, error) = cls._on_graph_execution(
graph_exec, cancel, log_metadata
)
@@ -682,7 +594,7 @@ class Executor:
status=status,
stats=exec_stats,
):
cls.db_client.send_execution_update(graph_exec_result)
send_execution_update(graph_exec_result)
cls._handle_agent_run_notif(graph_exec, exec_stats)
@@ -748,15 +660,19 @@ class Executor:
Exception | None: The error that occurred during the execution, if any.
"""
log_metadata.info(f"Start graph execution {graph_exec.graph_exec_id}")
exec_stats = GraphExecutionStats()
execution_stats = GraphExecutionStats()
execution_status = ExecutionStatus.RUNNING
error = None
finished = False
def cancel_handler():
nonlocal execution_status
while not cancel.is_set():
cancel.wait(1)
if finished:
return
execution_status = ExecutionStatus.TERMINATED
cls.executor.terminate()
log_metadata.info(f"Terminated graph execution {graph_exec.graph_exec_id}")
cls._init_node_executor_pool()
@@ -779,18 +695,34 @@ class Executor:
if not isinstance(result, NodeExecutionStats):
return
nonlocal exec_stats
exec_stats.node_count += 1
exec_stats.nodes_cputime += result.cputime
exec_stats.nodes_walltime += result.walltime
nonlocal execution_stats
execution_stats.node_count += 1
execution_stats.nodes_cputime += result.cputime
execution_stats.nodes_walltime += result.walltime
if (err := result.error) and isinstance(err, Exception):
exec_stats.node_error_count += 1
execution_stats.node_error_count += 1
if _graph_exec := cls.db_client.update_graph_execution_stats(
graph_exec_id=exec_data.graph_exec_id,
status=execution_status,
stats=execution_stats,
):
send_execution_update(_graph_exec)
else:
logger.error(
"Callback for "
f"finished node execution #{exec_data.node_exec_id} "
"could not update execution stats "
f"for graph execution #{exec_data.graph_exec_id}; "
f"triggered while graph exec status = {execution_status}"
)
return callback
while not queue.empty():
if cancel.is_set():
return exec_stats, ExecutionStatus.TERMINATED, error
execution_status = ExecutionStatus.TERMINATED
return execution_stats, execution_status, error
exec_data = queue.get()
@@ -812,29 +744,26 @@ class Executor:
exec_cost_counter = cls._charge_usage(
node_exec=exec_data,
execution_count=exec_cost_counter + 1,
execution_stats=exec_stats,
execution_stats=execution_stats,
)
except InsufficientBalanceError as error:
node_exec_id = exec_data.node_exec_id
_push_node_execution_output(
db_client=cls.db_client,
user_id=graph_exec.user_id,
graph_exec_id=graph_exec.graph_exec_id,
cls.db_client.upsert_execution_output(
node_exec_id=node_exec_id,
block_id=exec_data.block_id,
output_name="error",
output_data=str(error),
)
execution_status = ExecutionStatus.FAILED
exec_update = cls.db_client.update_node_execution_status(
node_exec_id, ExecutionStatus.FAILED
node_exec_id, execution_status
)
cls.db_client.send_execution_update(exec_update)
send_execution_update(exec_update)
cls._handle_low_balance_notif(
graph_exec.user_id,
graph_exec.graph_id,
exec_stats,
execution_stats,
error,
)
raise
@@ -852,7 +781,8 @@ class Executor:
)
for node_id, execution in list(running_executions.items()):
if cancel.is_set():
return exec_stats, ExecutionStatus.TERMINATED, error
execution_status = ExecutionStatus.TERMINATED
return execution_stats, execution_status, error
if not queue.empty():
break # yield to parent loop to execute new queue items
@@ -879,7 +809,7 @@ class Executor:
cancel_thread.join()
clean_exec_files(graph_exec.graph_exec_id)
return exec_stats, execution_status, error
return execution_stats, execution_status, error
@classmethod
def _handle_agent_run_notif(
@@ -945,227 +875,170 @@ class Executor:
)
class ExecutionManager(AppService):
class ExecutionManager(AppProcess):
def __init__(self):
super().__init__()
self.use_redis = True
self.use_supabase = True
self.pool_size = settings.config.num_graph_workers
self.queue = ExecutionQueue[GraphExecutionEntry]()
self.running = True
self.active_graph_runs: dict[str, tuple[Future, threading.Event]] = {}
@classmethod
def get_port(cls) -> int:
return settings.config.execution_manager_port
def run_service(self):
from backend.integrations.credentials_store import IntegrationCredentialsStore
def run(self):
retry_count_max = settings.config.execution_manager_loop_max_retry
retry_count = 0
self.credentials_store = IntegrationCredentialsStore()
for retry_count in range(retry_count_max):
try:
self._run()
except Exception as e:
if not self.running:
break
logger.exception(
f"[{self.service_name}] Error in execution manager: {e}"
)
if retry_count >= retry_count_max:
logger.error(
f"[{self.service_name}] Max retries reached ({retry_count_max}), exiting..."
)
break
else:
logger.info(
f"[{self.service_name}] Retrying execution loop in {retry_count} seconds..."
)
time.sleep(retry_count)
def _run(self):
logger.info(f"[{self.service_name}] ⏳ Spawn max-{self.pool_size} workers...")
self.executor = ProcessPoolExecutor(
max_workers=self.pool_size,
initializer=Executor.on_graph_executor_start,
)
sync_manager = multiprocessing.Manager()
logger.info(
f"[{self.service_name}] Started with max-{self.pool_size} graph workers"
logger.info(f"[{self.service_name}] ⏳ Connecting to Redis...")
redis.connect()
# Consume Cancel & Run execution requests.
channel = get_execution_queue().get_channel()
channel.basic_qos(prefetch_count=self.pool_size)
channel.basic_consume(
queue=GRAPH_EXECUTION_CANCEL_QUEUE_NAME,
on_message_callback=self._handle_cancel_message,
auto_ack=True,
)
while True:
graph_exec_data = self.queue.get()
graph_exec_id = graph_exec_data.graph_exec_id
logger.debug(
f"[ExecutionManager] Dispatching graph execution {graph_exec_id}"
)
cancel_event = sync_manager.Event()
future = self.executor.submit(
Executor.on_graph_execution, graph_exec_data, cancel_event
)
self.active_graph_runs[graph_exec_id] = (future, cancel_event)
future.add_done_callback(
lambda _: self.active_graph_runs.pop(graph_exec_id, None)
channel.basic_consume(
queue=GRAPH_EXECUTION_QUEUE_NAME,
on_message_callback=self._handle_run_message,
auto_ack=False,
)
logger.info(f"[{self.service_name}] Ready to consume messages...")
channel.start_consuming()
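The run loop above replaces the old polling pattern with a push-based consumer. A minimal, self-contained sketch of the same prefetch + manual-ack pattern using plain pika; the broker host, queue name, and handler body are assumptions for illustration, not the project's actual wiring:

import pika

# Assumed local broker; the queue name mirrors GRAPH_EXECUTION_QUEUE_NAME above.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="graph_execution_queue", durable=True)
channel.basic_qos(prefetch_count=4)  # cap unacked deliveries to roughly the worker pool size

def handle_run(ch, method, properties, body):
    try:
        print("received run request:", body)
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after successful handling
    except Exception:
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)

channel.basic_consume(queue="graph_execution_queue", on_message_callback=handle_run, auto_ack=False)
channel.start_consuming()  # blocks; any consumer error surfaces to the caller's retry loop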
def _handle_cancel_message(
self,
channel: BlockingChannel,
method: Basic.Deliver,
properties: BasicProperties,
body: bytes,
):
"""
Called whenever we receive a CANCEL message from the queue.
(With auto_ack=True, message is considered 'acked' automatically.)
"""
try:
request = CancelExecutionEvent.model_validate_json(body)
graph_exec_id = request.graph_exec_id
if not graph_exec_id:
logger.warning(
f"[{self.service_name}] Cancel message missing 'graph_exec_id'"
)
return
if graph_exec_id not in self.active_graph_runs:
logger.debug(
f"[{self.service_name}] Cancel received for {graph_exec_id} but not active."
)
return
_, cancel_event = self.active_graph_runs[graph_exec_id]
logger.info(f"[{self.service_name}] Received cancel for {graph_exec_id}")
if not cancel_event.is_set():
cancel_event.set()
else:
logger.debug(
f"[{self.service_name}] Cancel already set for {graph_exec_id}"
)
except Exception as e:
logger.exception(f"Error handling cancel message: {e}")
def _handle_run_message(
self,
channel: BlockingChannel,
method: Basic.Deliver,
properties: BasicProperties,
body: bytes,
):
delivery_tag = method.delivery_tag
try:
graph_exec_entry = GraphExecutionEntry.model_validate_json(body)
except Exception as e:
logger.error(f"[{self.service_name}] Could not parse run message: {e}")
channel.basic_nack(delivery_tag, requeue=False)
return
graph_exec_id = graph_exec_entry.graph_exec_id
logger.info(
f"[{self.service_name}] Received RUN for graph_exec_id={graph_exec_id}"
)
if graph_exec_id in self.active_graph_runs:
logger.warning(
f"[{self.service_name}] Graph {graph_exec_id} already running; rejecting duplicate run."
)
channel.basic_nack(delivery_tag, requeue=False)
return
cancel_event = multiprocessing.Manager().Event()
future = self.executor.submit(
Executor.on_graph_execution, graph_exec_entry, cancel_event
)
self.active_graph_runs[graph_exec_id] = (future, cancel_event)
def _on_run_done(f: Future):
logger.info(f"[{self.service_name}] Run completed for {graph_exec_id}")
try:
self.active_graph_runs.pop(graph_exec_id, None)
if f.exception():
logger.error(
f"[{self.service_name}] Execution for {graph_exec_id} failed: {f.exception()}"
)
channel.basic_nack(delivery_tag, requeue=False)
else:
channel.basic_ack(delivery_tag)
except Exception as e:
logger.error(f"[{self.service_name}] Error acknowledging message: {e}")
future.add_done_callback(_on_run_done)
def cleanup(self):
logger.info(f"[{__class__.__name__}] ⏳ Shutting down graph executor pool...")
super().cleanup()
logger.info(f"[{self.service_name}] ⏳ Shutting down service loop...")
self.running = False
logger.info(f"[{self.service_name}] ⏳ Shutting down graph executor pool...")
self.executor.shutdown(cancel_futures=True)
super().cleanup()
logger.info(f"[{self.service_name}] ⏳ Disconnecting Redis...")
redis.disconnect()
@property
def db_client(self) -> "DatabaseManager":
return get_db_client()
@expose
def add_execution(
self,
graph_id: str,
data: BlockInput,
user_id: str,
graph_version: Optional[int] = None,
preset_id: str | None = None,
) -> GraphExecutionEntry:
graph: GraphModel | None = self.db_client.get_graph(
graph_id=graph_id, user_id=user_id, version=graph_version
)
if not graph:
raise ValueError(f"Graph #{graph_id} not found.")
graph.validate_graph(for_run=True)
self._validate_node_input_credentials(graph, user_id)
nodes_input = []
for node in graph.starting_nodes:
input_data = {}
block = get_block(node.block_id)
# Invalid block & Note block should never be executed.
if not block or block.block_type == BlockType.NOTE:
continue
# Extract request input data, and assign it to the input pin.
if block.block_type == BlockType.INPUT:
input_name = node.input_default.get("name")
if input_name and input_name in data:
input_data = {"value": data[input_name]}
# Extract webhook payload, and assign it to the input pin
webhook_payload_key = f"webhook_{node.webhook_id}_payload"
if (
block.block_type in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL)
and node.webhook_id
):
if webhook_payload_key not in data:
raise ValueError(
f"Node {block.name} #{node.id} webhook payload is missing"
)
input_data = {"payload": data[webhook_payload_key]}
input_data, error = validate_exec(node, input_data)
if input_data is None:
raise ValueError(error)
else:
nodes_input.append((node.id, input_data))
if not nodes_input:
raise ValueError(
"No starting nodes found for the graph, make sure an AgentInput or blocks with no inbound links are present as starting nodes."
)
graph_exec = self.db_client.create_graph_execution(
graph_id=graph_id,
graph_version=graph.version,
nodes_input=nodes_input,
user_id=user_id,
preset_id=preset_id,
)
self.db_client.send_execution_update(graph_exec)
graph_exec_entry = GraphExecutionEntry(
user_id=user_id,
graph_id=graph_id,
graph_version=graph_version or 0,
graph_exec_id=graph_exec.id,
start_node_execs=[
NodeExecutionEntry(
user_id=user_id,
graph_exec_id=node_exec.graph_exec_id,
graph_id=node_exec.graph_id,
node_exec_id=node_exec.node_exec_id,
node_id=node_exec.node_id,
block_id=node_exec.block_id,
data=node_exec.input_data,
)
for node_exec in graph_exec.node_executions
],
)
self.queue.add(graph_exec_entry)
return graph_exec_entry
@expose
def cancel_execution(self, graph_exec_id: str) -> None:
"""
Mechanism:
1. Set the cancel event
2. Graph executor's cancel handler thread detects the event, terminates workers,
reinitializes worker pool, and returns.
3. Update execution statuses in DB and set `error` outputs to `"TERMINATED"`.
"""
if graph_exec_id not in self.active_graph_runs:
logger.warning(
f"Graph execution #{graph_exec_id} not active/running: "
"possibly already completed/cancelled."
)
else:
future, cancel_event = self.active_graph_runs[graph_exec_id]
if not cancel_event.is_set():
cancel_event.set()
future.result()
# Update the status of the graph & node executions
self.db_client.update_graph_execution_stats(
graph_exec_id,
ExecutionStatus.TERMINATED,
)
node_execs = self.db_client.get_node_execution_results(
graph_exec_id=graph_exec_id,
statuses=[
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
ExecutionStatus.INCOMPLETE,
],
)
self.db_client.update_node_execution_status_batch(
[node_exec.node_exec_id for node_exec in node_execs],
ExecutionStatus.TERMINATED,
)
for node_exec in node_execs:
node_exec.status = ExecutionStatus.TERMINATED
self.db_client.send_execution_update(node_exec)
def _validate_node_input_credentials(self, graph: GraphModel, user_id: str):
"""Checks all credentials for all nodes of the graph"""
for node in graph.nodes:
block = get_block(node.block_id)
if not block:
raise ValueError(f"Unknown block {node.block_id} for node #{node.id}")
# Find any fields of type CredentialsMetaInput
credentials_fields = cast(
type[BlockSchema], block.input_schema
).get_credentials_fields()
if not credentials_fields:
continue
for field_name, credentials_meta_type in credentials_fields.items():
credentials_meta = credentials_meta_type.model_validate(
node.input_default[field_name]
)
# Fetch the corresponding Credentials and perform sanity checks
credentials = self.credentials_store.get_creds_by_id(
user_id, credentials_meta.id
)
if not credentials:
raise ValueError(
f"Unknown credentials #{credentials_meta.id} "
f"for node #{node.id} input '{field_name}'"
)
if (
credentials.provider != credentials_meta.provider
or credentials.type != credentials_meta.type
):
logger.warning(
f"Invalid credentials #{credentials.id} for node #{node.id}: "
"type/provider mismatch: "
f"{credentials_meta.type}<>{credentials.type};"
f"{credentials_meta.provider}<>{credentials.provider}"
)
raise ValueError(
f"Invalid credentials #{credentials.id} for node #{node.id}: "
"type/provider mismatch"
)
# ------- UTILITIES ------- #
@@ -1184,6 +1057,10 @@ def get_notification_service() -> "NotificationManager":
return get_service_client(NotificationManager)
def send_execution_update(entry: GraphExecution | NodeExecutionResult):
return get_execution_event_bus().publish(entry)
@contextmanager
def synchronized(key: str, timeout: int = 60):
lock: RedisLock = redis.get_redis().lock(f"lock:{key}", timeout=timeout)

View File

@@ -16,7 +16,7 @@ from pydantic import BaseModel
from sqlalchemy import MetaData, create_engine
from backend.data.block import BlockInput
from backend.executor.manager import ExecutionManager
from backend.executor import utils as execution_utils
from backend.notifications.notifications import NotificationManager
from backend.util.service import AppService, expose, get_service_client
from backend.util.settings import Config
@@ -57,11 +57,6 @@ def job_listener(event):
log(f"Job {event.job_id} completed successfully.")
@thread_cached
def get_execution_client() -> ExecutionManager:
return get_service_client(ExecutionManager)
@thread_cached
def get_notification_client():
from backend.notifications import NotificationManager
@@ -73,7 +68,7 @@ def execute_graph(**kwargs):
args = ExecutionJobArgs(**kwargs)
try:
log(f"Executing recurring job for graph #{args.graph_id}")
get_execution_client().add_execution(
execution_utils.add_graph_execution(
graph_id=args.graph_id,
data=args.input_data,
user_id=args.user_id,
@@ -164,11 +159,6 @@ class Scheduler(AppService):
def db_pool_size(cls) -> int:
return config.scheduler_db_pool_size
@property
@thread_cached
def execution_client(self) -> ExecutionManager:
return get_service_client(ExecutionManager)
@property
@thread_cached
def notification_client(self) -> NotificationManager:
@@ -176,7 +166,7 @@ class Scheduler(AppService):
def run_service(self):
load_dotenv()
db_schema, db_url = _extract_schema_from_url(os.getenv("DATABASE_URL"))
db_schema, db_url = _extract_schema_from_url(os.getenv("DIRECT_URL"))
self.scheduler = BlockingScheduler(
jobstores={
Jobstores.EXECUTION.value: SQLAlchemyJobStore(
@@ -206,6 +196,12 @@ class Scheduler(AppService):
self.scheduler.add_listener(job_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
self.scheduler.start()
def cleanup(self):
super().cleanup()
logger.info(f"[{self.service_name}] ⏳ Shutting down scheduler...")
if self.scheduler:
self.scheduler.shutdown(wait=False)
@expose
def add_execution_schedule(
self,

View File

@@ -1,11 +1,70 @@
import logging
from typing import TYPE_CHECKING, Any, cast
from autogpt_libs.utils.cache import thread_cached
from pydantic import BaseModel
from backend.data.block import Block, BlockInput
from backend.data.block import (
Block,
BlockData,
BlockInput,
BlockSchema,
BlockType,
get_block,
)
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.cost import BlockCostType
from backend.data.execution import GraphExecutionEntry, RedisExecutionEventBus
from backend.data.graph import GraphModel, Node
from backend.data.rabbitmq import (
Exchange,
ExchangeType,
Queue,
RabbitMQConfig,
SyncRabbitMQ,
)
from backend.util.mock import MockObject
from backend.util.service import get_service_client
from backend.util.settings import Config
from backend.util.type import convert
if TYPE_CHECKING:
from backend.executor import DatabaseManager
from backend.integrations.credentials_store import IntegrationCredentialsStore
config = Config()
logger = logging.getLogger(__name__)
# ============ Resource Helpers ============ #
@thread_cached
def get_execution_event_bus() -> RedisExecutionEventBus:
return RedisExecutionEventBus()
@thread_cached
def get_execution_queue() -> SyncRabbitMQ:
client = SyncRabbitMQ(create_execution_queue_config())
client.connect()
return client
@thread_cached
def get_integration_credentials_store() -> "IntegrationCredentialsStore":
from backend.integrations.credentials_store import IntegrationCredentialsStore
return IntegrationCredentialsStore()
@thread_cached
def get_db_client() -> "DatabaseManager":
from backend.executor import DatabaseManager
return get_service_client(DatabaseManager)
# ============ Execution Cost Helpers ============ #
class UsageTransactionMetadata(BaseModel):
@@ -95,3 +154,398 @@ def _is_cost_filter_match(cost_filter: BlockInput, input_data: BlockInput) -> bo
or (input_data.get(k) and _is_cost_filter_match(v, input_data[k]))
for k, v in cost_filter.items()
)
# ============ Execution Input Helpers ============ #
LIST_SPLIT = "_$_"
DICT_SPLIT = "_#_"
OBJC_SPLIT = "_@_"
def parse_execution_output(output: BlockData, name: str) -> Any | None:
"""
Extracts partial output data by name from a given BlockData.
The function supports extracting data from lists, dictionaries, and objects
using specific naming conventions:
- For lists: <output_name>_$_<index>
- For dictionaries: <output_name>_#_<key>
- For objects: <output_name>_@_<attribute>
Args:
output (BlockData): A tuple containing the output name and data.
name (str): The name used to extract specific data from the output.
Returns:
Any | None: The extracted data if found, otherwise None.
Examples:
>>> output = ("result", [10, 20, 30])
>>> parse_execution_output(output, "result_$_1")
20
>>> output = ("config", {"key1": "value1", "key2": "value2"})
>>> parse_execution_output(output, "config_#_key1")
'value1'
>>> class Sample:
... attr1 = "value1"
... attr2 = "value2"
>>> output = ("object", Sample())
>>> parse_execution_output(output, "object_@_attr1")
'value1'
"""
output_name, output_data = output
if name == output_name:
return output_data
if name.startswith(f"{output_name}{LIST_SPLIT}"):
index = int(name.split(LIST_SPLIT)[1])
if not isinstance(output_data, list) or len(output_data) <= index:
return None
return output_data[int(name.split(LIST_SPLIT)[1])]
if name.startswith(f"{output_name}{DICT_SPLIT}"):
index = name.split(DICT_SPLIT)[1]
if not isinstance(output_data, dict) or index not in output_data:
return None
return output_data[index]
if name.startswith(f"{output_name}{OBJC_SPLIT}"):
index = name.split(OBJC_SPLIT)[1]
if isinstance(output_data, object) and hasattr(output_data, index):
return getattr(output_data, index)
return None
return None
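The docstring covers the happy paths; the sketch below (values invented) shows the None fallbacks for out-of-range indices, missing keys, and name mismatches:

print(parse_execution_output(("result", [10, 20, 30]), "result_$_5"))  # None: index out of range
print(parse_execution_output(("config", {"a": 1}), "config_#_b"))      # None: key not present
print(parse_execution_output(("other", 42), "result"))                 # None: names don't match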
def validate_exec(
node: Node,
data: BlockInput,
resolve_input: bool = True,
) -> tuple[BlockInput | None, str]:
"""
Validate the input data for a node execution.
Args:
node: The node to execute.
data: The input data for the node execution.
resolve_input: Whether to resolve dynamic pins into dict/list/object.
Returns:
A tuple of the validated data and the block name.
If the data is invalid, the first element will be None, and the second element
will be an error message.
If the data is valid, the first element will be the resolved input data, and
the second element will be the block name.
"""
node_block: Block | None = get_block(node.block_id)
if not node_block:
return None, f"Block for {node.block_id} not found."
schema = node_block.input_schema
# Convert non-matching data types to the expected input schema.
for name, data_type in schema.__annotations__.items():
if (value := data.get(name)) and (type(value) is not data_type):
data[name] = convert(value, data_type)
# Input data (without default values) should contain all required fields.
error_prefix = f"Input data missing or mismatch for `{node_block.name}`:"
if missing_links := schema.get_missing_links(data, node.input_links):
return None, f"{error_prefix} unpopulated links {missing_links}"
# Merge input data with default values and resolve dynamic dict/list/object pins.
input_default = schema.get_input_defaults(node.input_default)
data = {**input_default, **data}
if resolve_input:
data = merge_execution_input(data)
# Input data post-merge should contain all required fields from the schema.
if missing_input := schema.get_missing_input(data):
return None, f"{error_prefix} missing input {missing_input}"
# Last validation: Validate the input values against the schema.
if error := schema.get_mismatch_error(data):
error_message = f"{error_prefix} {error}"
logger.error(error_message)
return None, error_message
return data, node_block.name
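Caller-side, the contract described above reduces to a single tuple check; a minimal sketch, assuming the node and payload already exist:

# Hypothetical node and input; the second element is the error message when data is None.
input_data, name_or_error = validate_exec(node, {"value": "hello"})
if input_data is None:
    raise ValueError(name_or_error)
print(f"Running block {name_or_error} with {input_data}")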
def merge_execution_input(data: BlockInput) -> BlockInput:
"""
Merges dynamic input pins into a single list, dictionary, or object based on naming patterns.
This function processes input keys that follow specific patterns to merge them into a unified structure:
- `<input_name>_$_<index>` for list inputs.
- `<input_name>_#_<index>` for dictionary inputs.
- `<input_name>_@_<index>` for object inputs.
Args:
data (BlockInput): A dictionary containing input keys and their corresponding values.
Returns:
BlockInput: A dictionary with merged inputs.
Raises:
ValueError: If a list index is not an integer.
Examples:
>>> data = {
... "list_$_0": "a",
... "list_$_1": "b",
... "dict_#_key1": "value1",
... "dict_#_key2": "value2",
... "object_@_attr1": "value1",
... "object_@_attr2": "value2"
... }
>>> merge_execution_input(data)
{
"list": ["a", "b"],
"dict": {"key1": "value1", "key2": "value2"},
"object": <MockObject attr1="value1" attr2="value2">
}
"""
# Merge all input with <input_name>_$_<index> into a single list.
items = list(data.items())
for key, value in items:
if LIST_SPLIT not in key:
continue
name, index = key.split(LIST_SPLIT)
if not index.isdigit():
raise ValueError(f"Invalid key: {key}, #{index} index must be an integer.")
data[name] = data.get(name, [])
if int(index) >= len(data[name]):
# Pad list with empty string on missing indices.
data[name].extend([""] * (int(index) - len(data[name]) + 1))
data[name][int(index)] = value
# Merge all input with <input_name>_#_<index> into a single dict.
for key, value in items:
if DICT_SPLIT not in key:
continue
name, index = key.split(DICT_SPLIT)
data[name] = data.get(name, {})
data[name][index] = value
# Merge all input with <input_name>_@_<index> into a single object.
for key, value in items:
if OBJC_SPLIT not in key:
continue
name, index = key.split(OBJC_SPLIT)
if name not in data or not isinstance(data[name], object):
data[name] = MockObject()
setattr(data[name], index, value)
return data
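merge_execution_input and parse_execution_output are symmetric: one assembles dynamic pins on the way into a block, the other picks them apart on the way out. A short round-trip sketch with invented values:

merged = merge_execution_input({"list_$_0": "a", "list_$_1": "b", "dict_#_k": 1})
# The original split keys are kept; the merged names are added alongside them:
# merged["list"] == ["a", "b"] and merged["dict"] == {"k": 1}
print(parse_execution_output(("values", merged["list"]), "values_$_1"))  # "b"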
def _validate_node_input_credentials(graph: GraphModel, user_id: str):
"""Checks all credentials for all nodes of the graph"""
for node in graph.nodes:
block = node.block
# Find any fields of type CredentialsMetaInput
credentials_fields = cast(
type[BlockSchema], block.input_schema
).get_credentials_fields()
if not credentials_fields:
continue
for field_name, credentials_meta_type in credentials_fields.items():
credentials_meta = credentials_meta_type.model_validate(
node.input_default[field_name]
)
# Fetch the corresponding Credentials and perform sanity checks
credentials = get_integration_credentials_store().get_creds_by_id(
user_id, credentials_meta.id
)
if not credentials:
raise ValueError(
f"Unknown credentials #{credentials_meta.id} "
f"for node #{node.id} input '{field_name}'"
)
if (
credentials.provider != credentials_meta.provider
or credentials.type != credentials_meta.type
):
logger.warning(
f"Invalid credentials #{credentials.id} for node #{node.id}: "
"type/provider mismatch: "
f"{credentials_meta.type}<>{credentials.type};"
f"{credentials_meta.provider}<>{credentials.provider}"
)
raise ValueError(
f"Invalid credentials #{credentials.id} for node #{node.id}: "
"type/provider mismatch"
)
def construct_node_execution_input(
graph: GraphModel,
user_id: str,
data: BlockInput,
) -> list[tuple[str, BlockInput]]:
"""
Validates and prepares the input data for executing a graph.
This function checks the graph for starting nodes, validates the input data
against the schema, and resolves dynamic input pins into a single list,
dictionary, or object.
Args:
graph (GraphModel): The graph model to execute.
user_id (str): The ID of the user executing the graph.
data (BlockInput): The input data for the graph execution.
Returns:
list[tuple[str, BlockInput]]: A list of tuples, each containing the node ID and
the corresponding input data for that node.
"""
graph.validate_graph(for_run=True)
_validate_node_input_credentials(graph, user_id)
nodes_input = []
for node in graph.starting_nodes:
input_data = {}
block = node.block
# Note block should never be executed.
if block.block_type == BlockType.NOTE:
continue
# Extract request input data, and assign it to the input pin.
if block.block_type == BlockType.INPUT:
input_name = node.input_default.get("name")
if input_name and input_name in data:
input_data = {"value": data[input_name]}
# Extract webhook payload, and assign it to the input pin
webhook_payload_key = f"webhook_{node.webhook_id}_payload"
if (
block.block_type in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL)
and node.webhook_id
):
if webhook_payload_key not in data:
raise ValueError(
f"Node {block.name} #{node.id} webhook payload is missing"
)
input_data = {"payload": data[webhook_payload_key]}
input_data, error = validate_exec(node, input_data)
if input_data is None:
raise ValueError(error)
else:
nodes_input.append((node.id, input_data))
if not nodes_input:
raise ValueError(
"No starting nodes found for the graph, make sure an AgentInput or blocks with no inbound links are present as starting nodes."
)
return nodes_input
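A hedged usage sketch (the graph, user ID, and input name are invented) of how a caller turns request data into per-node inputs:

# `graph` is a GraphModel loaded elsewhere; "topic" is assumed to be an AgentInput name.
nodes_input = construct_node_execution_input(graph, user_id="user-123", data={"topic": "cats"})
for node_id, input_data in nodes_input:
    print(node_id, input_data)  # one entry per starting node, already validated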
# ============ Execution Queue Helpers ============ #
class CancelExecutionEvent(BaseModel):
graph_exec_id: str
GRAPH_EXECUTION_EXCHANGE = Exchange(
name="graph_execution",
type=ExchangeType.DIRECT,
durable=True,
auto_delete=False,
)
GRAPH_EXECUTION_QUEUE_NAME = "graph_execution_queue"
GRAPH_EXECUTION_ROUTING_KEY = "graph_execution.run"
GRAPH_EXECUTION_CANCEL_EXCHANGE = Exchange(
name="graph_execution_cancel",
type=ExchangeType.FANOUT,
durable=True,
auto_delete=True,
)
GRAPH_EXECUTION_CANCEL_QUEUE_NAME = "graph_execution_cancel_queue"
def create_execution_queue_config() -> RabbitMQConfig:
"""
Define two exchanges and queues:
- 'graph_execution' (DIRECT) for run tasks.
- 'graph_execution_cancel' (FANOUT) for cancel requests.
"""
run_queue = Queue(
name=GRAPH_EXECUTION_QUEUE_NAME,
exchange=GRAPH_EXECUTION_EXCHANGE,
routing_key=GRAPH_EXECUTION_ROUTING_KEY,
durable=True,
auto_delete=False,
)
cancel_queue = Queue(
name=GRAPH_EXECUTION_CANCEL_QUEUE_NAME,
exchange=GRAPH_EXECUTION_CANCEL_EXCHANGE,
routing_key="", # not used for FANOUT
durable=True,
auto_delete=False,
)
return RabbitMQConfig(
vhost="/",
exchanges=[GRAPH_EXECUTION_EXCHANGE, GRAPH_EXECUTION_CANCEL_EXCHANGE],
queues=[run_queue, cancel_queue],
)
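To show how the cancel side of this topology is used, here is a small sketch that publishes a cancel event through the FANOUT exchange; it reuses only calls that already appear in this diff, and the execution ID is invented:

client = SyncRabbitMQ(create_execution_queue_config())
client.connect()
client.publish_message(
    routing_key="",  # FANOUT exchanges ignore the routing key
    message=CancelExecutionEvent(graph_exec_id="example-exec-id").model_dump_json(),
    exchange=GRAPH_EXECUTION_CANCEL_EXCHANGE,
)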
def add_graph_execution(
graph_id: str,
data: BlockInput,
user_id: str,
graph_version: int | None = None,
preset_id: str | None = None,
) -> GraphExecutionEntry:
"""
Adds a graph execution to the queue and returns the execution entry.
Args:
graph_id (str): The ID of the graph to execute.
data (BlockInput): The input data for the graph execution.
user_id (str): The ID of the user executing the graph.
graph_version (int | None): The version of the graph to execute. Defaults to None.
preset_id (str | None): The ID of the preset to use. Defaults to None.
Returns:
GraphExecutionEntry: The entry for the graph execution.
Raises:
ValueError: If the graph is not found or if there are validation errors.
"""
graph: GraphModel | None = get_db_client().get_graph(
graph_id=graph_id, user_id=user_id, version=graph_version
)
if not graph:
raise ValueError(f"Graph #{graph_id} not found.")
graph_exec = get_db_client().create_graph_execution(
graph_id=graph_id,
graph_version=graph.version,
nodes_input=construct_node_execution_input(graph, user_id, data),
user_id=user_id,
preset_id=preset_id,
)
get_execution_event_bus().publish(graph_exec)
graph_exec_entry = graph_exec.to_graph_execution_entry()
get_execution_queue().publish_message(
routing_key=GRAPH_EXECUTION_ROUTING_KEY,
message=graph_exec_entry.model_dump_json(),
exchange=GRAPH_EXECUTION_EXCHANGE,
)
return graph_exec_entry
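A minimal caller sketch (IDs and input invented): enqueue a run for a stored graph and hand back its execution ID.

entry = add_graph_execution(graph_id="graph-123", data={"topic": "cats"}, user_id="user-456")
print(entry.graph_exec_id)  # the RUN message is now on the graph_execution queue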

View File

@@ -10,6 +10,7 @@ from backend.data import redis
from backend.data.model import Credentials
from backend.integrations.credentials_store import IntegrationCredentialsStore
from backend.integrations.oauth import HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.util.exceptions import MissingConfigError
from backend.util.settings import Settings
@@ -153,12 +154,13 @@ class IntegrationCredentialsManager:
self.store.locks.release_all_locks()
def _get_provider_oauth_handler(provider_name: str) -> "BaseOAuthHandler":
def _get_provider_oauth_handler(provider_name_str: str) -> "BaseOAuthHandler":
provider_name = ProviderName(provider_name_str)
if provider_name not in HANDLERS_BY_NAME:
raise KeyError(f"Unknown provider '{provider_name}'")
client_id = getattr(settings.secrets, f"{provider_name}_client_id")
client_secret = getattr(settings.secrets, f"{provider_name}_client_secret")
client_id = getattr(settings.secrets, f"{provider_name.value}_client_id")
client_secret = getattr(settings.secrets, f"{provider_name.value}_client_secret")
if not (client_id and client_secret):
raise MissingConfigError(
f"Integration with provider '{provider_name}' is not configured",

View File

@@ -11,6 +11,7 @@ class ProviderName(str, Enum):
E2B = "e2b"
EXA = "exa"
FAL = "fal"
GENERIC_WEBHOOK = "generic_webhook"
GITHUB = "github"
GOOGLE = "google"
GOOGLE_MAPS = "google_maps"

View File

@@ -13,6 +13,7 @@ def load_webhook_managers() -> dict["ProviderName", type["BaseWebhooksManager"]]
return _WEBHOOK_MANAGERS
from .compass import CompassWebhookManager
from .generic import GenericWebhooksManager
from .github import GithubWebhooksManager
from .slant3d import Slant3DWebhooksManager
@@ -23,6 +24,7 @@ def load_webhook_managers() -> dict["ProviderName", type["BaseWebhooksManager"]]
CompassWebhookManager,
GithubWebhooksManager,
Slant3DWebhooksManager,
GenericWebhooksManager,
]
}
)

View File

@@ -0,0 +1,29 @@
import logging
from fastapi import Request
from strenum import StrEnum
from backend.data import integrations
from backend.integrations.providers import ProviderName
from ._manual_base import ManualWebhookManagerBase
logger = logging.getLogger(__name__)
class GenericWebhookType(StrEnum):
PLAIN = "plain"
class GenericWebhooksManager(ManualWebhookManagerBase):
PROVIDER_NAME = ProviderName.GENERIC_WEBHOOK
WebhookType = GenericWebhookType
@classmethod
async def validate_payload(
cls, webhook: integrations.Webhook, request: Request
) -> tuple[dict, str]:
payload = await request.json()
event_type = GenericWebhookType.PLAIN
return payload, event_type

View File

@@ -1,7 +1,7 @@
import logging
from typing import TYPE_CHECKING, Callable, Optional, cast
from backend.data.block import BlockSchema, BlockWebhookConfig, get_block
from backend.data.block import BlockSchema, BlockWebhookConfig
from backend.data.graph import set_node_webhook
from backend.integrations.webhooks import get_webhook_manager, supports_webhooks
@@ -29,12 +29,7 @@ async def on_graph_activate(
# Compare nodes in new_graph_version with previous_graph_version
updated_nodes = []
for new_node in graph.nodes:
block = get_block(new_node.block_id)
if not block:
raise ValueError(
f"Node #{new_node.id} is instance of unknown block #{new_node.block_id}"
)
block_input_schema = cast(BlockSchema, block.input_schema)
block_input_schema = cast(BlockSchema, new_node.block.input_schema)
node_credentials = None
if (
@@ -75,12 +70,7 @@ async def on_graph_deactivate(
"""
updated_nodes = []
for node in graph.nodes:
block = get_block(node.block_id)
if not block:
raise ValueError(
f"Node #{node.id} is instance of unknown block #{node.block_id}"
)
block_input_schema = cast(BlockSchema, block.input_schema)
block_input_schema = cast(BlockSchema, node.block.input_schema)
node_credentials = None
if (
@@ -113,11 +103,7 @@ async def on_node_activate(
) -> "NodeModel":
"""Hook to be called when the node is activated/created"""
block = get_block(node.block_id)
if not block:
raise ValueError(
f"Node #{node.id} is instance of unknown block #{node.block_id}"
)
block = node.block
if not block.webhook_config:
return node
@@ -224,11 +210,7 @@ async def on_node_deactivate(
"""Hook to be called when node is deactivated/deleted"""
logger.debug(f"Deactivating node #{node.id}")
block = get_block(node.block_id)
if not block:
raise ValueError(
f"Node #{node.id} is instance of unknown block #{node.block_id}"
)
block = node.block
if not block.webhook_config:
return node

View File

@@ -9,6 +9,7 @@ from autogpt_libs.utils.cache import thread_cached
from prisma.enums import NotificationType
from pydantic import BaseModel
from backend.data import rabbitmq
from backend.data.notifications import (
BaseSummaryData,
BaseSummaryParams,
@@ -128,6 +129,20 @@ class NotificationManager(AppService):
self.running = True
self.email_sender = EmailSender()
@property
def rabbit(self) -> rabbitmq.AsyncRabbitMQ:
"""Access the RabbitMQ service. Will raise if not configured."""
if not self.rabbitmq_service:
raise RuntimeError("RabbitMQ not configured for this service")
return self.rabbitmq_service
@property
def rabbit_config(self) -> rabbitmq.RabbitMQConfig:
"""Access the RabbitMQ config. Will raise if not configured."""
if not self.rabbitmq_config:
raise RuntimeError("RabbitMQ not configured for this service")
return self.rabbitmq_config
@classmethod
def get_port(cls) -> int:
return settings.config.notification_service_port
@@ -245,20 +260,26 @@ class NotificationManager(AppService):
continue
unsub_link = generate_unsubscribe_link(batch.user_id)
events = [
NotificationEventModel[
get_notif_data_type(db_event.type)
].model_validate(
{
"user_id": batch.user_id,
"type": db_event.type,
"data": db_event.data,
"created_at": db_event.created_at,
}
)
for db_event in batch_data.notifications
]
events = []
for db_event in batch_data.notifications:
try:
events.append(
NotificationEventModel[
get_notif_data_type(db_event.type)
].model_validate(
{
"user_id": batch.user_id,
"type": db_event.type,
"data": db_event.data,
"created_at": db_event.created_at,
}
)
)
except Exception as e:
logger.error(
f"Error parsing notification event: {e=}, {db_event=}"
)
continue
logger.info(f"{events=}")
self.email_sender.send_templated(
@@ -668,6 +689,8 @@ class NotificationManager(AppService):
except QueueEmpty:
logger.debug(f"Queue {error_queue_name} empty")
except TimeoutError:
logger.debug(f"Queue {error_queue_name} timed out")
except Exception as e:
if message:
logger.error(
@@ -675,15 +698,19 @@ class NotificationManager(AppService):
)
self.run_and_wait(message.reject(requeue=False))
else:
logger.error(
f"Error in notification service loop, message unable to be rejected, and will have to be manually removed to free space in the queue: {e}"
logger.exception(
f"Error in notification service loop, message unable to be rejected, and will have to be manually removed to free space in the queue: {e=}"
)
def run_service(self):
logger.info(f"[{self.service_name}] ⏳ Configuring RabbitMQ...")
self.rabbitmq_service = rabbitmq.AsyncRabbitMQ(self.rabbitmq_config)
self.run_and_wait(self.rabbitmq_service.connect())
logger.info(f"[{self.service_name}] Started notification service")
# Set up scheduler for batch processing of all notification types
# this can be changed later to spawn differnt cleanups on different schedules
# this can be changed later to spawn different cleanups on different schedules
try:
get_scheduler().add_batched_notification_schedule(
notification_types=list(NotificationType),
@@ -745,3 +772,5 @@ class NotificationManager(AppService):
"""Cleanup service resources"""
self.running = False
super().cleanup()
logger.info(f"[{self.service_name}] ⏳ Disconnecting RabbitMQ...")
self.run_and_wait(self.rabbitmq_service.disconnect())

View File

@@ -2,7 +2,6 @@ import logging
from collections import defaultdict
from typing import Annotated, Any, Dict, List, Optional, Sequence
from autogpt_libs.utils.cache import thread_cached
from fastapi import APIRouter, Body, Depends, HTTPException
from prisma.enums import AgentExecutionStatus, APIKeyPermission
from typing_extensions import TypedDict
@@ -13,17 +12,10 @@ from backend.data import graph as graph_db
from backend.data.api_key import APIKey
from backend.data.block import BlockInput, CompletedBlockOutput
from backend.data.execution import NodeExecutionResult
from backend.executor import ExecutionManager
from backend.server.external.middleware import require_permission
from backend.util.service import get_service_client
from backend.server.routers import v1 as internal_api_routes
from backend.util.settings import Settings
@thread_cached
def execution_manager_client() -> ExecutionManager:
return get_service_client(ExecutionManager)
settings = Settings()
logger = logging.getLogger(__name__)
@@ -98,18 +90,18 @@ def execute_graph_block(
path="/graphs/{graph_id}/execute/{graph_version}",
tags=["graphs"],
)
def execute_graph(
async def execute_graph(
graph_id: str,
graph_version: int,
node_input: Annotated[dict[str, Any], Body(..., embed=True, default_factory=dict)],
api_key: APIKey = Depends(require_permission(APIKeyPermission.EXECUTE_GRAPH)),
) -> dict[str, Any]:
try:
graph_exec = execution_manager_client().add_execution(
graph_id,
graph_version=graph_version,
data=node_input,
graph_exec = await internal_api_routes.execute_graph(
graph_id=graph_id,
node_input=node_input,
user_id=api_key.user_id,
graph_version=graph_version,
)
return {"id": graph_exec.graph_exec_id}
except Exception as e:

View File

@@ -1,3 +1,4 @@
import asyncio
import logging
from typing import TYPE_CHECKING, Annotated, Literal
@@ -14,13 +15,12 @@ from backend.data.integrations import (
wait_for_webhook_event,
)
from backend.data.model import Credentials, CredentialsType, OAuth2Credentials
from backend.executor.manager import ExecutionManager
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.oauth import HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks import get_webhook_manager
from backend.server.routers import v1 as internal_api_routes
from backend.util.exceptions import NeedConfirmation, NotFoundError
from backend.util.service import get_service_client
from backend.util.settings import Settings
if TYPE_CHECKING:
@@ -309,19 +309,22 @@ async def webhook_ingress_generic(
if not webhook.attached_nodes:
return
executor = get_service_client(ExecutionManager)
executions = []
for node in webhook.attached_nodes:
logger.debug(f"Webhook-attached node: {node}")
if not node.is_triggered_by_event_type(event_type):
logger.debug(f"Node #{node.id} doesn't trigger on event {event_type}")
continue
logger.debug(f"Executing graph #{node.graph_id} node #{node.id}")
executor.add_execution(
graph_id=node.graph_id,
graph_version=node.graph_version,
data={f"webhook_{webhook_id}_payload": payload},
user_id=webhook.user_id,
executions.append(
internal_api_routes.execute_graph(
graph_id=node.graph_id,
graph_version=node.graph_version,
node_input={f"webhook_{webhook_id}_payload": payload},
user_id=webhook.user_id,
)
)
asyncio.gather(*executions)
@router.post("/webhooks/{webhook_id}/ping")

View File

@@ -11,19 +11,19 @@ from autogpt_libs.feature_flag.client import (
initialize_launchdarkly,
shutdown_launchdarkly,
)
from autogpt_libs.logging.utils import generate_uvicorn_config
import backend.data.block
import backend.data.db
import backend.data.graph
import backend.data.user
import backend.server.integrations.router
import backend.server.routers.postmark.postmark
import backend.server.routers.v1
import backend.server.v2.admin.store_admin_routes
import backend.server.v2.library.db
import backend.server.v2.library.model
import backend.server.v2.library.routes
import backend.server.v2.otto.routes
import backend.server.v2.postmark.postmark
import backend.server.v2.store.model
import backend.server.v2.store.routes
import backend.util.service
@@ -115,8 +115,8 @@ app.include_router(
)
app.include_router(
backend.server.v2.postmark.postmark.router,
tags=["v2", "email"],
backend.server.routers.postmark.postmark.router,
tags=["v1", "email"],
prefix="/api/email",
)
@@ -141,8 +141,13 @@ class AgentServer(backend.util.service.AppProcess):
server_app,
host=backend.util.settings.Config().agent_api_host,
port=backend.util.settings.Config().agent_api_port,
log_config=generate_uvicorn_config(),
)
def cleanup(self):
super().cleanup()
logger.info(f"[{self.service_name}] ⏳ Shutting down Agent Server...")
@staticmethod
async def test_execute_graph(
graph_id: str,
@@ -150,7 +155,7 @@ class AgentServer(backend.util.service.AppProcess):
graph_version: Optional[int] = None,
node_input: Optional[dict[str, Any]] = None,
):
return backend.server.routers.v1.execute_graph(
return await backend.server.routers.v1.execute_graph(
user_id=user_id,
graph_id=graph_id,
graph_version=graph_version,
@@ -269,7 +274,9 @@ class AgentServer(backend.util.service.AppProcess):
provider: ProviderName,
credentials: Credentials,
) -> Credentials:
return backend.server.integrations.router.create_credentials(
from backend.server.integrations.router import create_credentials
return create_credentials(
user_id=user_id, provider=provider, credentials=credentials
)

View File

@@ -10,7 +10,7 @@ from backend.data.user import (
set_user_email_verification,
unsubscribe_user_by_token,
)
from backend.server.v2.postmark.models import (
from backend.server.routers.postmark.models import (
PostmarkBounceEnum,
PostmarkBounceWebhook,
PostmarkClickWebhook,

View File

@@ -13,7 +13,6 @@ from fastapi import APIRouter, Body, Depends, HTTPException, Request, Response
from starlette.status import HTTP_204_NO_CONTENT, HTTP_404_NOT_FOUND
from typing_extensions import Optional, TypedDict
import backend.data.block
import backend.server.integrations.router
import backend.server.routers.analytics
import backend.server.v2.library.db as library_db
@@ -31,7 +30,7 @@ from backend.data.api_key import (
suspend_api_key,
update_api_key_permissions,
)
from backend.data.block import BlockInput, CompletedBlockOutput
from backend.data.block import BlockInput, CompletedBlockOutput, get_block, get_blocks
from backend.data.credit import (
AutoTopUpConfig,
RefundRequest,
@@ -41,6 +40,7 @@ from backend.data.credit import (
get_user_credit_model,
set_auto_top_up,
)
from backend.data.execution import AsyncRedisExecutionEventBus
from backend.data.notifications import NotificationPreference, NotificationPreferenceDTO
from backend.data.onboarding import (
UserOnboardingUpdate,
@@ -49,13 +49,16 @@ from backend.data.onboarding import (
onboarding_enabled,
update_user_onboarding,
)
from backend.data.rabbitmq import AsyncRabbitMQ
from backend.data.user import (
get_or_create_user,
get_user_notification_preference,
update_user_email,
update_user_notification_preference,
)
from backend.executor import ExecutionManager, Scheduler, scheduler
from backend.executor import Scheduler, scheduler
from backend.executor import utils as execution_utils
from backend.executor.utils import create_execution_queue_config
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.webhooks.graph_lifecycle_hooks import (
on_graph_activate,
@@ -79,13 +82,20 @@ if TYPE_CHECKING:
@thread_cached
def execution_manager_client() -> ExecutionManager:
return get_service_client(ExecutionManager)
def execution_scheduler_client() -> Scheduler:
return get_service_client(Scheduler)
@thread_cached
def execution_scheduler_client() -> Scheduler:
return get_service_client(Scheduler)
async def execution_queue_client() -> AsyncRabbitMQ:
client = AsyncRabbitMQ(create_execution_queue_config())
await client.connect()
return client
@thread_cached
def execution_event_bus() -> AsyncRedisExecutionEventBus:
return AsyncRedisExecutionEventBus()
settings = Settings()
@@ -206,7 +216,7 @@ async def is_onboarding_enabled():
@v1_router.get(path="/blocks", tags=["blocks"], dependencies=[Depends(auth_middleware)])
def get_graph_blocks() -> Sequence[dict[Any, Any]]:
blocks = [block() for block in backend.data.block.get_blocks().values()]
blocks = [block() for block in get_blocks().values()]
costs = get_block_costs()
return [
{**b.to_dict(), "costs": costs.get(b.id, [])} for b in blocks if not b.disabled
@@ -219,7 +229,7 @@ def get_graph_blocks() -> Sequence[dict[Any, Any]]:
dependencies=[Depends(auth_middleware)],
)
def execute_graph_block(block_id: str, data: BlockInput) -> CompletedBlockOutput:
obj = backend.data.block.get_block(block_id)
obj = get_block(block_id)
if not obj:
raise HTTPException(status_code=404, detail=f"Block #{block_id} not found.")
@@ -308,7 +318,7 @@ async def configure_user_auto_top_up(
dependencies=[Depends(auth_middleware)],
)
async def get_user_auto_top_up(
user_id: Annotated[str, Depends(get_user_id)]
user_id: Annotated[str, Depends(get_user_id)],
) -> AutoTopUpConfig:
return await get_auto_top_up(user_id)
@@ -375,7 +385,7 @@ async def get_credit_history(
@v1_router.get(path="/credits/refunds", dependencies=[Depends(auth_middleware)])
async def get_refund_requests(
user_id: Annotated[str, Depends(get_user_id)]
user_id: Annotated[str, Depends(get_user_id)],
) -> list[RefundRequest]:
return await _user_credit_model.get_refund_requests(user_id)
@@ -391,7 +401,7 @@ class DeleteGraphResponse(TypedDict):
@v1_router.get(path="/graphs", tags=["graphs"], dependencies=[Depends(auth_middleware)])
async def get_graphs(
user_id: Annotated[str, Depends(get_user_id)]
user_id: Annotated[str, Depends(get_user_id)],
) -> Sequence[graph_db.GraphModel]:
return await graph_db.get_graphs(filter_by="active", user_id=user_id)
@@ -580,16 +590,35 @@ async def set_graph_active_version(
tags=["graphs"],
dependencies=[Depends(auth_middleware)],
)
def execute_graph(
async def execute_graph(
graph_id: str,
node_input: Annotated[dict[str, Any], Body(..., default_factory=dict)],
user_id: Annotated[str, Depends(get_user_id)],
graph_version: Optional[int] = None,
preset_id: Optional[str] = None,
) -> ExecuteGraphResponse:
graph_exec = execution_manager_client().add_execution(
graph_id, node_input, user_id=user_id, graph_version=graph_version
graph: graph_db.GraphModel | None = await graph_db.get_graph(
graph_id=graph_id, user_id=user_id, version=graph_version
)
return ExecuteGraphResponse(graph_exec_id=graph_exec.graph_exec_id)
if not graph:
raise ValueError(f"Graph #{graph_id} not found.")
graph_exec = await execution_db.create_graph_execution(
graph_id=graph_id,
graph_version=graph.version,
nodes_input=execution_utils.construct_node_execution_input(
graph, user_id, node_input
),
user_id=user_id,
preset_id=preset_id,
)
execution_utils.get_execution_event_bus().publish(graph_exec)
execution_utils.get_execution_queue().publish_message(
routing_key=execution_utils.GRAPH_EXECUTION_ROUTING_KEY,
message=graph_exec.to_graph_execution_entry().model_dump_json(),
exchange=execution_utils.GRAPH_EXECUTION_EXCHANGE,
)
return ExecuteGraphResponse(graph_exec_id=graph_exec.id)
@v1_router.post(
@@ -605,9 +634,7 @@ async def stop_graph_run(
):
raise HTTPException(404, detail=f"Agent execution #{graph_exec_id} not found")
await asyncio.to_thread(
lambda: execution_manager_client().cancel_execution(graph_exec_id)
)
await _cancel_execution(graph_exec_id)
# Retrieve & return canceled graph execution in its final state
result = await execution_db.get_graph_execution(
@@ -621,6 +648,49 @@ async def stop_graph_run(
return result
async def _cancel_execution(graph_exec_id: str):
"""
Mechanism:
1. Set the cancel event
2. Graph executor's cancel handler thread detects the event, terminates workers,
reinitializes worker pool, and returns.
3. Update execution statuses in DB and set `error` outputs to `"TERMINATED"`.
"""
queue_client = await execution_queue_client()
await queue_client.publish_message(
routing_key="",
message=execution_utils.CancelExecutionEvent(
graph_exec_id=graph_exec_id
).model_dump_json(),
exchange=execution_utils.GRAPH_EXECUTION_CANCEL_EXCHANGE,
)
# Update the status of the graph & node executions
await execution_db.update_graph_execution_stats(
graph_exec_id,
execution_db.ExecutionStatus.TERMINATED,
)
node_execs = [
node_exec.model_copy(update={"status": execution_db.ExecutionStatus.TERMINATED})
for node_exec in await execution_db.get_node_execution_results(
graph_exec_id=graph_exec_id,
statuses=[
execution_db.ExecutionStatus.QUEUED,
execution_db.ExecutionStatus.RUNNING,
execution_db.ExecutionStatus.INCOMPLETE,
],
)
]
await execution_db.update_node_execution_status_batch(
[node_exec.node_exec_id for node_exec in node_execs],
execution_db.ExecutionStatus.TERMINATED,
)
await asyncio.gather(
*[execution_event_bus().publish(node_exec) for node_exec in node_execs]
)
@v1_router.get(
path="/executions",
tags=["graphs"],
@@ -792,7 +862,7 @@ async def create_api_key(
dependencies=[Depends(auth_middleware)],
)
async def get_api_keys(
user_id: Annotated[str, Depends(get_user_id)]
user_id: Annotated[str, Depends(get_user_id)],
) -> list[APIKeyWithoutHash]:
"""List all API keys for the user"""
try:

View File

@@ -68,12 +68,12 @@ async def list_library_agents(
if search_term:
where_clause["OR"] = [
{
"Agent": {
"AgentGraph": {
"is": {"name": {"contains": search_term, "mode": "insensitive"}}
}
},
{
"Agent": {
"AgentGraph": {
"is": {
"description": {"contains": search_term, "mode": "insensitive"}
}
@@ -228,16 +228,17 @@ async def create_library_agent(
try:
return await prisma.models.LibraryAgent.prisma().create(
data={
"isCreatedByUser": (user_id == graph.user_id),
"useGraphIsActiveVersion": True,
"User": {"connect": {"id": user_id}},
"Agent": {
data=prisma.types.LibraryAgentCreateInput(
isCreatedByUser=(user_id == graph.user_id),
useGraphIsActiveVersion=True,
User={"connect": {"id": user_id}},
# Creator={"connect": {"id": agent.userId}},
AgentGraph={
"connect": {
"graphVersionId": {"id": graph.id, "version": graph.version}
}
},
}
)
)
except prisma.errors.PrismaError as e:
logger.error(f"Database error creating agent in library: {e}")
@@ -246,38 +247,41 @@ async def create_library_agent(
async def update_agent_version_in_library(
user_id: str,
agent_id: str,
agent_version: int,
agent_graph_id: str,
agent_graph_version: int,
) -> None:
"""
Updates the agent version in the library if useGraphIsActiveVersion is True.
Args:
user_id: Owner of the LibraryAgent.
agent_id: The agent's ID to update.
agent_version: The new version of the agent.
agent_graph_id: The agent graph's ID to update.
agent_graph_version: The new version of the agent graph.
Raises:
DatabaseError: If there's an error with the update.
"""
logger.debug(
f"Updating agent version in library for user #{user_id}, "
f"agent #{agent_id} v{agent_version}"
f"agent #{agent_graph_id} v{agent_graph_version}"
)
try:
library_agent = await prisma.models.LibraryAgent.prisma().find_first_or_raise(
where={
"userId": user_id,
"agentId": agent_id,
"agentGraphId": agent_graph_id,
"useGraphIsActiveVersion": True,
},
)
await prisma.models.LibraryAgent.prisma().update(
where={"id": library_agent.id},
data={
"Agent": {
"AgentGraph": {
"connect": {
"graphVersionId": {"id": agent_id, "version": agent_version}
"graphVersionId": {
"id": agent_graph_id,
"version": agent_graph_version,
}
},
},
},
@@ -341,7 +345,7 @@ async def delete_library_agent_by_graph_id(graph_id: str, user_id: str) -> None:
"""
try:
await prisma.models.LibraryAgent.prisma().delete_many(
where={"agentId": graph_id, "userId": user_id}
where={"agentGraphId": graph_id, "userId": user_id}
)
except prisma.errors.PrismaError as e:
logger.error(f"Database error deleting library agent: {e}")
@@ -374,10 +378,10 @@ async def add_store_agent_to_library(
async with locked_transaction(f"add_agent_trx_{user_id}"):
store_listing_version = (
await prisma.models.StoreListingVersion.prisma().find_unique(
where={"id": store_listing_version_id}, include={"Agent": True}
where={"id": store_listing_version_id}, include={"AgentGraph": True}
)
)
if not store_listing_version or not store_listing_version.Agent:
if not store_listing_version or not store_listing_version.AgentGraph:
logger.warning(
f"Store listing version not found: {store_listing_version_id}"
)
@@ -385,7 +389,7 @@ async def add_store_agent_to_library(
f"Store listing version {store_listing_version_id} not found or invalid"
)
graph = store_listing_version.Agent
graph = store_listing_version.AgentGraph
if graph.userId == user_id:
logger.warning(
f"User #{user_id} attempted to add their own agent to their library"
@@ -397,8 +401,8 @@ async def add_store_agent_to_library(
await prisma.models.LibraryAgent.prisma().find_first(
where={
"userId": user_id,
"agentId": graph.id,
"agentVersion": graph.version,
"agentGraphId": graph.id,
"agentGraphVersion": graph.version,
},
include=library_agent_include(user_id),
)
@@ -418,17 +422,17 @@ async def add_store_agent_to_library(
# Create LibraryAgent entry
added_agent = await prisma.models.LibraryAgent.prisma().create(
data={
"userId": user_id,
"agentId": graph.id,
"agentVersion": graph.version,
"isCreatedByUser": False,
},
data=prisma.types.LibraryAgentCreateInput(
userId=user_id,
agentGraphId=graph.id,
agentGraphVersion=graph.version,
isCreatedByUser=False,
),
include=library_agent_include(user_id),
)
logger.debug(
f"Added graph #{graph.id} "
f"for store listing #{store_listing_version.id} "
f"Added graph #{graph.id} v{graph.version}"
f"for store listing version #{store_listing_version.id} "
f"to library for user #{user_id}"
)
return library_model.LibraryAgent.from_db(added_agent)
@@ -467,8 +471,8 @@ async def set_is_deleted_for_library_agent(
count = await prisma.models.LibraryAgent.prisma().update_many(
where={
"userId": user_id,
"agentId": agent_id,
"agentVersion": agent_version,
"agentGraphId": agent_id,
"agentGraphVersion": agent_version,
},
data={"isDeleted": is_deleted},
)
@@ -597,6 +601,12 @@ async def upsert_preset(
f"Upserting preset #{preset_id} ({repr(preset.name)}) for user #{user_id}",
)
try:
inputs = [
prisma.types.AgentNodeExecutionInputOutputCreateWithoutRelationsInput(
name=name, data=prisma.fields.Json(data)
)
for name, data in preset.inputs.items()
]
if preset_id:
# Update existing preset
updated = await prisma.models.AgentPreset.prisma().update(
@@ -605,12 +615,7 @@ async def upsert_preset(
"name": preset.name,
"description": preset.description,
"isActive": preset.is_active,
"InputPresets": {
"create": [
{"name": name, "data": prisma.fields.Json(data)}
for name, data in preset.inputs.items()
]
},
"InputPresets": {"create": inputs},
},
include={"InputPresets": True},
)
@@ -620,20 +625,15 @@ async def upsert_preset(
else:
# Create new preset
new_preset = await prisma.models.AgentPreset.prisma().create(
data={
"userId": user_id,
"name": preset.name,
"description": preset.description,
"agentId": preset.agent_id,
"agentVersion": preset.agent_version,
"isActive": preset.is_active,
"InputPresets": {
"create": [
{"name": name, "data": prisma.fields.Json(data)}
for name, data in preset.inputs.items()
]
},
},
data=prisma.types.AgentPresetCreateInput(
userId=user_id,
name=preset.name,
description=preset.description,
agentGraphId=preset.graph_id,
agentGraphVersion=preset.graph_version,
isActive=preset.is_active,
InputPresets={"create": inputs},
),
include={"InputPresets": True},
)
return library_model.LibraryAgentPreset.from_db(new_preset)

View File

@@ -30,8 +30,8 @@ async def test_get_library_agents(mocker):
prisma.models.LibraryAgent(
id="ua1",
userId="test-user",
agentId="agent2",
agentVersion=1,
agentGraphId="agent2",
agentGraphVersion=1,
isCreatedByUser=False,
isDeleted=False,
isArchived=False,
@@ -39,7 +39,7 @@ async def test_get_library_agents(mocker):
updatedAt=datetime.now(),
isFavorite=False,
useGraphIsActiveVersion=True,
Agent=prisma.models.AgentGraph(
AgentGraph=prisma.models.AgentGraph(
id="agent2",
version=1,
name="Test Agent 2",
@@ -71,8 +71,8 @@ async def test_get_library_agents(mocker):
assert result.agents[0].id == "ua1"
assert result.agents[0].name == "Test Agent 2"
assert result.agents[0].description == "Test Description 2"
assert result.agents[0].agent_id == "agent2"
assert result.agents[0].agent_version == 1
assert result.agents[0].graph_id == "agent2"
assert result.agents[0].graph_version == 1
assert result.agents[0].can_access_graph is False
assert result.agents[0].is_latest_version is True
assert result.pagination.total_items == 1
@@ -81,7 +81,7 @@ async def test_get_library_agents(mocker):
assert result.pagination.page_size == 50
@pytest.mark.asyncio(scope="session")
@pytest.mark.asyncio(loop_scope="session")
async def test_add_agent_to_library(mocker):
await connect()
# Mock data
@@ -90,8 +90,8 @@ async def test_add_agent_to_library(mocker):
version=1,
createdAt=datetime.now(),
updatedAt=datetime.now(),
agentId="agent1",
agentVersion=1,
agentGraphId="agent1",
agentGraphVersion=1,
name="Test Agent",
subHeading="Test Agent Subheading",
imageUrls=["https://example.com/image.jpg"],
@@ -102,7 +102,7 @@ async def test_add_agent_to_library(mocker):
isAvailable=True,
storeListingId="listing123",
submissionStatus=prisma.enums.SubmissionStatus.APPROVED,
Agent=prisma.models.AgentGraph(
AgentGraph=prisma.models.AgentGraph(
id="agent1",
version=1,
name="Test Agent",
@@ -116,8 +116,8 @@ async def test_add_agent_to_library(mocker):
mock_library_agent_data = prisma.models.LibraryAgent(
id="ua1",
userId="test-user",
agentId=mock_store_listing_data.agentId,
agentVersion=1,
agentGraphId=mock_store_listing_data.agentGraphId,
agentGraphVersion=1,
isCreatedByUser=False,
isDeleted=False,
isArchived=False,
@@ -125,7 +125,7 @@ async def test_add_agent_to_library(mocker):
updatedAt=datetime.now(),
isFavorite=False,
useGraphIsActiveVersion=True,
Agent=mock_store_listing_data.Agent,
AgentGraph=mock_store_listing_data.AgentGraph,
)
# Mock prisma calls
@@ -147,25 +147,28 @@ async def test_add_agent_to_library(mocker):
# Verify mocks called correctly
mock_store_listing_version.return_value.find_unique.assert_called_once_with(
where={"id": "version123"}, include={"Agent": True}
where={"id": "version123"}, include={"AgentGraph": True}
)
mock_library_agent.return_value.find_first.assert_called_once_with(
where={
"userId": "test-user",
"agentId": "agent1",
"agentVersion": 1,
"agentGraphId": "agent1",
"agentGraphVersion": 1,
},
include=library_agent_include("test-user"),
)
mock_library_agent.return_value.create.assert_called_once_with(
data=prisma.types.LibraryAgentCreateInput(
userId="test-user", agentId="agent1", agentVersion=1, isCreatedByUser=False
userId="test-user",
agentGraphId="agent1",
agentGraphVersion=1,
isCreatedByUser=False,
),
include=library_agent_include("test-user"),
)
@pytest.mark.asyncio(scope="session")
@pytest.mark.asyncio(loop_scope="session")
async def test_add_agent_to_library_not_found(mocker):
await connect()
# Mock prisma calls
@@ -182,5 +185,5 @@ async def test_add_agent_to_library_not_found(mocker):
# Verify mock called correctly
mock_store_listing_version.return_value.find_unique.assert_called_once_with(
where={"id": "version123"}, include={"Agent": True}
where={"id": "version123"}, include={"AgentGraph": True}
)

View File

@@ -25,8 +25,8 @@ class LibraryAgent(pydantic.BaseModel):
"""
id: str
agent_id: str
agent_version: int
graph_id: str
graph_version: int
image_url: str | None
@@ -58,12 +58,12 @@ class LibraryAgent(pydantic.BaseModel):
Factory method that constructs a LibraryAgent from a Prisma LibraryAgent
model instance.
"""
if not agent.Agent:
if not agent.AgentGraph:
raise ValueError("Associated Agent record is required.")
graph = graph_model.GraphModel.from_db(agent.Agent)
graph = graph_model.GraphModel.from_db(agent.AgentGraph)
agent_updated_at = agent.Agent.updatedAt
agent_updated_at = agent.AgentGraph.updatedAt
lib_agent_updated_at = agent.updatedAt
# Compute updated_at as the latest between library agent and graph
@@ -83,21 +83,21 @@ class LibraryAgent(pydantic.BaseModel):
week_ago = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(
days=7
)
executions = agent.Agent.AgentGraphExecution or []
executions = agent.AgentGraph.Executions or []
status_result = _calculate_agent_status(executions, week_ago)
status = status_result.status
new_output = status_result.new_output
# Check if user can access the graph
can_access_graph = agent.Agent.userId == agent.userId
can_access_graph = agent.AgentGraph.userId == agent.userId
# Hard-coded to True until a method to check is implemented
is_latest_version = True
return LibraryAgent(
id=agent.id,
agent_id=agent.agentId,
agent_version=agent.agentVersion,
graph_id=agent.agentGraphId,
graph_version=agent.agentGraphVersion,
image_url=agent.imageUrl,
creator_name=creator_name,
creator_image_url=creator_image_url,
@@ -174,8 +174,8 @@ class LibraryAgentPreset(pydantic.BaseModel):
id: str
updated_at: datetime.datetime
agent_id: str
agent_version: int
graph_id: str
graph_version: int
name: str
description: str
@@ -194,8 +194,8 @@ class LibraryAgentPreset(pydantic.BaseModel):
return cls(
id=preset.id,
updated_at=preset.updatedAt,
agent_id=preset.agentId,
agent_version=preset.agentVersion,
graph_id=preset.agentGraphId,
graph_version=preset.agentGraphVersion,
name=preset.name,
description=preset.description,
is_active=preset.isActive,
@@ -218,8 +218,8 @@ class CreateLibraryAgentPresetRequest(pydantic.BaseModel):
name: str
description: str
inputs: block_model.BlockInput
agent_id: str
agent_version: int
graph_id: str
graph_version: int
is_active: bool
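
After this rename, callers reference the underlying graph via graph_id/graph_version rather than the old agent_id/agent_version pair. A minimal hedged sketch of building the request (the values and the dict shape of inputs are illustrative assumptions; BlockInput is treated here as a plain name-to-value mapping):

    import backend.server.v2.library.model as library_model

    # Illustrative values only; "agent-123" and the input payload are hypothetical.
    preset_request = library_model.CreateLibraryAgentPresetRequest(
        name="Daily digest",
        description="Run the digest agent with fixed inputs",
        inputs={"topic": "AI news"},  # assumed BlockInput shape: name -> value
        graph_id="agent-123",
        graph_version=1,
        is_active=True,
    )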

View File

@@ -5,7 +5,6 @@ import prisma.models
import pytest
import backend.server.v2.library.model as library_model
from backend.util import json
@pytest.mark.asyncio
@@ -15,8 +14,8 @@ async def test_agent_preset_from_db():
id="test-agent-123",
createdAt=datetime.datetime.now(),
updatedAt=datetime.datetime.now(),
agentId="agent-123",
agentVersion=1,
agentGraphId="agent-123",
agentGraphVersion=1,
name="Test Agent",
description="Test agent description",
isActive=True,
@@ -27,7 +26,7 @@ async def test_agent_preset_from_db():
id="input-123",
time=datetime.datetime.now(),
name="input1",
data=json.dumps({"type": "string", "value": "test value"}), # type: ignore
data=prisma.Json({"type": "string", "value": "test value"}),
)
],
)
@@ -36,7 +35,7 @@ async def test_agent_preset_from_db():
agent = library_model.LibraryAgentPreset.from_db(db_agent)
assert agent.id == "test-agent-123"
assert agent.agent_version == 1
assert agent.graph_version == 1
assert agent.is_active is True
assert agent.name == "Test Agent"
assert agent.description == "Test agent description"

View File

@@ -2,25 +2,16 @@ import logging
from typing import Annotated, Any
import autogpt_libs.auth as autogpt_auth_lib
import autogpt_libs.utils.cache
from fastapi import APIRouter, Body, Depends, HTTPException, status
import backend.executor
import backend.server.v2.library.db as db
import backend.server.v2.library.model as models
import backend.util.service
logger = logging.getLogger(__name__)
router = APIRouter()
@autogpt_libs.utils.cache.thread_cached
def execution_manager_client() -> backend.executor.ExecutionManager:
"""Return a cached instance of ExecutionManager client."""
return backend.util.service.get_service_client(backend.executor.ExecutionManager)
@router.get(
"/presets",
summary="List presets",
@@ -216,6 +207,8 @@ async def execute_preset(
HTTPException: If the preset is not found or an error occurs while executing the preset.
"""
try:
from backend.server.routers import v1 as internal_api_routes
preset = await db.get_preset(user_id, preset_id)
if not preset:
raise HTTPException(
@@ -226,10 +219,10 @@ async def execute_preset(
# Merge input overrides with preset inputs
merged_node_input = preset.inputs | node_input
execution = execution_manager_client().add_execution(
execution = await internal_api_routes.execute_graph(
graph_id=graph_id,
node_input=merged_node_input,
graph_version=graph_version,
data=merged_node_input,
user_id=user_id,
preset_id=preset_id,
)

View File

@@ -35,8 +35,8 @@ async def test_get_library_agents_success(mocker: pytest_mock.MockFixture):
agents=[
library_model.LibraryAgent(
id="test-agent-1",
agent_id="test-agent-1",
agent_version=1,
graph_id="test-agent-1",
graph_version=1,
name="Test Agent 1",
description="Test Description 1",
image_url=None,
@@ -51,8 +51,8 @@ async def test_get_library_agents_success(mocker: pytest_mock.MockFixture):
),
library_model.LibraryAgent(
id="test-agent-2",
agent_id="test-agent-2",
agent_version=1,
graph_id="test-agent-2",
graph_version=1,
name="Test Agent 2",
description="Test Description 2",
image_url=None,
@@ -78,9 +78,9 @@ async def test_get_library_agents_success(mocker: pytest_mock.MockFixture):
data = library_model.LibraryAgentResponse.model_validate(response.json())
assert len(data.agents) == 2
assert data.agents[0].agent_id == "test-agent-1"
assert data.agents[0].graph_id == "test-agent-1"
assert data.agents[0].can_access_graph is True
assert data.agents[1].agent_id == "test-agent-2"
assert data.agents[1].graph_id == "test-agent-2"
assert data.agents[1].can_access_graph is False
mock_db_call.assert_called_once_with(
user_id="test-user-id",

View File

@@ -200,17 +200,17 @@ async def get_available_graph(
"isAvailable": True,
"isDeleted": False,
},
include={"Agent": {"include": {"AgentNodes": True}}},
include={"AgentGraph": {"include": {"Nodes": True}}},
)
)
if not store_listing_version or not store_listing_version.Agent:
if not store_listing_version or not store_listing_version.AgentGraph:
raise fastapi.HTTPException(
status_code=404,
detail=f"Store listing version {store_listing_version_id} not found",
)
graph = GraphModel.from_db(store_listing_version.Agent)
graph = GraphModel.from_db(store_listing_version.AgentGraph)
# We return graph meta without the nodes; the nodes cannot simply be stripped
# from the graph model, because then input_schema would be empty
return {
@@ -516,7 +516,7 @@ async def delete_store_submission(
try:
# Verify the submission belongs to this user
submission = await prisma.models.StoreListing.prisma().find_first(
where={"agentId": submission_id, "owningUserId": user_id}
where={"agentGraphId": submission_id, "owningUserId": user_id}
)
if not submission:
@@ -598,7 +598,7 @@ async def create_store_submission(
# Check if listing already exists for this agent
existing_listing = await prisma.models.StoreListing.prisma().find_first(
where=prisma.types.StoreListingWhereInput(
agentId=agent_id, owningUserId=user_id
agentGraphId=agent_id, owningUserId=user_id
)
)
@@ -625,15 +625,15 @@ async def create_store_submission(
# If no existing listing, create a new one
data = prisma.types.StoreListingCreateInput(
slug=slug,
agentId=agent_id,
agentVersion=agent_version,
agentGraphId=agent_id,
agentGraphVersion=agent_version,
owningUserId=user_id,
createdAt=datetime.now(tz=timezone.utc),
Versions={
"create": [
prisma.types.StoreListingVersionCreateInput(
agentId=agent_id,
agentVersion=agent_version,
agentGraphId=agent_id,
agentGraphVersion=agent_version,
name=name,
videoUrl=video_url,
imageUrls=image_urls,
@@ -758,8 +758,8 @@ async def create_store_version(
new_version = await prisma.models.StoreListingVersion.prisma().create(
data=prisma.types.StoreListingVersionCreateInput(
version=next_version,
agentId=agent_id,
agentVersion=agent_version,
agentGraphId=agent_id,
agentGraphVersion=agent_version,
name=name,
videoUrl=video_url,
imageUrls=image_urls,
@@ -959,17 +959,17 @@ async def get_my_agents(
try:
search_filter: prisma.types.LibraryAgentWhereInput = {
"userId": user_id,
"Agent": {"is": {"StoreListing": {"none": {"isDeleted": False}}}},
"AgentGraph": {"is": {"StoreListings": {"none": {"isDeleted": False}}}},
"isArchived": False,
"isDeleted": False,
}
library_agents = await prisma.models.LibraryAgent.prisma().find_many(
where=search_filter,
order=[{"agentVersion": "desc"}],
order=[{"agentGraphVersion": "desc"}],
skip=(page - 1) * page_size,
take=page_size,
include={"Agent": True},
include={"AgentGraph": True},
)
total = await prisma.models.LibraryAgent.prisma().count(where=search_filter)
@@ -985,7 +985,7 @@ async def get_my_agents(
agent_image=library_agent.imageUrl,
)
for library_agent in library_agents
if (graph := library_agent.Agent)
if (graph := library_agent.AgentGraph)
]
return backend.server.v2.store.model.MyAgentsResponse(
@@ -1020,13 +1020,13 @@ async def get_agent(
graph = await backend.data.graph.get_graph(
user_id=user_id,
graph_id=store_listing_version.agentId,
version=store_listing_version.agentVersion,
graph_id=store_listing_version.agentGraphId,
version=store_listing_version.agentGraphVersion,
for_export=True,
)
if not graph:
raise ValueError(
f"Agent {store_listing_version.agentId} v{store_listing_version.agentVersion} not found"
f"Agent {store_listing_version.agentGraphId} v{store_listing_version.agentGraphVersion} not found"
)
return graph
@@ -1050,11 +1050,14 @@ async def _get_missing_sub_store_listing(
# Fetch all the sub-graphs that are listed, and return the ones missing.
store_listed_sub_graphs = {
(listing.agentId, listing.agentVersion)
(listing.agentGraphId, listing.agentGraphVersion)
for listing in await prisma.models.StoreListingVersion.prisma().find_many(
where={
"OR": [
{"agentId": sub_graph.id, "agentVersion": sub_graph.version}
{
"agentGraphId": sub_graph.id,
"agentGraphVersion": sub_graph.version,
}
for sub_graph in sub_graphs
],
"submissionStatus": prisma.enums.SubmissionStatus.APPROVED,
@@ -1084,7 +1087,7 @@ async def review_store_submission(
where={"id": store_listing_version_id},
include={
"StoreListing": True,
"Agent": {"include": AGENT_GRAPH_INCLUDE}, # type: ignore
"AgentGraph": {"include": AGENT_GRAPH_INCLUDE},
},
)
)
@@ -1096,23 +1099,23 @@ async def review_store_submission(
)
# If approving, update the listing to indicate it has an approved version
if is_approved and store_listing_version.Agent:
heading = f"Sub-graph of {store_listing_version.name}v{store_listing_version.agentVersion}"
if is_approved and store_listing_version.AgentGraph:
heading = f"Sub-graph of {store_listing_version.name}v{store_listing_version.agentGraphVersion}"
sub_store_listing_versions = [
prisma.types.StoreListingVersionCreateWithoutRelationsInput(
agentId=sub_graph.id,
agentVersion=sub_graph.version,
agentGraphId=sub_graph.id,
agentGraphVersion=sub_graph.version,
name=sub_graph.name or heading,
submissionStatus=prisma.enums.SubmissionStatus.APPROVED,
subHeading=heading,
description=f"{heading}: {sub_graph.description}",
changesSummary=f"This listing is added as a {heading} / #{store_listing_version.agentId}.",
changesSummary=f"This listing is added as a {heading} / #{store_listing_version.agentGraphId}.",
isAvailable=False, # Hide sub-graphs from the store by default.
submittedAt=datetime.now(tz=timezone.utc),
)
for sub_graph in await _get_missing_sub_store_listing(
store_listing_version.Agent
store_listing_version.AgentGraph
)
]
@@ -1155,8 +1158,8 @@ async def review_store_submission(
# Convert to Pydantic model for consistency
return backend.server.v2.store.model.StoreSubmission(
agent_id=submission.agentId,
agent_version=submission.agentVersion,
agent_id=submission.agentGraphId,
agent_version=submission.agentGraphVersion,
name=submission.name,
sub_heading=submission.subHeading,
slug=(
@@ -1294,8 +1297,8 @@ async def get_admin_listings_with_versions(
# If we have versions, turn them into StoreSubmission models
for version in listing.Versions or []:
version_model = backend.server.v2.store.model.StoreSubmission(
agent_id=version.agentId,
agent_version=version.agentVersion,
agent_id=version.agentGraphId,
agent_version=version.agentGraphVersion,
name=version.name,
sub_heading=version.subHeading,
slug=listing.slug,
@@ -1324,8 +1327,8 @@ async def get_admin_listings_with_versions(
backend.server.v2.store.model.StoreListingWithVersions(
listing_id=listing.id,
slug=listing.slug,
agent_id=listing.agentId,
agent_version=listing.agentVersion,
agent_id=listing.agentGraphId,
agent_version=listing.agentGraphVersion,
active_version_id=listing.activeVersionId,
has_approved_version=listing.hasApprovedVersion,
creator_email=creator_email,

View File

@@ -170,14 +170,14 @@ async def test_create_store_submission(mocker):
isDeleted=False,
hasApprovedVersion=False,
slug="test-agent",
agentId="agent-id",
agentVersion=1,
agentGraphId="agent-id",
agentGraphVersion=1,
owningUserId="user-id",
Versions=[
prisma.models.StoreListingVersion(
id="version-id",
agentId="agent-id",
agentVersion=1,
agentGraphId="agent-id",
agentGraphVersion=1,
name="Test Agent",
description="Test description",
createdAt=datetime.now(),

View File

@@ -5,11 +5,11 @@ from typing import Protocol
import uvicorn
from autogpt_libs.auth import parse_jwt_token
from autogpt_libs.logging.utils import generate_uvicorn_config
from autogpt_libs.utils.cache import thread_cached
from fastapi import Depends, FastAPI, WebSocket, WebSocketDisconnect
from starlette.middleware.cors import CORSMiddleware
from backend.data import redis
from backend.data.execution import AsyncRedisExecutionEventBus
from backend.data.user import DEFAULT_USER_ID
from backend.server.conn_manager import ConnectionManager
@@ -55,15 +55,12 @@ def get_db_client():
async def event_broadcaster(manager: ConnectionManager):
try:
redis.connect()
event_queue = AsyncRedisExecutionEventBus()
async for event in event_queue.listen("*"):
await manager.send_execution_update(event)
except Exception as e:
logger.exception(f"Event broadcaster error: {e}")
raise
finally:
redis.disconnect()
async def authenticate_websocket(websocket: WebSocket) -> str:
@@ -286,8 +283,14 @@ class WebsocketServer(AppProcess):
allow_methods=["*"],
allow_headers=["*"],
)
uvicorn.run(
server_app,
host=Config().websocket_server_host,
port=Config().websocket_server_port,
log_config=generate_uvicorn_config(),
)
def cleanup(self):
super().cleanup()
logger.info(f"[{self.service_name}] ⏳ Shutting down WebSocket Server...")

View File

@@ -15,21 +15,25 @@ def to_dict(data) -> dict:
def dumps(data) -> str:
return json.dumps(jsonable_encoder(data))
return json.dumps(to_dict(data))
T = TypeVar("T")
@overload
def loads(data: str, *args, target_type: Type[T], **kwargs) -> T: ...
def loads(data: str | bytes, *args, target_type: Type[T], **kwargs) -> T: ...
@overload
def loads(data: str, *args, **kwargs) -> Any: ...
def loads(data: str | bytes, *args, **kwargs) -> Any: ...
def loads(data: str, *args, target_type: Type[T] | None = None, **kwargs) -> Any:
def loads(
data: str | bytes, *args, target_type: Type[T] | None = None, **kwargs
) -> Any:
if isinstance(data, bytes):
data = data.decode("utf-8")
parsed = json.loads(data, *args, **kwargs)
if target_type:
return type_match(parsed, target_type)
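
A brief usage sketch of the widened signature (module path as imported elsewhere in this diff; the payload values are illustrative):

    from backend.util import json as json_util

    raw = b'{"graph_id": "agent-123", "graph_version": 1}'

    # bytes payloads are now decoded as UTF-8 before json.loads runs
    parsed = json_util.loads(raw)

    # target_type still routes the parsed value through type_match
    parsed_dict = json_util.loads(raw, target_type=dict)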

View File

@@ -1,8 +1,24 @@
import logging
import sentry_sdk
from sentry_sdk.integrations.anthropic import AnthropicIntegration
from sentry_sdk.integrations.logging import LoggingIntegration
from backend.util.settings import Settings
def sentry_init():
sentry_dsn = Settings().secrets.sentry_dsn
sentry_sdk.init(dsn=sentry_dsn, traces_sample_rate=1.0, profiles_sample_rate=1.0)
sentry_sdk.init(
dsn=sentry_dsn,
traces_sample_rate=1.0,
profiles_sample_rate=1.0,
environment=f"app:{Settings().config.app_env.value}-behave:{Settings().config.behave_as.value}",
_experiments={"enable_logs": True},
integrations=[
LoggingIntegration(sentry_logs_level=logging.INFO),
AnthropicIntegration(
include_prompts=False,
),
],
)
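
Call sites are unchanged: a process still invokes sentry_init() once at startup, as AppService.run does later in this diff. A minimal sketch:

    from backend.util.metrics import sentry_init

    def main() -> None:
        # Enables tracing/profiling plus the logging and Anthropic integrations
        # configured above; the DSN comes from Settings().secrets.sentry_dsn.
        sentry_init()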

View File

@@ -28,6 +28,7 @@ class AppProcess(ABC):
"""
process: Optional[Process] = None
cleaned_up = False
set_start_method("spawn", force=True)
configure_logging()
@@ -47,6 +48,7 @@ class AppProcess(ABC):
def service_name(cls) -> str:
return cls.__name__
@abstractmethod
def cleanup(self):
"""
Implement this method on a subclass to do post-execution cleanup,
@@ -62,6 +64,7 @@ class AppProcess(ABC):
def execute_run_command(self, silent):
signal.signal(signal.SIGTERM, self._self_terminate)
signal.signal(signal.SIGINT, self._self_terminate)
try:
if silent:
@@ -73,9 +76,16 @@ class AppProcess(ABC):
self.run()
except (KeyboardInterrupt, SystemExit) as e:
logger.warning(f"[{self.service_name}] Terminated: {e}; quitting...")
finally:
if not self.cleaned_up:
self.cleanup()
self.cleaned_up = True
logger.info(f"[{self.service_name}] Terminated.")
def _self_terminate(self, signum: int, frame):
self.cleanup()
if not self.cleaned_up:
self.cleanup()
self.cleaned_up = True
sys.exit(0)
# Methods that are executed OUTSIDE the process #
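
With cleanup now abstract and guarded by the cleaned_up flag, every AppProcess subclass must supply its own implementation. A minimal sketch under that assumption (class name and body are hypothetical):

    from backend.util.process import AppProcess

    class ExampleWorker(AppProcess):
        """Hypothetical subclass showing the now-mandatory cleanup override."""

        def run(self):
            # The long-running service loop would live here.
            ...

        def cleanup(self):
            # Runs at most once: either from execute_run_command's finally block
            # or from the SIGTERM/SIGINT handler, thanks to the cleaned_up flag.
            ...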

View File

@@ -142,7 +142,7 @@ def validate_url(
# Resolve all IP addresses for the hostname
try:
ip_list = [res[4][0] for res in socket.getaddrinfo(ascii_hostname, None)]
ip_list = [str(res[4][0]) for res in socket.getaddrinfo(ascii_hostname, None)]
ipv4 = [ip for ip in ip_list if ":" not in ip]
ipv6 = [ip for ip in ip_list if ":" in ip]
ip_addresses = ipv4 + ipv6 # Prefer IPv4 over IPv6

View File

@@ -34,7 +34,7 @@ def conn_retry(
def on_retry(retry_state):
prefix = _log_prefix(resource_name, conn_id)
exception = retry_state.outcome.exception()
logger.error(f"{prefix} {action_name} failed: {exception}. Retrying now...")
logger.warning(f"{prefix} {action_name} failed: {exception}. Retrying now...")
def decorator(func):
is_coroutine = asyncio.iscoroutinefunction(func)

View File

@@ -1,52 +1,34 @@
import asyncio
import builtins
import inspect
import logging
import os
import threading
import time
import typing
from abc import ABC, abstractmethod
from enum import Enum
from functools import wraps
from types import NoneType, UnionType
from typing import (
Annotated,
Any,
Awaitable,
Callable,
Concatenate,
Coroutine,
Dict,
FrozenSet,
Iterator,
List,
Optional,
ParamSpec,
Set,
Tuple,
Type,
TypeVar,
Union,
cast,
get_args,
get_origin,
)
import httpx
import Pyro5.api
import uvicorn
from fastapi import FastAPI, Request, responses
from pydantic import BaseModel, TypeAdapter, create_model
from Pyro5 import api as pyro
from Pyro5 import config as pyro_config
from backend.data import db, rabbitmq, redis
from backend.util.exceptions import InsufficientBalanceError
from backend.util.json import to_dict
from backend.util.metrics import sentry_init
from backend.util.process import AppProcess, get_service_name
from backend.util.retry import conn_retry
from backend.util.settings import Config, Secrets
from backend.util.settings import Config
logger = logging.getLogger(__name__)
T = TypeVar("T")
@@ -57,21 +39,18 @@ api_host = config.pyro_host
api_comm_retry = config.pyro_client_comm_retry
api_comm_timeout = config.pyro_client_comm_timeout
api_call_timeout = config.rpc_client_call_timeout
pyro_config.MAX_RETRIES = api_comm_retry # type: ignore
pyro_config.COMMTIMEOUT = api_comm_timeout # type: ignore
P = ParamSpec("P")
R = TypeVar("R")
def fastapi_expose(func: C) -> C:
def expose(func: C) -> C:
func = getattr(func, "__func__", func)
setattr(func, "__exposed__", True)
return func
def fastapi_exposed_run_and_wait(
def exposed_run_and_wait(
f: Callable[P, Coroutine[None, None, R]]
) -> Callable[Concatenate[object, P], R]:
# TODO:
@@ -81,107 +60,11 @@ def fastapi_exposed_run_and_wait(
return expose(f) # type: ignore
# ----- Begin Pyro Expose Block ---- #
def pyro_expose(func: C) -> C:
"""
Decorator to mark a method or class to be exposed for remote calls.
## ⚠️ Gotcha
Aside from "simple" types, only Pydantic models are passed unscathed *if annotated*.
Any other passed or returned class objects are converted to dictionaries by Pyro.
"""
def wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except Exception as e:
msg = f"Error in {func.__name__}: {e}"
if isinstance(e, ValueError):
logger.warning(msg)
else:
logger.exception(msg)
raise
register_pydantic_serializers(func)
return pyro.expose(wrapper) # type: ignore
def register_pydantic_serializers(func: Callable):
"""Register custom serializers and deserializers for annotated Pydantic models"""
for name, annotation in func.__annotations__.items():
try:
pydantic_types = _pydantic_models_from_type_annotation(annotation)
except Exception as e:
raise TypeError(f"Error while exposing {func.__name__}: {e}")
for model in pydantic_types:
logger.debug(
f"Registering Pyro (de)serializers for {func.__name__} annotation "
f"'{name}': {model.__qualname__}"
)
pyro.register_class_to_dict(model, _make_custom_serializer(model))
pyro.register_dict_to_class(
model.__qualname__, _make_custom_deserializer(model)
)
def _make_custom_serializer(model: Type[BaseModel]):
def custom_class_to_dict(obj):
data = {
"__class__": obj.__class__.__qualname__,
**obj.model_dump(),
}
logger.debug(f"Serializing {obj.__class__.__qualname__} with data: {data}")
return data
return custom_class_to_dict
def _make_custom_deserializer(model: Type[BaseModel]):
def custom_dict_to_class(qualname, data: dict):
logger.debug(f"Deserializing {model.__qualname__} from data: {data}")
return model(**data)
return custom_dict_to_class
def pyro_exposed_run_and_wait(
f: Callable[P, Coroutine[None, None, R]]
) -> Callable[Concatenate[object, P], R]:
@expose
@wraps(f)
def wrapper(self, *args: P.args, **kwargs: P.kwargs) -> R:
coroutine = f(*args, **kwargs)
res = self.run_and_wait(coroutine)
return res
# Register serializers for annotations on bare function
register_pydantic_serializers(f)
return wrapper
if config.use_http_based_rpc:
expose = fastapi_expose
exposed_run_and_wait = fastapi_exposed_run_and_wait
else:
expose = pyro_expose
exposed_run_and_wait = pyro_exposed_run_and_wait
# ----- End Pyro Expose Block ---- #
# --------------------------------------------------
# AppService for IPC service based on HTTP request through FastAPI
# --------------------------------------------------
class BaseAppService(AppProcess, ABC):
shared_event_loop: asyncio.AbstractEventLoop
use_db: bool = False
use_redis: bool = False
rabbitmq_config: Optional[rabbitmq.RabbitMQConfig] = None
rabbitmq_service: Optional[rabbitmq.AsyncRabbitMQ] = None
use_supabase: bool = False
@classmethod
@abstractmethod
@@ -202,20 +85,6 @@ class BaseAppService(AppProcess, ABC):
return target_host
@property
def rabbit(self) -> rabbitmq.AsyncRabbitMQ:
"""Access the RabbitMQ service. Will raise if not configured."""
if not self.rabbitmq_service:
raise RuntimeError("RabbitMQ not configured for this service")
return self.rabbitmq_service
@property
def rabbit_config(self) -> rabbitmq.RabbitMQConfig:
"""Access the RabbitMQ config. Will raise if not configured."""
if not self.rabbitmq_config:
raise RuntimeError("RabbitMQ not configured for this service")
return self.rabbitmq_config
def run_service(self) -> None:
while True:
time.sleep(10)
@@ -225,31 +94,6 @@ class BaseAppService(AppProcess, ABC):
def run(self):
self.shared_event_loop = asyncio.get_event_loop()
if self.use_db:
self.shared_event_loop.run_until_complete(db.connect())
if self.use_redis:
redis.connect()
if self.rabbitmq_config:
logger.info(f"[{self.__class__.__name__}] ⏳ Configuring RabbitMQ...")
self.rabbitmq_service = rabbitmq.AsyncRabbitMQ(self.rabbitmq_config)
self.shared_event_loop.run_until_complete(self.rabbitmq_service.connect())
if self.use_supabase:
from supabase import create_client
secrets = Secrets()
self.supabase = create_client(
secrets.supabase_url, secrets.supabase_service_role_key
)
def cleanup(self):
if self.use_db:
logger.info(f"[{self.__class__.__name__}] ⏳ Disconnecting DB...")
self.run_and_wait(db.disconnect())
if self.use_redis:
logger.info(f"[{self.__class__.__name__}] ⏳ Disconnecting Redis...")
redis.disconnect()
if self.rabbitmq_config:
logger.info(f"[{self.__class__.__name__}] ⏳ Disconnecting RabbitMQ...")
class RemoteCallError(BaseModel):
@@ -268,7 +112,7 @@ EXCEPTION_MAPPING = {
}
class FastApiAppService(BaseAppService, ABC):
class AppService(BaseAppService, ABC):
fastapi_app: FastAPI
@staticmethod
@@ -324,14 +168,16 @@ class FastApiAppService(BaseAppService, ABC):
async def async_endpoint(body: RequestBodyModel): # type: ignore #RequestBodyModel being variable
return await f(
**{name: getattr(body, name) for name in body.model_fields}
**{name: getattr(body, name) for name in type(body).model_fields}
)
return async_endpoint
else:
def sync_endpoint(body: RequestBodyModel): # type: ignore #RequestBodyModel being variable
return f(**{name: getattr(body, name) for name in body.model_fields})
return f(
**{name: getattr(body, name) for name in type(body).model_fields}
)
return sync_endpoint
@@ -351,6 +197,7 @@ class FastApiAppService(BaseAppService, ABC):
self.shared_event_loop.run_until_complete(server.serve())
def run(self):
sentry_init()
super().run()
self.fastapi_app = FastAPI()
@@ -381,62 +228,13 @@ class FastApiAppService(BaseAppService, ABC):
self.run_service()
# ----- Begin Pyro AppService Block ---- #
class PyroAppService(BaseAppService, ABC):
@conn_retry("Pyro", "Starting Pyro Service")
def __start_pyro(self):
maximum_connection_thread_count = max(
Pyro5.config.THREADPOOL_SIZE,
config.num_node_workers * config.num_graph_workers,
)
Pyro5.config.THREADPOOL_SIZE = maximum_connection_thread_count # type: ignore
daemon = Pyro5.api.Daemon(host=api_host, port=self.get_port())
self.uri = daemon.register(self, objectId=self.service_name)
logger.info(f"[{self.service_name}] Connected to Pyro; URI = {self.uri}")
daemon.requestLoop()
def run(self):
super().run()
# Initialize the async loop.
async_thread = threading.Thread(target=self.shared_event_loop.run_forever)
async_thread.daemon = True
async_thread.start()
# Initialize pyro service
daemon_thread = threading.Thread(target=self.__start_pyro)
daemon_thread.daemon = True
daemon_thread.start()
# Run the main service loop (blocking).
self.run_service()
if config.use_http_based_rpc:
class AppService(FastApiAppService, ABC): # type: ignore #AppService defined twice
pass
else:
class AppService(PyroAppService, ABC):
pass
# ----- End Pyro AppService Block ---- #
# --------------------------------------------------
# HTTP Client utilities for dynamic service client abstraction
# --------------------------------------------------
AS = TypeVar("AS", bound=AppService)
def fastapi_close_service_client(client: Any) -> None:
def close_service_client(client: Any) -> None:
if hasattr(client, "close"):
client.close()
else:
@@ -444,7 +242,7 @@ def fastapi_close_service_client(client: Any) -> None:
@conn_retry("FastAPI client", "Creating service client", max_retry=api_comm_retry)
def fastapi_get_service_client(
def get_service_client(
service_type: Type[AS],
call_timeout: int | None = api_call_timeout,
) -> AS:
@@ -504,93 +302,3 @@ def fastapi_get_service_client(
client.health_check()
return cast(AS, client)
# ----- Begin Pyro Client Block ---- #
class PyroClient:
proxy: Pyro5.api.Proxy
def pyro_close_service_client(client: BaseAppService) -> None:
if isinstance(client, PyroClient):
client.proxy._pyroRelease()
else:
raise RuntimeError(f"Client {client.__class__} is not a Pyro client.")
def pyro_get_service_client(service_type: Type[AS]) -> AS:
service_name = service_type.service_name
class DynamicClient(PyroClient):
@conn_retry("Pyro", f"Connecting to [{service_name}]")
def __init__(self):
uri = f"PYRO:{service_type.service_name}@{service_type.get_host()}:{service_type.get_port()}"
logger.debug(f"Connecting to service [{service_name}]. URI = {uri}")
self.proxy = Pyro5.api.Proxy(uri)
# Attempt to bind to ensure the connection is established
self.proxy._pyroBind()
logger.debug(f"Successfully connected to service [{service_name}]")
def __getattr__(self, name: str) -> Callable[..., Any]:
res = getattr(self.proxy, name)
return res
return cast(AS, DynamicClient())
builtin_types = [*vars(builtins).values(), NoneType, Enum]
def _pydantic_models_from_type_annotation(annotation) -> Iterator[type[BaseModel]]:
# Peel Annotated parameters
if (origin := get_origin(annotation)) and origin is Annotated:
annotation = get_args(annotation)[0]
origin = get_origin(annotation)
args = get_args(annotation)
if origin in (
Union,
UnionType,
list,
List,
tuple,
Tuple,
set,
Set,
frozenset,
FrozenSet,
):
for arg in args:
yield from _pydantic_models_from_type_annotation(arg)
elif origin in (dict, Dict):
key_type, value_type = args
yield from _pydantic_models_from_type_annotation(key_type)
yield from _pydantic_models_from_type_annotation(value_type)
elif origin in (Awaitable, Coroutine):
# For coroutines and awaitables, check the return type
return_type = args[-1]
yield from _pydantic_models_from_type_annotation(return_type)
else:
annotype = annotation if origin is None else origin
# Exclude generic types and aliases
if (
annotype is not None
and not hasattr(typing, getattr(annotype, "__name__", ""))
and isinstance(annotype, type)
):
if issubclass(annotype, BaseModel):
yield annotype
elif annotype not in builtin_types and not issubclass(annotype, Enum):
raise TypeError(f"Unsupported type encountered: {annotype}")
if config.use_http_based_rpc:
close_service_client = fastapi_close_service_client
get_service_client = fastapi_get_service_client
else:
close_service_client = pyro_close_service_client
get_service_client = pyro_get_service_client
# ----- End Pyro Client Block ---- #

View File

@@ -31,12 +31,12 @@ class UpdateTrackingModel(BaseModel, Generic[T]):
_updated_fields: Set[str] = PrivateAttr(default_factory=set)
def __setattr__(self, name: str, value) -> None:
if name in self.model_fields:
if name in UpdateTrackingModel.model_fields:
self._updated_fields.add(name)
super().__setattr__(name, value)
def mark_updated(self, field_name: str) -> None:
if field_name in self.model_fields:
if field_name in UpdateTrackingModel.model_fields:
self._updated_fields.add(field_name)
def clear_updates(self) -> None:
@@ -65,10 +65,6 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
le=1000,
description="Maximum number of workers to use for node execution within a single graph.",
)
use_http_based_rpc: bool = Field(
default=True,
description="Whether to use HTTP-based RPC for communication between services.",
)
pyro_host: str = Field(
default="localhost",
description="The default hostname of the Pyro server.",
@@ -141,6 +137,10 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
default=8002,
description="The port for execution manager daemon to run on",
)
execution_manager_loop_max_retry: int = Field(
default=5,
description="The maximum number of retries for the execution manager loop",
)
execution_scheduler_port: int = Field(
default=8003,

View File

@@ -182,6 +182,7 @@ def _try_convert(value: Any, target_type: Type, raise_on_mismatch: bool) -> Any:
T = TypeVar("T")
TT = TypeVar("TT")
def type_match(value: Any, target_type: Type[T]) -> T:

View File

@@ -21,18 +21,27 @@ def run(*command: str) -> None:
)
except subprocess.CalledProcessError as e:
print(e.output.decode("utf-8"), file=sys.stderr)
raise
def lint():
try:
run("ruff", "check", *TARGET_DIRS, "--exit-zero")
run("ruff", "format", "--diff", "--check", LIBS_DIR)
run("isort", "--diff", "--check", "--profile", "black", BACKEND_DIR)
run("black", "--diff", "--check", BACKEND_DIR)
run("pyright", *TARGET_DIRS)
except subprocess.CalledProcessError as e:
print("Lint failed, try running `poetry run format` to fix the issues: ", e)
raise e
lint_step_args: list[list[str]] = [
["ruff", "check", *TARGET_DIRS, "--exit-zero"],
["ruff", "format", "--diff", "--check", LIBS_DIR],
["isort", "--diff", "--check", "--profile", "black", BACKEND_DIR],
["black", "--diff", "--check", BACKEND_DIR],
["pyright", *TARGET_DIRS],
]
lint_error = None
for args in lint_step_args:
try:
run(*args)
except subprocess.CalledProcessError as e:
lint_error = e
if lint_error:
print("Lint failed, try running `poetry run format` to fix the issues")
sys.exit(1)
def format():

View File

@@ -0,0 +1,50 @@
/*
Warnings:
- The relation LibraryAgent:AgentPreset was REMOVED
- A unique constraint covering the columns `[userId,agentGraphId,agentGraphVersion]` on the table `LibraryAgent` will be added. If there are existing duplicate values, this will fail.
- The foreign key constraints on AgentPreset and LibraryAgent are being changed from CASCADE to RESTRICT for AgentGraph deletion, which means you cannot delete AgentGraphs that have associated LibraryAgents or AgentPresets.
Use the following query to check whether the new unique constraint can be satisfied:
-- Check for duplicate LibraryAgent userId + agentGraphId + agentGraphVersion combinations that would violate the new unique constraint
SELECT la."userId",
la."agentId" as graph_id,
la."agentVersion" as graph_version,
COUNT(*) as multiplicity
FROM "LibraryAgent" la
GROUP BY la."userId",
la."agentId",
la."agentVersion"
HAVING COUNT(*) > 1;
*/
-- Drop foreign key constraints on columns we're about to rename
ALTER TABLE "AgentPreset" DROP CONSTRAINT "AgentPreset_agentId_agentVersion_fkey";
ALTER TABLE "LibraryAgent" DROP CONSTRAINT "LibraryAgent_agentId_agentVersion_fkey";
ALTER TABLE "LibraryAgent" DROP CONSTRAINT "LibraryAgent_agentPresetId_fkey";
-- Rename columns in AgentPreset
ALTER TABLE "AgentPreset" RENAME COLUMN "agentId" TO "agentGraphId";
ALTER TABLE "AgentPreset" RENAME COLUMN "agentVersion" TO "agentGraphVersion";
-- Rename columns in LibraryAgent
ALTER TABLE "LibraryAgent" RENAME COLUMN "agentId" TO "agentGraphId";
ALTER TABLE "LibraryAgent" RENAME COLUMN "agentVersion" TO "agentGraphVersion";
-- Drop LibraryAgent.agentPresetId column
ALTER TABLE "LibraryAgent" DROP COLUMN "agentPresetId";
-- Replace userId index with unique index on userId + agentGraphId + agentGraphVersion
DROP INDEX "LibraryAgent_userId_idx";
CREATE UNIQUE INDEX "LibraryAgent_userId_agentGraphId_agentGraphVersion_key" ON "LibraryAgent"("userId", "agentGraphId", "agentGraphVersion");
-- Re-add the foreign key constraints with new column names
ALTER TABLE "LibraryAgent" ADD CONSTRAINT "LibraryAgent_agentGraphId_agentGraphVersion_fkey"
FOREIGN KEY ("agentGraphId", "agentGraphVersion") REFERENCES "AgentGraph"("id", "version")
ON DELETE RESTRICT -- Disallow deleting AgentGraph when still referenced by existing LibraryAgents
ON UPDATE CASCADE;
ALTER TABLE "AgentPreset" ADD CONSTRAINT "AgentPreset_agentGraphId_agentGraphVersion_fkey"
FOREIGN KEY ("agentGraphId", "agentGraphVersion") REFERENCES "AgentGraph"("id", "version")
ON DELETE RESTRICT -- Disallow deleting AgentGraph when still referenced by existing AgentPresets
ON UPDATE CASCADE;

Some files were not shown because too many files have changed in this diff.