Compare commits


82 Commits

Author SHA1 Message Date
Abhimanyu Yadav
54bbafc431 Merge branch 'dev' into ci-chromatic 2025-04-22 20:26:42 +05:30
Krzysztof Czerwinski
c80d357149 feat(frontend): Use route groups (#9855)
Navbar sometimes disappears outside `/onboarding`.

### Changes 🏗️

This PR solves the problem of disappearing Navbar outside `/onboarding`
by introducing `app/(platform)` route group.

- Move all routes requiring Navbar to `app/(platform)`
- Move `<Navbar>` to `app/(platform)/layout.tsx`
- Move `/onboarding` to `app/(no-navbar)/`
- Remove pathname injection to header from middleware and stop relying
on it to hide the navbar

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Common routes work properly
2025-04-22 09:10:12 +00:00
Abhimanyu Yadav
5662783624 Merge branch 'dev' into ci-chromatic 2025-04-22 10:29:24 +05:30
Zamil Majdy
20d39f6d44 fix(platform): Fix Google Maps API Key setting through env (#9848)
Setting the Google Maps API key through the environment variable has
never worked on the platform.

### Changes 🏗️

Set the default API key from the environment variable.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test GoogleMapsBlock
2025-04-22 03:00:47 +07:00
Bently
d5b82c01e0 feat(backend): Adds latest llm models (#9856)
This PR adds the following models:
OpenAI's o3: https://platform.openai.com/docs/models/o3
OpenAI's GPT-4.1: https://platform.openai.com/docs/models/gpt-4.1
Anthropic's Claude 3.7 Sonnet: https://www.anthropic.com/news/claude-3-7-sonnet
Google's Gemini 2.5 Pro:
https://openrouter.ai/google/gemini-2.5-pro-preview-03-25
2025-04-21 19:26:21 +00:00
Abhimanyu Yadav
69b8d96516 fix(library/run): Replace credits with cents (#9845)
Replacing credits with cents (100 credits = $1).

I haven’t touched anything internally, just changed the UI.

Everything is working great.

On the frontend, there’s no other place where we use credits instead of
dollars.

![Screenshot 2025-04-19 at 11 36
00 AM](https://github.com/user-attachments/assets/de799b5c-094e-4c96-a7da-273ce60b2125)
<img width="1503" alt="Screenshot 2025-04-19 at 11 33 24 AM"
src="https://github.com/user-attachments/assets/87d7e218-f8f5-4e2e-92ef-70c81735db6b"
/>
2025-04-21 12:31:48 +00:00
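The conversion described above is straightforward. The real change lives in the TypeScript UI; this is a hypothetical Python sketch of the formatting rule (100 credits = $1):

```python
def format_credits_as_dollars(credits: int) -> str:
    """Render a credit balance in dollars, at 100 credits = $1."""
    dollars, cents = divmod(credits, 100)
    return f"${dollars}.{cents:02d}"

print(format_credits_as_dollars(1250))  # → $12.50
```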
Krzysztof Czerwinski
67af77e179 fix(backend): Fix migration of llm models in existing agents (#9810)
https://github.com/Significant-Gravitas/AutoGPT/pull/9452 was throwing
`operator does not exist: text ? unknown` on deployed dev, so the
function call was commented out as a hotfix.
This PR fixes and re-enables the llm model migration function.

### Changes 🏗️

- Uncomment and fix `migrate_llm_models` function

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Migrate nodes with non-existing models
  - [x] Don't migrate nodes without any model or with correct models

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-04-19 12:52:36 +00:00
Abhimanyu Yadav
2a92970a5f fix(marketplace/library): Removing white borders from Avatar (#9818)
There are some white borders around the avatar in the store card, but
they are not present in the design, so I'm removing them.

![Screenshot 2025-04-15 at 3 58
05 PM](https://github.com/user-attachments/assets/f8c98076-9cc3-46f1-b4f3-41d4e48f6127)
2025-04-19 05:36:36 +00:00
Zamil Majdy
9052ee7b95 fix(backend): Clear RabbitMQ connection cache on execution-manager retry 2025-04-19 07:50:04 +02:00
Zamil Majdy
c783f64b33 fix(backend): Handle add execution API request failure (#9838)
There are cases where publishing an agent execution fails, making the
execution appear to be stuck in a queue even though it has never been
in a queue in the first place.

### Changes 🏗️

On publishing failure, we set the graph & starting node execution status
to FAILED and let the UI bubble up the error so the user can try again.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Normal add execution flow
2025-04-18 18:35:43 +00:00
Zamil Majdy
055a231aed feat(backend): Add retry mechanism for pika publish_message (#9839)
For unknown reasons, publishing a message can sometimes fail because
the connection is broken: the message queue suddenly becomes
unavailable, the connection simply breaks, the connection gets reset,
etc.

### Changes 🏗️

Add a tenacity retry on AMQP errors or ConnectionError, which hopefully
alleviates the issue.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Simple add execution
2025-04-18 17:56:27 +00:00
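The PR itself uses the tenacity library; below is a stdlib-only sketch of the same retry-with-backoff idea (function and helper names are hypothetical, not from the PR):

```python
import time

def retry_on_connection_error(func, attempts=3, base_delay=0.1):
    """Retry func() on ConnectionError with exponential backoff,
    re-raising the last error once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return func()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2**attempt)

# Simulate a publish that fails twice before succeeding.
calls = {"n": 0}

def flaky_publish():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("connection reset")
    return "ok"

print(retry_on_connection_error(flaky_publish))  # → ok
```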
Reinier van der Leer
417d7732af feat(platform/library): Add credentials UX on /library/agents/[id] (#9789)
- Resolves #9771
- ... in a non-persistent way, so it won't work for webhook-triggered
agents
    For webhooks: #9541

### Changes 🏗️

Frontend:
- Add credentials inputs in Library "New run" screen (based on
`graph.credentials_input_schema`)
- Refactor `CredentialsInput` and `useCredentials` to not rely on XYFlow
context

- Unsplit lists of saved credentials in `CredentialsProvider` state

- Move logic that was being executed at component render to `useEffect`
hooks in `CredentialsInput`

Backend:
- Implement logic to aggregate credentials input requirements to one per
provider per graph
- Add `BaseGraph.credentials_input_schema` (JSON schema) computed field
    Underlying added logic:
- `BaseGraph._credentials_input_schema` - makes a `BlockSchema` from a
graph's aggregated credentials inputs
- `BaseGraph.aggregate_credentials_inputs()` - aggregates a graph's
nodes' credentials inputs using `CredentialsFieldInfo.combine(..)`
- `BlockSchema.get_credentials_fields_info() -> dict[str,
CredentialsFieldInfo]`
- `CredentialsFieldInfo` model (created from
`_CredentialsFieldSchemaExtra`)

- Implement logic to inject explicitly passed credentials into graph
execution
  - Add `credentials_inputs` parameter to `execute_graph` endpoint
- Add `graph_credentials_input` parameter to
`.executor.utils.add_graph_execution(..)`
  - Implement `.executor.utils.make_node_credentials_input_map(..)`
  - Amend `.executor.utils.construct_node_execution_input`
  - Add `GraphExecutionEntry.node_credentials_input_map` attribute
  - Amend validation to allow injecting credentials
    - Amend `GraphModel._validate_graph(..)`
    - Amend `.executor.utils._validate_node_input_credentials`
- Add `node_credentials_map` parameter to
`ExecutionManager.add_execution(..)`
    - Amend execution validation to handle side-loaded credentials
    - Add `GraphExecutionEntry.node_execution_map` attribute
- Add mechanism to inject passed credentials into node execution data
- Add credentials injection mechanism to node execution queueing logic
in `Executor._on_graph_execution(..)`

- Replace boilerplate logic in `v1.execute_graph` endpoint with call to
existing `.executor.utils.add_graph_execution(..)`
- Replace calls to `.server.routers.v1.execute_graph` with
`add_graph_execution`

Also:
- Address tech debt in `GraphModel._validate_graph(..)`
- Fix type checking in `BaseGraph._generate_schema(..)`

#### TODO
- [ ] ~~Make "Run again" work with credentials in
`AgentRunDetailsView`~~
- [ ] Prohibit saving a graph if it has nodes with missing discriminator
value for discriminated credentials inputs

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] ...
2025-04-18 14:27:13 +00:00
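The backend aggregation above (one credentials input per provider per graph) can be illustrated with a simplified stand-in. `CredentialsFieldInfo.combine(..)` is the real API from the PR; the helper below is a hypothetical sketch of the idea:

```python
from collections import defaultdict

def aggregate_credentials_inputs(node_requirements):
    """Collapse per-node credentials requirements into one requirement
    per provider, unioning the scopes each node asks for."""
    by_provider: dict[str, set[str]] = defaultdict(set)
    for provider, scopes in node_requirements:
        by_provider[provider] |= set(scopes)
    return dict(by_provider)

reqs = [("github", {"repo"}), ("github", {"read:org"}), ("google", {"maps"})]
print(aggregate_credentials_inputs(reqs))
# Two GitHub nodes collapse to one input covering both scopes.
```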
Krzysztof Czerwinski
f16a398a8e feat(frontend): Update completed task group design in Wallet (#9820)
This redesigns how the task group is displayed when finished, for both
the expanded and folded states.

### Changes 🏗️

- Folded state now displays `Done` badge and hides tasks
- Expanded state shows only task names and hides details and video

Screenshot:
1. Expanded unfinished group
2. Expanded finished group
3. Folded finished group

<img width="463" alt="Screenshot 2025-04-15 at 2 05 31 PM"
src="https://github.com/user-attachments/assets/40152073-fc0e-47c2-9fd4-a6b0161280e6"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Finished group displays correctly
  - [x] Unfinished group displays correctly
2025-04-18 09:45:35 +00:00
Krzysztof Czerwinski
e8bbd945f2 feat(frontend): Wallet top-up and auto-refill (#9819)
### Changes 🏗️

- Add top-up and auto-refill tabs in the Wallet
- Add shadcn `tabs` component
- Disable increase/decrease spinner buttons on number inputs across the
Platform (moved CSS from `customnode.css` to `globals.css`)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Incorrect values are detected properly
  - [x] Top-up works
  - [x] Setting auto-refill works
2025-04-18 09:44:54 +00:00
Krzysztof Czerwinski
d1730d7b1d fix(frontend): Fix onboarding agent execution (#9822)
Onboarding executes the original agent graph directly, without waiting
for the marketplace agent to be added to the user's library.

### Changes 🏗️

- Execute the library agent only after it has been added to the library

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding agent executes properly
2025-04-18 09:36:40 +00:00
Krzysztof Czerwinski
8ea64327a1 fix(backend): Fix array types in database (#9828)
Array fields in `schema.prisma` are non-nullable, but generated
migrations don’t add `NOT NULL` constraints. This causes existing rows
to get `NULL` values when new array columns are added, breaking schema
expectations and leading to bugs.

### Changes 🏗️

- Backfill all `NULL` rows on non-nullable array columns to empty arrays
- Set `NOT NULL` constraint on all array columns

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Existing `NULL` rows are properly backfilled
  - [x] Existing arrays are not set to default empty arrays
  - [x] Affected columns became non-nullable in the db
2025-04-18 07:43:54 +00:00
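The backfill step can be demonstrated in miniature (SQLite here for a runnable sketch, Postgres in production; the table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AgentNode (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO AgentNode (id) VALUES (1), (2)")

# Adding a column leaves existing rows NULL, as described above.
conn.execute("ALTER TABLE AgentNode ADD COLUMN tags TEXT")
# Backfill NULLs to an empty-array value before enforcing NOT NULL.
conn.execute("UPDATE AgentNode SET tags = '[]' WHERE tags IS NULL")

rows = conn.execute("SELECT tags FROM AgentNode").fetchall()
print(rows)  # → [('[]',), ('[]',)]
```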
Bently
3cf30c22fb update(docs): Remove outdated submodule command from docs (#9836)
### Changes 🏗️

Updates the setup docs to remove the old, unneeded ``git submodule
update --init --recursive --progress`` command, plus some other small
tweaks around it.
2025-04-17 16:45:07 +00:00
Reinier van der Leer
05c670eef9 fix(frontend/library): Prevent execution updates mixing between library agents (#9835)
If the websocket doesn't disconnect when the user switches to viewing a
different agent, the client isn't unsubscribed. Execution updates *from
a different agent* can then be adopted into the page state and cause
crashes.

### Changes 🏗️

- Filter incoming execution updates by `graph_id`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- Go to an agent and initiate a run that will take a while (long enough
to navigate to a different agent)
  - Navigate: Library -> [another agent]
- [ ] Runs from the first agent don't show up in the runs list of the
other agent
2025-04-17 14:11:09 +00:00
Zamil Majdy
f6a4b036c7 fix(block): Disable LLM blocks parallel tool calls (#9834)
SmartDecisionBlock sometimes tried to be smart by making multiple tool
calls, which our platform does not support yet.

### Changes 🏗️

Disable parallel tool calls in the LLM blocks for the OpenAI &
OpenRouter providers.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Tested SmartDecisionBlock & AITextGeneratorBlock
2025-04-17 12:58:05 +00:00
Zamil Majdy
c43924cd4e feat(backend): Add RabbitMQ connection cleanup on executor shutdown hook 2025-04-17 01:28:15 +02:00
Zamil Majdy
e3846c22bd fix(backend): Avoid multithreaded pika access (#9832)
### Changes 🏗️

Avoid other threads accessing the channel within the same process.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manual agent runs
2025-04-16 22:06:07 +00:00
Toran Bruce Richards
9a7a838418 fix(backend): Change node output logging type from info to debug (#9831)

### Changes 🏗️
This PR simply changes the logging level of node outputs in `agent.py`
from info to debug.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] ...

<details>
  <summary>Example test plan</summary>
  
  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>

---------

Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-04-16 20:45:51 +00:00
Toran Bruce Richards
d61d815208 fix(logging): Change node data logging to debug level from info (#9830)

### Changes 🏗️
This change lowers the logging level of node inputs and outputs to
debug. It is needed because logging all node data currently produces
logs so large that the logger prevents nodes from running.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] ...


#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

2025-04-16 19:22:52 +00:00
Nicholas Tindle
a5f448af98 Merge branch 'dev' into ci-chromatic 2025-03-06 11:24:49 -06:00
Nicholas Tindle
c766bd66e1 fix(frontend): typechecking 2025-02-05 17:15:24 -06:00
Nicholas Tindle
6d11ad8051 fix(frontend): format 2025-02-05 17:12:33 -06:00
Nicholas Tindle
d476983bd2 fix: doesn't crash 2025-02-05 17:10:21 -06:00
Nicholas Tindle
3ac1ce5a3f fix: format 2025-02-05 16:50:51 -06:00
Nicholas Tindle
3b89e6d2b7 Merge branch 'ci-chromatic' of https://github.com/Significant-Gravitas/AutoGPT into ci-chromatic 2025-02-05 16:49:32 -06:00
Nicholas Tindle
c7a7652b9f Merge branch 'dev' into ci-chromatic 2025-02-05 16:47:46 -06:00
Nicholas Tindle
b6b0d0b209 Merge branch 'dev' into ci-chromatic 2025-02-03 11:55:31 -06:00
Nicholas Tindle
a5b1495062 Merge branch 'dev' into ci-chromatic 2025-02-03 07:13:53 -06:00
Nicholas Tindle
026f16c10f Merge branch 'dev' into ci-chromatic 2025-01-31 04:50:48 -06:00
Nicholas Tindle
c468201c53 Update mock_client.ts 2025-01-29 07:10:03 -06:00
Nicholas Tindle
5beb581d1c feat(frontend): minimocking 2025-01-29 07:04:38 -06:00
Nicholas Tindle
df2339c1cf feat: add mock backend for rendering the storybook stuff 2025-01-29 06:46:50 -06:00
Nicholas Tindle
327db54321 Merge branch 'open-2047-add-type-checking-step-to-front-end-ci' into ci-chromatic 2025-01-29 12:26:19 +00:00
Nicholas Tindle
234d6f78ba Merge branch 'dev' into open-2047-add-type-checking-step-to-front-end-ci 2025-01-29 12:17:23 +00:00
Nicholas Tindle
43088ddff8 fix: incorrect meshing of types and test 2025-01-29 06:16:28 -06:00
Nicholas Tindle
fd955fba25 ref: add providers to the story previews 2025-01-29 05:35:56 -06:00
Nicholas Tindle
83943d9ddb Merge branch 'open-2047-add-type-checking-step-to-front-end-ci' into ci-chromatic 2025-01-29 10:57:00 +00:00
Nicholas Tindle
60c26e62f6 Merge branch 'dev' into open-2047-add-type-checking-step-to-front-end-ci 2025-01-29 10:53:51 +00:00
Nicholas Tindle
1fc8f9ba66 fix: handle conditions better for feature flagging 2025-01-28 18:04:41 +00:00
Nicholas Tindle
33d747f457 ref: remove unused code 2025-01-28 16:39:11 +00:00
Nicholas Tindle
06fa001a37 ref: use data structure for copy and paste data 2025-01-28 16:36:07 +00:00
Nicholas Tindle
4e7b56b814 ref: pr changes 2025-01-28 16:31:25 +00:00
Nicholas Tindle
d6b03a4f18 ref: pr change request 2025-01-28 16:30:13 +00:00
Nicholas Tindle
fae9aeb49a fix: linting 2025-01-28 16:30:05 +00:00
Nicholas Tindle
5e8c1e274e fix: linting 2025-01-28 16:29:58 +00:00
Nicholas Tindle
55f7dc4853 fix: drop classname unused 2025-01-28 16:25:23 +00:00
Nicholas Tindle
b317adb9cf ref: remove classname from navbar link 2025-01-28 16:23:14 +00:00
Nicholas Tindle
c873ba04b8 ref: split out type-check step + fix tsc error 2025-01-28 15:38:05 +00:00
Nicholas Tindle
00f0311dd0 ref: split out type-check step + fix tsc error 2025-01-28 15:31:52 +00:00
Nicholas Tindle
9b2bd756fa Update platform-frontend-ci.yml 2025-01-28 15:17:42 +00:00
Nicholas Tindle
bceb83ca30 fix: workingdir required 2025-01-28 15:17:42 +00:00
Nicholas Tindle
eadbfcd920 Update platform-frontend-ci.yml 2025-01-28 15:17:41 +00:00
SwiftyOS
9768540b60 Merge branch 'dev' into open-2047-add-type-checking-step-to-front-end-ci 2025-01-28 15:46:21 +01:00
Nicholas Tindle
697436be07 Merge branch 'dev' into open-2047-add-type-checking-step-to-front-end-ci 2025-01-28 07:53:27 +00:00
Nicholas Tindle
d725e105a0 Merge branch 'dev' into open-2047-add-type-checking-step-to-front-end-ci 2025-01-26 15:27:50 +01:00
Nicholas Tindle
927f43f52f fix: formatting 2025-01-26 12:18:11 +00:00
Nicholas Tindle
eedcc92d6f fix: add secret to all the subschemas 2025-01-26 12:15:37 +00:00
Nicholas Tindle
f0c378c70d fix: missing type addition 2025-01-26 12:12:12 +00:00
Nicholas Tindle
c6c2b852df fix: missing inputs based on changes 2025-01-26 12:11:37 +00:00
Nicholas Tindle
aaab8b1e0e fix: more formatting 2025-01-26 11:56:07 +00:00
Nicholas Tindle
a4eeb4535a fix: formatting 2025-01-26 11:55:56 +00:00
Nicholas Tindle
db068c598c fix: missing types 2025-01-26 11:46:35 +00:00
Nicholas Tindle
d4d9efc73e fix: missing attribute 2025-01-26 11:46:25 +00:00
Nicholas Tindle
ffaf77df4e fix: type the params 2025-01-26 11:46:10 +00:00
Nicholas Tindle
2daf08434e fix: type the params 2025-01-26 11:46:01 +00:00
Nicholas Tindle
745137f4c2 fix: pass correct subclass 2025-01-26 11:45:46 +00:00
Nicholas Tindle
3a2c3deb0e fix: remove import + impossible case 2025-01-26 11:45:34 +00:00
Nicholas Tindle
66a15a7b8c fix: use correct object when deleting 2025-01-26 11:44:25 +00:00
Nicholas Tindle
669c61de76 fix: take in classnames as used by the outer component
we probably shouldn't be doing this?
2025-01-26 11:44:04 +00:00
Nicholas Tindle
e860bde3d4 fix: coalesce types and use a default
@aarushik93 is this okay?
2025-01-26 11:43:40 +00:00
Nicholas Tindle
f5394f6d65 fix: expose interface for sub object so it can be used in other places to fix type errors 2025-01-26 11:43:00 +00:00
Nicholas Tindle
06e845abe7 feat: take in props for navbar
Is this desired?
2025-01-26 11:42:28 +00:00
Nicholas Tindle
c2c3c29018 fix: use proper state object 2025-01-26 11:42:08 +00:00
Nicholas Tindle
31fd0b557a fix: add missing import 2025-01-26 11:41:44 +00:00
Nicholas Tindle
9350fe1d2b fix: fully disable unused page 2025-01-26 11:41:12 +00:00
Nicholas Tindle
5ae92820b4 fix: remove unused classnames 2025-01-26 11:40:57 +00:00
Nicholas Tindle
66a87e5a14 ci: typechecker for frontend 2025-01-26 11:22:38 +00:00
Nicholas Tindle
e1f8882e2d fix: stories being broken 2025-01-26 11:18:12 +00:00
115 changed files with 1891 additions and 751 deletions

View File

@@ -56,6 +56,30 @@ jobs:
        run: |
          yarn type-check

  design:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "21"
      - name: Install dependencies
        run: |
          yarn install --frozen-lockfile
      - name: Run Chromatic
        uses: chromaui/action@latest
        with:
          # ⚠️ Make sure to configure a `CHROMATIC_PROJECT_TOKEN` repository secret
          projectToken: ${{ secrets.CHROMATIC_PROJECT_TOKEN }}
          workingDir: autogpt_platform/frontend

  test:
    runs-on: ubuntu-latest
    strategy:

View File

@@ -1,6 +1,6 @@
import inspect
import threading

from typing import Any, Awaitable, Callable, ParamSpec, TypeVar, cast, overload
from typing import Awaitable, Callable, ParamSpec, TypeVar, cast, overload

P = ParamSpec("P")
R = TypeVar("R")

@@ -19,41 +19,41 @@ def thread_cached(
) -> Callable[P, R] | Callable[P, Awaitable[R]]:
    thread_local = threading.local()

    def _clear():
        if hasattr(thread_local, "cache"):
            del thread_local.cache

    if inspect.iscoroutinefunction(func):

        async def async_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            cache = getattr(thread_local, "cache", None)
            if cache is None:
                cache = thread_local.cache = {}
            key = (func, args, tuple(sorted(kwargs.items())))
            key = (args, tuple(sorted(kwargs.items())))
            if key not in cache:
                cache[key] = await cast(Callable[P, Awaitable[R]], func)(
                    *args, **kwargs
                )
            return cache[key]

        setattr(async_wrapper, "clear_cache", _clear)
        return async_wrapper
    else:

        def sync_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            cache = getattr(thread_local, "cache", None)
            if cache is None:
                cache = thread_local.cache = {}
            # Include function in the key to prevent collisions between different functions
            key = (func, args, tuple(sorted(kwargs.items())))
            key = (args, tuple(sorted(kwargs.items())))
            if key not in cache:
                cache[key] = func(*args, **kwargs)
            return cache[key]

        setattr(sync_wrapper, "clear_cache", _clear)
        return sync_wrapper

def clear_thread_cache(func: Callable[..., Any]) -> None:
    """Clear the cache for a thread-cached function."""
    thread_local = threading.local()
    cache = getattr(thread_local, "cache", None)
    if cache is not None:
        # Clear all entries that match the function
        for key in list(cache.keys()):
            if key and len(key) > 0 and key[0] == func:
                del cache[key]

def clear_thread_cache(func: Callable) -> None:
    if clear := getattr(func, "clear_cache", None):
        clear()
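A simplified, sync-only sketch of the `thread_cached` behavior in the diff above: each wrapper owns its own thread-local cache, so the function no longer needs to be part of the key, and clearing goes through the `clear_cache` attribute.

```python
import threading

def thread_cached(func):
    """Cache results per thread; each wrapper carries its own
    thread-local store, so the function need not be in the key."""
    local = threading.local()

    def wrapper(*args, **kwargs):
        cache = getattr(local, "cache", None)
        if cache is None:
            cache = local.cache = {}
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    wrapper.clear_cache = lambda: local.__dict__.pop("cache", None)
    return wrapper

calls = []

@thread_cached
def square(x):
    calls.append(x)
    return x * x

square(3); square(3)
print(len(calls))  # → 1  (second call served from cache)
square.clear_cache()
square(3)
print(len(calls))  # → 2  (cache was cleared, so it recomputed)
```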

View File

@@ -67,15 +67,15 @@ class AgentExecutorBlock(Block):
graph_id=input_data.graph_id,
graph_version=input_data.graph_version,
user_id=input_data.user_id,
data=input_data.data,
inputs=input_data.data,
)
log_id = f"Graph #{input_data.graph_id}-V{input_data.graph_version}, exec-id: {graph_exec.graph_exec_id}"
log_id = f"Graph #{input_data.graph_id}-V{input_data.graph_version}, exec-id: {graph_exec.id}"
logger.info(f"Starting execution of {log_id}")
for event in event_bus.listen(
user_id=graph_exec.user_id,
graph_id=graph_exec.graph_id,
graph_exec_id=graph_exec.graph_exec_id,
graph_exec_id=graph_exec.id,
):
if event.event_type == ExecutionEventType.GRAPH_EXEC_UPDATE:
if event.status in [
@@ -88,7 +88,7 @@ class AgentExecutorBlock(Block):
else:
continue
logger.info(
logger.debug(
f"Execution {log_id} produced input {event.input_data} output {event.output_data}"
)
@@ -106,5 +106,7 @@ class AgentExecutorBlock(Block):
continue
for output_data in event.output_data.get("output", []):
logger.info(f"Execution {log_id} produced {output_name}: {output_data}")
logger.debug(
f"Execution {log_id} produced {output_name}: {output_data}"
)
yield output_name, output_data

View File

@@ -9,7 +9,6 @@ from typing import Any, Iterable, List, Literal, NamedTuple, Optional
import anthropic
import ollama
import openai
from anthropic import NotGiven
from anthropic.types import ToolParam
from groq import Groq
from pydantic import BaseModel, SecretStr
@@ -90,14 +89,17 @@ class LlmModelMeta(EnumMeta):
class LlmModel(str, Enum, metaclass=LlmModelMeta):
# OpenAI models
O3_MINI = "o3-mini"
O3 = "o3-2025-04-16"
O1 = "o1"
O1_PREVIEW = "o1-preview"
O1_MINI = "o1-mini"
GPT41 = "gpt-4.1-2025-04-14"
GPT4O_MINI = "gpt-4o-mini"
GPT4O = "gpt-4o"
GPT4_TURBO = "gpt-4-turbo"
GPT3_5_TURBO = "gpt-3.5-turbo"
# Anthropic models
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
CLAUDE_3_5_SONNET = "claude-3-5-sonnet-latest"
CLAUDE_3_5_HAIKU = "claude-3-5-haiku-latest"
CLAUDE_3_HAIKU = "claude-3-haiku-20240307"
@@ -118,6 +120,7 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
OLLAMA_DOLPHIN = "dolphin-mistral:latest"
# OpenRouter models
GEMINI_FLASH_1_5 = "google/gemini-flash-1.5"
GEMINI_2_5_PRO = "google/gemini-2.5-pro-preview-03-25"
GROK_BETA = "x-ai/grok-beta"
MISTRAL_NEMO = "mistralai/mistral-nemo"
COHERE_COMMAND_R_08_2024 = "cohere/command-r-08-2024"
@@ -157,12 +160,14 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
MODEL_METADATA = {
# https://platform.openai.com/docs/models
LlmModel.O3: ModelMetadata("openai", 200000, 100000),
LlmModel.O3_MINI: ModelMetadata("openai", 200000, 100000), # o3-mini-2025-01-31
LlmModel.O1: ModelMetadata("openai", 200000, 100000), # o1-2024-12-17
LlmModel.O1_PREVIEW: ModelMetadata(
"openai", 128000, 32768
), # o1-preview-2024-09-12
LlmModel.O1_MINI: ModelMetadata("openai", 128000, 65536), # o1-mini-2024-09-12
LlmModel.GPT41: ModelMetadata("openai", 1047576, 32768),
LlmModel.GPT4O_MINI: ModelMetadata(
"openai", 128000, 16384
), # gpt-4o-mini-2024-07-18
@@ -172,6 +177,9 @@ MODEL_METADATA = {
), # gpt-4-turbo-2024-04-09
LlmModel.GPT3_5_TURBO: ModelMetadata("openai", 16385, 4096), # gpt-3.5-turbo-0125
# https://docs.anthropic.com/en/docs/about-claude/models
LlmModel.CLAUDE_3_7_SONNET: ModelMetadata(
"anthropic", 200000, 8192
), # claude-3-7-sonnet-20250219
LlmModel.CLAUDE_3_5_SONNET: ModelMetadata(
"anthropic", 200000, 8192
), # claude-3-5-sonnet-20241022
@@ -197,6 +205,7 @@ MODEL_METADATA = {
LlmModel.OLLAMA_DOLPHIN: ModelMetadata("ollama", 32768, None),
# https://openrouter.ai/models
LlmModel.GEMINI_FLASH_1_5: ModelMetadata("open_router", 1000000, 8192),
LlmModel.GEMINI_2_5_PRO: ModelMetadata("open_router", 1050000, 8192),
LlmModel.GROK_BETA: ModelMetadata("open_router", 131072, 131072),
LlmModel.MISTRAL_NEMO: ModelMetadata("open_router", 128000, 4096),
LlmModel.COHERE_COMMAND_R_08_2024: ModelMetadata("open_router", 128000, 4096),
@@ -249,7 +258,7 @@ class LLMResponse(BaseModel):
def convert_openai_tool_fmt_to_anthropic(
openai_tools: list[dict] | None = None,
) -> Iterable[ToolParam] | NotGiven:
) -> Iterable[ToolParam] | anthropic.NotGiven:
"""
Convert OpenAI tool format to Anthropic tool format.
"""
@@ -287,6 +296,7 @@ def llm_call(
max_tokens: int | None,
tools: list[dict] | None = None,
ollama_host: str = "localhost:11434",
parallel_tool_calls: bool | None = None,
) -> LLMResponse:
"""
Make a call to a language model.
@@ -332,6 +342,9 @@ def llm_call(
response_format=response_format, # type: ignore
max_completion_tokens=max_tokens,
tools=tools_param, # type: ignore
parallel_tool_calls=(
openai.NOT_GIVEN if parallel_tool_calls is None else parallel_tool_calls
),
)
if response.choices[0].message.tool_calls:
@@ -487,6 +500,9 @@ def llm_call(
messages=prompt, # type: ignore
max_tokens=max_tokens,
tools=tools_param, # type: ignore
parallel_tool_calls=(
openai.NOT_GIVEN if parallel_tool_calls is None else parallel_tool_calls
),
)
# If there's no response, raise an error

View File

@@ -491,6 +491,7 @@ class SmartDecisionMakerBlock(Block):
max_tokens=input_data.max_tokens,
tools=tool_functions,
ollama_host=input_data.ollama_host,
parallel_tool_calls=False,
)
if not response.tool_calls:

View File

@@ -28,6 +28,7 @@ from backend.util.settings import Config
from .model import (
ContributorDetails,
Credentials,
CredentialsFieldInfo,
CredentialsMetaInput,
is_credentials_field_name,
)
@@ -203,6 +204,15 @@ class BlockSchema(BaseModel):
)
}
@classmethod
def get_credentials_fields_info(cls) -> dict[str, CredentialsFieldInfo]:
return {
field_name: CredentialsFieldInfo.model_validate(
cls.get_field_schema(field_name), by_alias=True
)
for field_name in cls.get_credentials_fields().keys()
}
@classmethod
def get_input_defaults(cls, data: BlockInput) -> BlockInput:
return data # Return as is, by default.
@@ -509,6 +519,7 @@ async def initialize_blocks() -> None:
)
def get_block(block_id: str) -> Block | None:
# Note on the return type annotation: https://github.com/microsoft/pyright/issues/10281
def get_block(block_id: str) -> Block[BlockSchema, BlockSchema] | None:
cls = get_blocks().get(block_id)
return cls() if cls else None

View File

@@ -36,14 +36,17 @@ from backend.integrations.credentials_store import (
# =============== Configure the cost for each LLM Model call =============== #
MODEL_COST: dict[LlmModel, int] = {
LlmModel.O3: 7,
LlmModel.O3_MINI: 2, # $1.10 / $4.40
LlmModel.O1: 16, # $15 / $60
LlmModel.O1_PREVIEW: 16,
LlmModel.O1_MINI: 4,
LlmModel.GPT41: 2,
LlmModel.GPT4O_MINI: 1,
LlmModel.GPT4O: 3,
LlmModel.GPT4_TURBO: 10,
LlmModel.GPT3_5_TURBO: 1,
LlmModel.CLAUDE_3_7_SONNET: 5,
LlmModel.CLAUDE_3_5_SONNET: 4,
LlmModel.CLAUDE_3_5_HAIKU: 1, # $0.80 / $4.00
LlmModel.CLAUDE_3_HAIKU: 1,
@@ -60,6 +63,7 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.DEEPSEEK_LLAMA_70B: 1, # ? / ?
LlmModel.OLLAMA_DOLPHIN: 1,
LlmModel.GEMINI_FLASH_1_5: 1,
LlmModel.GEMINI_2_5_PRO: 4,
LlmModel.GROK_BETA: 5,
LlmModel.MISTRAL_NEMO: 1,
LlmModel.COHERE_COMMAND_R_08_2024: 1,

View File

@@ -44,7 +44,7 @@ from .includes import (
GRAPH_EXECUTION_INCLUDE,
GRAPH_EXECUTION_INCLUDE_WITH_NODES,
)
from .model import GraphExecutionStats, NodeExecutionStats
from .model import CredentialsMetaInput, GraphExecutionStats, NodeExecutionStats
from .queue import AsyncRedisEventBus, RedisEventBus
T = TypeVar("T")
@@ -220,6 +220,7 @@ class GraphExecutionWithNodes(GraphExecution):
)
for node_exec in self.node_executions
],
node_credentials_input_map={}, # FIXME
)
@@ -361,7 +362,7 @@ async def get_graph_execution(
async def create_graph_execution(
graph_id: str,
graph_version: int,
nodes_input: list[tuple[str, BlockInput]],
starting_nodes_input: list[tuple[str, BlockInput]],
user_id: str,
preset_id: str | None = None,
) -> GraphExecutionWithNodes:
@@ -388,7 +389,7 @@ async def create_graph_execution(
]
},
)
for node_id, node_input in nodes_input
for node_id, node_input in starting_nodes_input
]
},
userId=user_id,
@@ -712,6 +713,7 @@ class GraphExecutionEntry(BaseModel):
graph_id: str
graph_version: int
start_node_execs: list["NodeExecutionEntry"]
node_credentials_input_map: Optional[dict[str, dict[str, CredentialsMetaInput]]]
class NodeExecutionEntry(BaseModel):

View File

@@ -1,7 +1,7 @@
import logging
import uuid
from collections import defaultdict
from typing import Any, Literal, Optional, Type, cast
from typing import Any, Literal, Optional, cast
import prisma
from prisma import Json
@@ -13,12 +13,19 @@ from prisma.types import (
AgentNodeCreateInput,
AgentNodeLinkCreateInput,
)
from pydantic import create_model
from pydantic.fields import computed_field
from backend.blocks.agent import AgentExecutorBlock
from backend.blocks.io import AgentInputBlock, AgentOutputBlock
from backend.blocks.llm import LlmModel
from backend.data.db import prisma as db
from backend.data.model import (
CredentialsField,
CredentialsFieldInfo,
CredentialsMetaInput,
is_credentials_field_name,
)
from backend.util import type as type_utils
from .block import Block, BlockInput, BlockSchema, BlockType, get_block, get_blocks
@@ -190,14 +197,19 @@ class BaseGraph(BaseDbModel):
)
)
@computed_field
@property
def credentials_input_schema(self) -> dict[str, Any]:
return self._credentials_input_schema.jsonschema()
@staticmethod
def _generate_schema(
*props: tuple[Type[AgentInputBlock.Input] | Type[AgentOutputBlock.Input], dict],
*props: tuple[type[AgentInputBlock.Input] | type[AgentOutputBlock.Input], dict],
) -> dict[str, Any]:
schema = []
schema_fields: list[AgentInputBlock.Input | AgentOutputBlock.Input] = []
for type_class, input_default in props:
try:
schema.append(type_class(**input_default))
schema_fields.append(type_class(**input_default))
except Exception as e:
logger.warning(f"Invalid {type_class}: {input_default}, {e}")
@@ -217,9 +229,93 @@ class BaseGraph(BaseDbModel):
**({"description": p.description} if p.description else {}),
**({"default": p.value} if p.value is not None else {}),
}
for p in schema
for p in schema_fields
},
"required": [p.name for p in schema if p.value is None],
"required": [p.name for p in schema_fields if p.value is None],
}
@property
def _credentials_input_schema(self) -> type[BlockSchema]:
graph_credentials_inputs = self.aggregate_credentials_inputs()
logger.debug(
f"Combined credentials input fields for graph #{self.id} ({self.name}): "
f"{graph_credentials_inputs}"
)
# Warn if same-provider credentials inputs can't be combined (= bad UX)
graph_cred_fields = list(graph_credentials_inputs.values())
for i, (field, keys) in enumerate(graph_cred_fields):
for other_field, other_keys in list(graph_cred_fields)[i + 1 :]:
if field.provider != other_field.provider:
continue
# If this happens, that means a block implementation probably needs
# to be updated.
logger.warning(
"Multiple combined credentials fields "
f"for provider {field.provider} "
f"on graph #{self.id} ({self.name}); "
f"fields: {field} <> {other_field}; "
f"keys: {keys} <> {other_keys}."
)
fields: dict[str, tuple[type[CredentialsMetaInput], CredentialsMetaInput]] = {
agg_field_key: (
CredentialsMetaInput[
Literal[tuple(field_info.provider)], # type: ignore
Literal[tuple(field_info.supported_types)], # type: ignore
],
CredentialsField(
required_scopes=set(field_info.required_scopes or []),
discriminator=field_info.discriminator,
discriminator_mapping=field_info.discriminator_mapping,
),
)
for agg_field_key, (field_info, _) in graph_credentials_inputs.items()
}
return create_model(
self.name.replace(" ", "") + "CredentialsInputSchema",
__base__=BlockSchema,
**fields, # type: ignore
)
def aggregate_credentials_inputs(
self,
) -> dict[str, tuple[CredentialsFieldInfo, set[tuple[str, str]]]]:
"""
Returns:
dict[aggregated_field_key, tuple(
CredentialsFieldInfo: A spec for one aggregated credentials field
set[(node_id, field_name)]: Node credentials fields that are
compatible with this aggregated field spec
)]
"""
return {
"_".join(sorted(agg_field_info.provider))
+ "_"
+ "_".join(sorted(agg_field_info.supported_types))
+ "_credentials": (agg_field_info, node_fields)
for agg_field_info, node_fields in CredentialsFieldInfo.combine(
*(
(
# Apply discrimination before aggregating credentials inputs
(
field_info.discriminate(
node.input_default[field_info.discriminator]
)
if (
field_info.discriminator
and node.input_default.get(field_info.discriminator)
)
else field_info
),
(node.id, field_name),
)
for node in self.nodes
for field_name, field_info in node.block.input_schema.get_credentials_fields_info().items()
)
)
}
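The aggregated field key built above is deterministic: sorted providers, then sorted supported credential types, then a `_credentials` suffix. A standalone sketch of that key construction (hypothetical provider and type values):

```python
def agg_field_key(providers: set[str], supported_types: set[str]) -> str:
    # Mirrors the key format used by aggregate_credentials_inputs:
    # sorted providers, then sorted credential types, then "_credentials".
    return (
        "_".join(sorted(providers))
        + "_"
        + "_".join(sorted(supported_types))
        + "_credentials"
    )

print(agg_field_key({"github"}, {"oauth2", "api_key"}))
# github_api_key_oauth2_credentials
```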
@@ -320,8 +416,6 @@ class GraphModel(Graph):
return sanitized_name
# Validate smart decision maker nodes
smart_decision_maker_nodes = set()
agent_nodes = set()
nodes_block = {
node.id: block
for node in graph.nodes
@@ -332,13 +426,6 @@ class GraphModel(Graph):
if (block := nodes_block.get(node.id)) is None:
raise ValueError(f"Invalid block {node.block_id} for node #{node.id}")
# Smart decision maker nodes
if block.block_type == BlockType.AI:
smart_decision_maker_nodes.add(node.id)
# Agent nodes
elif block.block_type == BlockType.AGENT:
agent_nodes.add(node.id)
input_links = defaultdict(list)
for link in graph.links:
@@ -353,16 +440,21 @@ class GraphModel(Graph):
[sanitize(name) for name in node.input_default]
+ [sanitize(link.sink_name) for link in input_links.get(node.id, [])]
)
for name in block.input_schema.get_required_fields():
input_schema = block.input_schema
for name in (required_fields := input_schema.get_required_fields()):
if (
name not in provided_inputs
# Webhook payload is passed in by ExecutionManager
and not (
name == "payload"
and block.block_type
in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL)
)
# Checking availability of credentials is done by ExecutionManager
and name not in input_schema.get_credentials_fields()
# Validate only I/O nodes, or validate everything when executing
and (
for_run # Skip input completion validation, unless when executing.
for_run
or block.block_type
in [
BlockType.INPUT,
@@ -375,9 +467,18 @@ class GraphModel(Graph):
f"Node {block.name} #{node.id} required input missing: `{name}`"
)
if (
block.block_type == BlockType.INPUT
and (input_key := node.input_default.get("name"))
and is_credentials_field_name(input_key)
):
raise ValueError(
f"Agent input node uses reserved name '{input_key}'; "
"'credentials' and `*_credentials` are reserved input names"
)
# Get input schema properties and check dependencies
input_schema = block.input_schema.model_fields
required_fields = block.input_schema.get_required_fields()
input_fields = input_schema.model_fields
def has_value(name):
return (
@@ -385,14 +486,21 @@ class GraphModel(Graph):
and name in node.input_default
and node.input_default[name] is not None
and str(node.input_default[name]).strip() != ""
) or (name in input_schema and input_schema[name].default is not None)
) or (name in input_fields and input_fields[name].default is not None)
# Validate dependencies between fields
for field_name, field_info in input_schema.items():
for field_name, field_info in input_fields.items():
# Apply input dependency validation only on run & field with depends_on
json_schema_extra = field_info.json_schema_extra or {}
dependencies = json_schema_extra.get("depends_on", [])
if not for_run or not dependencies:
if not (
for_run
and isinstance(json_schema_extra, dict)
and (
dependencies := cast(
list[str], json_schema_extra.get("depends_on", [])
)
)
):
continue
# Check if dependent field has value in input_default
@@ -914,24 +1022,24 @@ async def migrate_llm_models(migrate_to: LlmModel):
if field.annotation == LlmModel:
llm_model_fields[block.id] = field_name
# Convert enum values to a list of strings for the SQL query
enum_values = [v.value for v in LlmModel]
escaped_enum_values = repr(tuple(enum_values)) # hack but works
# Update each block
for id, path in llm_model_fields.items():
# Convert enum values to a list of strings for the SQL query
enum_values = [v.value for v in LlmModel.__members__.values()]
escaped_enum_values = repr(tuple(enum_values)) # hack but works
query = f"""
UPDATE "AgentNode"
SET "constantInput" = jsonb_set("constantInput", $1, $2, true)
UPDATE platform."AgentNode"
SET "constantInput" = jsonb_set("constantInput", $1, to_jsonb($2), true)
WHERE "agentBlockId" = $3
AND "constantInput" ? $4
AND "constantInput"->>$4 NOT IN {escaped_enum_values}
AND "constantInput" ? ($4)::text
AND "constantInput"->>($4)::text NOT IN {escaped_enum_values}
"""
await db.execute_raw(
query, # type: ignore - is supposed to be LiteralString
"{" + path + "}",
f'"{migrate_to.value}"',
[path],
migrate_to.value,
id,
path,
)
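The rewritten query updates `constantInput[path]` only when that key exists and holds a value outside the current `LlmModel` enum. A pure-Python sketch of the same predicate-and-set logic (plain dicts standing in for the JSONB column; model names are illustrative):

```python
VALID_MODELS = {"gpt-4.1", "o3", "claude-3-7-sonnet"}  # stand-ins for LlmModel values

def migrate_node(constant_input: dict, path: str, migrate_to: str) -> dict:
    # Mirrors the SQL: only rewrite if the key exists AND holds a stale value.
    if path in constant_input and constant_input[path] not in VALID_MODELS:
        constant_input = {**constant_input, path: migrate_to}
    return constant_input

node = {"model": "gpt-4-vision-preview", "prompt": "hi"}
print(migrate_node(node, "model", "gpt-4.1"))
# {'model': 'gpt-4.1', 'prompt': 'hi'}
```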

View File

@@ -2,6 +2,7 @@ from __future__ import annotations
import base64
import logging
from collections import defaultdict
from datetime import datetime, timezone
from typing import (
TYPE_CHECKING,
@@ -12,6 +13,7 @@ from typing import (
Generic,
Literal,
Optional,
Sequence,
TypedDict,
TypeVar,
get_args,
@@ -300,9 +302,7 @@ class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
)
field_schema = model.jsonschema()["properties"][field_name]
try:
schema_extra = _CredentialsFieldSchemaExtra[CP, CT].model_validate(
field_schema
)
schema_extra = CredentialsFieldInfo[CP, CT].model_validate(field_schema)
except ValidationError as e:
if "Field required [type=missing" not in str(e):
raise
@@ -328,14 +328,90 @@ class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
)
class _CredentialsFieldSchemaExtra(BaseModel, Generic[CP, CT]):
class CredentialsFieldInfo(BaseModel, Generic[CP, CT]):
# TODO: move discrimination mechanism out of CredentialsField (frontend + backend)
credentials_provider: list[CP]
credentials_scopes: Optional[list[str]] = None
credentials_types: list[CT]
provider: frozenset[CP] = Field(..., alias="credentials_provider")
supported_types: frozenset[CT] = Field(..., alias="credentials_types")
required_scopes: Optional[frozenset[str]] = Field(None, alias="credentials_scopes")
discriminator: Optional[str] = None
discriminator_mapping: Optional[dict[str, CP]] = None
@classmethod
def combine(
cls, *fields: tuple[CredentialsFieldInfo[CP, CT], T]
) -> Sequence[tuple[CredentialsFieldInfo[CP, CT], set[T]]]:
"""
Combines multiple CredentialsFieldInfo objects into as few as possible.
Rules:
- Items can only be combined if they have the same supported credentials types
and the same supported providers.
- When combining items, the `required_scopes` of the result is the union
of the `required_scopes` of the original items.
Params:
*fields: (CredentialsFieldInfo, key) objects to group and combine
Returns:
A sequence of tuples containing combined CredentialsFieldInfo objects and
the set of keys of the respective original items that were grouped together.
"""
if not fields:
return []
# Group fields by their provider and supported_types
grouped_fields: defaultdict[
tuple[frozenset[CP], frozenset[CT]],
list[tuple[T, CredentialsFieldInfo[CP, CT]]],
] = defaultdict(list)
for field, key in fields:
group_key = (frozenset(field.provider), frozenset(field.supported_types))
grouped_fields[group_key].append((key, field))
# Combine fields within each group
result: list[tuple[CredentialsFieldInfo[CP, CT], set[T]]] = []
for group in grouped_fields.values():
# Start with the first field in the group
_, combined = group[0]
# Track the keys that were combined
combined_keys = {key for key, _ in group}
# Combine required_scopes from all fields in the group
all_scopes = set()
for _, field in group:
if field.required_scopes:
all_scopes.update(field.required_scopes)
# Create a new combined field
result.append(
(
CredentialsFieldInfo[CP, CT](
credentials_provider=combined.provider,
credentials_types=combined.supported_types,
credentials_scopes=frozenset(all_scopes) or None,
discriminator=combined.discriminator,
discriminator_mapping=combined.discriminator_mapping,
),
combined_keys,
)
)
return result
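`combine` groups fields by their (provider set, supported-type set) pair and unions the scopes within each group. A self-contained sketch of that grouping rule using plain tuples rather than the real `CredentialsFieldInfo` model (all values hypothetical):

```python
from collections import defaultdict

def combine(fields):
    """fields: iterable of ((providers, types, scopes), key) tuples.
    Returns [((providers, types, merged_scopes), {keys}), ...]."""
    grouped = defaultdict(list)
    for (providers, types, scopes), key in fields:
        grouped[(frozenset(providers), frozenset(types))].append((key, scopes))
    result = []
    for (providers, types), group in grouped.items():
        keys = {key for key, _ in group}
        # Union required_scopes across the group; None if nothing requires scopes.
        merged_scopes = frozenset().union(*(s or set() for _, s in group))
        result.append(((providers, types, merged_scopes or None), keys))
    return result

combined = combine([
    (({"github"}, {"oauth2"}, {"repo"}), ("node1", "credentials")),
    (({"github"}, {"oauth2"}, {"gist"}), ("node2", "credentials")),
    (({"openai"}, {"api_key"}, None), ("node3", "credentials")),
])
print(len(combined))  # 2
```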
def discriminate(self, discriminator_value: Any) -> CredentialsFieldInfo:
if not (self.discriminator and self.discriminator_mapping):
return self
discriminator_value = self.discriminator_mapping[discriminator_value]
return CredentialsFieldInfo(
credentials_provider=frozenset([discriminator_value]),
credentials_types=self.supported_types,
credentials_scopes=self.required_scopes,
)
def CredentialsField(
required_scopes: set[str] = set(),

View File

@@ -4,10 +4,18 @@ from enum import Enum
from typing import Awaitable, Optional
import aio_pika
import aio_pika.exceptions as aio_ex
import pika
import pika.adapters.blocking_connection
from pika.exceptions import AMQPError
from pika.spec import BasicProperties
from pydantic import BaseModel
from tenacity import (
retry,
retry_if_exception_type,
stop_after_attempt,
wait_random_exponential,
)
from backend.util.retry import conn_retry
from backend.util.settings import Settings
@@ -161,6 +169,12 @@ class SyncRabbitMQ(RabbitMQBase):
routing_key=queue.routing_key or queue.name,
)
@retry(
retry=retry_if_exception_type((AMQPError, ConnectionError)),
wait=wait_random_exponential(multiplier=1, max=5),
stop=stop_after_attempt(5),
reraise=True,
)
def publish_message(
self,
routing_key: str,
@@ -258,6 +272,12 @@ class AsyncRabbitMQ(RabbitMQBase):
exchange, routing_key=queue.routing_key or queue.name
)
@retry(
retry=retry_if_exception_type((aio_ex.AMQPError, ConnectionError)),
wait=wait_random_exponential(multiplier=1, max=5),
stop=stop_after_attempt(5),
reraise=True,
)
async def publish_message(
self,
routing_key: str,

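The `@retry` decorators added above give both publishers up to five attempts with randomized exponential backoff on AMQP/connection errors, re-raising after the last attempt. A minimal stdlib sketch of the same behaviour (hypothetical names, not the tenacity API):

```python
import functools
import random
import time

def retry_on(exc_types, attempts=5, multiplier=0.001, max_wait=0.01):
    """Retry decorator: random exponential backoff, re-raise after N attempts."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except exc_types:
                    if attempt == attempts:
                        raise  # out of attempts: surface the error
                    time.sleep(random.uniform(0, min(max_wait, multiplier * 2 ** attempt)))
        return wrapper
    return decorator

calls = []

@retry_on((ConnectionError,), attempts=5)
def flaky_publish(message):
    calls.append(message)
    if len(calls) < 3:
        raise ConnectionError("broker unavailable")
    return "published"

result = flaky_publish("hello")
print(result, len(calls))  # published 3
```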
View File

@@ -29,7 +29,7 @@ if TYPE_CHECKING:
from backend.executor import DatabaseManager
from backend.notifications.notifications import NotificationManager
from autogpt_libs.utils.cache import thread_cached
from autogpt_libs.utils.cache import clear_thread_cache, thread_cached
from backend.blocks.agent import AgentExecutorBlock
from backend.data import redis
@@ -87,7 +87,7 @@ class LogMetadata:
"node_id": node_id,
"block_name": block_name,
}
self.prefix = f"[ExecutionManager|uid:{user_id}|gid:{graph_id}|nid:{node_id}]|geid:{graph_eid}|nid:{node_eid}|{block_name}]"
self.prefix = f"[ExecutionManager|uid:{user_id}|gid:{graph_id}|nid:{node_id}|geid:{graph_eid}|neid:{node_eid}|{block_name}]"
def info(self, msg: str, **extra):
msg = self._wrap(msg, **extra)
@@ -724,10 +724,10 @@ class Executor:
execution_status = ExecutionStatus.TERMINATED
return execution_stats, execution_status, error
exec_data = queue.get()
queued_node_exec = queue.get()
# Avoid parallel execution of the same node.
execution = running_executions.get(exec_data.node_id)
execution = running_executions.get(queued_node_exec.node_id)
if execution and not execution.ready():
# TODO (performance improvement):
# Wait for the completion of the same node execution is blocking.
@@ -736,18 +736,18 @@ class Executor:
execution.wait()
log_metadata.debug(
f"Dispatching node execution {exec_data.node_exec_id} "
f"for node {exec_data.node_id}",
f"Dispatching node execution {queued_node_exec.node_exec_id} "
f"for node {queued_node_exec.node_id}",
)
try:
exec_cost_counter = cls._charge_usage(
node_exec=exec_data,
node_exec=queued_node_exec,
execution_count=exec_cost_counter + 1,
execution_stats=execution_stats,
)
except InsufficientBalanceError as error:
node_exec_id = exec_data.node_exec_id
node_exec_id = queued_node_exec.node_exec_id
cls.db_client.upsert_execution_output(
node_exec_id=node_exec_id,
output_name="error",
@@ -768,10 +768,23 @@ class Executor:
)
raise
running_executions[exec_data.node_id] = cls.executor.apply_async(
# Add credentials input overrides
node_id = queued_node_exec.node_id
if (node_creds_map := graph_exec.node_credentials_input_map) and (
node_field_creds_map := node_creds_map.get(node_id)
):
queued_node_exec.data.update(
{
field_name: creds_meta.model_dump()
for field_name, creds_meta in node_field_creds_map.items()
}
)
# Initiate node execution
running_executions[queued_node_exec.node_id] = cls.executor.apply_async(
cls.on_node_execution,
(queue, exec_data),
callback=make_exec_callback(exec_data),
(queue, queued_node_exec),
callback=make_exec_callback(queued_node_exec),
)
# Avoid terminating graph execution when some nodes are still running.
@@ -922,6 +935,7 @@ class ExecutionManager(AppProcess):
redis.connect()
# Consume Cancel & Run execution requests.
clear_thread_cache(get_execution_queue)
channel = get_execution_queue().get_channel()
channel.basic_qos(prefetch_count=self.pool_size)
channel.basic_consume(
@@ -1015,9 +1029,13 @@ class ExecutionManager(AppProcess):
logger.error(
f"[{self.service_name}] Execution for {graph_exec_id} failed: {f.exception()}"
)
channel.basic_nack(delivery_tag, requeue=False)
channel.connection.add_callback_threadsafe(
lambda: channel.basic_nack(delivery_tag, requeue=False)
)
else:
channel.basic_ack(delivery_tag)
channel.connection.add_callback_threadsafe(
lambda: channel.basic_ack(delivery_tag)
)
except Exception as e:
logger.error(f"[{self.service_name}] Error acknowledging message: {e}")
@@ -1029,6 +1047,9 @@ class ExecutionManager(AppProcess):
logger.info(f"[{self.service_name}] ⏳ Shutting down service loop...")
self.running = False
logger.info(f"[{self.service_name}] ⏳ Shutting down RabbitMQ channel...")
get_execution_queue().get_channel().stop_consuming()
logger.info(f"[{self.service_name}] ⏳ Shutting down graph executor pool...")
self.executor.shutdown(cancel_futures=True)

View File

@@ -70,7 +70,7 @@ def execute_graph(**kwargs):
log(f"Executing recurring job for graph #{args.graph_id}")
execution_utils.add_graph_execution(
graph_id=args.graph_id,
data=args.input_data,
inputs=args.input_data,
user_id=args.user_id,
graph_version=args.graph_version,
)

View File

@@ -1,5 +1,5 @@
import logging
from typing import TYPE_CHECKING, Any, cast
from typing import TYPE_CHECKING, Any, Optional, cast
from autogpt_libs.utils.cache import thread_cached
from pydantic import BaseModel
@@ -14,15 +14,27 @@ from backend.data.block import (
)
from backend.data.block_cost_config import BLOCK_COSTS
from backend.data.cost import BlockCostType
from backend.data.execution import GraphExecutionEntry, RedisExecutionEventBus
from backend.data.graph import GraphModel, Node
from backend.data.execution import (
AsyncRedisExecutionEventBus,
ExecutionStatus,
GraphExecutionStats,
GraphExecutionWithNodes,
RedisExecutionEventBus,
create_graph_execution,
update_graph_execution_stats,
update_node_execution_status_batch,
)
from backend.data.graph import GraphModel, Node, get_graph
from backend.data.model import CredentialsMetaInput
from backend.data.rabbitmq import (
AsyncRabbitMQ,
Exchange,
ExchangeType,
Queue,
RabbitMQConfig,
SyncRabbitMQ,
)
from backend.util.exceptions import NotFoundError
from backend.util.mock import MockObject
from backend.util.service import get_service_client
from backend.util.settings import Config
@@ -43,6 +55,11 @@ def get_execution_event_bus() -> RedisExecutionEventBus:
return RedisExecutionEventBus()
@thread_cached
def get_async_execution_event_bus() -> AsyncRedisExecutionEventBus:
return AsyncRedisExecutionEventBus()
@thread_cached
def get_execution_queue() -> SyncRabbitMQ:
client = SyncRabbitMQ(create_execution_queue_config())
@@ -50,6 +67,13 @@ def get_execution_queue() -> SyncRabbitMQ:
return client
@thread_cached
async def get_async_execution_queue() -> AsyncRabbitMQ:
client = AsyncRabbitMQ(create_execution_queue_config())
await client.connect()
return client
@thread_cached
def get_integration_credentials_store() -> "IntegrationCredentialsStore":
from backend.integrations.credentials_store import IntegrationCredentialsStore
@@ -347,7 +371,13 @@ def merge_execution_input(data: BlockInput) -> BlockInput:
return data
def _validate_node_input_credentials(graph: GraphModel, user_id: str):
def _validate_node_input_credentials(
graph: GraphModel,
user_id: str,
node_credentials_input_map: Optional[
dict[str, dict[str, CredentialsMetaInput]]
] = None,
):
"""Checks all credentials for all nodes of the graph"""
for node in graph.nodes:
@@ -361,9 +391,22 @@ def _validate_node_input_credentials(graph: GraphModel, user_id: str):
continue
for field_name, credentials_meta_type in credentials_fields.items():
credentials_meta = credentials_meta_type.model_validate(
node.input_default[field_name]
)
if (
node_credentials_input_map
and (node_credentials_inputs := node_credentials_input_map.get(node.id))
and field_name in node_credentials_inputs
):
credentials_meta = node_credentials_input_map[node.id][field_name]
elif field_name in node.input_default:
credentials_meta = credentials_meta_type.model_validate(
node.input_default[field_name]
)
else:
raise ValueError(
f"Credentials absent for {block.name} node #{node.id} "
f"input '{field_name}'"
)
# Fetch the corresponding Credentials and perform sanity checks
credentials = get_integration_credentials_store().get_creds_by_id(
user_id, credentials_meta.id
@@ -389,10 +432,46 @@ def _validate_node_input_credentials(graph: GraphModel, user_id: str):
)
def make_node_credentials_input_map(
graph: GraphModel,
graph_credentials_input: dict[str, CredentialsMetaInput],
) -> dict[str, dict[str, CredentialsMetaInput]]:
"""
Maps credentials for an execution to the correct nodes.
Params:
graph: The graph to be executed.
graph_credentials_input: A (graph_input_name, credentials_meta) map.
Returns:
dict[node_id, dict[field_name, CredentialsMetaInput]]: Node credentials input map.
"""
result: dict[str, dict[str, CredentialsMetaInput]] = {}
# Get aggregated credentials fields for the graph
graph_cred_inputs = graph.aggregate_credentials_inputs()
for graph_input_name, (_, compatible_node_fields) in graph_cred_inputs.items():
# Best-effort map: skip missing items
if graph_input_name not in graph_credentials_input:
continue
# Use passed-in credentials for all compatible node input fields
for node_id, node_field_name in compatible_node_fields:
if node_id not in result:
result[node_id] = {}
result[node_id][node_field_name] = graph_credentials_input[graph_input_name]
return result
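`make_node_credentials_input_map` fans each provided graph-level credentials input out to every compatible (node, field) pair, skipping inputs the caller did not supply. A minimal sketch with plain dicts (hypothetical IDs, not the real models):

```python
def make_node_credentials_input_map(graph_cred_inputs, graph_credentials_input):
    """graph_cred_inputs: {input_name: set of (node_id, field_name)};
    graph_credentials_input: {input_name: credentials_meta}."""
    result = {}
    for input_name, compatible_node_fields in graph_cred_inputs.items():
        if input_name not in graph_credentials_input:
            continue  # best-effort: skip inputs the caller did not provide
        for node_id, field_name in compatible_node_fields:
            result.setdefault(node_id, {})[field_name] = graph_credentials_input[input_name]
    return result

mapping = make_node_credentials_input_map(
    {"github_oauth2_credentials": {("node-a", "credentials"), ("node-b", "credentials")}},
    {"github_oauth2_credentials": {"id": "cred-1", "provider": "github"}},
)
print(mapping["node-a"]["credentials"]["id"])  # cred-1
```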
def construct_node_execution_input(
graph: GraphModel,
user_id: str,
data: BlockInput,
graph_inputs: BlockInput,
node_credentials_input_map: Optional[
dict[str, dict[str, CredentialsMetaInput]]
] = None,
) -> list[tuple[str, BlockInput]]:
"""
Validates and prepares the input data for executing a graph.
@@ -404,13 +483,14 @@ def construct_node_execution_input(
graph (GraphModel): The graph model to execute.
user_id (str): The ID of the user executing the graph.
data (BlockInput): The input data for the graph execution.
node_credentials_input_map: `dict[node_id, dict[input_name, CredentialsMetaInput]]`
Returns:
list[tuple[str, BlockInput]]: A list of tuples, each containing the node ID and
the corresponding input data for that node.
"""
graph.validate_graph(for_run=True)
_validate_node_input_credentials(graph, user_id)
_validate_node_input_credentials(graph, user_id, node_credentials_input_map)
nodes_input = []
for node in graph.starting_nodes:
@@ -424,8 +504,8 @@ def construct_node_execution_input(
# Extract request input data, and assign it to the input pin.
if block.block_type == BlockType.INPUT:
input_name = node.input_default.get("name")
if input_name and input_name in data:
input_data = {"value": data[input_name]}
if input_name and input_name in graph_inputs:
input_data = {"value": graph_inputs[input_name]}
# Extract webhook payload, and assign it to the input pin
webhook_payload_key = f"webhook_{node.webhook_id}_payload"
@@ -433,11 +513,17 @@ def construct_node_execution_input(
block.block_type in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL)
and node.webhook_id
):
if webhook_payload_key not in data:
if webhook_payload_key not in graph_inputs:
raise ValueError(
f"Node {block.name} #{node.id} webhook payload is missing"
)
input_data = {"payload": data[webhook_payload_key]}
input_data = {"payload": graph_inputs[webhook_payload_key]}
# Apply node credentials overrides
if node_credentials_input_map and (
node_credentials := node_credentials_input_map.get(node.id)
):
input_data.update({k: v.model_dump() for k, v in node_credentials.items()})
input_data, error = validate_exec(node, input_data)
if input_data is None:
@@ -505,47 +591,158 @@ def create_execution_queue_config() -> RabbitMQConfig:
)
def add_graph_execution(
async def add_graph_execution_async(
graph_id: str,
data: BlockInput,
user_id: str,
graph_version: int | None = None,
preset_id: str | None = None,
) -> GraphExecutionEntry:
inputs: BlockInput,
preset_id: Optional[str] = None,
graph_version: Optional[int] = None,
graph_credentials_inputs: Optional[dict[str, CredentialsMetaInput]] = None,
) -> GraphExecutionWithNodes:
"""
Adds a graph execution to the queue and returns the execution entry.
Args:
graph_id (str): The ID of the graph to execute.
data (BlockInput): The input data for the graph execution.
user_id (str): The ID of the user executing the graph.
graph_version (int | None): The version of the graph to execute. Defaults to None.
preset_id (str | None): The ID of the preset to use. Defaults to None.
graph_id: The ID of the graph to execute.
user_id: The ID of the user executing the graph.
inputs: The input data for the graph execution.
preset_id: The ID of the preset to use.
graph_version: The version of the graph to execute.
graph_credentials_inputs: Credentials inputs to use in the execution.
Keys should map to the keys generated by `GraphModel.aggregate_credentials_inputs`.
Returns:
GraphExecutionWithNodes: The created graph execution, with its node executions.
Raises:
NotFoundError: If the graph is not found.
ValueError: If there are validation errors.
""" # noqa
graph: GraphModel | None = await get_graph(
graph_id=graph_id, user_id=user_id, version=graph_version
)
if not graph:
raise NotFoundError(f"Graph #{graph_id} not found.")
node_credentials_input_map = (
make_node_credentials_input_map(graph, graph_credentials_inputs)
if graph_credentials_inputs
else None
)
graph_exec = await create_graph_execution(
user_id=user_id,
graph_id=graph_id,
graph_version=graph.version,
starting_nodes_input=construct_node_execution_input(
graph=graph,
user_id=user_id,
graph_inputs=inputs,
node_credentials_input_map=node_credentials_input_map,
),
preset_id=preset_id,
)
try:
queue = await get_async_execution_queue()
graph_exec_entry = graph_exec.to_graph_execution_entry()
if node_credentials_input_map:
graph_exec_entry.node_credentials_input_map = node_credentials_input_map
await queue.publish_message(
routing_key=GRAPH_EXECUTION_ROUTING_KEY,
message=graph_exec_entry.model_dump_json(),
exchange=GRAPH_EXECUTION_EXCHANGE,
)
bus = get_async_execution_event_bus()
await bus.publish(graph_exec)
return graph_exec
except Exception as e:
logger.error(f"Unable to publish graph #{graph_id} exec #{graph_exec.id}: {e}")
await update_node_execution_status_batch(
[node_exec.node_exec_id for node_exec in graph_exec.node_executions],
ExecutionStatus.FAILED,
)
await update_graph_execution_stats(
graph_exec_id=graph_exec.id,
status=ExecutionStatus.FAILED,
stats=GraphExecutionStats(error=str(e)),
)
raise
def add_graph_execution(
graph_id: str,
user_id: str,
inputs: BlockInput,
preset_id: Optional[str] = None,
graph_version: Optional[int] = None,
graph_credentials_inputs: Optional[dict[str, CredentialsMetaInput]] = None,
) -> GraphExecutionWithNodes:
"""
Adds a graph execution to the queue and returns the execution entry.
Args:
graph_id: The ID of the graph to execute.
user_id: The ID of the user executing the graph.
inputs: The input data for the graph execution.
preset_id: The ID of the preset to use.
graph_version: The version of the graph to execute.
graph_credentials_inputs: Credentials inputs to use in the execution.
Keys should map to the keys generated by `GraphModel.aggregate_credentials_inputs`.
Returns:
GraphExecutionWithNodes: The created graph execution, with its node executions.
Raises:
NotFoundError: If the graph is not found.
ValueError: If there are validation errors.
"""
graph: GraphModel | None = get_db_client().get_graph(
db = get_db_client()
graph: GraphModel | None = db.get_graph(
graph_id=graph_id, user_id=user_id, version=graph_version
)
if not graph:
raise ValueError(f"Graph #{graph_id} not found.")
raise NotFoundError(f"Graph #{graph_id} not found.")
graph_exec = get_db_client().create_graph_execution(
node_credentials_input_map = (
make_node_credentials_input_map(graph, graph_credentials_inputs)
if graph_credentials_inputs
else None
)
graph_exec = db.create_graph_execution(
user_id=user_id,
graph_id=graph_id,
graph_version=graph.version,
nodes_input=construct_node_execution_input(graph, user_id, data),
user_id=user_id,
starting_nodes_input=construct_node_execution_input(
graph=graph,
user_id=user_id,
graph_inputs=inputs,
node_credentials_input_map=node_credentials_input_map,
),
preset_id=preset_id,
)
get_execution_event_bus().publish(graph_exec)
try:
queue = get_execution_queue()
graph_exec_entry = graph_exec.to_graph_execution_entry()
if node_credentials_input_map:
graph_exec_entry.node_credentials_input_map = node_credentials_input_map
queue.publish_message(
routing_key=GRAPH_EXECUTION_ROUTING_KEY,
message=graph_exec_entry.model_dump_json(),
exchange=GRAPH_EXECUTION_EXCHANGE,
)
graph_exec_entry = graph_exec.to_graph_execution_entry()
get_execution_queue().publish_message(
routing_key=GRAPH_EXECUTION_ROUTING_KEY,
message=graph_exec_entry.model_dump_json(),
exchange=GRAPH_EXECUTION_EXCHANGE,
)
bus = get_execution_event_bus()
bus.publish(graph_exec)
return graph_exec_entry
return graph_exec
except Exception as e:
logger.error(f"Unable to publish graph #{graph_id} exec #{graph_exec.id}: {e}")
db.update_node_execution_status_batch(
[node_exec.node_exec_id for node_exec in graph_exec.node_executions],
ExecutionStatus.FAILED,
)
db.update_graph_execution_stats(
graph_exec_id=graph_exec.id,
status=ExecutionStatus.FAILED,
stats=GraphExecutionStats(error=str(e)),
)
raise

View File

@@ -161,6 +161,14 @@ smartlead_credentials = APIKeyCredentials(
expires_at=None,
)
google_maps_credentials = APIKeyCredentials(
id="9aa1bde0-4947-4a70-a20c-84daa3850d52",
provider="google_maps",
api_key=SecretStr(settings.secrets.google_maps_api_key),
title="Use Credits for Google Maps",
expires_at=None,
)
zerobounce_credentials = APIKeyCredentials(
id="63a6e279-2dc2-448e-bf57-85776f7176dc",
provider="zerobounce",
@@ -190,6 +198,7 @@ DEFAULT_CREDENTIALS = [
apollo_credentials,
smartlead_credentials,
zerobounce_credentials,
google_maps_credentials,
]
@@ -263,6 +272,8 @@ class IntegrationCredentialsStore:
all_credentials.append(smartlead_credentials)
if settings.secrets.zerobounce_api_key:
all_credentials.append(zerobounce_credentials)
if settings.secrets.google_maps_api_key:
all_credentials.append(google_maps_credentials)
return all_credentials
def get_creds_by_id(self, user_id: str, credentials_id: str) -> Credentials | None:

View File

@@ -12,8 +12,8 @@ from backend.data import graph as graph_db
from backend.data.api_key import APIKey
from backend.data.block import BlockInput, CompletedBlockOutput
from backend.data.execution import NodeExecutionResult
from backend.executor.utils import add_graph_execution_async
from backend.server.external.middleware import require_permission
from backend.server.routers import v1 as internal_api_routes
from backend.util.settings import Settings
settings = Settings()
@@ -97,13 +97,13 @@ async def execute_graph(
api_key: APIKey = Depends(require_permission(APIKeyPermission.EXECUTE_GRAPH)),
) -> dict[str, Any]:
try:
graph_exec = await internal_api_routes.execute_graph(
graph_exec = await add_graph_execution_async(
graph_id=graph_id,
node_input=node_input,
user_id=api_key.user_id,
inputs=node_input,
graph_version=graph_version,
)
return {"id": graph_exec.graph_exec_id}
return {"id": graph_exec.id}
except Exception as e:
msg = str(e).encode().decode("unicode_escape")
raise HTTPException(status_code=400, detail=msg)

View File

@@ -1,6 +1,6 @@
import asyncio
import logging
from typing import TYPE_CHECKING, Annotated, Literal
from typing import TYPE_CHECKING, Annotated, Awaitable, Literal
from fastapi import APIRouter, Body, Depends, HTTPException, Path, Query, Request
from pydantic import BaseModel, Field
@@ -15,11 +15,11 @@ from backend.data.integrations import (
wait_for_webhook_event,
)
from backend.data.model import Credentials, CredentialsType, OAuth2Credentials
from backend.executor.utils import add_graph_execution_async
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.oauth import HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks import get_webhook_manager
from backend.server.routers import v1 as internal_api_routes
from backend.util.exceptions import NeedConfirmation, NotFoundError
from backend.util.settings import Settings
@@ -309,7 +309,7 @@ async def webhook_ingress_generic(
if not webhook.attached_nodes:
return
executions = []
executions: list[Awaitable] = []
for node in webhook.attached_nodes:
logger.debug(f"Webhook-attached node: {node}")
if not node.is_triggered_by_event_type(event_type):
@@ -317,11 +317,11 @@ async def webhook_ingress_generic(
continue
logger.debug(f"Executing graph #{node.graph_id} node #{node.id}")
executions.append(
internal_api_routes.execute_graph(
add_graph_execution_async(
user_id=webhook.user_id,
graph_id=node.graph_id,
graph_version=node.graph_version,
node_input={f"webhook_{webhook_id}_payload": payload},
user_id=webhook.user_id,
inputs={f"webhook_{webhook_id}_payload": payload},
)
)
asyncio.gather(*executions)
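The webhook handler above collects one `add_graph_execution_async` awaitable per triggered node, then hands the whole batch to `asyncio.gather` so the executions run concurrently. A minimal standalone sketch of that fan-out, with a dummy coroutine standing in for the real executor (in the diff the `gather` result is not awaited; the sketch awaits it so the results are observable):

```python
import asyncio
from typing import Awaitable


async def add_graph_execution_async(graph_id: str, inputs: dict) -> str:
    # Stand-in for the real executor call; just echoes the graph id.
    await asyncio.sleep(0)
    return f"exec-for-{graph_id}"


async def handle_webhook(attached_node_ids: list[str], payload: dict) -> list[str]:
    # One awaitable per webhook-attached node, mirroring the loop in the diff.
    executions: list[Awaitable[str]] = [
        add_graph_execution_async(graph_id=node_id, inputs={"payload": payload})
        for node_id in attached_node_ids
    ]
    # Run all triggered graph executions concurrently.
    return await asyncio.gather(*executions)


print(asyncio.run(handle_webhook(["g1", "g2"], {"event": "ping"})))
# ['exec-for-g1', 'exec-for-g2']
```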

View File

@@ -28,6 +28,7 @@ import backend.server.v2.store.model
import backend.server.v2.store.routes
import backend.util.service
import backend.util.settings
from backend.blocks.llm import LlmModel
from backend.data.model import Credentials
from backend.integrations.providers import ProviderName
from backend.server.external.api import external_app
@@ -56,8 +57,7 @@ async def lifespan_context(app: fastapi.FastAPI):
await backend.data.block.initialize_blocks()
await backend.data.user.migrate_and_encrypt_user_integrations()
await backend.data.graph.fix_llm_provider_credentials()
# FIXME ERROR: operator does not exist: text ? unknown
# await backend.data.graph.migrate_llm_models(LlmModel.GPT4O)
await backend.data.graph.migrate_llm_models(LlmModel.GPT4O)
with launch_darkly_context():
yield
await backend.data.db.disconnect()
@@ -159,7 +159,8 @@ class AgentServer(backend.util.service.AppProcess):
user_id=user_id,
graph_id=graph_id,
graph_version=graph_version,
node_input=node_input or {},
inputs=node_input or {},
credentials_inputs={},
)
@staticmethod

View File

@@ -41,6 +41,7 @@ from backend.data.credit import (
set_auto_top_up,
)
from backend.data.execution import AsyncRedisExecutionEventBus
from backend.data.model import CredentialsMetaInput
from backend.data.notifications import NotificationPreference, NotificationPreferenceDTO
from backend.data.onboarding import (
UserOnboardingUpdate,
@@ -592,31 +593,21 @@ async def set_graph_active_version(
)
async def execute_graph(
graph_id: str,
node_input: Annotated[dict[str, Any], Body(..., default_factory=dict)],
user_id: Annotated[str, Depends(get_user_id)],
inputs: Annotated[dict[str, Any], Body(..., embed=True, default_factory=dict)],
credentials_inputs: Annotated[
dict[str, CredentialsMetaInput], Body(..., embed=True, default_factory=dict)
],
graph_version: Optional[int] = None,
preset_id: Optional[str] = None,
) -> ExecuteGraphResponse:
graph: graph_db.GraphModel | None = await graph_db.get_graph(
graph_id=graph_id, user_id=user_id, version=graph_version
)
if not graph:
raise ValueError(f"Graph #{graph_id} not found.")
graph_exec = await execution_db.create_graph_execution(
graph_exec = await execution_utils.add_graph_execution_async(
graph_id=graph_id,
graph_version=graph.version,
nodes_input=execution_utils.construct_node_execution_input(
graph, user_id, node_input
),
user_id=user_id,
inputs=inputs,
preset_id=preset_id,
)
execution_utils.get_execution_event_bus().publish(graph_exec)
execution_utils.get_execution_queue().publish_message(
routing_key=execution_utils.GRAPH_EXECUTION_ROUTING_KEY,
message=graph_exec.to_graph_execution_entry().model_dump_json(),
exchange=execution_utils.GRAPH_EXECUTION_EXCHANGE,
graph_version=graph_version,
graph_credentials_inputs=credentials_inputs,
)
return ExecuteGraphResponse(graph_exec_id=graph_exec.id)

View File

@@ -6,6 +6,7 @@ from fastapi import APIRouter, Body, Depends, HTTPException, status
import backend.server.v2.library.db as db
import backend.server.v2.library.model as models
from backend.executor.utils import add_graph_execution_async
logger = logging.getLogger(__name__)
@@ -207,8 +208,6 @@ async def execute_preset(
HTTPException: If the preset is not found or an error occurs while executing the preset.
"""
try:
from backend.server.routers import v1 as internal_api_routes
preset = await db.get_preset(user_id, preset_id)
if not preset:
raise HTTPException(
@@ -219,17 +218,17 @@ async def execute_preset(
# Merge input overrides with preset inputs
merged_node_input = preset.inputs | node_input
execution = await internal_api_routes.execute_graph(
execution = await add_graph_execution_async(
graph_id=graph_id,
node_input=merged_node_input,
graph_version=graph_version,
user_id=user_id,
inputs=merged_node_input,
preset_id=preset_id,
graph_version=graph_version,
)
logger.debug(f"Execution added: {execution} with input: {merged_node_input}")
return {"id": execution.graph_exec_id}
return {"id": execution.id}
except HTTPException:
raise
except Exception as e:
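The `preset.inputs | node_input` merge above uses the Python 3.9+ dict-union operator, in which keys from the right-hand operand override those on the left, so caller-supplied overrides win over preset defaults. For example (values are illustrative):

```python
# Dict union (PEP 584): the right-hand operand overrides shared keys.
preset_inputs = {"query": "coffee shops", "limit": 5}
node_input = {"limit": 10}  # caller-supplied override

merged_node_input = preset_inputs | node_input
print(merged_node_input)  # {'query': 'coffee shops', 'limit': 10}
```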

View File

@@ -0,0 +1,56 @@
-- Backfill nulls with empty arrays
UPDATE "UserOnboarding"
SET "integrations" = ARRAY[]::TEXT[]
WHERE "integrations" IS NULL;
UPDATE "UserOnboarding"
SET "completedSteps" = '{}'
WHERE "completedSteps" IS NULL;
UPDATE "UserOnboarding"
SET "notified" = '{}'
WHERE "notified" IS NULL;
UPDATE "UserOnboarding"
SET "rewardedFor" = '{}'
WHERE "rewardedFor" IS NULL;
UPDATE "IntegrationWebhook"
SET "events" = ARRAY[]::TEXT[]
WHERE "events" IS NULL;
UPDATE "APIKey"
SET "permissions" = '{}'
WHERE "permissions" IS NULL;
UPDATE "Profile"
SET "links" = ARRAY[]::TEXT[]
WHERE "links" IS NULL;
UPDATE "StoreListingVersion"
SET "imageUrls" = ARRAY[]::TEXT[]
WHERE "imageUrls" IS NULL;
UPDATE "StoreListingVersion"
SET "categories" = ARRAY[]::TEXT[]
WHERE "categories" IS NULL;
-- Enforce NOT NULL constraints
ALTER TABLE "UserOnboarding"
ALTER COLUMN "integrations" SET NOT NULL,
ALTER COLUMN "completedSteps" SET NOT NULL,
ALTER COLUMN "notified" SET NOT NULL,
ALTER COLUMN "rewardedFor" SET NOT NULL;
ALTER TABLE "IntegrationWebhook"
ALTER COLUMN "events" SET NOT NULL;
ALTER TABLE "APIKey"
ALTER COLUMN "permissions" SET NOT NULL;
ALTER TABLE "Profile"
ALTER COLUMN "links" SET NOT NULL;
ALTER TABLE "StoreListingVersion"
ALTER COLUMN "imageUrls" SET NOT NULL,
ALTER COLUMN "categories" SET NOT NULL;
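The migration's two-phase shape — backfill `NULL`s with an empty default first, then tighten the constraint — keeps the `SET NOT NULL` step from failing on pre-existing rows. A small `sqlite3` sketch of the backfill phase (SQLite cannot `ALTER COLUMN ... SET NOT NULL` in place, so only the `UPDATE` half is shown; the table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "Profile" (id INTEGER PRIMARY KEY, links TEXT)')
conn.executemany('INSERT INTO "Profile" (links) VALUES (?)', [("a",), (None,)])

# Phase 1: backfill NULLs with an empty-list sentinel, so a later
# NOT NULL constraint has no violating rows to trip over.
conn.execute("UPDATE \"Profile\" SET links = '[]' WHERE links IS NULL")

remaining = conn.execute(
    'SELECT COUNT(*) FROM "Profile" WHERE links IS NULL'
).fetchone()[0]
print(remaining)  # 0
```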

View File

@@ -1,8 +1,9 @@
import type { Preview } from "@storybook/react";
import { initialize, mswLoader } from "msw-storybook-addon";
import "../src/app/globals.css";
import { Providers } from "../src/app/providers";
// Initialize MSW
import React from "react";
initialize();
const preview: Preview = {
@@ -18,6 +19,17 @@ const preview: Preview = {
},
},
loaders: [mswLoader],
decorators: [
(Story, context) => {
const mockOptions = context.parameters.mockBackend || {};
return (
<Providers useMockBackend mockClientProps={mockOptions}>
<Story />
</Providers>
);
},
],
};
export default preview;

View File

@@ -42,6 +42,7 @@
"@radix-ui/react-separator": "^1.1.0",
"@radix-ui/react-slot": "^1.1.0",
"@radix-ui/react-switch": "^1.1.1",
"@radix-ui/react-tabs": "^1.1.4",
"@radix-ui/react-toast": "^1.2.5",
"@radix-ui/react-tooltip": "^1.1.7",
"@sentry/nextjs": "^9",

View File

@@ -80,18 +80,23 @@ export default function Page() {
if (!agent) {
return;
}
api.addMarketplaceAgentToLibrary(
storeAgent?.store_listing_version_id || "",
);
api
.executeGraph(agent.id, agent.version, state?.agentInput || {})
.then(({ graph_exec_id }) => {
updateState({
onboardingAgentExecutionId: graph_exec_id,
});
router.push("/onboarding/6-congrats");
.addMarketplaceAgentToLibrary(storeAgent?.store_listing_version_id || "")
.then((libraryAgent) => {
api
.executeGraph(
libraryAgent.graph_id,
libraryAgent.graph_version,
state?.agentInput || {},
)
.then(({ graph_exec_id }) => {
updateState({
onboardingAgentExecutionId: graph_exec_id,
});
router.push("/onboarding/6-congrats");
});
});
}, [api, agent, router, state?.agentInput]);
}, [api, agent, router, state?.agentInput, storeAgent, updateState]);
const runYourAgent = (
<div className="ml-[54px] w-[481px] pl-5">

View File

@@ -7,9 +7,9 @@ export default function OnboardingLayout({
}) {
return (
<div className="flex min-h-screen w-full items-center justify-center bg-gray-100">
<div className="mx-auto flex w-full flex-col items-center">
<main className="mx-auto flex w-full flex-col items-center">
{children}
</div>
</main>
</div>
);
}

View File

@@ -0,0 +1,68 @@
import { ReactNode } from "react";
import { Navbar } from "@/components/agptui/Navbar";
import { IconType } from "@/components/ui/icons";
export default function PlatformLayout({ children }: { children: ReactNode }) {
return (
<>
<Navbar
links={[
{
name: "Marketplace",
href: "/marketplace",
},
{
name: "Library",
href: "/library",
},
{
name: "Build",
href: "/build",
},
]}
menuItemGroups={[
{
items: [
{
icon: IconType.Edit,
text: "Edit profile",
href: "/profile",
},
],
},
{
items: [
{
icon: IconType.LayoutDashboard,
text: "Creator Dashboard",
href: "/profile/dashboard",
},
{
icon: IconType.UploadCloud,
text: "Publish an agent",
},
],
},
{
items: [
{
icon: IconType.Settings,
text: "Settings",
href: "/profile/settings",
},
],
},
{
items: [
{
icon: IconType.LogOut,
text: "Log out",
},
],
},
]}
/>
<main>{children}</main>
</>
);
}

View File

@@ -145,6 +145,8 @@ export default function AgentRunsPage(): React.ReactElement {
const detachExecUpdateHandler = api.onWebSocketMessage(
"graph_execution_event",
(data) => {
if (data.graph_id != agent?.graph_id) return;
setAgentRuns((prev) => {
const index = prev.findIndex((run) => run.id === data.id);
if (index === -1) {
@@ -163,7 +165,7 @@ export default function AgentRunsPage(): React.ReactElement {
return () => {
detachExecUpdateHandler();
};
}, [api, selectedView.id]);
}, [api, agent?.graph_id, selectedView.id]);
// load selectedRun based on selectedView
useEffect(() => {

View File

@@ -118,8 +118,7 @@ export default function CreditsPage() {
{topupStatus === "success" && (
<span className="text-green-500">
Your payment was successful. Your credits will be updated
shortly. You can click the refresh icon 🔄 in case it is not
updated.
shortly. Try refreshing the page in case it is not updated.
</span>
)}
{topupStatus === "cancel" && (

View File

@@ -131,11 +131,7 @@ export default function PrivatePage() {
const allCredentials = providers
? Object.values(providers).flatMap((provider) =>
[
...provider.savedOAuthCredentials,
...provider.savedApiKeys,
...provider.savedUserPasswordCredentials,
]
provider.savedCredentials
.filter((cred) => !hiddenCredentials.includes(cred.id))
.map((credentials) => ({
...credentials,

View File

@@ -144,3 +144,13 @@
text-wrap: balance;
}
}
input[type="number"]::-webkit-outer-spin-button,
input[type="number"]::-webkit-inner-spin-button {
-webkit-appearance: none;
margin: 0;
}
input[type="number"] {
-moz-appearance: textfield;
}

View File

@@ -1,17 +1,14 @@
import React from "react";
import React, { Suspense } from "react";
import type { Metadata } from "next";
import { Inter, Poppins } from "next/font/google";
import { GoogleAnalytics } from "@next/third-parties/google";
import { GeistSans } from "geist/font/sans";
import { GeistMono } from "geist/font/mono";
import { headers } from "next/headers";
import { cn } from "@/lib/utils";
import "./globals.css";
import { Navbar } from "@/components/agptui/Navbar";
import { Toaster } from "@/components/ui/toaster";
import { IconType } from "@/components/ui/icons";
import { Providers } from "@/app/providers";
import TallyPopupSimple from "@/components/TallyPopup";
import OttoChatWidget from "@/components/OttoChatWidget";
@@ -34,9 +31,6 @@ export default async function RootLayout({
}: Readonly<{
children: React.ReactNode;
}>) {
const pathname = headers().get("x-current-path");
const isOnboarding = pathname?.startsWith("/onboarding");
return (
<html
lang="en"
@@ -56,68 +50,11 @@ export default async function RootLayout({
disableTransitionOnChange
>
<div className="flex min-h-screen flex-col items-stretch justify-items-stretch">
{!isOnboarding && (
<Navbar
links={[
{
name: "Marketplace",
href: "/marketplace",
},
{
name: "Library",
href: "/library",
},
{
name: "Build",
href: "/build",
},
]}
menuItemGroups={[
{
items: [
{
icon: IconType.Edit,
text: "Edit profile",
href: "/profile",
},
],
},
{
items: [
{
icon: IconType.LayoutDashboard,
text: "Creator Dashboard",
href: "/profile/dashboard",
},
{
icon: IconType.UploadCloud,
text: "Publish an agent",
},
],
},
{
items: [
{
icon: IconType.Settings,
text: "Settings",
href: "/profile/settings",
},
],
},
{
items: [
{
icon: IconType.LogOut,
text: "Log out",
},
],
},
]}
/>
)}
<main className="w-full flex-grow">{children}</main>
{children}
<TallyPopupSimple />
<OttoChatWidget />
<Suspense fallback={null}>
<OttoChatWidget />
</Suspense>
</div>
<Toaster />
</Providers>

View File

@@ -7,12 +7,27 @@ import { BackendAPIProvider } from "@/lib/autogpt-server-api/context";
import { TooltipProvider } from "@/components/ui/tooltip";
import CredentialsProvider from "@/components/integrations/credentials-provider";
import { LaunchDarklyProvider } from "@/components/feature-flag/feature-flag-provider";
import { MockClientProps } from "@/lib/autogpt-server-api/mock_client";
import OnboardingProvider from "@/components/onboarding/onboarding-provider";
export function Providers({ children, ...props }: ThemeProviderProps) {
export interface ProvidersProps extends ThemeProviderProps {
children: React.ReactNode;
useMockBackend?: boolean;
mockClientProps?: MockClientProps;
}
export function Providers({
children,
useMockBackend,
mockClientProps,
...props
}: ProvidersProps) {
return (
<NextThemesProvider {...props}>
<BackendAPIProvider>
<BackendAPIProvider
useMockBackend={useMockBackend}
mockClientProps={mockClientProps}
>
<CredentialsProvider>
<LaunchDarklyProvider>
<OnboardingProvider>

View File

@@ -178,18 +178,24 @@ export const CustomNode = React.memo(
return obj;
}, []);
const setHardcodedValues = (values: any) => {
updateNodeData(id, { hardcodedValues: values });
};
const setHardcodedValues = useCallback(
(values: any) => {
updateNodeData(id, { hardcodedValues: values });
},
[id, updateNodeData],
);
useEffect(() => {
isInitialSetup.current = false;
setHardcodedValues(fillDefaults(data.hardcodedValues, data.inputSchema));
}, []);
const setErrors = (errors: { [key: string]: string }) => {
updateNodeData(id, { errors });
};
const setErrors = useCallback(
(errors: { [key: string]: string }) => {
updateNodeData(id, { errors });
},
[id, updateNodeData],
);
const toggleOutput = (checked: boolean) => {
setIsOutputOpen(checked);
@@ -340,46 +346,49 @@ export const CustomNode = React.memo(
});
}
};
const handleInputChange = (path: string, value: any) => {
const keys = parseKeys(path);
const newValues = JSON.parse(JSON.stringify(data.hardcodedValues));
let current = newValues;
const handleInputChange = useCallback(
(path: string, value: any) => {
const keys = parseKeys(path);
const newValues = JSON.parse(JSON.stringify(data.hardcodedValues));
let current = newValues;
for (let i = 0; i < keys.length - 1; i++) {
const { key: currentKey, index } = keys[i];
if (index !== undefined) {
if (!current[currentKey]) current[currentKey] = [];
if (!current[currentKey][index]) current[currentKey][index] = {};
current = current[currentKey][index];
} else {
if (!current[currentKey]) current[currentKey] = {};
current = current[currentKey];
for (let i = 0; i < keys.length - 1; i++) {
const { key: currentKey, index } = keys[i];
if (index !== undefined) {
if (!current[currentKey]) current[currentKey] = [];
if (!current[currentKey][index]) current[currentKey][index] = {};
current = current[currentKey][index];
} else {
if (!current[currentKey]) current[currentKey] = {};
current = current[currentKey];
}
}
}
const lastKey = keys[keys.length - 1];
if (lastKey.index !== undefined) {
if (!current[lastKey.key]) current[lastKey.key] = [];
current[lastKey.key][lastKey.index] = value;
} else {
current[lastKey.key] = value;
}
const lastKey = keys[keys.length - 1];
if (lastKey.index !== undefined) {
if (!current[lastKey.key]) current[lastKey.key] = [];
current[lastKey.key][lastKey.index] = value;
} else {
current[lastKey.key] = value;
}
if (!isInitialSetup.current) {
history.push({
type: "UPDATE_INPUT",
payload: { nodeId: id, oldValues: data.hardcodedValues, newValues },
undo: () => setHardcodedValues(data.hardcodedValues),
redo: () => setHardcodedValues(newValues),
});
}
if (!isInitialSetup.current) {
history.push({
type: "UPDATE_INPUT",
payload: { nodeId: id, oldValues: data.hardcodedValues, newValues },
undo: () => setHardcodedValues(data.hardcodedValues),
redo: () => setHardcodedValues(newValues),
});
}
setHardcodedValues(newValues);
const errors = data.errors || {};
// Remove error with the same key
setNestedProperty(errors, path, null);
setErrors({ ...errors });
};
setHardcodedValues(newValues);
const errors = data.errors || {};
// Remove error with the same key
setNestedProperty(errors, path, null);
setErrors({ ...errors });
},
[data.hardcodedValues, id, setHardcodedValues, data.errors, setErrors],
);
const isInputHandleConnected = (key: string) => {
return (
@@ -407,28 +416,34 @@ export const CustomNode = React.memo(
);
};
const handleInputClick = (key: string) => {
console.debug(`Opening modal for key: ${key}`);
setActiveKey(key);
const value = getValue(key, data.hardcodedValues);
setInputModalValue(
typeof value === "object" ? JSON.stringify(value, null, 2) : value,
);
setIsModalOpen(true);
};
const handleInputClick = useCallback(
(key: string) => {
console.debug(`Opening modal for key: ${key}`);
setActiveKey(key);
const value = getValue(key, data.hardcodedValues);
setInputModalValue(
typeof value === "object" ? JSON.stringify(value, null, 2) : value,
);
setIsModalOpen(true);
},
[data.hardcodedValues],
);
const handleModalSave = (value: string) => {
if (activeKey) {
try {
const parsedValue = JSON.parse(value);
handleInputChange(activeKey, parsedValue);
} catch (error) {
handleInputChange(activeKey, value);
const handleModalSave = useCallback(
(value: string) => {
if (activeKey) {
try {
const parsedValue = JSON.parse(value);
handleInputChange(activeKey, parsedValue);
} catch (error) {
handleInputChange(activeKey, value);
}
}
}
setIsModalOpen(false);
setActiveKey(null);
};
setIsModalOpen(false);
setActiveKey(null);
},
[activeKey, handleInputChange],
);
const handleOutputClick = () => {
setIsOutputModalOpen(true);

View File

@@ -6,7 +6,7 @@ import { useToast } from "@/components/ui/use-toast";
import useAgentGraph from "../hooks/useAgentGraph";
import ReactMarkdown from "react-markdown";
import { GraphID } from "@/lib/autogpt-server-api/types";
import { askOtto } from "@/app/build/actions";
import { askOtto } from "@/app/(platform)/build/actions";
interface Message {
type: "user" | "assistant";

View File

@@ -11,7 +11,7 @@ import {
SubmissionStatus,
} from "@/lib/autogpt-server-api/types";
import { PaginationControls } from "../../ui/pagination-controls";
import { getAdminListingsWithVersions } from "@/app/admin/marketplace/actions";
import { getAdminListingsWithVersions } from "@/app/(platform)/admin/marketplace/actions";
import { ExpandableRow } from "./expandable-row";
import { SearchAndFilterAdminMarketplace } from "./search-filter-form";

View File

@@ -15,7 +15,10 @@ import { Label } from "@/components/ui/label";
import { Textarea } from "@/components/ui/textarea";
import type { StoreSubmission } from "@/lib/autogpt-server-api/types";
import { useRouter } from "next/navigation";
import { approveAgent, rejectAgent } from "@/app/admin/marketplace/actions";
import {
approveAgent,
rejectAgent,
} from "@/app/(platform)/admin/marketplace/actions";
export function ApproveRejectButtons({
version,

View File

@@ -1,5 +1,6 @@
"use client";
import React, { useCallback, useMemo } from "react";
import { isEmpty } from "lodash";
import moment from "moment";
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
@@ -22,6 +23,7 @@ import {
AgentRunStatus,
agentRunStatusMap,
} from "@/components/agents/agent-run-status-chip";
import useCredits from "@/hooks/useCredits";
export default function AgentRunDetailsView({
agent,
@@ -39,6 +41,7 @@ export default function AgentRunDetailsView({
deleteRun: () => void;
}): React.ReactNode {
const api = useBackendAPI();
const { formatCredits } = useCredits();
const runStatus: AgentRunStatus = useMemo(
() => agentRunStatusMap[run.status],
@@ -65,11 +68,11 @@ export default function AgentRunDetailsView({
value: moment.duration(run.stats.duration, "seconds").humanize(),
},
{ label: "Steps", value: run.stats.node_exec_count },
{ label: "Cost", value: `${run.stats.cost} credits` },
{ label: "Cost", value: formatCredits(run.stats.cost) },
]
: []),
];
}, [run, runStatus]);
}, [run, runStatus, formatCredits]);
const agentRunInputs:
| Record<
@@ -164,7 +167,8 @@ export default function AgentRunDetailsView({
] satisfies ButtonAction[])
: []),
...(["success", "failed", "stopped"].includes(runStatus) &&
!graph.has_webhook_trigger
!graph.has_webhook_trigger &&
isEmpty(graph.credentials_input_schema.required) // TODO: enable re-run with credentials - https://linear.app/autogpt/issue/SECRT-1243
? [
{
label: (
@@ -193,6 +197,7 @@ export default function AgentRunDetailsView({
stopRun,
deleteRun,
graph.has_webhook_trigger,
graph.credentials_input_schema.properties,
agent.can_access_graph,
run.graph_id,
run.graph_version,

View File

@@ -6,6 +6,7 @@ import { GraphExecutionID, GraphMeta } from "@/lib/autogpt-server-api";
import type { ButtonAction } from "@/components/agptui/types";
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
import { CredentialsInput } from "@/components/integrations/credentials-input";
import { TypeBasedInput } from "@/components/type-based-input";
import { useToastOnFail } from "@/components/ui/use-toast";
import ActionButtonGroup from "@/components/agptui/action-button-group";
@@ -26,19 +27,32 @@ export default function AgentRunDraftView({
const toastOnFail = useToastOnFail();
const agentInputs = graph.input_schema.properties;
const agentCredentialsInputs = graph.credentials_input_schema.properties;
const [inputValues, setInputValues] = useState<Record<string, any>>({});
const [inputCredentials, setInputCredentials] = useState<Record<string, any>>(
{},
);
const { state, completeStep } = useOnboarding();
const doRun = useCallback(() => {
api
.executeGraph(graph.id, graph.version, inputValues)
.executeGraph(graph.id, graph.version, inputValues, inputCredentials)
.then((newRun) => onRun(newRun.graph_exec_id))
.catch(toastOnFail("execute agent"));
// Mark run agent onboarding step as completed
if (state?.completedSteps.includes("MARKETPLACE_ADD_AGENT")) {
completeStep("MARKETPLACE_RUN_AGENT");
}
}, [api, graph, inputValues, onRun, state]);
}, [
api,
graph,
inputValues,
inputCredentials,
onRun,
toastOnFail,
state,
completeStep,
]);
const runActions: ButtonAction[] = useMemo(
() => [
@@ -64,6 +78,26 @@ export default function AgentRunDraftView({
<CardTitle className="font-poppins text-lg">Input</CardTitle>
</CardHeader>
<CardContent className="flex flex-col gap-4">
{/* Credentials inputs */}
{Object.entries(agentCredentialsInputs).map(
([key, inputSubSchema]) => (
<CredentialsInput
key={key}
schema={{ ...inputSubSchema, discriminator: undefined }}
selectedCredentials={
inputCredentials[key] ?? inputSubSchema.default
}
onSelectCredentials={(value) =>
setInputCredentials((obj) => ({
...obj,
[key]: value,
}))
}
/>
),
)}
{/* Regular inputs */}
{Object.entries(agentInputs).map(([key, inputSubSchema]) => (
<div key={key} className="flex flex-col space-y-2">
<label className="flex items-center gap-1 text-sm font-medium">

View File

@@ -15,8 +15,8 @@ export interface AgentTableCardProps {
imageSrc: string[];
dateSubmitted: string;
status: StatusType;
runs: number;
rating: number;
runs?: number;
rating?: number;
id: number;
onEditSubmission: (submission: StoreSubmissionRequest) => void;
}
@@ -82,11 +82,11 @@ export const AgentTableCard: React.FC<AgentTableCardProps> = ({
{dateSubmitted}
</div>
<div className="text-sm text-neutral-600 dark:text-neutral-400">
{runs.toLocaleString()} runs
{runs ? runs.toLocaleString() : "N/A"} runs
</div>
<div className="flex items-center gap-1">
<span className="text-sm font-medium text-neutral-800 dark:text-neutral-200">
{rating.toFixed(1)}
{rating ? rating.toFixed(1) : "N/A"}
</span>
<IconStarFilled className="h-4 w-4 text-neutral-800 dark:text-neutral-200" />
</div>

View File

@@ -3,8 +3,6 @@ import { Navbar } from "./Navbar";
import { userEvent, within } from "@storybook/test";
import { IconType } from "../ui/icons";
import { ProfileDetails } from "@/lib/autogpt-server-api/types";
// You can't import this here, jest is not available in storybook and will crash it
// import { jest } from "@jest/globals";
// Mock the API responses
const mockProfileData: ProfileDetails = {
@@ -15,40 +13,6 @@ const mockProfileData: ProfileDetails = {
avatar_url: "https://avatars.githubusercontent.com/u/123456789?v=4",
};
const mockCreditData = {
credits: 1500,
};
// Mock the API module
// jest.mock("@/lib/autogpt-server-api", () => {
// return function () {
// return {
// getStoreProfile: () => Promise.resolve(mockProfileData),
// getUserCredit: () => Promise.resolve(mockCreditData),
// };
// };
// });
const meta = {
title: "AGPT UI/Navbar",
component: Navbar,
parameters: {
layout: "fullscreen",
},
tags: ["autodocs"],
argTypes: {
// isLoggedIn: { control: "boolean" },
// avatarSrc: { control: "text" },
links: { control: "object" },
// activeLink: { control: "text" },
menuItemGroups: { control: "object" },
// params: { control: { type: "object", defaultValue: { lang: "en" } } },
},
} satisfies Meta<typeof Navbar>;
export default meta;
type Story = StoryObj<typeof meta>;
const defaultMenuItemGroups = [
{
items: [
@@ -89,35 +53,83 @@ const defaultLinks = [
{ name: "Build", href: "/builder" },
];
const meta = {
title: "AGPT UI/Navbar",
component: Navbar,
parameters: {
layout: "fullscreen",
},
tags: ["autodocs"],
argTypes: {
links: { control: "object" },
menuItemGroups: { control: "object" },
mockUser: { control: "object" },
mockClientProps: { control: "object" },
},
} satisfies Meta<typeof Navbar>;
export default meta;
type Story = StoryObj<typeof meta>;
export const Default: Story = {
args: {
// params: { lang: "en" },
// isLoggedIn: true,
links: defaultLinks,
// activeLink: "/marketplace",
// avatarSrc: mockProfileData.avatar_url,
menuItemGroups: defaultMenuItemGroups,
mockUser: {
id: "123",
email: "test@test.com",
user_metadata: {
name: "Test User",
},
app_metadata: {
provider: "email",
},
aud: "test",
created_at: new Date().toISOString(),
},
mockClientProps: {
credits: 1500,
profile: mockProfileData,
},
},
parameters: {
mockBackend: {
credits: 1500,
profile: mockProfileData,
},
},
};
export const WithActiveLink: Story = {
export const WithCredits: Story = {
args: {
...Default.args,
// activeLink: "/library",
},
parameters: {
mockBackend: {
credits: 1500,
},
},
};
export const LongUserName: Story = {
export const WithLargeCredits: Story = {
args: {
...Default.args,
// avatarSrc: "https://avatars.githubusercontent.com/u/987654321?v=4",
},
parameters: {
mockBackend: {
credits: 999999,
},
},
};
export const NoAvatar: Story = {
export const WithZeroCredits: Story = {
args: {
...Default.args,
// avatarSrc: undefined,
},
parameters: {
mockBackend: {
credits: 0,
},
},
};
@@ -125,6 +137,12 @@ export const WithInteraction: Story = {
args: {
...Default.args,
},
parameters: {
mockBackend: {
credits: 1500,
profile: mockProfileData,
},
},
play: async ({ canvasElement }) => {
const canvas = within(canvasElement);
const profileTrigger = canvas.getByRole("button");
@@ -135,29 +153,3 @@ export const WithInteraction: Story = {
await canvas.findByText("Edit profile");
},
};
export const NotLoggedIn: Story = {
args: {
...Default.args,
// isLoggedIn: false,
// avatarSrc: undefined,
},
};
export const WithCredits: Story = {
args: {
...Default.args,
},
};
export const WithLargeCredits: Story = {
args: {
...Default.args,
},
};
export const WithZeroCredits: Story = {
args: {
...Default.args,
},
};

View File

@@ -9,6 +9,10 @@ import { ProfileDetails } from "@/lib/autogpt-server-api/types";
import { NavbarLink } from "./NavbarLink";
import getServerUser from "@/lib/supabase/getServerUser";
import BackendAPI from "@/lib/autogpt-server-api";
import { User } from "@supabase/supabase-js";
import MockClient, {
MockClientProps,
} from "@/lib/autogpt-server-api/mock_client";
// Disable theme toggle for now
// import { ThemeToggle } from "./ThemeToggle";
@@ -29,26 +33,38 @@ interface NavbarProps {
onClick?: () => void;
}[];
}[];
mockUser?: User;
mockClientProps?: MockClientProps;
}
async function getProfileData() {
async function getProfileData(mockClientProps?: MockClientProps) {
if (mockClientProps) {
const api = new MockClient(mockClientProps);
const profile = await Promise.resolve(api.getStoreProfile("navbar"));
return profile;
}
const api = new BackendAPI();
const profile = await Promise.resolve(api.getStoreProfile());
return profile;
}
export const Navbar = async ({ links, menuItemGroups }: NavbarProps) => {
const { user } = await getServerUser();
export const Navbar = async ({
links,
menuItemGroups,
mockUser,
mockClientProps,
}: NavbarProps) => {
const { user } = await getServerUser(mockUser);
const isLoggedIn = user !== null;
let profile: ProfileDetails | null = null;
if (isLoggedIn) {
profile = await getProfileData();
profile = await getProfileData(mockClientProps);
}
return (
<>
<nav className="sticky top-0 z-50 mx-[16px] hidden h-16 items-center justify-between rounded-bl-2xl rounded-br-2xl border border-white/50 bg-white/5 py-3 pl-6 pr-3 backdrop-blur-[26px] dark:border-gray-700 dark:bg-gray-900 md:inline-flex">
<nav className="sticky top-0 z-40 mx-[16px] hidden h-16 items-center justify-between rounded-bl-2xl rounded-br-2xl border border-white/50 bg-white/5 py-3 pl-6 pr-3 backdrop-blur-[26px] dark:border-gray-700 dark:bg-gray-900 md:inline-flex">
<div className="flex items-center gap-11">
<div className="relative h-10 w-[88.87px]">
<IconAutoGPTLogo className="h-full w-full" />

View File

@@ -1,6 +1,6 @@
"use client";
import { logout } from "@/app/login/actions";
import { logout } from "@/app/(platform)/login/actions";
import { IconLogOut } from "@/components/ui/icons";
export const ProfilePopoutMenuLogoutButton = () => {

View File

@@ -57,14 +57,14 @@ export const StoreCard: React.FC<StoreCardProps> = ({
)}
{!hideAvatar && (
<div className="absolute bottom-4 left-4">
<Avatar className="h-16 w-16 border-2 border-white dark:border-gray-800">
<Avatar className="h-16 w-16">
{avatarSrc && (
<AvatarImage
src={avatarSrc}
alt={`${creatorName || agentName} creator avatar`}
/>
)}
<AvatarFallback>
<AvatarFallback size={64}>
{(creatorName || agentName).charAt(0)}
</AvatarFallback>
</Avatar>

View File

@@ -14,6 +14,7 @@ import { useOnboarding } from "../onboarding/onboarding-provider";
import { useCallback, useEffect, useRef } from "react";
import { cn } from "@/lib/utils";
import * as party from "party-js";
import WalletRefill from "./WalletRefill";
export default function Wallet() {
const { credits, formatCredits, fetchCredits } = useCredits({
@@ -86,26 +87,31 @@ export default function Wallet() {
"rounded-xl border-zinc-200 bg-zinc-50 shadow-[0_3px_3px] shadow-zinc-300",
)}
>
<div>
<div className="mx-1 flex items-center justify-between border-b border-zinc-300 pb-2">
<span className="font-poppins font-medium text-zinc-900">
Your wallet
</span>
<div className="flex items-center font-inter text-sm font-semibold text-violet-700">
<div className="rounded-lg bg-violet-100 px-3 py-2">
Wallet{" "}
<span className="font-semibold">{formatCredits(credits)}</span>
</div>
<PopoverClose>
<X className="ml-[2.8rem] h-5 w-5 text-zinc-800 hover:text-foreground" />
</PopoverClose>
{/* Header */}
<div className="mx-1 flex items-center justify-between border-b border-zinc-300 pb-2">
<span className="font-poppins font-medium text-zinc-900">
Your wallet
</span>
<div className="flex items-center font-inter text-sm font-semibold text-violet-700">
<div className="rounded-lg bg-violet-100 px-3 py-2">
Wallet{" "}
<span className="font-semibold">{formatCredits(credits)}</span>
</div>
<PopoverClose>
<X className="ml-[2.8rem] h-5 w-5 text-zinc-800 hover:text-foreground" />
</PopoverClose>
</div>
<p className="mx-1 mt-3 font-inter text-xs text-muted-foreground text-zinc-400">
</div>
<ScrollArea className="max-h-[85vh] overflow-y-auto">
{/* Top ups */}
<WalletRefill />
{/* Tasks */}
<p className="mx-1 mt-4 font-sans text-xs font-medium text-violet-700">
Onboarding tasks
</p>
<p className="mx-1 my-1 font-sans text-xs font-normal text-zinc-500">
Complete the following tasks to earn more credits!
</p>
</div>
<ScrollArea className="max-h-[80vh] overflow-y-auto">
<TaskGroups />
</ScrollArea>
</PopoverContent>

View File

@@ -0,0 +1,265 @@
import { Tabs, TabsContent, TabsList, TabsTrigger } from "@/components/ui/tabs";
import { cn } from "@/lib/utils";
import { zodResolver } from "@hookform/resolvers/zod";
import { useForm } from "react-hook-form";
import { z } from "zod";
import {
Form,
FormControl,
FormField,
FormItem,
FormLabel,
FormMessage,
} from "@/components/ui/form";
import { Input } from "../ui/input";
import Link from "next/link";
import { useToast, useToastOnFail } from "../ui/use-toast";
import useCredits from "@/hooks/useCredits";
import { useCallback, useEffect, useState } from "react";
const topUpSchema = z.object({
amount: z
.number({ coerce: true, invalid_type_error: "Enter top-up amount" })
.min(5, "Top-ups start at $5. Please enter a higher amount."),
});
const autoRefillSchema = z
.object({
threshold: z
.number({ coerce: true, invalid_type_error: "Enter min. balance" })
.min(
5,
"Looks like your balance is too low for auto-refill. Try $5 or more.",
),
refillAmount: z
.number({ coerce: true, invalid_type_error: "Enter top-up amount" })
.min(5, "Top-ups start at $5. Please enter a higher amount."),
})
.refine((data) => data.refillAmount >= data.threshold, {
message:
"Your refill amount must be equal to or greater than the balance you entered above.",
path: ["refillAmount"],
});
export default function WalletRefill() {
const { toast } = useToast();
const toastOnFail = useToastOnFail();
const { requestTopUp, autoTopUpConfig, updateAutoTopUpConfig } = useCredits({
fetchInitialAutoTopUpConfig: true,
});
const [isLoading, setIsLoading] = useState(false);
const topUpForm = useForm<z.infer<typeof topUpSchema>>({
resolver: zodResolver(topUpSchema),
});
const autoRefillForm = useForm<z.infer<typeof autoRefillSchema>>({
resolver: zodResolver(autoRefillSchema),
});
// Pre-fill the auto-refill form with existing values
useEffect(() => {
if (
autoTopUpConfig &&
autoTopUpConfig.amount > 0 &&
autoTopUpConfig.threshold > 0 &&
!autoRefillForm.getFieldState("threshold").isTouched &&
!autoRefillForm.getFieldState("refillAmount").isTouched
) {
autoRefillForm.setValue("threshold", autoTopUpConfig.threshold / 100);
autoRefillForm.setValue("refillAmount", autoTopUpConfig.amount / 100);
}
}, [autoTopUpConfig, autoRefillForm]);
const submitTopUp = useCallback(
async (data: z.infer<typeof topUpSchema>) => {
setIsLoading(true);
await requestTopUp(data.amount * 100).catch(
toastOnFail("request top-up"),
);
setIsLoading(false);
},
[requestTopUp, toastOnFail],
);
const submitAutoTopUpConfig = useCallback(
async (data: z.infer<typeof autoRefillSchema>) => {
setIsLoading(true);
await updateAutoTopUpConfig(data.refillAmount * 100, data.threshold * 100)
.then(() => {
toast({ title: "Auto top-up config updated! 🎉" });
})
.catch(toastOnFail("update auto top-up config"));
setIsLoading(false);
},
[updateAutoTopUpConfig, toast, toastOnFail],
);
return (
<div className="mx-1 border-b border-zinc-300">
<p className="mx-0 mt-4 font-sans text-xs font-medium text-violet-700">
Add credits to your balance
</p>
<p className="mx-0 my-1 font-sans text-xs font-normal text-zinc-500">
Choose a one-time top-up or set up automatic refills
</p>
<Tabs
defaultValue="top-up"
className="mb-6 mt-4 flex w-full flex-col items-center"
>
<TabsList className="mx-auto">
<TabsTrigger value="top-up">One-time top up</TabsTrigger>
<TabsTrigger value="auto-refill">Auto-refill</TabsTrigger>
</TabsList>
<div className="mt-4 w-full rounded-lg px-5 outline outline-1 outline-offset-2 outline-zinc-200">
<TabsContent value="top-up" className="flex flex-col">
<div className="mt-2 justify-start font-sans text-sm font-medium leading-snug text-zinc-900">
One-time top-up
</div>
<div className="mt-1 justify-start font-sans text-xs font-normal leading-tight text-zinc-500">
Enter an amount (min. $5) and add credits instantly.
</div>
<Form {...topUpForm}>
<form onSubmit={topUpForm.handleSubmit(submitTopUp)}>
<FormField
control={topUpForm.control}
name="amount"
render={({ field }) => (
<FormItem className="mb-6 mt-4">
<FormLabel className="font-sans text-sm font-medium leading-snug text-zinc-800">
Amount
</FormLabel>
<FormControl>
<>
<Input
className={cn(
"mt-2 rounded-3xl border-0 bg-white py-2 pl-6 pr-4 font-sans outline outline-1 outline-zinc-300",
"focus:outline-2 focus:outline-offset-0 focus:outline-violet-700",
)}
type="number"
step="1"
{...field}
/>
<span className="absolute left-10 -translate-y-9 text-sm text-zinc-500">
$
</span>
</>
</FormControl>
<FormMessage className="mt-2 font-sans text-xs font-normal leading-tight" />
</FormItem>
)}
/>
<button
className={cn(
"mb-2 inline-flex h-10 w-24 items-center justify-center rounded-3xl bg-zinc-800 px-4 py-2",
"font-sans text-sm font-medium leading-snug text-white",
"transition-colors duration-200 hover:bg-zinc-700 disabled:bg-zinc-500",
)}
type="submit"
disabled={isLoading}
>
Top up
</button>
</form>
</Form>
</TabsContent>
<TabsContent value="auto-refill" className="flex flex-col">
<div className="justify-start font-sans text-sm font-medium leading-snug text-zinc-900">
Auto-refill
</div>
<div className="mt-1 justify-start font-sans text-xs font-normal leading-tight text-zinc-500">
Set a minimum balance and a refill amount, and credits are added automatically whenever you drop below it.
</div>
<Form {...autoRefillForm}>
<form
onSubmit={autoRefillForm.handleSubmit(submitAutoTopUpConfig)}
>
<FormField
control={autoRefillForm.control}
name="threshold"
render={({ field }) => (
<FormItem className="mb-6 mt-4">
<FormLabel className="font-sans text-sm font-medium leading-snug text-zinc-800">
Refill when balance drops below:
</FormLabel>
<FormControl>
<>
<Input
className={cn(
"mt-2 rounded-3xl border-0 bg-white py-2 pl-6 pr-4 font-sans outline outline-1 outline-zinc-300",
"focus:outline-2 focus:outline-offset-0 focus:outline-violet-700",
)}
type="number"
step="1"
{...field}
/>
<span className="absolute left-10 -translate-y-9 text-sm text-zinc-500">
$
</span>
</>
</FormControl>
<FormMessage className="mt-2 font-sans text-xs font-normal leading-tight" />
</FormItem>
)}
/>
<FormField
control={autoRefillForm.control}
name="refillAmount"
render={({ field }) => (
<FormItem className="mb-6">
<FormLabel className="font-sans text-sm font-medium leading-snug text-zinc-800">
Add this amount:
</FormLabel>
<FormControl>
<>
<Input
className={cn(
"mt-2 rounded-3xl border-0 bg-white py-2 pl-6 pr-4 font-sans outline outline-1 outline-zinc-300",
"focus:outline-2 focus:outline-offset-0 focus:outline-violet-700",
)}
type="number"
step="1"
{...field}
/>
<span className="absolute left-10 -translate-y-9 text-sm text-zinc-500">
$
</span>
</>
</FormControl>
<FormMessage className="mt-2 font-sans text-xs font-normal leading-tight" />
</FormItem>
)}
/>
<button
className={cn(
"mb-4 inline-flex h-10 w-40 items-center justify-center rounded-3xl bg-zinc-800 px-4 py-2",
"font-sans text-sm font-medium leading-snug text-white",
"transition-colors duration-200 hover:bg-zinc-700 disabled:bg-zinc-500",
)}
type="submit"
disabled={isLoading}
>
Enable Auto-refill
</button>
</form>
</Form>
</TabsContent>
<div className="mb-3 justify-start font-sans text-xs font-normal leading-tight">
<span className="text-zinc-500">
To update your billing details, head to{" "}
</span>
<Link
href="/profile/credits"
className="cursor-pointer text-zinc-800 underline"
>
Billing settings
</Link>
</div>
</div>
</Tabs>
</div>
);
}
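The WalletRefill component above multiplies user-entered dollar amounts by 100 before calling the API, matching the new cents-based accounting (100 credits = $1). A minimal sketch of that conversion and of the rules encoded in `autoRefillSchema` (the helper names here are hypothetical, not part of the codebase):

```typescript
// Hypothetical helpers mirroring the logic above; not part of the actual codebase.

// The backend accounts in cents: $1 entered by the user becomes 100 credits.
function dollarsToCents(dollars: number): number {
  return Math.round(dollars * 100);
}

// Mirrors autoRefillSchema: both values must be at least $5,
// and the refill amount may not be below the threshold.
function validateAutoRefill(
  threshold: number,
  refillAmount: number,
): string | null {
  if (threshold < 5)
    return "Looks like your balance is too low for auto-refill. Try $5 or more.";
  if (refillAmount < 5)
    return "Top-ups start at $5. Please enter a higher amount.";
  if (refillAmount < threshold)
    return "Your refill amount must be equal to or greater than the balance you entered above.";
  return null; // config is valid
}
```

Under these assumptions, a submit of threshold $5 / refill $10 would pass validation and call the API with `dollarsToCents(10)` = 1000 and `dollarsToCents(5)` = 500, which is exactly the `* 100` scaling visible in `submitAutoTopUpConfig` above.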

View File

@@ -147,13 +147,3 @@
.custom-switch {
padding-left: 2px;
}
input[type="number"]::-webkit-outer-spin-button,
input[type="number"]::-webkit-inner-spin-button {
-webkit-appearance: none;
margin: 0;
}
input[type="number"] {
-moz-appearance: textfield;
}

View File

@@ -1,5 +1,6 @@
import { FC, useEffect, useMemo, useState } from "react";
import { z } from "zod";
-import { beautifyString, cn } from "@/lib/utils";
+import { cn } from "@/lib/utils";
import { useForm } from "react-hook-form";
import { Input } from "@/components/ui/input";
import { Button } from "@/components/ui/button";
@@ -16,8 +17,8 @@ import {
FaKey,
FaHubspot,
} from "react-icons/fa";
import { FC, useMemo, useState } from "react";
import {
BlockIOCredentialsSubSchema,
CredentialsMetaInput,
CredentialsProviderName,
} from "@/lib/autogpt-server-api/types";
@@ -106,13 +107,18 @@ export type OAuthPopupResultMessage = { message_type: "oauth_popup_result" } & (
);
export const CredentialsInput: FC<{
-selfKey: string;
+schema: BlockIOCredentialsSubSchema;
className?: string;
selectedCredentials?: CredentialsMetaInput;
onSelectCredentials: (newValue?: CredentialsMetaInput) => void;
-}> = ({ selfKey, className, selectedCredentials, onSelectCredentials }) => {
-const api = useBackendAPI();
-const credentials = useCredentials(selfKey);
+siblingInputs?: Record<string, any>;
+}> = ({
+schema,
+className,
+selectedCredentials,
+onSelectCredentials,
+siblingInputs,
+}) => {
const [isAPICredentialsModalOpen, setAPICredentialsModalOpen] =
useState(false);
const [
@@ -124,20 +130,47 @@ export const CredentialsInput: FC<{
useState<AbortController | null>(null);
const [oAuthError, setOAuthError] = useState<string | null>(null);
if (!credentials || credentials.isLoading) {
const api = useBackendAPI();
const credentials = useCredentials(schema, siblingInputs);
// Deselect credentials if they do not exist (e.g. provider was changed)
useEffect(() => {
if (!credentials || !("savedCredentials" in credentials)) return;
if (
selectedCredentials &&
!credentials.savedCredentials.some((c) => c.id === selectedCredentials.id)
) {
onSelectCredentials(undefined);
}
}, [credentials, selectedCredentials, onSelectCredentials]);
const singleCredential = useMemo(() => {
if (!credentials || !("savedCredentials" in credentials)) return null;
if (credentials.savedCredentials.length === 1)
return credentials.savedCredentials[0];
return null;
}, [credentials]);
// If only 1 credential is available, auto-select it and hide this input
useEffect(() => {
if (singleCredential && !selectedCredentials) {
onSelectCredentials(singleCredential);
}
}, [singleCredential, selectedCredentials, onSelectCredentials]);
if (!credentials || credentials.isLoading || singleCredential) {
return null;
}
const {
schema,
provider,
providerName,
supportsApiKey,
supportsOAuth2,
supportsUserPassword,
-savedApiKeys,
-savedOAuthCredentials,
-savedUserPasswordCredentials,
+savedCredentials,
oAuthCallback,
} = credentials;
@@ -235,13 +268,14 @@ export const CredentialsInput: FC<{
<>
{supportsApiKey && (
<APIKeyCredentialsModal
-credentialsFieldName={selfKey}
+schema={schema}
open={isAPICredentialsModalOpen}
onClose={() => setAPICredentialsModalOpen(false)}
onCredentialsCreate={(credsMeta) => {
onSelectCredentials(credsMeta);
setAPICredentialsModalOpen(false);
}}
siblingInputs={siblingInputs}
/>
)}
{supportsOAuth2 && (
@@ -253,43 +287,34 @@ export const CredentialsInput: FC<{
)}
{supportsUserPassword && (
<UserPasswordCredentialsModal
-credentialsFieldName={selfKey}
+schema={schema}
open={isUserPasswordCredentialsModalOpen}
onClose={() => setUserPasswordCredentialsModalOpen(false)}
onCredentialsCreate={(creds) => {
onSelectCredentials(creds);
setUserPasswordCredentialsModalOpen(false);
}}
siblingInputs={siblingInputs}
/>
)}
</>
);
// Deselect credentials if they do not exist (e.g. provider was changed)
if (
selectedCredentials &&
!savedApiKeys
.concat(savedOAuthCredentials)
.concat(savedUserPasswordCredentials)
.some((c) => c.id === selectedCredentials.id)
) {
onSelectCredentials(undefined);
}
const fieldHeader = (
<div className="mb-2 flex gap-1">
<span className="text-m green text-gray-900">
{providerName} Credentials
</span>
<SchemaTooltip description={schema.description} />
</div>
);
// No saved credentials yet
-if (
-savedApiKeys.length === 0 &&
-savedOAuthCredentials.length === 0 &&
-savedUserPasswordCredentials.length === 0
-) {
+if (savedCredentials.length === 0) {
return (
<>
<div className="mb-2 flex gap-1">
<span className="text-m green text-gray-900">
{providerName} Credentials
</span>
<SchemaTooltip description={schema.description} />
</div>
<div>
{fieldHeader}
<div className={cn("flex flex-row space-x-2", className)}>
{supportsOAuth2 && (
<Button onClick={handleOAuthLogin}>
@@ -314,46 +339,10 @@ export const CredentialsInput: FC<{
{oAuthError && (
<div className="mt-2 text-red-500">Error: {oAuthError}</div>
)}
</>
</div>
);
}
const getCredentialCounts = () => ({
apiKeys: savedApiKeys.length,
oauth: savedOAuthCredentials.length,
userPass: savedUserPasswordCredentials.length,
});
const getSingleCredential = () => {
const counts = getCredentialCounts();
const totalCredentials = Object.values(counts).reduce(
(sum, count) => sum + count,
0,
);
if (totalCredentials !== 1) return null;
if (counts.apiKeys === 1) return savedApiKeys[0];
if (counts.oauth === 1) return savedOAuthCredentials[0];
if (counts.userPass === 1) return savedUserPasswordCredentials[0];
return null;
};
const singleCredential = getSingleCredential();
if (singleCredential) {
if (!selectedCredentials) {
onSelectCredentials({
id: singleCredential.id,
type: singleCredential.type,
provider,
title: singleCredential.title,
});
}
return null;
}
function handleValueChange(newValue: string) {
if (newValue === "sign-in") {
// Trigger OAuth2 sign in flow
@@ -362,10 +351,7 @@ export const CredentialsInput: FC<{
// Open API key dialog
setAPICredentialsModalOpen(true);
} else {
-const selectedCreds = savedApiKeys
-.concat(savedOAuthCredentials)
-.concat(savedUserPasswordCredentials)
-.find((c) => c.id == newValue)!;
+const selectedCreds = savedCredentials.find((c) => c.id == newValue)!;
onSelectCredentials({
id: selectedCreds.id,
@@ -378,38 +364,40 @@ export const CredentialsInput: FC<{
// Saved credentials exist
return (
<>
<div className="flex gap-1">
<span className="text-m green mb-0 text-gray-900">
{providerName} Credentials
</span>
<SchemaTooltip description={schema.description} />
</div>
<div>
{fieldHeader}
<Select value={selectedCredentials?.id} onValueChange={handleValueChange}>
<SelectTrigger>
<SelectValue placeholder={schema.placeholder} />
</SelectTrigger>
<SelectContent className="nodrag">
{savedOAuthCredentials.map((credentials, index) => (
<SelectItem key={index} value={credentials.id}>
<ProviderIcon className="mr-2 inline h-4 w-4" />
{credentials.username}
</SelectItem>
))}
{savedApiKeys.map((credentials, index) => (
<SelectItem key={index} value={credentials.id}>
<ProviderIcon className="mr-2 inline h-4 w-4" />
<IconKey className="mr-1.5 inline" />
{credentials.title}
</SelectItem>
))}
{savedUserPasswordCredentials.map((credentials, index) => (
<SelectItem key={index} value={credentials.id}>
<ProviderIcon className="mr-2 inline h-4 w-4" />
<IconUserPlus className="mr-1.5 inline" />
{credentials.title}
</SelectItem>
))}
{savedCredentials
.filter((c) => c.type == "oauth2")
.map((credentials, index) => (
<SelectItem key={index} value={credentials.id}>
<ProviderIcon className="mr-2 inline h-4 w-4" />
{credentials.username}
</SelectItem>
))}
{savedCredentials
.filter((c) => c.type == "api_key")
.map((credentials, index) => (
<SelectItem key={index} value={credentials.id}>
<ProviderIcon className="mr-2 inline h-4 w-4" />
<IconKey className="mr-1.5 inline" />
{credentials.title}
</SelectItem>
))}
{savedCredentials
.filter((c) => c.type == "user_password")
.map((credentials, index) => (
<SelectItem key={index} value={credentials.id}>
<ProviderIcon className="mr-2 inline h-4 w-4" />
<IconUserPlus className="mr-1.5 inline" />
{credentials.title}
</SelectItem>
))}
<SelectSeparator />
{supportsOAuth2 && (
<SelectItem value="sign-in">
@@ -435,17 +423,18 @@ export const CredentialsInput: FC<{
{oAuthError && (
<div className="mt-2 text-red-500">Error: {oAuthError}</div>
)}
</>
</div>
);
};
export const APIKeyCredentialsModal: FC<{
-credentialsFieldName: string;
+schema: BlockIOCredentialsSubSchema;
open: boolean;
onClose: () => void;
onCredentialsCreate: (creds: CredentialsMetaInput) => void;
-}> = ({ credentialsFieldName, open, onClose, onCredentialsCreate }) => {
-const credentials = useCredentials(credentialsFieldName);
+siblingInputs?: Record<string, any>;
+}> = ({ schema, open, onClose, onCredentialsCreate, siblingInputs }) => {
+const credentials = useCredentials(schema, siblingInputs);
const formSchema = z.object({
apiKey: z.string().min(1, "API Key is required"),
@@ -466,8 +455,7 @@ export const APIKeyCredentialsModal: FC<{
return null;
}
-const { schema, provider, providerName, createAPIKeyCredentials } =
-credentials;
+const { provider, providerName, createAPIKeyCredentials } = credentials;
async function onSubmit(values: z.infer<typeof formSchema>) {
const expiresAt = values.expiresAt
@@ -576,12 +564,13 @@ export const APIKeyCredentialsModal: FC<{
};
export const UserPasswordCredentialsModal: FC<{
-credentialsFieldName: string;
+schema: BlockIOCredentialsSubSchema;
open: boolean;
onClose: () => void;
onCredentialsCreate: (creds: CredentialsMetaInput) => void;
-}> = ({ credentialsFieldName, open, onClose, onCredentialsCreate }) => {
-const credentials = useCredentials(credentialsFieldName);
+siblingInputs?: Record<string, any>;
+}> = ({ schema, open, onClose, onCredentialsCreate, siblingInputs }) => {
+const credentials = useCredentials(schema, siblingInputs);
const formSchema = z.object({
username: z.string().min(1, "Username is required"),
@@ -606,8 +595,7 @@ export const UserPasswordCredentialsModal: FC<{
return null;
}
-const { schema, provider, providerName, createUserPasswordCredentials } =
-credentials;
+const { provider, providerName, createUserPasswordCredentials } = credentials;
async function onSubmit(values: z.infer<typeof formSchema>) {
const newCredentials = await createUserPasswordCredentials({

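The auto-select behaviour introduced in the CredentialsInput diff above (auto-select when exactly one saved credential exists, and render nothing) reduces to a small rule. A sketch with hypothetical, simplified types:

```typescript
// Hypothetical, simplified shape of a saved credential (the real
// CredentialsMetaResponse type carries more fields).
type SavedCredential = { id: string; type: string; title: string };

// Mirrors the singleCredential memo above: with exactly one saved
// credential it is picked automatically; otherwise the selector is shown.
function pickSingleCredential(
  saved: SavedCredential[],
): SavedCredential | null {
  return saved.length === 1 ? saved[0] : null;
}
```

In the component this value feeds both the auto-select `useEffect` and the early `return null`, which is why the input disappears entirely for providers with a single saved credential.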
View File

@@ -68,9 +68,7 @@ type UserPasswordCredentialsCreatable = Omit<
export type CredentialsProviderData = {
provider: CredentialsProviderName;
providerName: string;
-savedApiKeys: CredentialsMetaResponse[];
-savedOAuthCredentials: CredentialsMetaResponse[];
-savedUserPasswordCredentials: CredentialsMetaResponse[];
+savedCredentials: CredentialsMetaResponse[];
oAuthCallback: (
code: string,
state_token: string,
@@ -113,28 +111,12 @@ export default function CredentialsProvider({
setProviders((prev) => {
if (!prev || !prev[provider]) return prev;
const updatedProvider = { ...prev[provider] };
if (credentials.type === "api_key") {
updatedProvider.savedApiKeys = [
...updatedProvider.savedApiKeys,
credentials,
];
} else if (credentials.type === "oauth2") {
updatedProvider.savedOAuthCredentials = [
...updatedProvider.savedOAuthCredentials,
credentials,
];
} else if (credentials.type === "user_password") {
updatedProvider.savedUserPasswordCredentials = [
...updatedProvider.savedUserPasswordCredentials,
credentials,
];
}
return {
...prev,
[provider]: updatedProvider,
[provider]: {
...prev[provider],
savedCredentials: [...prev[provider].savedCredentials, credentials],
},
};
});
},
@@ -203,21 +185,14 @@ export default function CredentialsProvider({
setProviders((prev) => {
if (!prev || !prev[provider]) return prev;
const updatedProvider = { ...prev[provider] };
updatedProvider.savedApiKeys = updatedProvider.savedApiKeys.filter(
(cred) => cred.id !== id,
);
updatedProvider.savedOAuthCredentials =
updatedProvider.savedOAuthCredentials.filter(
(cred) => cred.id !== id,
);
updatedProvider.savedUserPasswordCredentials =
updatedProvider.savedUserPasswordCredentials.filter(
(cred) => cred.id !== id,
);
return {
...prev,
[provider]: updatedProvider,
[provider]: {
...prev[provider],
savedCredentials: prev[provider].savedCredentials.filter(
(cred) => cred.id !== id,
),
},
};
});
return result;
@@ -233,29 +208,12 @@ export default function CredentialsProvider({
const credentialsByProvider = response.reduce(
(acc, cred) => {
if (!acc[cred.provider]) {
acc[cred.provider] = {
oauthCreds: [],
apiKeys: [],
userPasswordCreds: [],
};
}
if (cred.type === "oauth2") {
acc[cred.provider].oauthCreds.push(cred);
} else if (cred.type === "api_key") {
acc[cred.provider].apiKeys.push(cred);
} else if (cred.type === "user_password") {
acc[cred.provider].userPasswordCreds.push(cred);
acc[cred.provider] = [];
}
acc[cred.provider].push(cred);
return acc;
},
{} as Record<
CredentialsProviderName,
{
oauthCreds: CredentialsMetaResponse[];
apiKeys: CredentialsMetaResponse[];
userPasswordCreds: CredentialsMetaResponse[];
}
>,
{} as Record<CredentialsProviderName, CredentialsMetaResponse[]>,
);
setProviders((prev) => ({
@@ -265,40 +223,19 @@ export default function CredentialsProvider({
provider,
{
provider,
providerName:
providerDisplayNames[provider as CredentialsProviderName],
savedApiKeys: credentialsByProvider[provider]?.apiKeys ?? [],
savedOAuthCredentials:
credentialsByProvider[provider]?.oauthCreds ?? [],
savedUserPasswordCredentials:
credentialsByProvider[provider]?.userPasswordCreds ?? [],
providerName: providerDisplayNames[provider],
savedCredentials: credentialsByProvider[provider] ?? [],
oAuthCallback: (code: string, state_token: string) =>
oAuthCallback(
provider as CredentialsProviderName,
code,
state_token,
),
oAuthCallback(provider, code, state_token),
createAPIKeyCredentials: (
credentials: APIKeyCredentialsCreatable,
) =>
createAPIKeyCredentials(
provider as CredentialsProviderName,
credentials,
),
) => createAPIKeyCredentials(provider, credentials),
createUserPasswordCredentials: (
credentials: UserPasswordCredentialsCreatable,
) =>
createUserPasswordCredentials(
provider as CredentialsProviderName,
credentials,
),
) => createUserPasswordCredentials(provider, credentials),
deleteCredentials: (id: string, force: boolean = false) =>
deleteCredentials(
provider as CredentialsProviderName,
id,
force,
),
},
deleteCredentials(provider, id, force),
} satisfies CredentialsProviderData,
]),
),
}));

View File

@@ -1,6 +1,6 @@
"use client";
-import { useLibraryPageContext } from "@/app/library/state-provider";
+import { useLibraryPageContext } from "@/app/(platform)/library/state-provider";
import LibrarySortMenu from "./library-sort-menu";
export default function LibraryActionSubHeader(): React.ReactNode {

View File

@@ -48,7 +48,7 @@ export default function LibraryAgentCard({
/>
)}
<div className="absolute bottom-4 left-4">
-<Avatar className="h-16 w-16 border-2 border-white dark:border-gray-800">
+<Avatar className="h-16 w-16">
<AvatarImage
src={
creator_image_url
@@ -57,7 +57,7 @@ export default function LibraryAgentCard({
}
alt={`${name} creator avatar`}
/>
-<AvatarFallback>{name.charAt(0)}</AvatarFallback>
+<AvatarFallback size={64}>{name.charAt(0)}</AvatarFallback>
</Avatar>
</div>
</Link>

View File

@@ -3,7 +3,7 @@ import { useEffect, useState, useCallback } from "react";
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
-import { useLibraryPageContext } from "@/app/library/state-provider";
+import { useLibraryPageContext } from "@/app/(platform)/library/state-provider";
import { useScrollThreshold } from "@/hooks/useScrollThreshold";
import LibraryAgentCard from "./library-agent-card";

View File

@@ -4,7 +4,7 @@ import debounce from "lodash/debounce";
import { Input } from "@/components/ui/input";
import { Search, X } from "lucide-react";
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
-import { useLibraryPageContext } from "@/app/library/state-provider";
+import { useLibraryPageContext } from "@/app/(platform)/library/state-provider";
export default function LibrarySearchBar(): React.ReactNode {
const inputRef = useRef<HTMLInputElement>(null);

View File

@@ -1,6 +1,6 @@
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
import { LibraryAgentSortEnum } from "@/lib/autogpt-server-api/types";
-import { useLibraryPageContext } from "@/app/library/state-provider";
+import { useLibraryPageContext } from "@/app/(platform)/library/state-provider";
import { ArrowDownNarrowWideIcon } from "lucide-react";
import {
Select,

Some files were not shown because too many files have changed in this diff.