Bumps [psutil](https://github.com/giampaolo/psutil) from 6.1.1 to 7.0.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/giampaolo/psutil/blob/master/HISTORY.rst">psutil's
changelog</a>.</em></p>
<blockquote>
<h1>7.0.0</h1>
<p>2025-02-13</p>
<p><strong>Enhancements</strong></p>
<ul>
<li>669_, [Windows]: <code>net_if_addrs()</code>_ also returns the
<code>broadcast</code> address
instead of <code>None</code>.</li>
<li>2480_: Python 2.7 is no longer supported. Latest version supporting
Python
2.7 is psutil 6.1.X. Install it with: <code>pip2 install
psutil==6.1.*</code>.</li>
<li>2490_: removed long deprecated <code>Process.memory_info_ex()</code>
method. It was
deprecated in psutil 4.0.0, released 8 years ago. Substitute is
<code>Process.memory_full_info()</code>.</li>
</ul>
<p><strong>Bug fixes</strong></p>
<ul>
<li>2496_, [Linux]: Avoid segfault (a cPython bug) on
<code>Process.memory_maps()</code>
for processes that use hundreds of GBs of memory.</li>
<li>2502_, [macOS]: <code>virtual_memory()</code>_ now relies on
<code>host_statistics64</code>
instead of <code>host_statistics</code>. This is the same approach used
by <code>vm_stat</code>
CLI tool, and should grant more accurate results.</li>
</ul>
<p><strong>Compatibility notes</strong></p>
<ul>
<li>2480_: Python 2.7 is no longer supported.</li>
<li>2490_: removed long deprecated <code>Process.memory_info_ex()</code>
method.</li>
</ul>
</blockquote>
</details>
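The removal of `Process.memory_info_ex()` is the main breaking change for downstream code; a minimal migration sketch using the documented replacement:

```python
import psutil

proc = psutil.Process()  # current process

# psutil < 7.0.0 (deprecated since 4.0.0):
# mem = proc.memory_info_ex()

# psutil >= 7.0.0: use memory_full_info() instead
mem = proc.memory_full_info()
print(mem.rss, mem.vms)
```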
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ea5b55605f"><code>ea5b556</code></a>
pre-release</li>
<li><a
href="d6e28b7a83"><code>d6e28b7</code></a>
try to fix tests</li>
<li><a
href="104bb3228b"><code>104bb32</code></a>
test cpu_times() for process children</li>
<li><a
href="16c091b380"><code>16c091b</code></a>
test cpu_times() for process children</li>
<li><a
href="eee09da72a"><code>eee09da</code></a>
[OSX] proc.c: Fix goo.gl link in comment for source reference (<a
href="https://redirect.github.com/giampaolo/psutil/issues/2505">#2505</a>)</li>
<li><a
href="17e27801e6"><code>17e2780</code></a>
ci: build aarch64 wheel on GHA aarch64 runner (<a
href="https://redirect.github.com/giampaolo/psutil/issues/2503">#2503</a>)</li>
<li><a
href="1ba8667c89"><code>1ba8667</code></a>
pin black version to 24.X, because new 25.X breaks style</li>
<li><a
href="9c114a5137"><code>9c114a5</code></a>
[OSX] use <code>host_statistics64</code> to get memory metrics (<a
href="https://redirect.github.com/giampaolo/psutil/issues/2502">#2502</a>)</li>
<li><a
href="08d7d43894"><code>08d7d43</code></a>
pin black version to 24.X, because new 25.X breaks style</li>
<li><a
href="a509e5aa18"><code>a509e5a</code></a>
669 windows broadcast addr (<a
href="https://redirect.github.com/giampaolo/psutil/issues/2501">#2501</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/giampaolo/psutil/compare/release-6.1.1...release-7.0.0">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
<!-- Clearly explain the need for these changes: -->
We keep showing local users error messages that are simply not relevant to them.
### Changes 🏗️
Swap the error messaging logic to depend on the behavior of the specific platform the user is on.
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test with auth container down (simulates incorrect setup) and
validate that error message is shown
- [x] Try normal path
- Follow-up to #9657
<img width="280" alt="image"
src="https://github.com/user-attachments/assets/2f3cd683-db63-485f-8914-5654c34f1a4c"
/>
<img width="520" alt="image"
src="https://github.com/user-attachments/assets/de7e7cb9-61d4-4071-aea8-393ff5200c54"
/>
### Changes 🏗️
* Implement the input UI for Agent Input subtypes.
* Refactor node-input-component, extract the data type decision logic, and share it with the runner/library input.
* Add `format` field for short-text, long-text, and mediafile type.
* Unify UI data type enum.
Out of scope:
- Styling for these inputs.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Use all the available agent input subtypes in an agent and run it
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
- Fixes #9189
Currently, the list of agents on the "Publish Agent" dialog is random. I
have sorted them so that the latest edited ones appear first, similar to
the library page.
Co-authored-by: Bently <tomnoon9@gmail.com>
Remove the debug print statements in the logging module.
Every time an app process is started, it prints:
```
Console logging enabled
```
or similar, depending on the logging config.
- Resolves #8782
### Changes 🏗️
- feat(frontend/library): Use WS subscription to get real-time execution
updates
- feat(backend/ws_api): Send `GraphExecutionUpdate` on all new agent I/O
- Include agent I/O in `GraphExecutionUpdate` (by subclassing
`GraphExecution`)
- Add `IO_BLOCK_IDs` to `.blocks.io`
- feat(backend/ws_api): Add `subscribe_graph_executions` method to
WebSocket API
- feat(backend): Withhold `GraphExecution.node_executions` from requests
by non-graph-owners
- Split `GraphExecutionWithNodes` off of `GraphExecution`
- Use `GraphExecution` as much as possible, as it's a much cheaper query
than `GraphExecutionWithNodes`
- refactor(frontend): Make `GraphExecution.node_executions` optional
- fix(frontend): Parse dates in responses of `/executions` and
`/graphs/{graph_id}/executions`
- refactor(frontend/library): Move sorting logic for agent runs list
from `AgentRunsPage` to `AgentRunsSelectorList`
- refactor(backend/ws_api): Clean up message handler implementations
- refactor(backend/tests): Use `.data.execution.get_graph_execution(..)`
directly instead of `AgentServer.test_get_graph_run_results(..)`
Out-of-scope changes:
- refactor(backend): Remove unnecessary query include from
`.data.graph.get_graph_metadata(..)`
Demo:
https://github.com/user-attachments/assets/8ea6225d-7334-49cb-a522-83f153d840da
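As a rough illustration of the new subscription flow: a client subscribes by `graph_exec_id` and then receives execution updates as I/O is produced. The endpoint path and message shape below are assumptions for illustration, not taken from the code:

```python
import asyncio
import json

import websockets  # third-party client library, used here for illustration only


async def watch_execution(graph_exec_id: str, token: str) -> None:
    # Endpoint path and message schema are assumptions, not the actual API contract.
    async with websockets.connect(f"ws://localhost:8001/ws?token={token}") as ws:
        await ws.send(json.dumps({
            "method": "subscribe_graph_executions",
            "data": {"graph_exec_id": graph_exec_id},
        }))
        async for raw in ws:
            update = json.loads(raw)
            # Each update would carry the execution status and any new agent I/O.
            print(update.get("method"), update.get("data", {}).get("status"))


# asyncio.run(watch_execution("some-graph-exec-id", "some-jwt"))
```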
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to `/library/agents/[id]` for an agent with inputs and outputs
- Draft and run a new run
- [x] -> should appear in the list of runs at the top
- [x] -> should be selected as soon as the request finishes
- [x] -> new I/O should appear as it is generated
- [x] -> status should be updated in real-time (both in list and in
adjacent details view)
- Click "Run again"
- [x] -> should appear in the list of runs at the top
- [x] -> should be selected as soon as the request finishes
- [x] -> new I/O should appear as it is generated
- [x] -> status should be updated in real-time (both in list and in
adjacent details view)
- Click "Open in builder" under "Agent actions"; run the agent from the
builder
- [x] -> should work the same as before
- [x] -> node I/O should appear in real-time
- [x] -> node execution statuses should update in real-time
This pull request includes several changes to improve the backend
functionality and configuration of the `autogpt_platform`. The most
important changes involve adding a RabbitMQ service for testing,
enhancing logging configuration, updating the linter script to handle
errors gracefully, and modifying test configurations.
Backend configuration improvements:
*
[`autogpt_platform/backend/docker-compose.test.yaml`](diffhunk://#diff-f6a211ff1c6d96d19adb5641ee287258a6af8d72a99e33dafb4a334094205a43R29-R43):
Added RabbitMQ service configuration for testing, including health
checks and environment variables.
*
[`autogpt_platform/backend/.env.example`](diffhunk://#diff-62020caf1b9a15e0e3b9b3b1b69d5f6464bf7643f62354cbbaabf755d57b6064R191-R192):
Added a section delimiter for optional API keys, used to find the end of the optional keys when auto-generating integrations.
Error handling and logging enhancements:
*
[`autogpt_platform/backend/linter.py`](diffhunk://#diff-0787e3ef718ac9963df64d9ab1d8e7a3b35dc4ab0cb874c65da6c2901e1e4991R3):
Updated the `run` function to handle `subprocess.CalledProcessError`
exceptions, print error output to `stderr`, and avoid raising a stack
trace when it should not (a sketch of this pattern follows this section).
[[1]](diffhunk://#diff-0787e3ef718ac9963df64d9ab1d8e7a3b35dc4ab0cb874c65da6c2901e1e4991R3)
[[2]](diffhunk://#diff-0787e3ef718ac9963df64d9ab1d8e7a3b35dc4ab0cb874c65da6c2901e1e4991L13-R23)
Testing configuration updates:
*
[`autogpt_platform/backend/pyproject.toml`](diffhunk://#diff-26ebebd91da791c6484f07d9d91484a66f52836708f5294b24365603438b880cR111):
Added `asyncio_default_fixture_loop_scope` to pytest configuration for
better control over asyncio fixtures.
*
[`autogpt_platform/backend/run_tests.py`](diffhunk://#diff-f09930577243a4ef5213bf6191a3c500a4b8d3dcfee2d4b452cf7ce66b3c494fL55):
Removed the `postgres-test` service from the test setup script, as all Docker services need to be up for the tests to run.
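A minimal sketch of the `linter.py`-style error handling described above, assuming a simple command-runner helper (illustrative, not the actual file contents):

```python
import subprocess
import sys


def run(*command: str) -> None:
    """Run a command, printing the tool's error output instead of a stack trace on failure."""
    print(f"Running: {' '.join(command)}")
    try:
        subprocess.run(command, check=True, capture_output=True, text=True)
    except subprocess.CalledProcessError as e:
        # Surface the tool's own output on stderr and exit non-zero,
        # rather than letting the exception propagate as a traceback.
        print(e.stdout or "", file=sys.stderr)
        print(e.stderr or "", file=sys.stderr)
        sys.exit(e.returncode)
```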
This PR publicly exposes all the agents listed in the store to the
internet.
### Changes 🏗️
Remove the auth requirement for downloading an agent.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Accessing
`http://localhost:8006/api/store/download/agents/{agentId}` without
authorization key.
When we are cancelling a running graph execution, it's possible that the graph has already terminated.
We need to allow this process to proceed and update the rest of its node executions to a terminated state.
### Changes 🏗️
Instead of erroring out on the graph execution status update, we proceed with updating the node execution statuses.
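A rough sketch of the intended behavior, with hypothetical helper and status names (not the actual backend functions):

```python
# Hypothetical sketch: allow cancellation to finish node-status cleanup even
# when the graph execution has already reached a terminal state.
TERMINAL_STATUSES = {"COMPLETED", "FAILED", "TERMINATED"}


def cancel_execution(graph_exec, node_execs):
    if graph_exec.status not in TERMINAL_STATUSES:
        graph_exec.status = "TERMINATED"
    # Previously this path errored out if the graph was already terminated;
    # now we still sweep the remaining node executions.
    for node_exec in node_execs:
        if node_exec.status not in TERMINAL_STATUSES:
            node_exec.status = "TERMINATED"
    return graph_exec, node_execs
```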
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Stop an already terminated graph
Currently, when an agent execution request fails, the front-end does not display any feedback to the user.
The scope of this change is to provide that feedback.
### Changes 🏗️
* Extracted `useToastOnFail` from `credits` page into a unified helper
method.
* Uses `useToastOnFail` on agent execution requests on library pages.
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/2daa0597-eb93-457d-8887-0f00c4db89ac"
/>
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/1a541c98-fb95-424f-8ffe-972332b3ce01"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run agent with invalid input
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
- Resolves #9693
### Changes 🏗️
- Catch the DB error and log a descriptive error message
- Add `NotFoundError` to `backend.util.exceptions`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] ~~I have tested my changes according to the test plan:~~
- Low-stakes change, high effort to test: we'll see if it works from the
production logs
- Prep work for #8782
- Prep work for #8779
### Changes 🏗️
- refactor(platform): Differentiate graph/node execution events
- fix(platform): Subscribe to execution updates by `graph_exec_id`
instead of `graph_id`+`graph_version`
- refactor(backend): Move all execution related models and functions
from `.data.graph` to `.data.execution`
- refactor(backend): Reorganize & refactor `.data.execution`
- fix(libs): Remove `load_dotenv` in `.auth.config` to fix test config
issues
- dx: Bump version of `black` in pre-commit config to v24.10.0 to match
poetry.lock
- Other minor refactoring in both frontend and backend
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Run an agent in the builder
- [x] -> works normally, node I/O is updated in real time
- Run an agent in the library
- [x] -> works normally
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
This is a follow-up to
https://github.com/Significant-Gravitas/AutoGPT/pull/9511, fixing some
issues and updating onboarding.
### Changes 🏗️
- Update `UserOnboarding` data
- Update schema and add migration
- Change `step` in `UserOnboarding` to `completedSteps` array with
`OnboardingStep` enum
- Remove `isCompleted`: this is now inferred from `completedSteps`
values
- Don't onboard if <2 marketplace agents; that prevents self-host
onboarding
- Add endpoints:
- `is_onboarding_enabled`: checks whether users should be onboarded (not whether they finished onboarding); it now checks that there are at least 2 marketplace agents
- `get_store_agent`: returns `StoreAgentDetails` for given
`store_listing_version_id`
- `get_graph_meta_by_store_listing_version_id`: returns `GraphMeta`
- Add the agent to the Library just before running it (not when it's chosen), and remove the code that was responsible for removing agents that weren't run
- Move onboarding to `OnboardingProvider` (it'll be needed globally for
Phase 2)
- Multiple fixes, renames for clarity
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Don't onboard if less than 2 marketplace agents
- [x] Avoid non-input and credentials agents
- [x] Onboarding works and can be finished
- [x] Onboarding resumes
- [x] Onboarding agent runs correctly
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Some existing nodes use models that no longer exist as values in the `LlmModel` enum.
### Changes 🏗️
- On server startup, update the model to `gpt-4o` for all blocks whose `LlmModel` field value no longer exists in the `LlmModel` enum, directly in the `AgentNode->constantInput` db column (see the sketch below)
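A hedged sketch of the startup fix-up; the enum and helper below are illustrative stand-ins, not the actual backend code:

```python
from enum import Enum


class LlmModel(str, Enum):  # stand-in for the real LlmModel enum
    GPT4O = "gpt-4o"
    GPT4O_MINI = "gpt-4o-mini"


VALID_MODELS = {m.value for m in LlmModel}


def fix_node_input(constant_input: dict) -> dict:
    """Replace an unknown model value in a node's constantInput JSON with gpt-4o."""
    model = constant_input.get("model")
    if model is not None and model not in VALID_MODELS:
        constant_input = {**constant_input, "model": LlmModel.GPT4O.value}
    return constant_input
```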
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Updates wrong models to `gpt-4o` for all affected `AgentNode`s
- [x] Doesn't update correct models
- [x] Doesn't insert model when unnecessary
- [x] Doesn't break other values in jsonb
The current block web request utility has logic to prevent the system from firing requests at blocklisted IPs.
However, the current logic is still prone to a few security issues:
* DNS rebinding attack: there is no guarantee that the resolved IP stays the same between the IP check and the request being fired.
* Open redirect: sensitive request headers are still propagated across web redirects.
### Changes 🏗️
* Use IP pinning when making web requests.
* Strip the `Authorization`, `Proxy-Authorization`, and `Cookie` headers upon web redirects (see the sketch below).
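A simplified sketch of the two mitigations; the names and structure are illustrative, not the actual `backend.util.request` implementation:

```python
import ipaddress
import socket

import requests

SENSITIVE_HEADERS = {"authorization", "proxy-authorization", "cookie"}


def pinned_get(url: str, headers: dict | None = None, max_redirects: int = 5):
    headers = dict(headers or {})
    for _ in range(max_redirects + 1):
        parts = requests.utils.urlparse(url)
        host = parts.hostname
        # Resolve once and pin the connection to that IP, so the address we
        # check is the address we actually contact (no DNS rebinding window).
        ip = socket.getaddrinfo(host, None)[0][4][0]
        if ipaddress.ip_address(ip).is_private:
            raise ValueError(f"Blocked IP for host {host}: {ip}")
        pinned = parts._replace(netloc=ip if parts.port is None else f"{ip}:{parts.port}")
        # Note: HTTPS would additionally need SNI/cert handling via a custom
        # transport adapter; omitted in this sketch.
        resp = requests.get(
            pinned.geturl(),
            headers={**headers, "Host": host},
            allow_redirects=False,
        )
        if resp.status_code not in (301, 302, 303, 307, 308):
            return resp
        url = requests.compat.urljoin(url, resp.headers["Location"])
        # Drop credentials before following the redirect.
        headers = {k: v for k, v in headers.items() if k.lower() not in SENSITIVE_HEADERS}
    raise ValueError("Too many redirects")
```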
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test the web request block, add more tests with different
validation scenarios.
(cherry picked from commit f0df4c9174)
The current block web request utility has logic to prevent the system from firing requests at blocklisted IPs.
However, the current logic is still prone to a few security issues:
* DNS rebinding attack: there is no guarantee that the resolved IP stays the same between the IP check and the request being fired.
* Open redirect: sensitive request headers are still propagated across web redirects.
### Changes 🏗️
* Use IP pinning when making web requests.
* Strip the `Authorization`, `Proxy-Authorization`, and `Cookie` headers upon web redirects.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test the web request block, add more tests with different
validation scenarios.
Currently, an import statement like `from backend.blocks.basic import
AgentInputBlock` will initialize `backend.blocks` and thereby load all
other blocks. This has quite high potential to cause circular import
issues, and it's bad for performance in cases where we don't want to
load all blocks (yet).
The same goes for `backend.integrations.webhooks`.
### Changes 🏗️
- Change `__init__.py` of `backend.blocks` and
`backend.integrations.webhooks` to cached loader functions rather than
init-time code
- Change type of `BlockWebhookConfig.provider` to `ProviderName`
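A minimal sketch of the cached-loader pattern described above, assuming a package layout like `backend.blocks`; the `is_block` marker is a hypothetical stand-in for the real block base-class check:

```python
import functools
import importlib
import pkgutil


@functools.cache
def load_all_blocks(package_name: str = "backend.blocks") -> dict[str, type]:
    """Import every block module on first call only; later calls reuse the cached result."""
    package = importlib.import_module(package_name)
    blocks: dict[str, type] = {}
    for module_info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f"{package_name}.{module_info.name}")
        for obj in vars(module).values():
            # "is_block" is a hypothetical marker standing in for the real
            # registration check used by the platform.
            if isinstance(obj, type) and getattr(obj, "is_block", False):
                blocks[obj.__name__] = obj
    return blocks
```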
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Set up and use an agent with a webhook-triggered block
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Agent Input Block subtypes do not have a proper input UI yet on the
library & run input page.
### Changes 🏗️
Provide a toggle to enable these blocks and set it to False by default.
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
<!-- Clearly explain the need for these changes: -->
We need an admin agent approval UI for handling the submissions to the
marketplace
### Changes 🏗️
- Add routes to the admin routes list
- Fix the db query for submitting new versions of existing agents
- Add models for responses that include version details
- Add the admin pages for agents
- Add the Admin Agent Data Table
- Add all the new endpoints to client.ts
Model changes:
- Convert the submission status to an enum
- Remove is_approved from models, where it had been left in incorrectly
- Add StoreListingWithVersions
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test the admin dashboard for
- [x] Reject
- [x] Accept
- [x] Updating listing
- [x] More version submissions
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Blocks that do not have a defined block cost are effectively free. This
lack of cost control makes it hard to enforce quotas. The scope of this
change is to provide a way to charge executions based on the number of
blocks executed, in real time.
### Changes 🏗️
* Add execution charge logic based on the number of blocks executed, controlled by two configuration values (see the sketch after this list):
* `execution_cost_count_threshold`: the execution is charged at every multiple of this number of executed blocks.
* `execution_cost_per_threshold`: the amount charged at each threshold multiple.
* Apply the charging logic at the graph execution level (as opposed to the node level) so it is done serially and an insufficient-funds error is guaranteed to stop the graph execution.
* Moved cost calculation logic into backend/executor/util.py
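A small sketch of the threshold-based charging rule using the two configuration names above; the function and its default values are illustrative, not the actual backend code:

```python
def execution_charge(
    executed_block_count: int,
    execution_cost_count_threshold: int = 100,  # illustrative default
    execution_cost_per_threshold: int = 1,      # illustrative default
) -> int:
    """Return the amount to charge for this block execution, if any.

    A charge of `execution_cost_per_threshold` is applied each time the running
    count of executed blocks hits a multiple of `execution_cost_count_threshold`;
    otherwise the execution is free.
    """
    if execution_cost_count_threshold <= 0:
        return 0
    if executed_block_count % execution_cost_count_threshold == 0:
        return execution_cost_per_threshold
    return 0
```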
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute a graph with the configured threshold & cost and verify the
balance is deducted accordingly.
- [x] Existing cost calculation is still being done without any issue.
- [x] A low balance stops the whole graph execution.
### Changes 🏗️
Added these types of input blocks:
* TextShort
* TextLong
* Number
* Date
* Time
* FileUpload
* Dropdown
* Toggle
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test in respective block codes.
<!-- Clearly explain the need for these changes: -->
The store listing and submissions were previously just a best guess
without much implementation. This updates the database models and
queries to reflect the discussion around what the process should look
like. It also adds and updates the relevant routers for this change.
### Changes 🏗️
Store Listing
- Change isApproved to hasApprovedVersion
- Move slug into store listing
- Mark an active version in store listing
Store Version
- Move submissions into version
- Make name optional
- Add state transition timestamps for submitted and approved/rejected
- Add a changes field
- Add internal comments and clarify the review comments field
SubmissionStatus
- Fix DAFT typo to DRAFT
StoreListingSubmission
- Drop table
Graph
- Use a more modern format for the Prisma params -- no other changes
Added migrations for all the model movements
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Use the store codepaths from the release testplan doc as the test
plan
- [x] Check the db is used as appropriate following the rules
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Defaults need to be handled as special cases whenever there are no
`hardcodedValues` in the node data. This causes multiple issues, such as
`useCredentials` not taking the default model into account and requiring
the user to manually switch the model back and forth.
### Changes 🏗️
- Set default values from `inputSchema` into `hardcodedValues` when a new node is placed (see the sketch below).
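The actual change lives in the frontend (TypeScript); the Python sketch below only illustrates the idea of seeding `hardcodedValues` from the defaults declared in a JSON-schema-style `inputSchema`:

```python
def seed_hardcoded_values(input_schema: dict) -> dict:
    """Collect declared defaults so a newly placed node starts with them set."""
    hardcoded_values = {}
    for name, prop in input_schema.get("properties", {}).items():
        if "default" in prop:
            hardcoded_values[name] = prop["default"]
    return hardcoded_values


# Example: a block whose model field defaults to "gpt-4o" no longer needs
# special-casing, because the default lands in hardcodedValues immediately.
schema = {"properties": {"model": {"type": "string", "default": "gpt-4o"}}}
assert seed_hardcoded_values(schema) == {"model": "gpt-4o"}
```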
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Newly placed node has defaults set as `hardcodedValues`
- [x] AI Blocks: Model is recognised, node shows price and credentials
work correctly without the need to switch
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
A graph that processes tens of thousands of nodes will cripple the
system since the API tries to load all of them and dump them into the
browser. The scope of this change is to avoid such an issue by only
returning the last 1000 node executions.
### Changes 🏗️
* Return only 1000 node executions from the `AgentNodeExecutions` reference.
* Unify the include clause (and its format) for fetching `AgentNodeExecutions` in one place.
* Fix & optimize `cancel_execution` logic to always set both the graph &
node execution status in batch.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute a graph in a loop that executes 10000 nodes; it should
only display the last 1000 node executions when refreshed. Cancelling
the graph should also not cripple the server.
Having the starting nodes of the execution marked as incomplete misled
the users.
### Changes 🏗️
Mark the starting nodes of an execution as QUEUED instead of INCOMPLETE.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Executed the graph; the initial starting nodes are no longer marked as INCOMPLETE.
The current execution update mechanism is unreliable: once you lose the WebSocket connection, you receive no further updates.
### Changes 🏗️
Fix the WebSocket re-connection logic.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app and execute an agent, then restart the API server, and
re-execute the app without refreshing the page.
### Changes 🏗️
Avoid connecting the same host and falling-back to defined api_host.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Define a custom DBMANAGER_HOST; the RestService should access the
`db_manager` service using localhost.
The DB Manager uses DB connections, and multiple instances of it hog the
very limited database connection quota. We need this service to be a
unified place for controlling the limited DB connections.
### Changes 🏗️
Consolidate all database manager usage in a single place, currently
running on the REST service.
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
- Resolves #9631
### Changes 🏗️
- Truncate library agent card title (2 lines) and description (3 lines)
- Make "See runs" and "Open in builder" stick to bottom of card
regardless of other content
- Reduce number of grid columns (4 -> 3) in `lg` layout on `/library` to
give items more horizontal space

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Visually test the changes made on different screen sizes
- Related to #8784
### Changes 🏗️
- feat(frontend/library): Improve agent output styling & fix content
overflow issue
- fix(frontend/library): Fix overlap between content and inset button of
expandable input fields (#9650)
- fix(backend): Unbreak loading graph executions with missing inputs

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Run an agent with at least one input *not* filled out; view this run
in the Library
- [x] -> page should load normally
- [x] -> agent inputs should load and show normally
- Run an agent that generates long output; view this run in the Library
- [x] -> output should not overflow its container or stretch the page
layout
- [x] -> visually check that the output section looks slick
### Issue
The SendWebRequestBlock currently fails to properly route HTTP error
responses (4xx, 5xx) to their designated output pins (`client_error` and
`server_error`). Instead, these errors are being sent to the default
"Error" pin, breaking expected workflows that depend on proper error
handling.
### Root Cause
The underlying issue is that our custom `requests` module from
`backend.util.request` appears to automatically raise exceptions for
error status codes (similar to how `raise_for_status()` works in the
standard requests library). When these exceptions are thrown, the
block's conditional logic for handling different status codes is
bypassed entirely.
### Changes
This PR adds proper exception handling to catch HTTP errors raised by
the requests module and routes them to the appropriate output pins:
- Added a try-except block to capture `requests.exceptions.HTTPError`
- Extract status code and response data from the caught exception
- Yield to the proper pin based on the status code (4xx → client_error,
5xx → server_error)
- Maintain consistent behavior with the original design intent
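A hedged sketch of the error-routing pattern described above; the `run` signature and yield convention mirror typical block code but are illustrative, not the exact SendWebRequestBlock implementation:

```python
import requests


def run(url: str, headers: dict | None = None):
    """Yield (pin_name, payload) pairs, routing HTTP errors to dedicated pins."""
    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        yield "response", response.json()
    except requests.exceptions.HTTPError as e:
        status = e.response.status_code
        try:
            body = e.response.json()
        except ValueError:
            body = e.response.text
        # Route by status class instead of falling through to the generic error pin.
        if 400 <= status < 500:
            yield "client_error", {"status": status, "body": body}
        elif 500 <= status < 600:
            yield "server_error", {"status": status, "body": body}
        else:
            yield "error", str(e)
```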
### Additional Context
This change maintains backward compatibility while ensuring the block
behaves according to its documented functionality. Users can now
properly handle 4xx and 5xx errors in their workflows as originally
intended.
<!-- Clearly explain the need for these changes: -->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test the block with new changes and old and ensure expected
behavior
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
<!-- Clearly explain the need for these changes: -->
We accidentally sent several emails within 10 minutes, and we're going to
get blocked for spam if we keep it up.
### Changes 🏗️
- Move the notification timer from 1 minute to 60 minutes
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test by batching a series of notifications over an hour
- Resolves #9622
### Changes 🏗️
- Add pop-out button + modal to input fields in Agent Run Draft view on
`/library/agents/[id]`
- Fix `icon`-variant button styling


### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to an agent's page -> click "+ New run"
- [x] -> pop-out button should show on all input fields
- Enter a value in one of the inputs; click the pop-out button on that
input
- [x] -> input modal with large text field should open
- [x] -> the value you just entered should be present in the modal's
text field
- Edit the value & click "Save"
- [x] -> the modal should close
- [x] -> the value in the corresponding input field should be updated