feat(blocks): update exa websets implementation (#10521)
## Summary

This PR fixes and enhances the Exa Websets implementation to resolve
issues with the expand_items parameter and improve the overall block
functionality. The changes address UI limitations with nested response
objects while providing a more comprehensive and user-friendly interface
for creating and managing Exa websets.


[Websets_v14.json](https://github.com/user-attachments/files/21596313/Websets_v14.json)
<img width="1335" height="949" alt="Screenshot 2025-08-05 at 11 45 07"
src="https://github.com/user-attachments/assets/3a9b3da0-3950-4388-96b2-e5dfa9df9b67"
/>

**Why these changes are necessary:**

1. **UI Compatibility**: The current implementation returns deeply
nested objects that cause the UI to crash. This PR flattens the input
parameters and returns simplified response objects to work around these
UI limitations.

2. **Expand Items Issue**: The `expand_items` toggle in the GetWebset
block was causing failures. This parameter has been removed as it's not
essential for the basic functionality.

3. **Missing SDK Integration**: The previous implementation used raw
HTTP requests instead of the official Exa SDK, making it harder to
maintain and more prone to errors.

4. **Limited Functionality**: The original implementation lacked support
for many Exa API features like imports, enrichments, and scope
configuration.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request: -->

1. **Added Pydantic models** (`model.py`):
   - Created comprehensive type definitions for all Exa webset objects
   - Added proper enums for status values and types
   - Structured models to match the Exa API response format

2. **Refactored `websets.py`**:
   - Replaced raw HTTP requests with the official `exa-py` SDK
   - Flattened nested input parameters to avoid UI issues with complex objects (see the sketch after this list)
   - Enhanced `ExaCreateWebsetBlock` with support for:
     - Search configuration with entity types, criteria, and exclude/scope sources
     - Import functionality from existing sources
     - Enrichment configuration with multiple formats
   - Removed the problematic `expand_items` parameter from `ExaGetWebsetBlock`
   - Updated response objects to use a simplified `Webset` model that returns dicts for nested objects

3. **Updated `webhook_blocks.py`**:
   - Temporarily disabled the webhook block (`disabled=True`) pending further testing

4. **Added exa-py dependency**:
   - Added official Exa Python SDK to `pyproject.toml` and `poetry.lock`
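
For illustration, here is a minimal sketch of the flattening approach described above. The class and field names are hypothetical stand-ins for the actual definitions in `model.py` and `websets.py`; the sketch only demonstrates the pattern of keeping nested API data as plain dicts and exposing flat scalar inputs instead of a nested search object.

```python
from enum import Enum
from typing import Any, Optional

from pydantic import BaseModel, Field


class WebsetStatus(str, Enum):
    # Illustrative subset of the status values returned by the Exa API
    IDLE = "idle"
    RUNNING = "running"
    PAUSED = "paused"


class Webset(BaseModel):
    """Simplified webset model: nested API objects stay as plain dicts."""

    id: str
    status: WebsetStatus
    external_id: Optional[str] = None
    # Nested structures are passed through as dicts rather than typed
    # sub-models, so the UI only ever sees one level of structure.
    searches: list[dict[str, Any]] = Field(default_factory=list)
    enrichments: list[dict[str, Any]] = Field(default_factory=list)


class CreateWebsetInput(BaseModel):
    """Flattened block input: scalar fields instead of one nested search object."""

    query: str
    count: int = 10
    entity_type: Optional[str] = None
    criteria: list[str] = Field(default_factory=list)
```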

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Created a new webset using the ExaCreateWebsetBlock with basic
search parameters
- [x] Verified the webset was created successfully in the Exa dashboard
- [x] Listed websets using ExaListWebsetsBlock and confirmed pagination
works
- [x] Retrieved individual webset details using ExaGetWebsetBlock
without expand_items
- [x] Tested advanced features including entity types, criteria, and
exclude sources
- [x] Confirmed the UI no longer crashes when displaying webset
responses
- [x] Verified the Docker environment builds successfully with the new
exa-py dependency

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
  - Added `exa-py` dependency to backend requirements

### Additional Notes

- The webhook functionality has been temporarily disabled pending
further testing and UI improvements
- The flattened parameter approach is a workaround for current UI
limitations with nested objects
- Future improvements could include re-enabling nested objects once the
UI supports them better

AutoGPT: Build, Deploy, and Run AI Agents


AutoGPT is a powerful platform that allows you to create, deploy, and manage continuous AI agents that automate complex workflows.

Hosting Options

  • Download to self-host (Free!)
  • Join the Waitlist for the cloud-hosted beta (Closed Beta - Public release Coming Soon!)

How to Self-Host the AutoGPT Platform

Note

Setting up and hosting the AutoGPT Platform yourself is a technical process. If you'd rather something that just works, we recommend joining the waitlist for the cloud-hosted beta.

System Requirements

Before proceeding with the installation, ensure your system meets the following requirements:

Hardware Requirements

  • CPU: 4+ cores recommended
  • RAM: Minimum 8GB, 16GB recommended
  • Storage: At least 10GB of free space

Software Requirements

  • Operating Systems:
    • Linux (Ubuntu 20.04 or newer recommended)
    • macOS (10.15 or newer)
    • Windows 10/11 with WSL2
  • Required Software (with minimum versions):
    • Docker Engine (20.10.0 or newer)
    • Docker Compose (2.0.0 or newer)
    • Git (2.30 or newer)
    • Node.js (16.x or newer)
    • npm (8.x or newer)
    • VSCode (1.60 or newer) or any modern code editor

Network Requirements

  • Stable internet connection
  • Access to required ports (will be configured in Docker)
  • Ability to make outbound HTTPS connections

Updated Setup Instructions:

We've moved to a fully maintained and regularly updated documentation site.

👉 Follow the official self-hosting guide here

This tutorial assumes you have Docker, VSCode, git and npm installed.


Skip the manual steps and get started in minutes using our automatic setup script.

For macOS/Linux:

curl -fsSL https://setup.agpt.co/install.sh -o install.sh && bash install.sh

For Windows (PowerShell):

powershell -c "iwr https://setup.agpt.co/install.bat -o install.bat; ./install.bat"

This will install dependencies, configure Docker, and launch your local instance — all in one go.

🧱 AutoGPT Frontend

The AutoGPT frontend is where users interact with our powerful AI automation platform. It offers multiple ways to engage with and leverage our AI agents. This is the interface where you'll bring your AI automation ideas to life:

Agent Builder: For those who want to customize, our intuitive, low-code interface allows you to design and configure your own AI agents.

Workflow Management: Build, modify, and optimize your automation workflows with ease. You build your agent by connecting blocks, where each block performs a single action.

Deployment Controls: Manage the lifecycle of your agents, from testing to production.

Ready-to-Use Agents: Don't want to build? Simply select from our library of pre-configured agents and put them to work immediately.

Agent Interaction: Whether you've built your own or are using pre-configured agents, easily run and interact with them through our user-friendly interface.

Monitoring and Analytics: Keep track of your agents' performance and gain insights to continually improve your automation processes.

Read this guide to learn how to build your own custom blocks.
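
As a rough illustration of the "each block performs a single action" idea above, the sketch below shows the general shape of a block: a typed input schema, a typed output schema, and a single run step. This is a conceptual example only; the class and function names are invented here and do not reflect the platform's actual block API, which the linked guide documents.

```python
from pydantic import BaseModel


class TextInput(BaseModel):
    """What a block receives from the user or from upstream blocks."""

    text: str


class CountOutput(BaseModel):
    """What a block passes on to downstream blocks."""

    word_count: int


def run_word_count_block(data: TextInput) -> CountOutput:
    """The single action this block performs: counting words."""
    return CountOutput(word_count=len(data.text.split()))


# Wiring blocks together in the builder amounts to feeding one block's
# output into the next block's input.
print(run_word_count_block(TextInput(text="build your agent from blocks")))
```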

💽 AutoGPT Server

The AutoGPT Server is the powerhouse of our platform. This is where your agents run. Once deployed, agents can be triggered by external sources and can operate continuously. It contains all the essential components that make AutoGPT run smoothly.

Source Code: The core logic that drives our agents and automation processes.

Infrastructure: Robust systems that ensure reliable and scalable performance.

Marketplace: A comprehensive marketplace where you can find and deploy a wide range of pre-built agents.

🐙 Example Agents

Here are two examples of what you can do with AutoGPT:

  1. Generate Viral Videos from Trending Topics

    • This agent reads topics on Reddit.
    • It identifies trending topics.
    • It then automatically creates a short-form video based on the content.
  2. Identify Top Quotes from Videos for Social Media

    • This agent subscribes to your YouTube channel.
    • When you post a new video, it transcribes it.
    • It uses AI to identify the most impactful quotes to generate a summary.
    • Then, it writes a post to automatically publish to your social media.

These examples show just a glimpse of what you can achieve with AutoGPT! You can create customized workflows to build agents for any use case.


License Overview:

🛡️ Polyform Shield License: All code and content within the autogpt_platform folder is licensed under the Polyform Shield License. This new project is our in-development platform for building, deploying, and managing agents.
Read more about this effort

🦉 MIT License: All other portions of the AutoGPT repository (i.e., everything outside the autogpt_platform folder) are licensed under the MIT License. This includes the original stand-alone AutoGPT Agent, along with projects such as Forge, agbenchmark, and the AutoGPT Classic GUI.
We also publish additional work under the MIT License in other repositories, such as GravitasML, which is developed for and used in the AutoGPT Platform. See also our MIT-licensed Code Ability project.


Mission

Our mission is to provide the tools, so that you can focus on what matters:

  • 🏗️ Building - Lay the foundation for something amazing.
  • 🧪 Testing - Fine-tune your agent to perfection.
  • 🤝 Delegating - Let AI work for you, and have your ideas come to life.

Be part of the revolution! AutoGPT is here to stay, at the forefront of AI innovation.

📖 Documentation | 🚀 Contributing


🤖 AutoGPT Classic

Below is information about the classic version of AutoGPT.

🛠️ Build your own Agent - Quickstart

🏗️ Forge

Forge your own agent! Forge is a ready-to-go toolkit to build your own agent application. It handles most of the boilerplate code, letting you channel all your creativity into the things that set your agent apart. All tutorials are located here. Components from forge can also be used individually to speed up development and reduce boilerplate in your agent project.

🚀 Getting Started with Forge This guide will walk you through the process of creating your own agent and using the benchmark and user interface.

📘 Learn More about Forge

🎯 Benchmark

Measure your agent's performance! The agbenchmark can be used with any agent that supports the agent protocol, and the integration with the project's CLI makes it even easier to use with AutoGPT and forge-based agents. The benchmark offers a stringent testing environment. Our framework allows for autonomous, objective performance evaluations, ensuring your agents are primed for real-world action.

📦 agbenchmark on PyPI | 📘 Learn More about the Benchmark

💻 UI

Makes agents easy to use! The frontend gives you a user-friendly interface to control and monitor your agents. It connects to agents through the agent protocol, ensuring compatibility with many agents from both inside and outside of our ecosystem.

The frontend works out-of-the-box with all agents in the repo. Just use the CLI to run your agent of choice!

📘 Learn More about the Frontend

⌨️ CLI

To make it as easy as possible to use all of the tools offered by the repository, a CLI is included at the root of the repo:

$ ./run
Usage: cli.py [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  agent      Commands to create, start and stop agents
  benchmark  Commands to start the benchmark and list tests and categories
  setup      Installs dependencies needed for your system.

Just clone the repo, install dependencies with ./run setup, and you should be good to go!

🤔 Questions? Problems? Suggestions?

Get help - Discord 💬

Join us on Discord

To report a bug or request a feature, create a GitHub Issue. Please ensure someone else hasn't created an issue for the same topic.

🤝 Sister projects

🔄 Agent Protocol

To maintain a uniform standard and ensure seamless compatibility with many current and future applications, AutoGPT employs the agent protocol standard by the AI Engineer Foundation. This standardizes the communication pathways from your agent to the frontend and benchmark.

