Compare commits


6 Commits

* Aarushi `3860a9b6e4` remove work dir (2024-09-22 12:22:46 +01:00)
* Aarushi `1414b83cf8` wip (2024-09-22 11:57:22 +01:00)
* Zamil Majdy `612e7cfed5` feat(rnd): Route to /login on authenticated requests (#8111) (2024-09-21 23:50:55 +07:00)
* Zamil Majdy `52ee846744` fix(platform): Fix logging incomplete information & LLM missing error (#8128) (2024-09-21 15:18:36 +00:00)
* Zamil Majdy `62a3e1c127` fix(rnd): Fix broken list input pin execution ordering & unlinked dynamic pins on save (#8108) (2024-09-21 22:11:35 +07:00)
* Swifty `ef7cfbb860` refactor: AutoGPT Platform Stealth Launch Repo Re-Org (#8113)
Restructuring the repo to make clear the difference between classic AutoGPT and the AutoGPT Platform:
* Move the "classic" projects `autogpt`, `forge`, `frontend`, and `benchmark` into a `classic` folder
  * Also rename `autogpt` to `original_autogpt` for absolute clarity
* Rename `rnd/` to `autogpt_platform/`
  * `rnd/autogpt_builder` -> `autogpt_platform/frontend`
  * `rnd/autogpt_server` -> `autogpt_platform/backend`
* Adjust any paths accordingly
2024-09-20 16:50:43 +02:00
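The renames listed in the re-org commit above can be captured as a small lookup table. The sketch below is illustrative only (the helper and its name are not part of the commit); the mapping itself is taken from the commit message, and longest-prefix matching ensures nested renames like `rnd/autogpt_server` win over the bare `rnd` rename.

```python
# Mapping taken from the re-org commit message; the helper is hypothetical.
RENAMES = {
    "rnd/autogpt_builder": "autogpt_platform/frontend",
    "rnd/autogpt_server": "autogpt_platform/backend",
    "rnd": "autogpt_platform",
    "autogpt": "classic/original_autogpt",
    "forge": "classic/forge",
    "frontend": "classic/frontend",
    "benchmark": "classic/benchmark",
}

def translate(path: str) -> str:
    """Map an old repo path to its post-re-org location (longest prefix wins)."""
    for old in sorted(RENAMES, key=len, reverse=True):
        if path == old or path.startswith(old + "/"):
            return RENAMES[old] + path[len(old):]
    return path  # paths outside the renamed trees are unchanged
```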
273 changed files with 383 additions and 332 deletions

26
.github/labeler.yml vendored

@@ -1,27 +1,27 @@
AutoGPT Agent:
Classic AutoGPT Agent:
- changed-files:
- any-glob-to-any-file: classic/original_autogpt/**
Classic Benchmark:
- changed-files:
- any-glob-to-any-file: classic/benchmark/**
Classic Frontend:
- changed-files:
- any-glob-to-any-file: classic/frontend/**
Forge:
- changed-files:
- any-glob-to-any-file: classic/forge/**
Benchmark:
- changed-files:
- any-glob-to-any-file: classic/benchmark/**
Frontend:
- changed-files:
- any-glob-to-any-file: classic/frontend/**
documentation:
- changed-files:
- any-glob-to-any-file: docs/**
Builder:
platform/frontend:
- changed-files:
- any-glob-to-any-file: autogpt_platform/autogpt_builder/**
- any-glob-to-any-file: autogpt_platform/frontend/**
Server:
platform/backend:
- changed-files:
- any-glob-to-any-file: autogpt_platform/autogpt_server/**
- any-glob-to-any-file: autogpt_platform/backend/**
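The labeler config above attaches a label when any of its globs matches any changed file. A rough sketch of those semantics, using a subset of the new labels: `fnmatch` only approximates the labeler's minimatch globs (its `*` also crosses `/`), so this is an illustration of the matching rule, not a reimplementation.

```python
from fnmatch import fnmatch

# Subset of the labeler config above; matching semantics are approximated.
LABEL_GLOBS = {
    "platform/frontend": ["autogpt_platform/frontend/**"],
    "platform/backend": ["autogpt_platform/backend/**"],
    "documentation": ["docs/**"],
}

def labels_for(changed_files):
    """Return every label whose glob list matches at least one changed file."""
    return {
        label
        for label, globs in LABEL_GLOBS.items()
        if any(fnmatch(f, g) for f in changed_files for g in globs)
    }
```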


@@ -8,11 +8,11 @@ on:
- 'ci-test*' # This will match any branch that starts with "ci-test"
paths:
- 'classic/frontend/**'
- '.github/workflows/frontend-ci.yml'
- '.github/workflows/classic-frontend-ci.yml'
pull_request:
paths:
- 'classic/frontend/**'
- '.github/workflows/frontend-ci.yml'
- '.github/workflows/classic-frontend-ci.yml'
jobs:
build:
@@ -21,7 +21,7 @@ jobs:
pull-requests: write
runs-on: ubuntu-latest
env:
BUILD_BRANCH: ${{ format('frontend-build/{0}', github.ref_name) }}
BUILD_BRANCH: ${{ format('classic-frontend-build/{0}', github.ref_name) }}
steps:
- name: Checkout Repo


@@ -4,7 +4,7 @@ on:
push:
branches: [ master, development, ci-test* ]
paths:
- '.github/workflows/lint-ci.yml'
- '.github/workflows/classic-python-checks-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
@@ -13,7 +13,7 @@ on:
pull_request:
branches: [ master, development, release-* ]
paths:
- '.github/workflows/lint-ci.yml'
- '.github/workflows/classic-python-checks-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
@@ -21,7 +21,7 @@ on:
- '!classic/forge/tests/vcr_cassettes'
concurrency:
group: ${{ format('lint-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
group: ${{ format('classic-python-checks-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') }}
defaults:


@@ -0,0 +1,40 @@
name: AutoGPT Server Docker Build & Push
on:
push:
branches: [ update-docker-ci ]
paths:
- '**'
defaults:
run:
shell: bash
env:
PROJECT_ID: agpt-dev
IMAGE_NAME: agpt-server-dev
REGION: us-central1
jobs:
build-and-push:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud@v0.2.1
with:
project_id: ${{ env.PROJECT_ID }}
service_account_key: ${{ secrets.GCP_SA_KEY }}
export_default_credentials: true
- name: Configure Docker
run: gcloud auth configure-docker ${{ env.REGION }}-docker.pkg.dev
- name: Build Docker image
run: docker build -t ${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.IMAGE_NAME }}:${{ github.sha }} -f autogpt_platform/backend/Dockerfile .
- name: Push Docker image
run: docker push ${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
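The build and push steps above must produce the exact same tag, assembled from the workflow's env values and the commit SHA. A hypothetical helper showing the Artifact Registry reference format used in both commands:

```python
def image_ref(region: str, project_id: str, image_name: str, sha: str) -> str:
    # Mirrors the tag built inline in the workflow above:
    # <region>-docker.pkg.dev/<project>/<image>:<git sha>
    return f"{region}-docker.pkg.dev/{project_id}/{image_name}:{sha}"
```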


@@ -1,14 +1,14 @@
name: Platform - AutoGPT Builder Infra
name: AutoGPT Platform - Infra
on:
push:
branches: [ master ]
paths:
- '.github/workflows/autogpt-infra-ci.yml'
- '.github/workflows/platform-autogpt-infra-ci.yml'
- 'autogpt_platform/infra/**'
pull_request:
paths:
- '.github/workflows/autogpt-infra-ci.yml'
- '.github/workflows/platform-autogpt-infra-ci.yml'
- 'autogpt_platform/infra/**'
defaults:
@@ -53,4 +53,4 @@ jobs:
- name: Run chart-testing (lint)
if: steps.list-changed.outputs.changed == 'true'
run: ct lint --target-branch ${{ github.event.repository.default_branch }}
run: ct lint --target-branch ${{ github.event.repository.default_branch }}


@@ -1,25 +1,25 @@
name: Platform - AutoGPT Server CI
name: AutoGPT Platform - Backend CI
on:
push:
branches: [master, development, ci-test*]
paths:
- ".github/workflows/autogpt-server-ci.yml"
- "autogpt_platform/autogpt_server/**"
- ".github/workflows/platform-backend-ci.yml"
- "autogpt_platform/backend/**"
pull_request:
branches: [master, development, release-*]
paths:
- ".github/workflows/autogpt-server-ci.yml"
- "autogpt_platform/autogpt_server/**"
- ".github/workflows/platform-backend-ci.yml"
- "autogpt_platform/backend/**"
concurrency:
group: ${{ format('autogpt-server-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
group: ${{ format('backend-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ startsWith(github.event_name, 'pull_request') }}
defaults:
run:
shell: bash
working-directory: autogpt_platform/autogpt_server
working-directory: autogpt_platform/backend
jobs:
test:
@@ -90,7 +90,7 @@ jobs:
uses: actions/cache@v4
with:
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/autogpt_server/poetry.lock') }}
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
@@ -152,4 +152,4 @@ jobs:
# uses: codecov/codecov-action@v4
# with:
# token: ${{ secrets.CODECOV_TOKEN }}
# flags: autogpt-server,${{ runner.os }}
# flags: backend,${{ runner.os }}


@@ -1,20 +1,20 @@
name: Platform - AutoGPT Builder CI
name: AutoGPT Platform - Frontend CI
on:
push:
branches: [ master ]
paths:
- '.github/workflows/autogpt-builder-ci.yml'
- 'autogpt_platform/autogpt_builder/**'
- '.github/workflows/platform-frontend-ci.yml'
- 'autogpt_platform/frontend/**'
pull_request:
paths:
- '.github/workflows/autogpt-builder-ci.yml'
- 'autogpt_platform/autogpt_builder/**'
- '.github/workflows/platform-frontend-ci.yml'
- 'autogpt_platform/frontend/**'
defaults:
run:
shell: bash
working-directory: autogpt_platform/autogpt_builder
working-directory: autogpt_platform/frontend
jobs:

2
.gitignore vendored

@@ -170,4 +170,4 @@ pri*
ig*
.github_access_token
LICENSE.rtf
autogpt_platform/autogpt_server/settings.py
autogpt_platform/backend/settings.py


@@ -19,7 +19,7 @@ To run the AutoGPT Platform, follow these steps:
```
git submodule update --init --recursive
```
4. Navigate back to rnd (cd ..)
4. Navigate back to autogpt_platform (cd ..)
5. Run the following command:
```
cp supabase/docker/.env.example .env
@@ -32,7 +32,7 @@ To run the AutoGPT Platform, follow these steps:
```
This command will start all the necessary backend services defined in the `docker-compose.combined.yml` file in detached mode.
7. Navigate to autogpt_platform/autogpt_builder.
7. Navigate to autogpt_platform/frontend.
8. Run the following command:
```
cp .env.example .env.local


@@ -25,13 +25,13 @@ RUN pip3 install poetry
# Copy and install dependencies
COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
COPY autogpt_platform/autogpt_server/poetry.lock autogpt_platform/autogpt_server/pyproject.toml /app/autogpt_platform/autogpt_server/
WORKDIR /app/autogpt_platform/autogpt_server
COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml /app/autogpt_platform/backend/
WORKDIR /app/autogpt_platform/backend
RUN poetry config virtualenvs.create false \
&& poetry install --no-interaction --no-ansi
# Generate Prisma client
COPY autogpt_platform/autogpt_server/schema.prisma ./
COPY autogpt_platform/backend/schema.prisma ./
RUN poetry config virtualenvs.create false \
&& poetry run prisma generate
@@ -60,20 +60,19 @@ COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-pyth
ENV PATH="/app/.venv/bin:$PATH"
RUN mkdir -p /app/autogpt_platform/autogpt_libs
RUN mkdir -p /app/autogpt_platform/autogpt_server
RUN mkdir -p /app/autogpt_platform/backend
COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
COPY autogpt_platform/autogpt_server/poetry.lock autogpt_platform/autogpt_server/pyproject.toml /app/autogpt_platform/autogpt_server/
COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml /app/autogpt_platform/backend/
WORKDIR /app/autogpt_platform/autogpt_server
WORKDIR /app/autogpt_platform/backend
FROM server_dependencies AS server
COPY autogpt_platform/autogpt_server /app/autogpt_platform/autogpt_server
COPY autogpt_platform/backend /app/autogpt_platform/backend
ENV DATABASE_URL=""
ENV PORT=8000
CMD ["poetry", "run", "rest"]


@@ -48,7 +48,7 @@ We use the Poetry to manage the dependencies. To set up the project, follow thes
> ```
>
> Then run the generation again. The path *should* look something like this:
> `<some path>/pypoetry/virtualenvs/autogpt-server-TQIRSwR6-py3.12/bin/prisma`
> `<some path>/pypoetry/virtualenvs/backend-TQIRSwR6-py3.12/bin/prisma`
6. Run the postgres database from the /rnd folder
@@ -57,10 +57,10 @@ We use the Poetry to manage the dependencies. To set up the project, follow thes
docker compose up -d
```
7. Run the migrations (from the autogpt_server folder)
7. Run the migrations (from the backend folder)
```sh
cd ../autogpt_server
cd ../backend
prisma migrate dev --schema postgres/schema.prisma
```


@@ -53,7 +53,7 @@ We use the Poetry to manage the dependencies. To set up the project, follow thes
> ```
>
> Then run the generation again. The path *should* look something like this:
> `<some path>/pypoetry/virtualenvs/autogpt-server-TQIRSwR6-py3.12/bin/prisma`
> `<some path>/pypoetry/virtualenvs/backend-TQIRSwR6-py3.12/bin/prisma`
6. Migrate the database. Be careful because this deletes current data in the database.
@@ -193,7 +193,7 @@ Rest Server Daemon: 8004
## Adding a New Agent Block
To add a new agent block, you need to create a new class that inherits from `Block` and provides the following information:
* All the block code should live in the `blocks` (`autogpt_server.blocks`) module.
* All the block code should live in the `blocks` (`backend.blocks`) module.
* `input_schema`: the schema of the input data, represented by a Pydantic object.
* `output_schema`: the schema of the output data, represented by a Pydantic object.
* `run` method: the main logic of the block.


@@ -1,7 +1,7 @@
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from autogpt_server.util.process import AppProcess
from backend.util.process import AppProcess
def run_processes(*processes: "AppProcess", **kwargs):
@@ -24,8 +24,8 @@ def main(**kwargs):
Run all the processes required for the AutoGPT-server (REST and WebSocket APIs).
"""
from autogpt_server.executor import ExecutionManager, ExecutionScheduler
from autogpt_server.server import AgentServer, WebsocketServer
from backend.executor import ExecutionManager, ExecutionScheduler
from backend.server import AgentServer, WebsocketServer
run_processes(
ExecutionManager(),


@@ -4,9 +4,9 @@ import os
import re
from pathlib import Path
from autogpt_server.data.block import Block
from backend.data.block import Block
# Dynamically load all modules under autogpt_server.blocks
# Dynamically load all modules under backend.blocks
AVAILABLE_MODULES = []
current_dir = os.path.dirname(__file__)
modules = glob.glob(os.path.join(current_dir, "*.py"))


@@ -4,15 +4,15 @@ from typing import Any, List
from jinja2 import BaseLoader, Environment
from pydantic import Field
from autogpt_server.data.block import (
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockUIType,
)
from autogpt_server.data.model import SchemaField
from autogpt_server.util.mock import MockObject
from backend.data.model import SchemaField
from backend.util.mock import MockObject
jinja = Environment(loader=BaseLoader())
@@ -85,7 +85,6 @@ class PrintToConsoleBlock(Block):
class FindInDictionaryBlock(Block):
class Input(BlockSchema):
input: Any = Field(description="Dictionary to lookup from")
key: str | int = Field(description="Key to lookup in the dictionary")


@@ -2,7 +2,7 @@ import os
import re
from typing import Type
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
class BlockInstallationBlock(Block):
@@ -48,7 +48,7 @@ class BlockInstallationBlock(Block):
block_dir = os.path.dirname(__file__)
file_path = f"{block_dir}/{file_name}.py"
module_name = f"autogpt_server.blocks.{file_name}"
module_name = f"backend.blocks.{file_name}"
with open(file_path, "w") as f:
f.write(code)
@@ -57,7 +57,7 @@ class BlockInstallationBlock(Block):
block_class: Type[Block] = getattr(module, class_name)
block = block_class()
from autogpt_server.util.test import execute_block_test
from backend.util.test import execute_block_test
execute_block_test(block)
yield "success", "Block installed successfully."


@@ -1,8 +1,8 @@
from enum import Enum
from typing import Any
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import SchemaField
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
class ComparisonOperator(Enum):


@@ -1,5 +1,5 @@
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import ContributorDetails
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import ContributorDetails
class ReadCsvBlock(Block):


@@ -4,8 +4,8 @@ import aiohttp
import discord
from pydantic import Field
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import BlockSecret, SecretField
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import BlockSecret, SecretField
class ReadDiscordMessagesBlock(Block):


@@ -4,8 +4,8 @@ from email.mime.text import MIMEText
from pydantic import BaseModel, ConfigDict, Field
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import BlockSecret, SchemaField, SecretField
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import BlockSecret, SchemaField, SecretField
class EmailCredentials(BaseModel):


@@ -3,7 +3,7 @@ from enum import Enum
import requests
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
class HttpMethod(Enum):


@@ -1,7 +1,7 @@
from typing import Any, List, Tuple
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import SchemaField
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
class ListIteratorBlock(Block):


@@ -8,9 +8,9 @@ import ollama
import openai
from groq import Groq
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import BlockSecret, SchemaField, SecretField
from autogpt_server.util import json
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import BlockSecret, SchemaField, SecretField
from backend.util import json
logger = logging.getLogger(__name__)
@@ -320,7 +320,7 @@ class AITextGeneratorBlock(Block):
if output_name == "response":
return output_data["response"]
else:
raise output_data
raise RuntimeError(output_data)
raise ValueError("Failed to get a response from the LLM.")
def run(self, input_data: Input) -> BlockOutput:
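The `raise output_data` → `raise RuntimeError(output_data)` change above matters because Python only accepts `BaseException` subclasses (or instances) in a `raise` statement; raising anything else, such as a plain string response, fails with a `TypeError` that masks the real error. A minimal sketch of the difference:

```python
def try_raise(value):
    """Return the type name of whatever exception actually results from `raise value`."""
    try:
        raise value
    except BaseException as e:  # noqa: BLE001 - intentionally broad for the demo
        return type(e).__name__
```

Raising a non-exception value yields `TypeError` ("exceptions must derive from BaseException"); wrapping it in `RuntimeError`, as the fix does, preserves the original data in the raised error instead.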


@@ -2,8 +2,8 @@ import operator
from enum import Enum
from typing import Any
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import SchemaField
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
class Operation(Enum):


@@ -2,8 +2,8 @@ from typing import List
import requests
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import BlockSecret, SchemaField, SecretField
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import BlockSecret, SchemaField, SecretField
class PublishToMediumBlock(Block):


@@ -4,9 +4,9 @@ from typing import Iterator
import praw
from pydantic import BaseModel, ConfigDict, Field
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import BlockSecret, SecretField
from autogpt_server.util.mock import MockObject
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import BlockSecret, SecretField
from backend.util.mock import MockObject
class RedditCredentials(BaseModel):


@@ -5,8 +5,8 @@ from typing import Any
import feedparser
import pydantic
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import SchemaField
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
class RSSEntry(pydantic.BaseModel):


@@ -3,8 +3,8 @@ from collections import defaultdict
from enum import Enum
from typing import Any, Dict, List, Optional, Union
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import SchemaField
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
class SamplingMethod(str, Enum):


@@ -3,8 +3,8 @@ from urllib.parse import quote
import requests
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import BlockSecret, SecretField
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import BlockSecret, SecretField
class GetRequest:


@@ -3,8 +3,8 @@ from typing import Literal
import requests
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import BlockSecret, SchemaField, SecretField
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import BlockSecret, SchemaField, SecretField
class CreateTalkingAvatarVideoBlock(Block):


@@ -4,8 +4,8 @@ from typing import Any
from jinja2 import BaseLoader, Environment
from pydantic import Field
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.util import json
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.util import json
jinja = Environment(loader=BaseLoader())


@@ -2,7 +2,7 @@ import time
from datetime import datetime, timedelta
from typing import Any, Union
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
class GetCurrentTimeBlock(Block):
@@ -23,7 +23,7 @@ class GetCurrentTimeBlock(Block):
{"trigger": "Hello", "format": "{time}"},
],
test_output=[
("time", time.strftime("%H:%M:%S")),
("time", lambda _: time.strftime("%H:%M:%S")),
],
)
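The `test_output` change above wraps `time.strftime(...)` in a lambda so the expected value is computed when the assertion runs, not when the test data is declared; an eagerly evaluated timestamp can go stale before the block executes. A sketch of how a harness might honor that convention (the exact semantics of the project's test runner are assumed, not quoted):

```python
def matches(expected, actual):
    # If the expectation is callable, evaluate it at assertion time,
    # passing the actual output; otherwise compare the literal directly.
    # This defers time-dependent values like time.strftime("%H:%M:%S").
    if callable(expected):
        expected = expected(actual)
    return expected == actual
```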
@@ -130,7 +130,6 @@ class CountdownTimerBlock(Block):
)
def run(self, input_data: Input) -> BlockOutput:
seconds = int(input_data.seconds)
minutes = int(input_data.minutes)
hours = int(input_data.hours)


@@ -3,8 +3,8 @@ from urllib.parse import parse_qs, urlparse
from youtube_transcript_api import YouTubeTranscriptApi
from youtube_transcript_api.formatters import TextFormatter
from autogpt_server.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from autogpt_server.data.model import SchemaField
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
class TranscribeYouTubeVideoBlock(Block):


@@ -8,8 +8,8 @@ import pathlib
import click
import psutil
from autogpt_server import app
from autogpt_server.util.process import AppProcess
from backend import app
from backend.util.process import AppProcess
def get_pid_path() -> pathlib.Path:
@@ -109,7 +109,7 @@ def reddit(server_address: str):
"""
import requests
from autogpt_server.usecases.reddit_marketing import create_test_graph
from backend.usecases.reddit_marketing import create_test_graph
test_graph = create_test_graph()
url = f"{server_address}/graphs"
@@ -130,7 +130,7 @@ def populate_db(server_address: str):
"""
import requests
from autogpt_server.usecases.sample import create_test_graph
from backend.usecases.sample import create_test_graph
test_graph = create_test_graph()
url = f"{server_address}/graphs"
@@ -166,7 +166,7 @@ def graph(server_address: str):
"""
import requests
from autogpt_server.usecases.sample import create_test_graph
from backend.usecases.sample import create_test_graph
url = f"{server_address}/graphs"
headers = {"Content-Type": "application/json"}
@@ -219,7 +219,7 @@ def websocket(server_address: str, graph_id: str):
import websockets
from autogpt_server.server.ws_api import ExecutionSubscription, Methods, WsMessage
from backend.server.ws_api import ExecutionSubscription, Methods, WsMessage
async def send_message(server_address: str):
uri = f"ws://{server_address}"


@@ -7,8 +7,8 @@ import jsonschema
from prisma.models import AgentBlock
from pydantic import BaseModel
from autogpt_server.data.model import ContributorDetails
from autogpt_server.util import json
from backend.data.model import ContributorDetails
from backend.util import json
BlockData = tuple[str, Any] # Input & Output data should be a tuple of (name, data).
BlockInput = dict[str, Any] # Input: 1 input pin consumes 1 data.
@@ -225,7 +225,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
def get_blocks() -> dict[str, Block]:
from autogpt_server.blocks import AVAILABLE_BLOCKS # noqa: E402
from backend.blocks import AVAILABLE_BLOCKS # noqa: E402
return AVAILABLE_BLOCKS


@@ -9,7 +9,7 @@ from prisma.enums import UserBlockCreditType
from prisma.models import UserBlockCredit
from pydantic import BaseModel
from autogpt_server.blocks.llm import (
from backend.blocks.llm import (
MODEL_METADATA,
AIConversationBlock,
AIStructuredResponseGeneratorBlock,
@@ -17,9 +17,9 @@ from autogpt_server.blocks.llm import (
AITextSummarizerBlock,
LlmModel,
)
from autogpt_server.blocks.talking_head import CreateTalkingAvatarVideoBlock
from autogpt_server.data.block import Block, BlockInput
from autogpt_server.util.settings import Config
from backend.blocks.talking_head import CreateTalkingAvatarVideoBlock
from backend.data.block import Block, BlockInput
from backend.util.settings import Config
class BlockCostType(str, Enum):


@@ -16,8 +16,8 @@ from prisma.types import (
)
from pydantic import BaseModel
from autogpt_server.data.block import BlockData, BlockInput, CompletedBlockOutput
from autogpt_server.util import json, mock
from backend.data.block import BlockData, BlockInput, CompletedBlockOutput
from backend.util import json, mock
class GraphExecution(BaseModel):
@@ -396,19 +396,19 @@ def merge_execution_input(data: BlockInput) -> BlockInput:
# Merge all input with <input_name>_$_<index> into a single list.
items = list(data.items())
list_input: list[Any] = []
for key, value in items:
if LIST_SPLIT not in key:
continue
name, index = key.split(LIST_SPLIT)
if not index.isdigit():
list_input.append((name, value, 0))
else:
list_input.append((name, value, int(index)))
raise ValueError(f"Invalid key: {key}, #{index} index must be an integer.")
for name, value, _ in sorted(list_input, key=lambda x: x[2]):
data[name] = data.get(name, [])
data[name].append(value)
if int(index) >= len(data[name]):
# Pad list with empty string on missing indices.
data[name].extend([""] * (int(index) - len(data[name]) + 1))
data[name][int(index)] = value
# Merge all input with <input_name>_#_<index> into a single dict.
for key, value in items:
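The list-merge fix in this hunk (from PR #8108's "broken list input pin execution ordering") places each `<name>_$_<index>` value at its explicit index and pads gaps with empty strings, instead of appending in scan order. A self-contained sketch of that logic, assuming `LIST_SPLIT = "_$_"` from the comment above:

```python
from typing import Any

LIST_SPLIT = "_$_"  # separator assumed from the "<input_name>_$_<index>" comment

def merge_list_input(data: dict[str, Any]) -> dict[str, Any]:
    """Sketch of the fixed merge: indexed placement with padding, not appending."""
    for key, value in list(data.items()):
        if LIST_SPLIT not in key:
            continue
        name, index = key.split(LIST_SPLIT)
        if not index.isdigit():
            raise ValueError(f"Invalid key: {key}, #{index} index must be an integer.")
        i = int(index)
        items = data.setdefault(name, [])
        if i >= len(items):
            # Pad list with empty string on missing indices.
            items.extend([""] * (i - len(items) + 1))
        items[i] = value
    return data
```

Because values land at their declared index, the merged list is correct even when the keys arrive out of order.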


@@ -9,11 +9,11 @@ from prisma.models import AgentGraph, AgentNode, AgentNodeLink
from pydantic import BaseModel, PrivateAttr
from pydantic_core import PydanticUndefinedType
from autogpt_server.blocks.basic import AgentInputBlock, AgentOutputBlock
from autogpt_server.data.block import BlockInput, get_block, get_blocks
from autogpt_server.data.db import BaseDbModel, transaction
from autogpt_server.data.user import DEFAULT_USER_ID
from autogpt_server.util import json
from backend.blocks.basic import AgentInputBlock, AgentOutputBlock
from backend.data.block import BlockInput, get_block, get_blocks
from backend.data.db import BaseDbModel, transaction
from backend.data.user import DEFAULT_USER_ID
from backend.util import json
logger = logging.getLogger(__name__)
@@ -274,7 +274,6 @@ class Graph(GraphMeta):
PydanticUndefinedType,
)
):
input_schema.append(
InputSchemaItem(
node_id=node.id,


@@ -11,7 +11,7 @@ from pydantic_core import (
core_schema,
)
from autogpt_server.util.settings import Secrets
from backend.util.settings import Secrets
T = TypeVar("T")
logger = logging.getLogger(__name__)


@@ -6,7 +6,7 @@ from datetime import datetime
from redis.asyncio import Redis
from autogpt_server.data.execution import ExecutionResult
from backend.data.execution import ExecutionResult
logger = logging.getLogger(__name__)
@@ -37,7 +37,6 @@ class AsyncEventQueue(ABC):
class AsyncRedisEventQueue(AsyncEventQueue):
def __init__(self):
self.host = os.getenv("REDIS_HOST", "localhost")
self.port = int(os.getenv("REDIS_PORT", "6379"))


@@ -3,9 +3,9 @@ from typing import Optional
from prisma.models import AgentGraphExecutionSchedule
from autogpt_server.data.block import BlockInput
from autogpt_server.data.db import BaseDbModel
from autogpt_server.util import json
from backend.data.block import BlockInput
from backend.data.db import BaseDbModel
from backend.util import json
class ExecutionSchedule(BaseDbModel):


@@ -3,14 +3,13 @@ from typing import Optional
from fastapi import HTTPException
from prisma.models import User
from autogpt_server.data.db import prisma
from backend.data.db import prisma
DEFAULT_USER_ID = "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
DEFAULT_EMAIL = "default@example.com"
async def get_or_create_user(user_data: dict) -> User:
user_id = user_data.get("sub")
if not user_id:
raise HTTPException(status_code=401, detail="User ID not found in token")


@@ -1,5 +1,5 @@
from autogpt_server.app import run_processes
from autogpt_server.executor import ExecutionManager
from backend.app import run_processes
from backend.executor import ExecutionManager
def main():


@@ -12,13 +12,13 @@ from multiprocessing.pool import AsyncResult, Pool
from typing import TYPE_CHECKING, Any, Coroutine, Generator, TypeVar
if TYPE_CHECKING:
from autogpt_server.server.rest_api import AgentServer
from backend.server.rest_api import AgentServer
from autogpt_server.blocks.basic import AgentInputBlock
from autogpt_server.data import db
from autogpt_server.data.block import Block, BlockData, BlockInput, get_block
from autogpt_server.data.credit import get_user_credit_model
from autogpt_server.data.execution import (
from backend.blocks.basic import AgentInputBlock
from backend.data import db
from backend.data.block import Block, BlockData, BlockInput, get_block
from backend.data.credit import get_user_credit_model
from backend.data.execution import (
ExecutionQueue,
ExecutionResult,
ExecutionStatus,
@@ -36,13 +36,13 @@ from autogpt_server.data.execution import (
upsert_execution_input,
upsert_execution_output,
)
from autogpt_server.data.graph import Graph, Link, Node, get_graph, get_node
from autogpt_server.util import json
from autogpt_server.util.decorator import error_logged, time_measured
from autogpt_server.util.logging import configure_logging
from autogpt_server.util.service import AppService, expose, get_service_client
from autogpt_server.util.settings import Config
from autogpt_server.util.type import convert
from backend.data.graph import Graph, Link, Node, get_graph, get_node
from backend.util import json
from backend.util.decorator import error_logged, time_measured
from backend.util.logging import configure_logging
from backend.util.service import AppService, expose, get_service_client
from backend.util.settings import Config
from backend.util.type import convert
logger = logging.getLogger(__name__)
@@ -69,20 +69,28 @@ class LogMetadata:
self.prefix = f"[ExecutionManager|uid:{user_id}|gid:{graph_id}|nid:{node_id}]|geid:{graph_eid}|nid:{node_eid}|{block_name}]"
def info(self, msg: str, **extra):
msg = self._wrap(msg, **extra)
logger.info(msg, extra={"json_fields": {**self.metadata, **extra}})
def warning(self, msg: str, **extra):
msg = self._wrap(msg, **extra)
logger.warning(msg, extra={"json_fields": {**self.metadata, **extra}})
def error(self, msg: str, **extra):
msg = self._wrap(msg, **extra)
logger.error(msg, extra={"json_fields": {**self.metadata, **extra}})
def debug(self, msg: str, **extra):
msg = self._wrap(msg, **extra)
logger.debug(msg, extra={"json_fields": {**self.metadata, **extra}})
def exception(self, msg: str, **extra):
msg = self._wrap(msg, **extra)
logger.exception(msg, extra={"json_fields": {**self.metadata, **extra}})
def _wrap(self, msg: str, **extra):
return f"{self.prefix} {msg} {extra}"
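The `LogMetadata` additions in this hunk (part of PR #8128's logging fix) route every level through a `_wrap` helper so the context prefix and extras reach both the message text and the structured `json_fields`. A simplified sketch of the pattern, with the prefix built generically rather than from the exact field list above:

```python
import logging

logger = logging.getLogger(__name__)

class PrefixedLogger:
    """Sketch of the prefix-wrapping pattern; field names here are illustrative."""

    def __init__(self, **metadata):
        self.metadata = metadata
        self.prefix = "[" + "|".join(f"{k}:{v}" for k, v in metadata.items()) + "]"

    def _wrap(self, msg: str, **extra) -> str:
        # Same shape as the _wrap shown above: prefix, message, then extras.
        return f"{self.prefix} {msg} {extra}"

    def info(self, msg: str, **extra):
        logger.info(self._wrap(msg, **extra),
                    extra={"json_fields": {**self.metadata, **extra}})
```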
T = TypeVar("T")
ExecutionStream = Generator[NodeExecution, None, None]
@@ -382,7 +390,7 @@ def validate_exec(
def get_agent_server_client() -> "AgentServer":
from autogpt_server.server.rest_api import AgentServer
from backend.server.rest_api import AgentServer
return get_service_client(AgentServer, Config().agent_server_port)


@@ -5,11 +5,11 @@ from datetime import datetime
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
-from autogpt_server.data import schedule as model
-from autogpt_server.data.block import BlockInput
-from autogpt_server.executor.manager import ExecutionManager
-from autogpt_server.util.service import AppService, expose, get_service_client
-from autogpt_server.util.settings import Config
+from backend.data import schedule as model
+from backend.data.block import BlockInput
+from backend.executor.manager import ExecutionManager
+from backend.util.service import AppService, expose, get_service_client
+from backend.util.settings import Config
logger = logging.getLogger(__name__)


@@ -1,6 +1,6 @@
-from autogpt_server.app import run_processes
-from autogpt_server.executor import ExecutionScheduler
-from autogpt_server.server import AgentServer
+from backend.app import run_processes
+from backend.executor import ExecutionScheduler
+from backend.server import AgentServer
def main():


@@ -2,8 +2,8 @@ from typing import Dict, Set
from fastapi import WebSocket
-from autogpt_server.data import execution
-from autogpt_server.server.model import Methods, WsMessage
+from backend.data import execution
+from backend.server.model import Methods, WsMessage
class ConnectionManager:


@@ -3,7 +3,7 @@ import typing
import pydantic
-import autogpt_server.data.graph
+import backend.data.graph
class Methods(enum.Enum):
@@ -34,7 +34,7 @@ class SubscriptionDetails(pydantic.BaseModel):
class CreateGraph(pydantic.BaseModel):
template_id: str | None = None
template_version: int | None = None
-    graph: autogpt_server.data.graph.Graph | None = None
+    graph: backend.data.graph.Graph | None = None
class SetGraphActiveVersion(pydantic.BaseModel):


@@ -10,19 +10,19 @@ from fastapi import APIRouter, Body, Depends, FastAPI, HTTPException, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
-from autogpt_server.data import block, db
-from autogpt_server.data import execution as execution_db
-from autogpt_server.data import graph as graph_db
-from autogpt_server.data import user as user_db
-from autogpt_server.data.block import BlockInput, CompletedBlockOutput
-from autogpt_server.data.credit import get_block_costs, get_user_credit_model
-from autogpt_server.data.queue import AsyncEventQueue, AsyncRedisEventQueue
-from autogpt_server.data.user import get_or_create_user
-from autogpt_server.executor import ExecutionManager, ExecutionScheduler
-from autogpt_server.server.model import CreateGraph, SetGraphActiveVersion
-from autogpt_server.util.lock import KeyedMutex
-from autogpt_server.util.service import AppService, expose, get_service_client
-from autogpt_server.util.settings import Config, Settings
+from backend.data import block, db
+from backend.data import execution as execution_db
+from backend.data import graph as graph_db
+from backend.data import user as user_db
+from backend.data.block import BlockInput, CompletedBlockOutput
+from backend.data.credit import get_block_costs, get_user_credit_model
+from backend.data.queue import AsyncEventQueue, AsyncRedisEventQueue
+from backend.data.user import get_or_create_user
+from backend.executor import ExecutionManager, ExecutionScheduler
+from backend.server.model import CreateGraph, SetGraphActiveVersion
+from backend.util.lock import KeyedMutex
+from backend.util.service import AppService, expose, get_service_client
+from backend.util.settings import Config, Settings
from .utils import get_user_id
@@ -78,18 +78,18 @@ class AgentServer(AppService):
api_router.dependencies.append(Depends(auth_middleware))
# Import & Attach sub-routers
-import autogpt_server.server.routers.analytics
-import autogpt_server.server.routers.integrations
+import backend.server.routers.analytics
+import backend.server.routers.integrations
api_router.include_router(
-    autogpt_server.server.routers.integrations.router,
+    backend.server.routers.integrations.router,
prefix="/integrations",
tags=["integrations"],
dependencies=[Depends(auth_middleware)],
)
api_router.include_router(
-    autogpt_server.server.routers.analytics.router,
+    backend.server.routers.analytics.router,
prefix="/analytics",
tags=["analytics"],
dependencies=[Depends(auth_middleware)],


@@ -4,8 +4,8 @@ from typing import Annotated
import fastapi
-import autogpt_server.data.analytics
-from autogpt_server.server.utils import get_user_id
+import backend.data.analytics
+from backend.server.utils import get_user_id
router = fastapi.APIRouter()
@@ -17,7 +17,7 @@ async def log_raw_metric(
metric_value: Annotated[float, fastapi.Body(..., embed=True)],
data_string: Annotated[str, fastapi.Body(..., embed=True)],
):
-    result = await autogpt_server.data.analytics.log_raw_metric(
+    result = await backend.data.analytics.log_raw_metric(
user_id=user_id,
metric_name=metric_name,
metric_value=metric_value,
@@ -43,7 +43,7 @@ async def log_raw_analytics(
),
],
):
-    result = await autogpt_server.data.analytics.log_raw_analytics(
+    result = await backend.data.analytics.log_raw_analytics(
user_id, type, data, data_index
)
return result.id


@@ -12,8 +12,8 @@ from fastapi import APIRouter, Body, Depends, HTTPException, Path, Query, Reques
from pydantic import BaseModel
from supabase import Client
-from autogpt_server.integrations.oauth import HANDLERS_BY_NAME, BaseOAuthHandler
-from autogpt_server.util.settings import Settings
+from backend.integrations.oauth import HANDLERS_BY_NAME, BaseOAuthHandler
+from backend.util.settings import Settings
from ..utils import get_supabase, get_user_id


@@ -2,8 +2,8 @@ from autogpt_libs.auth.middleware import auth_middleware
from fastapi import Depends, HTTPException
from supabase import Client, create_client
-from autogpt_server.data.user import DEFAULT_USER_ID
-from autogpt_server.util.settings import Settings
+from backend.data.user import DEFAULT_USER_ID
+from backend.util.settings import Settings
settings = Settings()


@@ -6,12 +6,12 @@ from autogpt_libs.auth import parse_jwt_token
from fastapi import Depends, FastAPI, WebSocket, WebSocketDisconnect
from fastapi.middleware.cors import CORSMiddleware
-from autogpt_server.data.queue import AsyncRedisEventQueue
-from autogpt_server.data.user import DEFAULT_USER_ID
-from autogpt_server.server.conn_manager import ConnectionManager
-from autogpt_server.server.model import ExecutionSubscription, Methods, WsMessage
-from autogpt_server.util.service import AppProcess
-from autogpt_server.util.settings import Config, Settings
+from backend.data.queue import AsyncRedisEventQueue
+from backend.data.user import DEFAULT_USER_ID
+from backend.server.conn_manager import ConnectionManager
+from backend.server.model import ExecutionSubscription, Methods, WsMessage
+from backend.util.service import AppProcess
+from backend.util.settings import Config, Settings
logger = logging.getLogger(__name__)
settings = Settings()


@@ -2,17 +2,14 @@ from pathlib import Path
from prisma.models import User
-from autogpt_server.blocks.basic import StoreValueBlock
-from autogpt_server.blocks.block import BlockInstallationBlock
-from autogpt_server.blocks.http import SendWebRequestBlock
-from autogpt_server.blocks.llm import AITextGeneratorBlock
-from autogpt_server.blocks.text import (
-    ExtractTextInformationBlock,
-    FillTextTemplateBlock,
-)
-from autogpt_server.data.graph import Graph, Link, Node, create_graph
-from autogpt_server.data.user import get_or_create_user
-from autogpt_server.util.test import SpinTestServer, wait_execution
+from backend.blocks.basic import StoreValueBlock
+from backend.blocks.block import BlockInstallationBlock
+from backend.blocks.http import SendWebRequestBlock
+from backend.blocks.llm import AITextGeneratorBlock
+from backend.blocks.text import ExtractTextInformationBlock, FillTextTemplateBlock
+from backend.data.graph import Graph, Link, Node, create_graph
+from backend.data.user import get_or_create_user
+from backend.util.test import SpinTestServer, wait_execution
sample_block_modules = {
"llm": "Block that calls the AI model to generate text.",


@@ -1,11 +1,11 @@
from prisma.models import User
-from autogpt_server.blocks.llm import AIStructuredResponseGeneratorBlock
-from autogpt_server.blocks.reddit import GetRedditPostsBlock, PostRedditCommentBlock
-from autogpt_server.blocks.text import FillTextTemplateBlock, MatchTextPatternBlock
-from autogpt_server.data.graph import Graph, Link, Node, create_graph
-from autogpt_server.data.user import get_or_create_user
-from autogpt_server.util.test import SpinTestServer, wait_execution
+from backend.blocks.llm import AIStructuredResponseGeneratorBlock
+from backend.blocks.reddit import GetRedditPostsBlock, PostRedditCommentBlock
+from backend.blocks.text import FillTextTemplateBlock, MatchTextPatternBlock
+from backend.data.graph import Graph, Link, Node, create_graph
+from backend.data.user import get_or_create_user
+from backend.util.test import SpinTestServer, wait_execution
def create_test_graph() -> Graph:


@@ -1,11 +1,11 @@
from prisma.models import User
-from autogpt_server.blocks.basic import AgentInputBlock, PrintToConsoleBlock
-from autogpt_server.blocks.text import FillTextTemplateBlock
-from autogpt_server.data import graph
-from autogpt_server.data.graph import create_graph
-from autogpt_server.data.user import get_or_create_user
-from autogpt_server.util.test import SpinTestServer, wait_execution
+from backend.blocks.basic import AgentInputBlock, PrintToConsoleBlock
+from backend.blocks.text import FillTextTemplateBlock
+from backend.data import graph
+from backend.data.graph import create_graph
+from backend.data.user import get_or_create_user
+from backend.util.test import SpinTestServer, wait_execution
async def create_test_user() -> User:


@@ -1,6 +1,6 @@
import sentry_sdk
-from autogpt_server.util.settings import Settings
+from backend.util.settings import Settings
def sentry_init():


@@ -6,8 +6,8 @@ from abc import ABC, abstractmethod
from multiprocessing import Process, set_start_method
from typing import Optional
-from autogpt_server.util.logging import configure_logging
-from autogpt_server.util.metrics import sentry_init
+from backend.util.logging import configure_logging
+from backend.util.metrics import sentry_init
logger = logging.getLogger(__name__)


@@ -9,11 +9,11 @@ from typing import Any, Callable, Coroutine, Type, TypeVar, cast
import Pyro5.api
from Pyro5 import api as pyro
-from autogpt_server.data import db
-from autogpt_server.data.queue import AsyncEventQueue, AsyncRedisEventQueue
-from autogpt_server.util.process import AppProcess
-from autogpt_server.util.retry import conn_retry
-from autogpt_server.util.settings import Config
+from backend.data import db
+from backend.data.queue import AsyncEventQueue, AsyncRedisEventQueue
+from backend.util.process import AppProcess
+from backend.util.retry import conn_retry
+from backend.util.settings import Config
logger = logging.getLogger(__name__)
T = TypeVar("T")


@@ -10,7 +10,7 @@ from pydantic_settings import (
SettingsConfigDict,
)
-from autogpt_server.util.data import get_config_path, get_data_path, get_secrets_path
+from backend.util.data import get_config_path, get_data_path, get_secrets_path
T = TypeVar("T", bound=BaseSettings)


@@ -1,14 +1,14 @@
import asyncio
import time
-from autogpt_server.data import db
-from autogpt_server.data.block import Block, initialize_blocks
-from autogpt_server.data.execution import ExecutionResult, ExecutionStatus
-from autogpt_server.data.queue import AsyncEventQueue
-from autogpt_server.data.user import create_default_user
-from autogpt_server.executor import ExecutionManager, ExecutionScheduler
-from autogpt_server.server import AgentServer
-from autogpt_server.server.rest_api import get_user_id
+from backend.data import db
+from backend.data.block import Block, initialize_blocks
+from backend.data.execution import ExecutionResult, ExecutionStatus
+from backend.data.queue import AsyncEventQueue
+from backend.data.user import create_default_user
+from backend.executor import ExecutionManager, ExecutionScheduler
+from backend.server import AgentServer
+from backend.server.rest_api import get_user_id
log = print


@@ -1,5 +1,5 @@
-from autogpt_server.app import run_processes
-from autogpt_server.server.ws_api import WebsocketServer
+from backend.app import run_processes
+from backend.server.ws_api import WebsocketServer
def main():

Some files were not shown because too many files have changed in this diff Show More