Compare commits

..

40 Commits

Author SHA1 Message Date
Aarushi
c51b8e7291 attempt to fix 2025-01-08 13:54:28 +00:00
Aarushi
f9ae76123a fix lock 2025-01-08 13:34:12 +00:00
Aarushi
281ae65dcb Merge branch 'dev' into twitter-integration 2025-01-08 13:29:54 +00:00
Aarushi
bd815fc9d7 update poetry lock 2025-01-08 13:24:39 +00:00
Nicholas Tindle
cdaa2ee456 Merge branch 'dev' into feature/twitter-integration 2025-01-08 00:16:00 -06:00
Nicholas Tindle
70dfaf1ef5 fix: lock 2025-01-07 22:59:46 -06:00
Nicholas Tindle
565774f77a Discard changes to autogpt_platform/frontend/yarn.lock 2025-01-07 22:45:01 -06:00
Nicholas Tindle
4c6dd35310 Merge remote-tracking branch 'origin/dev' into pr/8754 2025-01-07 12:49:31 -06:00
abhi1992002
632a39e877 update twitter documentation 2025-01-03 13:02:25 +05:30
abhi1992002
e297eff27c remove multi select 2025-01-03 12:50:24 +05:30
abhi1992002
57aa6745da update twitter env variable comment 2025-01-03 12:48:12 +05:30
abhi1992002
fb9d42f466 fix formatting 2025-01-03 12:35:03 +05:30
abhi1992002
eb25e731fc revert img change 2025-01-03 12:22:16 +05:30
abhi1992002
d75c08e348 adding documentation for twitter block 2025-01-03 11:45:53 +05:30
abhi1992002
428c012a43 write documentation for oneOf 2025-01-02 16:50:19 +05:30
abhi1992002
4fe135e472 add support for oneOf and optional oneOf 2025-01-02 16:24:00 +05:30
abhi1992002
1be3a29dc0 fix yarn lock 2025-01-02 11:01:35 +05:30
abhi1992002
e8ae8ccd6d fix backend tests 2025-01-02 11:01:35 +05:30
abhi1992002
4aa36fda55 fix expansions in tweet block 2025-01-02 11:01:35 +05:30
abhi1992002
2e19f3e9e2 fix tweet blocks expansions 2025-01-02 11:01:35 +05:30
abhi1992002
a85f671237 1. add optional datetime and multi select support 2025-01-02 11:01:27 +05:30
Nicholas Tindle
6c3a401ceb fix: linting came back with a vengeance 2025-01-02 11:01:08 +05:30
Nicholas Tindle
949139ed7a fix: credentials issues 2025-01-02 11:01:08 +05:30
Nicholas Tindle
5bdf541bce fix: only add tweepy (and down bump related :() 2025-01-02 11:01:08 +05:30
Nicholas Tindle
a1a3f9e179 fix: project problems 2025-01-02 11:01:00 +05:30
Nicholas Tindle
d14045c7b7 Discard changes to autogpt_platform/frontend/yarn.lock 2025-01-02 11:00:30 +05:30
Nicholas Tindle
84c30be37d fix: formatting 2025-01-02 11:00:30 +05:30
abhi
bc0aab9c73 fix: oauth2 2025-01-02 11:00:30 +05:30
Abhimanyu Yadav
19b6dfd0f7 Update credentials-input.tsx 2025-01-02 11:00:30 +05:30
Abhimanyu Yadav
e85f593dab Update credentials-provider.tsx 2025-01-02 11:00:30 +05:30
abhi
5200250ffb fix multiselect 2025-01-02 11:00:30 +05:30
abhi
8eba862723 fix multiselect 2025-01-02 11:00:30 +05:30
abhi
240e030a36 initial changes 2025-01-02 11:00:30 +05:30
abhi1992002
8782caf39a refactor: Enhance Supabase integration and Twitter OAuth handling
- Updated `store.py` to improve state token management by adding PKCE support and simplifying code challenge generation.
- Modified environment variable names in `.env.example` for consistency.
- Removed unnecessary `is_multi_select` attributes from various Twitter-related input schemas to streamline the code.
- Cleaned up exception handling in Twitter list management and tweet management blocks by removing redundant error logging.
- Removed debug print statements from various components to clean up the codebase.
- Fixed a minor error message in the Twitter OAuth handler for clarity.
2025-01-02 11:00:30 +05:30
Abhimanyu Yadav
779cec003c Update pyproject.toml 2025-01-02 11:00:30 +05:30
Abhimanyu Yadav
5a48f6cec4 Update credentials-input.tsx 2025-01-02 11:00:30 +05:30
Abhimanyu Yadav
e7056e5642 Update credentials-provider.tsx 2025-01-02 11:00:30 +05:30
abhi1992002
ee0a75027a fix: test 2025-01-02 11:00:30 +05:30
abhi1992002
1fc5a7beae fix linting 2025-01-02 11:00:30 +05:30
abhi1992002
c4f77d4074 add twitter credentials with some frontend changes
# Conflicts:
#	autogpt_platform/backend/backend/data/model.py
#	autogpt_platform/backend/pyproject.toml
#	autogpt_platform/frontend/src/components/integrations/credentials-input.tsx
2025-01-02 11:00:30 +05:30
23 changed files with 985 additions and 1166 deletions

View File

@@ -121,18 +121,6 @@ REPLICATE_API_KEY=
# Ideogram
IDEOGRAM_API_KEY=
# Fal
FAL_API_KEY=
# Exa
EXA_API_KEY=
# E2B
E2B_API_KEY=
# Nvidia
NVIDIA_API_KEY=
# Logging Configuration
LOG_LEVEL=INFO
ENABLE_CLOUD_LOGGING=false

View File

@@ -241,7 +241,7 @@ class AgentOutputBlock(Block):
advanced=True,
)
format: str = SchemaField(
description="The format string to be used to format the recorded_value. Use Jinja2 syntax.",
description="The format string to be used to format the recorded_value.",
default="",
advanced=True,
)

View File

@@ -26,10 +26,8 @@ from backend.data.model import (
)
from backend.util import json
from backend.util.settings import BehaveAs, Settings
from backend.util.text import TextFormatter
logger = logging.getLogger(__name__)
fmt = TextFormatter()
LLMProviderName = Literal[
ProviderName.ANTHROPIC,
@@ -111,7 +109,6 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
LLAMA3_1_70B = "llama-3.1-70b-versatile"
LLAMA3_1_8B = "llama-3.1-8b-instant"
# Ollama models
OLLAMA_LLAMA3_2 = "llama3.2"
OLLAMA_LLAMA3_8B = "llama3"
OLLAMA_LLAMA3_405B = "llama3.1:405b"
OLLAMA_DOLPHIN = "dolphin-mistral:latest"
@@ -166,7 +163,6 @@ MODEL_METADATA = {
# Limited to 16k during preview
LlmModel.LLAMA3_1_70B: ModelMetadata("groq", 131072),
LlmModel.LLAMA3_1_8B: ModelMetadata("groq", 131072),
LlmModel.OLLAMA_LLAMA3_2: ModelMetadata("ollama", 8192),
LlmModel.OLLAMA_LLAMA3_8B: ModelMetadata("ollama", 8192),
LlmModel.OLLAMA_LLAMA3_405B: ModelMetadata("ollama", 8192),
LlmModel.OLLAMA_DOLPHIN: ModelMetadata("ollama", 32768),
@@ -238,9 +234,7 @@ class AIStructuredResponseGeneratorBlock(Block):
description="Number of times to retry the LLM call if the response does not match the expected format.",
)
prompt_values: dict[str, str] = SchemaField(
advanced=False,
default={},
description="Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}.",
advanced=False, default={}, description="Values used to fill in the prompt."
)
max_tokens: int | None = SchemaField(
advanced=True,
@@ -454,8 +448,8 @@ class AIStructuredResponseGeneratorBlock(Block):
values = input_data.prompt_values
if values:
input_data.prompt = fmt.format_string(input_data.prompt, values)
input_data.sys_prompt = fmt.format_string(input_data.sys_prompt, values)
input_data.prompt = input_data.prompt.format(**values)
input_data.sys_prompt = input_data.sys_prompt.format(**values)
if input_data.sys_prompt:
prompt.append({"role": "system", "content": input_data.sys_prompt})
@@ -582,9 +576,7 @@ class AITextGeneratorBlock(Block):
description="Number of times to retry the LLM call if the response does not match the expected format.",
)
prompt_values: dict[str, str] = SchemaField(
advanced=False,
default={},
description="Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}.",
advanced=False, default={}, description="Values used to fill in the prompt."
)
ollama_host: str = SchemaField(
advanced=True,
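
For context on what this change means for template authors: the `str.format` variant substitutes single-brace placeholders, while the `TextFormatter` variant renders Jinja2 double-brace placeholders. A minimal sketch contrasting the two, assuming nothing beyond a `values` dict standing in for `prompt_values`:

```python
# Contrast of the two substitution styles; `values` stands in for the
# block's prompt_values and the template text is illustrative only.
values = {"variable_name": "AutoGPT"}

# str.format (the `.format(**values)` path): single-brace placeholders,
# with {{ }} reserved for escaping a literal brace.
print("Hello {variable_name}!".format(**values))   # -> Hello AutoGPT!

# Jinja2 (the `fmt.format_string` path): double-brace placeholders,
# rendered in a sandboxed environment.
from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment()
print(env.from_string("Hello {{ variable_name }}!").render(values))
# -> Hello AutoGPT!
```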

View File

@@ -141,10 +141,10 @@ class ExtractTextInformationBlock(Block):
class FillTextTemplateBlock(Block):
class Input(BlockSchema):
values: dict[str, Any] = SchemaField(
description="Values (dict) to be used in format. These values can be used by putting them in double curly braces in the format template. e.g. {{value_name}}.",
description="Values (dict) to be used in format"
)
format: str = SchemaField(
description="Template to format the text using `values`. Use Jinja2 syntax."
description="Template to format the text using `values`"
)
class Output(BlockSchema):
@@ -160,7 +160,7 @@ class FillTextTemplateBlock(Block):
test_input=[
{
"values": {"name": "Alice", "hello": "Hello", "world": "World!"},
"format": "{{hello}}, {{ world }} {{name}}",
"format": "{hello}, {world} {{name}}",
},
{
"values": {"list": ["Hello", " World!"]},

View File

@@ -51,7 +51,6 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.LLAMA3_1_405B: 1,
LlmModel.LLAMA3_1_70B: 1,
LlmModel.LLAMA3_1_8B: 1,
LlmModel.OLLAMA_LLAMA3_2: 1,
LlmModel.OLLAMA_LLAMA3_8B: 1,
LlmModel.OLLAMA_LLAMA3_405B: 1,
LlmModel.OLLAMA_DOLPHIN: 1,

View File

@@ -93,34 +93,6 @@ open_router_credentials = APIKeyCredentials(
title="Use Credits for Open Router",
expires_at=None,
)
fal_credentials = APIKeyCredentials(
id="6c0f5bd0-9008-4638-9d79-4b40b631803e",
provider="fal",
api_key=SecretStr(settings.secrets.fal_api_key),
title="Use Credits for FAL",
expires_at=None,
)
exa_credentials = APIKeyCredentials(
id="96153e04-9c6c-4486-895f-5bb683b1ecec",
provider="exa",
api_key=SecretStr(settings.secrets.exa_api_key),
title="Use Credits for Exa search",
expires_at=None,
)
e2b_credentials = APIKeyCredentials(
id="78d19fd7-4d59-4a16-8277-3ce310acf2b7",
provider="e2b",
api_key=SecretStr(settings.secrets.e2b_api_key),
title="Use Credits for E2B",
expires_at=None,
)
nvidia_credentials = APIKeyCredentials(
id="96b83908-2789-4dec-9968-18f0ece4ceb3",
provider="nvidia",
api_key=SecretStr(settings.secrets.nvidia_api_key),
title="Use Credits for Nvidia",
expires_at=None,
)
DEFAULT_CREDENTIALS = [
@@ -134,10 +106,6 @@ DEFAULT_CREDENTIALS = [
jina_credentials,
unreal_credentials,
open_router_credentials,
fal_credentials,
exa_credentials,
e2b_credentials,
nvidia_credentials,
]
@@ -189,14 +157,6 @@ class IntegrationCredentialsStore:
all_credentials.append(unreal_credentials)
if settings.secrets.open_router_api_key:
all_credentials.append(open_router_credentials)
if settings.secrets.fal_api_key:
all_credentials.append(fal_credentials)
if settings.secrets.exa_api_key:
all_credentials.append(exa_credentials)
if settings.secrets.e2b_api_key:
all_credentials.append(e2b_credentials)
if settings.secrets.nvidia_api_key:
all_credentials.append(nvidia_credentials)
return all_credentials
def get_creds_by_id(self, user_id: str, credentials_id: str) -> Credentials | None:

View File

@@ -38,7 +38,7 @@ def create_test_graph() -> graph.Graph:
graph.Node(
block_id=FillTextTemplateBlock().id,
input_default={
"format": "{{a}}, {{b}}{{c}}",
"format": "{a}, {b}{c}",
"values_#_c": "!!!",
},
),

View File

@@ -304,10 +304,7 @@ class Secrets(UpdateTrackingModel["Secrets"], BaseSettings):
jina_api_key: str = Field(default="", description="Jina API Key")
unreal_speech_api_key: str = Field(default="", description="Unreal Speech API Key")
fal_api_key: str = Field(default="", description="FAL API key")
exa_api_key: str = Field(default="", description="Exa API key")
e2b_api_key: str = Field(default="", description="E2B API key")
nvidia_api_key: str = Field(default="", description="Nvidia API key")
fal_key: str = Field(default="", description="FAL API key")
# Add more secret fields as needed

View File

@@ -1,3 +1,5 @@
import re
from jinja2 import BaseLoader
from jinja2.sandbox import SandboxedEnvironment
@@ -13,5 +15,8 @@ class TextFormatter:
self.env.globals.clear()
def format_string(self, template_str: str, values=None, **kwargs) -> str:
# For python.format compatibility: replace all {...} with {{..}}.
# But avoid replacing {{...}} to {{{...}}}.
template_str = re.sub(r"(?<!{){[ a-zA-Z0-9_]+}", r"{\g<0>}", template_str)
template = self.env.from_string(template_str)
return template.render(values or {}, **kwargs)
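
The shim above is what lets templates written for `str.format` keep rendering through Jinja2. A runnable sketch of the same idea, written as a standalone function mirroring `TextFormatter.format_string` (the function name and demo strings are ours):

```python
import re

from jinja2 import BaseLoader
from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment(loader=BaseLoader())

def format_string(template_str: str, values=None, **kwargs) -> str:
    # Promote single-brace {name} to Jinja2's {{name}}; existing {{name}}
    # is untouched because the (?<!{) lookbehind rejects a second opening
    # brace and an inner '{' cannot match [ a-zA-Z0-9_]+.
    template_str = re.sub(r"(?<!{){[ a-zA-Z0-9_]+}", r"{\g<0>}", template_str)
    return env.from_string(template_str).render(values or {}, **kwargs)

print(format_string("Hello {name} and {{name}}", {"name": "AutoGPT"}))
# -> Hello AutoGPT and AutoGPT
```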

View File

@@ -1,86 +0,0 @@
/*
Warnings:
- You are about to replace the single-brace string input format for the following blocks:
- AgentOutputBlock
- FillTextTemplateBlock
- AITextGeneratorBlock
- AIStructuredResponseGeneratorBlock
with a double-brace format.
- This migration can be slow for large AgentNode tables.
*/
BEGIN;
SET LOCAL statement_timeout = '10min';
WITH to_update AS (
SELECT
"id",
"agentBlockId",
"constantInput"::jsonb AS j
FROM "AgentNode"
WHERE
"agentBlockId" IN (
'363ae599-353e-4804-937e-b2ee3cef3da4', -- AgentOutputBlock
'db7d8f02-2f44-4c55-ab7a-eae0941f0c30', -- FillTextTemplateBlock
'1f292d4a-41a4-4977-9684-7c8d560b9f91', -- AITextGeneratorBlock
'ed55ac19-356e-4243-a6cb-bc599e9b716f' -- AIStructuredResponseGeneratorBlock
)
AND (
"constantInput"::jsonb->>'format' ~ '(?<!\{)\{\s*([a-zA-Z_][a-zA-Z0-9_]*)\s*\}(?!\})'
OR "constantInput"::jsonb->>'prompt' ~ '(?<!\{)\{\s*([a-zA-Z_][a-zA-Z0-9_]*)\s*\}(?!\})'
OR "constantInput"::jsonb->>'sys_prompt' ~ '(?<!\{)\{\s*([a-zA-Z_][a-zA-Z0-9_]*)\s*\}(?!\})'
)
),
updated_rows AS (
SELECT
"id",
"agentBlockId",
(
j
-- Update "format" if it has a single-brace placeholder
|| CASE WHEN j->>'format' ~ '(?<!\{)\{\s*([a-zA-Z_][a-zA-Z0-9_]*)\s*\}(?!\})'
THEN jsonb_build_object(
'format',
regexp_replace(
j->>'format',
'(?<!\{)\{\s*([a-zA-Z_][a-zA-Z0-9_]*)\s*\}(?!\})',
'{{\1}}',
'g'
)
)
ELSE '{}'::jsonb
END
-- Update "prompt" if it has a single-brace placeholder
|| CASE WHEN j->>'prompt' ~ '(?<!\{)\{\s*([a-zA-Z_][a-zA-Z0-9_]*)\s*\}(?!\})'
THEN jsonb_build_object(
'prompt',
regexp_replace(
j->>'prompt',
'(?<!\{)\{\s*([a-zA-Z_][a-zA-Z0-9_]*)\s*\}(?!\})',
'{{\1}}',
'g'
)
)
ELSE '{}'::jsonb
END
-- Update "sys_prompt" if it has a single-brace placeholder
|| CASE WHEN j->>'sys_prompt' ~ '(?<!\{)\{\s*([a-zA-Z_][a-zA-Z0-9_]*)\s*\}(?!\})'
THEN jsonb_build_object(
'sys_prompt',
regexp_replace(
j->>'sys_prompt',
'(?<!\{)\{\s*([a-zA-Z_][a-zA-Z0-9_]*)\s*\}(?!\})',
'{{\1}}',
'g'
)
)
ELSE '{}'::jsonb
END
)::text AS "newConstantInput"
FROM to_update
)
UPDATE "AgentNode" AS an
SET "constantInput" = ur."newConstantInput"
FROM updated_rows ur
WHERE an."id" = ur."id";
COMMIT;
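
The three `regexp_replace` calls apply one rewrite to `format`, `prompt`, and `sys_prompt`: wrap every single-brace placeholder in a second pair of braces while leaving `{{...}}` untouched. A hedged Python equivalent of that rewrite, useful for sanity-checking the pattern against sample inputs (the helper name is ours):

```python
import re

# Same pattern as the migration: a {name} placeholder not already part
# of a {{name}} pair (fixed-width lookbehind/lookahead guard both sides).
PLACEHOLDER = re.compile(r"(?<!\{)\{\s*([a-zA-Z_][a-zA-Z0-9_]*)\s*\}(?!\})")

def to_double_brace(text: str) -> str:
    # Mirrors regexp_replace(..., '{{\1}}', 'g'): wrap each placeholder.
    return PLACEHOLDER.sub(r"{{\1}}", text)

print(to_double_brace("{a}, {b}{c}"))      # -> {{a}}, {{b}}{{c}}
print(to_double_brace("keep {{done}}"))    # -> keep {{done}}
```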

File diff suppressed because it is too large

View File

@@ -39,7 +39,7 @@ python-dotenv = "^1.0.1"
redis = "^5.2.0"
sentry-sdk = "2.19.2"
strenum = "^0.4.9"
supabase = "2.11.0"
supabase = "^2.10.0"
tenacity = "^9.0.0"
tweepy = "^4.14.0"
uvicorn = { extras = ["standard"], version = "^0.34.0" }

View File

@@ -102,7 +102,7 @@ async def assert_sample_graph_executions(
assert exec.graph_exec_id == graph_exec_id
assert exec.output_data == {"output": ["Hello, World!!!"]}
assert exec.input_data == {
"format": "{{a}}, {{b}}{{c}}",
"format": "{a}, {b}{c}",
"values": {"a": "Hello", "b": "World", "c": "!!!"},
"values_#_a": "Hello",
"values_#_b": "World",

View File

@@ -6,11 +6,9 @@ import {
Carousel,
CarouselContent,
CarouselItem,
CarouselPrevious,
CarouselNext,
CarouselIndicator,
} from "@/components/ui/carousel";
import { useCallback, useState } from "react";
import { IconLeftArrow, IconRightArrow } from "@/components/ui/icons";
import { useRouter } from "next/navigation";
const BACKGROUND_COLORS = [
@@ -65,24 +63,27 @@ export const FeaturedSection: React.FC<FeaturedSectionProps> = ({
return (
<div className="flex w-full flex-col items-center justify-center">
<div className="w-[99vw]">
<h2 className="font-poppins mx-auto mb-8 max-w-[1360px] px-4 text-2xl font-semibold leading-7 text-neutral-800 dark:text-neutral-200">
<div className="w-full">
<h2 className="font-poppins mb-8 text-2xl font-semibold leading-7 text-neutral-800 dark:text-neutral-200">
Featured agents
</h2>
<div className="w-[99vw] pb-[60px]">
<div>
<Carousel
className="mx-auto pb-10"
opts={{
align: "center",
loop: true,
startIndex: currentSlide,
duration: 500,
align: "start",
containScroll: "trimSnaps",
}}
className="w-full overflow-x-hidden"
>
<CarouselContent className="ml-[calc(50vw-690px)]">
<CarouselContent className="transition-transform duration-500">
{featuredAgents.map((agent, index) => (
<CarouselItem
key={index}
className="max-w-[460px] flex-[0_0_auto]"
className="max-w-[460px] flex-[0_0_auto] pr-8"
>
<FeaturedStoreCard
agentName={agent.agent_name}
@@ -98,13 +99,37 @@ export const FeaturedSection: React.FC<FeaturedSectionProps> = ({
</CarouselItem>
))}
</CarouselContent>
<div className="relative mx-auto w-full max-w-[1360px] pl-4">
<CarouselIndicator />
<CarouselPrevious afterClick={handlePrevSlide} />
<CarouselNext afterClick={handleNextSlide} />
</div>
</Carousel>
</div>
<div className="mt-8 flex w-full items-center justify-between">
<div className="flex h-3 items-center gap-2">
{featuredAgents.map((_, index) => (
<div
key={index}
className={`${
currentSlide === index
? "h-3 w-[52px] rounded-[39px] bg-neutral-800 transition-all duration-500 dark:bg-neutral-200"
: "h-3 w-3 rounded-full bg-neutral-300 transition-all duration-500 dark:bg-neutral-600"
}`}
/>
))}
</div>
<div className="mb-[60px] flex items-center gap-3">
<button
onClick={handlePrevSlide}
className="mb:h-12 mb:w-12 flex h-10 w-10 items-center justify-center rounded-full border border-neutral-400 bg-white dark:border-neutral-600 dark:bg-neutral-800"
>
<IconLeftArrow className="h-8 w-8 text-neutral-800 dark:text-neutral-200" />
</button>
<button
onClick={handleNextSlide}
className="mb:h-12 mb:w-12 flex h-10 w-10 items-center justify-center rounded-full border border-neutral-900 bg-white dark:border-neutral-600 dark:bg-neutral-800"
>
<IconRightArrow className="h-8 w-8 text-neutral-800 dark:text-neutral-200" />
</button>
</div>
</div>
</div>
</div>
);

View File

@@ -1,11 +1,10 @@
// This file has been updated for the Store's "Featured Agent Section". If you want to add Carousel, keep these components in mind: CarouselIndicator, CarouselPrevious, and CarouselNext.
"use client";
import * as React from "react";
import useEmblaCarousel, {
type UseEmblaCarouselType,
} from "embla-carousel-react";
import { ChevronLeft, ChevronRight } from "lucide-react";
import { ArrowLeft, ArrowRight } from "lucide-react";
import { cn } from "@/lib/utils";
import { Button } from "@/components/ui/button";
@@ -197,137 +196,67 @@ CarouselItem.displayName = "CarouselItem";
const CarouselPrevious = React.forwardRef<
HTMLButtonElement,
React.ComponentProps<typeof Button> & { afterClick?: () => void }
>(
(
{ className, afterClick, variant = "outline", size = "icon", ...props },
ref,
) => {
const { orientation, scrollPrev, canScrollPrev } = useCarousel();
React.ComponentProps<typeof Button>
>(({ className, variant = "outline", size = "icon", ...props }, ref) => {
const { orientation, scrollPrev, canScrollPrev } = useCarousel();
return (
<Button
ref={ref}
variant={variant}
size={size}
className={cn(
"absolute h-[52px] w-[52px] rounded-full",
orientation === "horizontal"
? "-bottom-20 right-24 -translate-y-1/2"
: "-top-12 left-1/2 -translate-x-1/2 rotate-90",
className,
)}
disabled={!canScrollPrev}
onClick={() => {
scrollPrev();
if (afterClick) {
afterClick();
}
}}
{...props}
>
<ChevronLeft className="h-8 w-8" strokeWidth={1.25} />
<span className="sr-only">Previous slide</span>
</Button>
);
},
);
return (
<Button
ref={ref}
variant={variant}
size={size}
className={cn(
"absolute h-8 w-8 rounded-full",
orientation === "horizontal"
? "-left-12 top-1/2 -translate-y-1/2"
: "-top-12 left-1/2 -translate-x-1/2 rotate-90",
className,
)}
disabled={!canScrollPrev}
onClick={scrollPrev}
{...props}
>
<ArrowLeft className="h-4 w-4" />
<span className="sr-only">Previous slide</span>
</Button>
);
});
CarouselPrevious.displayName = "CarouselPrevious";
const CarouselNext = React.forwardRef<
HTMLButtonElement,
React.ComponentProps<typeof Button> & { afterClick?: () => void }
>(
(
{ className, afterClick, variant = "outline", size = "icon", ...props },
ref,
) => {
const { orientation, scrollNext, canScrollNext } = useCarousel();
const handleClick = () => {
scrollNext();
if (afterClick) {
afterClick();
}
};
return (
<Button
ref={ref}
variant={variant}
size={size}
className={cn(
"absolute h-[52px] w-[52px] rounded-full",
orientation === "horizontal"
? "-bottom-20 right-4 -translate-y-1/2"
: "-bottom-12 left-1/2 -translate-x-1/2 rotate-90",
className,
)}
disabled={!canScrollNext}
onClick={handleClick}
{...props}
>
<ChevronRight className="h-8 w-8" strokeWidth={1.25} />
<span className="sr-only">Next slide</span>
</Button>
);
},
);
CarouselNext.displayName = "CarouselNext";
const CarouselIndicator = React.forwardRef<
HTMLDivElement,
React.HTMLAttributes<HTMLDivElement>
>(({ className, ...props }, ref) => {
const { api } = useCarousel();
const [selectedIndex, setSelectedIndex] = React.useState(0);
const [scrollSnaps, setScrollSnaps] = React.useState<number[]>([]);
const scrollTo = React.useCallback(
(index: number) => {
api?.scrollTo(index);
},
[api],
);
React.useEffect(() => {
if (!api) return;
setScrollSnaps(api.scrollSnapList());
api.on("select", () => {
setSelectedIndex(api.selectedScrollSnap());
});
}, [api]);
React.ComponentProps<typeof Button>
>(({ className, variant = "outline", size = "icon", ...props }, ref) => {
const { orientation, scrollNext, canScrollNext } = useCarousel();
return (
<div
<Button
ref={ref}
className={cn("relative top-10 flex h-3 items-center gap-2", className)}
variant={variant}
size={size}
className={cn(
"absolute h-8 w-8 rounded-full",
orientation === "horizontal"
? "-right-12 top-1/2 -translate-y-1/2"
: "-bottom-12 left-1/2 -translate-x-1/2 rotate-90",
className,
)}
disabled={!canScrollNext}
onClick={scrollNext}
{...props}
>
{scrollSnaps.map((_, index) => (
<div
key={index}
onClick={() => scrollTo(index)}
className={cn(
selectedIndex === index
? "h-3 w-[52px] rounded-[39px] bg-neutral-800 transition-all duration-500 dark:bg-neutral-200"
: "h-3 w-3 rounded-full bg-neutral-300 transition-all duration-500 dark:bg-neutral-600",
"cursor-pointer",
)}
/>
))}
</div>
<ArrowRight className="h-4 w-4" />
<span className="sr-only">Next slide</span>
</Button>
);
});
CarouselIndicator.displayName = "CarouselIndicator";
CarouselNext.displayName = "CarouselNext";
export {
type CarouselApi,
Carousel,
CarouselContent,
CarouselItem,
CarouselIndicator,
CarouselPrevious,
CarouselNext,
};

7 binary image files removed (contents not shown); previous sizes: 115 KiB, 88 KiB, 105 KiB, 29 KiB, 6.0 KiB, 105 KiB, 116 KiB.

View File

@@ -1,78 +1,37 @@
# Running Ollama with AutoGPT
> **Important**: Ollama integration is only available when self-hosting the AutoGPT platform. It cannot be used with the cloud-hosted version.
Follow these steps to set up and run Ollama and your AutoGPT project:
Follow these steps to set up and run Ollama with the AutoGPT platform.
1. **Run Ollama**
- Open a terminal
- Execute the following command:
```
ollama run llama3
```
- Leave this terminal running
## Prerequisites
2. **Run the Backend**
- Open a new terminal
- Navigate to the backend directory in the AutoGPT project:
```
cd autogpt_platform/backend/
```
- Start the backend using Poetry:
```
poetry run app
```
1. Make sure you have gone through and completed the [AutoGPT Setup](/platform/getting-started) steps; if not, please do so before continuing with this guide.
2. Before starting, ensure you have [Ollama installed](https://ollama.com/download) on your machine.
3. **Run the Frontend**
- Open another terminal
- Navigate to the frontend directory in the AutoGPT project:
```
cd autogpt_platform/frontend/
```
- Start the frontend development server:
```
npm run dev
```
## Setup Steps
### 1. Launch Ollama
Open a new terminal and execute:
```bash
ollama run llama3.2
```
> **Note**: This will download the [llama3.2](https://ollama.com/library/llama3.2) model and start the service. Keep this terminal running in the background.
### 2. Start the Backend
Open a new terminal and navigate to the autogpt_platform directory:
```bash
cd autogpt_platform
docker compose up -d --build
```
### 3. Start the Frontend
Open a new terminal and navigate to the frontend directory:
```bash
cd autogpt_platform/frontend
npm run dev
```
Then visit [http://localhost:3000](http://localhost:3000) to see the frontend running. After registering an account and logging in, navigate to the build page at [http://localhost:3000/build](http://localhost:3000/build)
### 4. Using Ollama with AutoGPT
Now that both Ollama and the AutoGPT platform are running, we can move on to using Ollama with AutoGPT:
1. Add an AI Text Generator block to your workspace (any LLM block will work, but for this example we will use the AI Text Generator block):
![Add AI Text Generator Block](../imgs/ollama/Select-AI-block.png)
2. In the "LLM Model" dropdown, select "llama3.2" (This is the model we downloaded earlier)
![Select Ollama Model](../imgs/ollama/Ollama-Select-Llama32.png)
3. You will see it ask for "Ollama Credentials"; simply press "Enter API key"
![Ollama Credentials](../imgs/ollama/Ollama-Enter-API-key.png)
You will then see "Add new API key for Ollama". In the API key field you can enter anything you want, as Ollama does not require an API key (a single space works fine); for the Name, enter "Ollama", then press "Save & use this API key"
![Ollama Credentials](../imgs/ollama/Ollama-Credentials.png)
4. After that you will see the block again. Add your prompts, then save and run the graph:
![Add Prompt](../imgs/ollama/Ollama-Add-Prompts.png)
That's it! You've successfully set up the AutoGPT platform and made an LLM call to Ollama.
![Ollama Output](../imgs/ollama/Ollama-Output.png)
### Using Ollama on a Remote Server with AutoGPT
To run Ollama on a remote server, make sure the Ollama server is running and accessible from other devices on your network through port 11434 (for example, by starting it with the `OLLAMA_HOST=0.0.0.0` environment variable set so that it binds to all interfaces). Then follow the same steps as above, but add the Ollama server's IP address to the "Ollama Host" field in the block settings, like so:
![Ollama Remote Host](../imgs/ollama/Ollama-Remote-Host.png)
## Troubleshooting
If you encounter any issues, verify that:
- Ollama is properly installed and running
- All terminals remain open during operation
- Docker is running before starting the backend
For common errors:
1. **Connection Refused**: Make sure Ollama is running and the host address is correct (also make sure the port is correct; the default is 11434). A quick way to test connectivity directly is shown below.
2. **Model Not Found**: Try running `ollama pull llama3.2` manually first
3. **Docker Issues**: Ensure Docker daemon is running with `docker ps`
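
To rule the platform out while debugging the connection, you can call Ollama's HTTP API directly. A minimal sketch using only the Python standard library; the host, port, and model name are this guide's defaults, so adjust them to your setup:

```python
# Minimal connectivity check against the Ollama HTTP API.
# Host/port and model name are this guide's defaults, not requirements.
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # use your server's IP for a remote setup

req = urllib.request.Request(
    f"{OLLAMA_HOST}/api/generate",
    data=json.dumps(
        {"model": "llama3.2", "prompt": "Say hi", "stream": False}
    ).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])  # the model's reply
```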
4. **Choose the Ollama Model**
- Add LLMBlock in the UI
- Choose the last option in the model selection dropdown