Add Llama API Support (#9899)

This PR adds Llama API support.

### Changes 🏗️
We add both backend and frontend support.

**Backend**:
- Add the `llama_api` provider
- Include the models supported by Llama API along with their configs
- Support Llama API in `llm_call`
- Add a credential store and a `llama_api_key` field in Settings
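The model-config and provider routing added here can be sketched as follows. This is a minimal illustration, not AutoGPT's actual schema: the `ModelConfig` fields and the token limit are assumptions; only the `llama_api` provider name and the model names come from this PR.

```python
from dataclasses import dataclass

# Hypothetical per-model config shape; field names and the 4096 limit are
# assumptions for illustration, not AutoGPT's actual schema.
@dataclass(frozen=True)
class ModelConfig:
    provider: str
    max_output_tokens: int

# Two of the Llama API models added in this PR.
LLAMA_API_MODELS = {
    "Llama-4-Scout-17B-16E-Instruct-FP8": ModelConfig("llama_api", 4096),
    "Llama-3.3-8B-Instruct": ModelConfig("llama_api", 4096),
}

def provider_for(model: str) -> str:
    """Route an llm_call to the right provider based on the model name."""
    return LLAMA_API_MODELS[model].provider
```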

**Frontend**:
- Add Llama API as a provider type
- Add a credentials input and provider for Llama API
 

### Checklist 📋

#### For code changes:
- [X] I have clearly listed my changes in the PR description
- [X] I have tested my changes according to the test plan:

**Test Plan**:

<details>
  <summary>AI Text Generator</summary>
  
  - [X] Start up the backend and frontend:
     - Start backend with Docker services: `docker compose up -d --build`
     - Start frontend: `npm install && npm run dev`
     - Visit http://localhost:3000/ and test inference and structured outputs
  - [X] Create from scratch 
  - [X] Request Llama API credentials
  
<img width="2015" alt="image"
src="https://github.com/user-attachments/assets/3dede402-3718-4441-9327-ecab25c63ebf"
/>

  - [X] Execute an agent with at least 3 blocks
 
<img width="2026" alt="image"
src="https://github.com/user-attachments/assets/59d6d56b-2ccc-4af5-b511-4af312c3f7f8"
/>

  - [X] Confirm it executes correctly
</details>

<details>
  <summary>Structured Response Generator</summary>
  
  - [X] Start up the backend and frontend:
     - Start backend with Docker services: `docker compose up -d --build`
     - Start frontend: `npm install && npm run dev`
     - Visit http://localhost:3000/ and test inference and structured outputs
  - [X] Create from scratch 
  - [X] Execute an agent 
<img width="2023" alt="image"
src="https://github.com/user-attachments/assets/d1107638-bf1b-45b1-a296-1e0fac29525b"
/>

  - [X] Confirm it executes correctly
</details>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Commit 2dc038b6c0 (parent cd6deb87c3) by Chirag Modi, committed via GitHub, 2025-05-14 12:45:40 -07:00.
11 changed files with 113 additions and 4 deletions.


@@ -156,4 +156,18 @@ The block formulates a prompt based on the given focus or source data, sends it
| Error | Any error message if the process fails |
### Possible use case
Automatically generating a list of key points or action items from a long meeting transcript or summarizing the main topics discussed in a series of documents.
# Providers
There are several providers that AutoGPT users can use to run inference with LLM models.
## Llama API
Llama API is a Meta-hosted API service that helps you integrate Llama models quickly and efficiently. Using its OpenAI-compatible endpoints, you can easily access the power of Llama models without complex setup or configuration!
Join the [waitlist](https://llama.developer.meta.com?utm_source=partner-autogpt&utm_medium=readme) to get access!
Try the Llama API provider by selecting any of the following model names in the AI blocks mentioned above:
- Llama-4-Scout-17B-16E-Instruct-FP8
- Llama-4-Maverick-17B-128E-Instruct-FP8
- Llama-3.3-8B-Instruct
- Llama-3-70B-Instruct
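Because the endpoints are OpenAI-compatible, a request can be built exactly like an OpenAI chat completion, just pointed at Llama API. The sketch below uses only the standard library; the base URL is an assumption for illustration (check the Llama API docs for the exact value), and the model name is one of those listed above.

```python
import json
import urllib.request

# Assumed OpenAI-compatible base URL; confirm against the Llama API docs.
LLAMA_API_BASE = "https://api.llama.com/compat/v1"

def chat_completion_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request against Llama API."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{LLAMA_API_BASE}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Sending it with urllib.request.urlopen(...) returns the usual
# OpenAI-shaped JSON, with the reply at choices[0].message.content.
```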