## Summary
Addresses SENTRY-1051: Shiki warning about multiple highlighter
instances.
## Problem
The `@streamdown/code` package creates a **new Shiki highlighter for
each language** encountered. When users view AI chat responses with code
blocks in multiple languages (JavaScript, Python, JSON, YAML, etc.),
this creates 10+ highlighter instances, triggering Shiki's warning:
> "10 instances have been created. Shiki is supposed to be used as a
singleton, consider refactoring your code to cache your highlighter
instance"
This causes memory bloat and performance degradation.
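For context, the warning fires whenever many highlighter instances are created instead of one being reused — roughly the anti-pattern below (an illustrative sketch, not `@streamdown/code`'s actual internals):

```ts
// Illustrative anti-pattern only — not the actual @streamdown/code internals.
import { createHighlighter, type BundledLanguage } from "shiki";

async function highlightNaively(code: string, lang: BundledLanguage) {
  // A brand-new highlighter per call; after ~10 of these, Shiki logs the singleton warning.
  const highlighter = await createHighlighter({
    themes: ["github-dark"],
    langs: [lang],
  });
  return highlighter.codeToHtml(code, { lang, theme: "github-dark" });
}
```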
## Solution
Introduced a custom code highlighting plugin that properly implements
the singleton pattern:
### New files:
- `src/lib/shiki-highlighter.ts` - Singleton highlighter management
- `src/lib/streamdown-code-plugin.ts` - Drop-in replacement for
`@streamdown/code`
### Key features:
- **Single shared highlighter** - One instance serves all code blocks
- **Preloaded common languages** - JS, TS, Python, JSON, Bash, YAML,
etc.
- **Lazy loading** - Additional languages loaded on demand
- **Result caching** - Avoids re-highlighting identical code blocks
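
As a rough sketch of how those pieces fit together (illustrative only — `getSharedHighlighter` and `highlightCode` are placeholder names, not the PR's actual exports):

```ts
// Illustrative sketch only — not the PR's actual code.
import { createHighlighter, type BundledLanguage, type Highlighter } from "shiki";

const PRELOADED_LANGS: BundledLanguage[] = ["javascript", "typescript", "python", "json", "bash", "yaml"];

let highlighterPromise: Promise<Highlighter> | null = null;

// Singleton: every caller awaits the same (pending or resolved) highlighter.
export function getSharedHighlighter(): Promise<Highlighter> {
  highlighterPromise ??= createHighlighter({
    themes: ["github-light", "github-dark"],
    langs: PRELOADED_LANGS,
  });
  return highlighterPromise;
}

const resultCache = new Map<string, string>();

// Highlight with lazy language loading and result caching.
export async function highlightCode(code: string, lang: BundledLanguage): Promise<string> {
  const cacheKey = `${lang}:${code}`;
  const cached = resultCache.get(cacheKey);
  if (cached) return cached;

  const highlighter = await getSharedHighlighter();
  if (!highlighter.getLoadedLanguages().includes(lang)) {
    await highlighter.loadLanguage(lang); // lazy-load uncommon languages on demand
  }
  const html = highlighter.codeToHtml(code, { lang, theme: "github-dark" });
  resultCache.set(cacheKey, html);
  return html;
}
```

The plugin itself works with tokens and callback-based notifications (see the sequence diagram below) rather than returning HTML directly, but the singleton-plus-cache shape is the same idea.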
### Changes:
- Added `shiki` as direct dependency
- Updated `message.tsx` to use the new plugin
## Testing
- [ ] Verify code blocks render correctly in AI chat
- [ ] Confirm no Shiki singleton warnings in console
- [ ] Test with multiple languages in same conversation
## Related
- Linear: SENTRY-1051
- Sentry: Multiple Shiki instances warning
<!-- greptile_comment -->
<details><summary><h3>Greptile Summary</h3></summary>
Replaced `@streamdown/code` with a custom singleton-based Shiki
highlighter implementation to resolve memory bloat from creating
multiple highlighter instances per language. The new implementation
creates a single shared highlighter with preloaded common languages (JS,
TS, Python, JSON, etc.) and lazy-loads additional languages on demand.
Results are cached to avoid re-highlighting identical code blocks.
**Key changes:**
- Added `shiki` v3.21.0 as a direct dependency
- Created `shiki-highlighter.ts` with singleton pattern and language
management utilities
- Created `streamdown-code-plugin.ts` as a drop-in replacement for
`@streamdown/code`
- Updated `message.tsx` to import from the new plugin instead of
`@streamdown/code`
The implementation follows React best practices with async highlighting
and callback-based notifications. The cache key uses code length +
prefix/suffix for efficient lookups on large code blocks.
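
For illustration, a key built that way might look roughly like this (hypothetical sketch, not the PR's exact code):

```ts
// Hypothetical sketch of the cache-key idea described above.
function makeCacheKey(code: string, lang: string): string {
  // Length plus a short prefix/suffix avoids hashing the full text of large blocks,
  // at the small risk of false hits for blocks that share all three.
  return `${lang}:${code.length}:${code.slice(0, 64)}:${code.slice(-64)}`;
}
```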
</details>
<details><summary><h3>Confidence Score: 4/5</h3></summary>
- Safe to merge with minor considerations for edge cases
- The implementation is solid with proper singleton pattern, caching,
and async handling. The code is well-structured and addresses the stated
problem. However, there's a subtle potential race condition in the
callback handling where multiple concurrent requests for the same cache
key could trigger duplicate highlight operations before the first
completes. The cache key generation using prefix/suffix could
theoretically cause false cache hits for large files with identical
prefixes and suffixes. Despite these edge cases, the implementation
should work correctly for the vast majority of use cases.
- No files require special attention
</details>
<details><summary><h3>Sequence Diagram</h3></summary>
```mermaid
sequenceDiagram
participant UI as Streamdown Component
participant Plugin as Custom Code Plugin
participant Cache as Token Cache
participant Singleton as Shiki Highlighter (Singleton)
participant Callbacks as Pending Callbacks
UI->>Plugin: highlight(code, lang)
Plugin->>Cache: Check cache key
alt Cache hit
Cache-->>Plugin: Return cached result
Plugin-->>UI: Return highlighted tokens
else Cache miss
Plugin->>Callbacks: Register callback
Plugin->>Singleton: Get highlighter instance
alt First call
Singleton->>Singleton: Create highlighter with preloaded languages
end
Singleton-->>Plugin: Return highlighter
alt Language not loaded
Plugin->>Singleton: Load language dynamically
end
Plugin->>Singleton: codeToTokens(code, lang, themes)
Singleton-->>Plugin: Return tokens
Plugin->>Cache: Store result
Plugin->>Callbacks: Notify all waiting callbacks
Callbacks-->>UI: Async callback with result
end
```
</details>
<sub>Last reviewed commit: 96c793b</sub>
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
# AutoGPT: Build, Deploy, and Run AI Agents
Deutsch | Español | français | 日本語 | 한국어 | Português | Русский | 中文
AutoGPT is a powerful platform that allows you to create, deploy, and manage continuous AI agents that automate complex workflows.
## Hosting Options
- Download to self-host (Free!)
- Join the Waitlist for the cloud-hosted beta (Closed Beta - Public release Coming Soon!)
## How to Self-Host the AutoGPT Platform
> **Note:** Setting up and hosting the AutoGPT Platform yourself is a technical process. If you would rather have something that just works, we recommend joining the waitlist for the cloud-hosted beta.
### System Requirements
Before proceeding with the installation, ensure your system meets the following requirements:
#### Hardware Requirements
- CPU: 4+ cores recommended
- RAM: Minimum 8GB, 16GB recommended
- Storage: At least 10GB of free space
#### Software Requirements
- Operating Systems:
  - Linux (Ubuntu 20.04 or newer recommended)
  - macOS (10.15 or newer)
  - Windows 10/11 with WSL2
- Required Software (with minimum versions):
  - Docker Engine (20.10.0 or newer)
  - Docker Compose (2.0.0 or newer)
  - Git (2.30 or newer)
  - Node.js (16.x or newer)
  - npm (8.x or newer)
  - VSCode (1.60 or newer) or any modern code editor
#### Network Requirements
- Stable internet connection
- Access to required ports (will be configured in Docker)
- Ability to make outbound HTTPS connections
### Updated Setup Instructions
We've moved to a fully maintained and regularly updated documentation site.
👉 Follow the official self-hosting guide here
This tutorial assumes you have Docker, VSCode, git and npm installed.
### ⚡ Quick Setup with One-Line Script (Recommended for Local Hosting)
Skip the manual steps and get started in minutes using our automatic setup script.
For macOS/Linux:

```bash
curl -fsSL https://setup.agpt.co/install.sh -o install.sh && bash install.sh
```

For Windows (PowerShell):

```powershell
powershell -c "iwr https://setup.agpt.co/install.bat -o install.bat; ./install.bat"
```
This will install dependencies, configure Docker, and launch your local instance — all in one go.
## 🧱 AutoGPT Frontend
The AutoGPT frontend is where users interact with our powerful AI automation platform. It offers multiple ways to engage with and leverage our AI agents. This is the interface where you'll bring your AI automation ideas to life:
- **Agent Builder:** For those who want to customize, our intuitive, low-code interface allows you to design and configure your own AI agents.
- **Workflow Management:** Build, modify, and optimize your automation workflows with ease. You build your agent by connecting blocks, where each block performs a single action.
- **Deployment Controls:** Manage the lifecycle of your agents, from testing to production.
- **Ready-to-Use Agents:** Don't want to build? Simply select from our library of pre-configured agents and put them to work immediately.
- **Agent Interaction:** Whether you've built your own or are using pre-configured agents, easily run and interact with them through our user-friendly interface.
- **Monitoring and Analytics:** Keep track of your agents' performance and gain insights to continually improve your automation processes.
Read this guide to learn how to build your own custom blocks.
## 💽 AutoGPT Server
The AutoGPT Server is the powerhouse of our platform. This is where your agents run. Once deployed, agents can be triggered by external sources and can operate continuously. It contains all the essential components that make AutoGPT run smoothly.
- **Source Code:** The core logic that drives our agents and automation processes.
- **Infrastructure:** Robust systems that ensure reliable and scalable performance.
- **Marketplace:** A comprehensive marketplace where you can find and deploy a wide range of pre-built agents.
## 🐙 Example Agents
Here are two examples of what you can do with AutoGPT:
1. **Generate Viral Videos from Trending Topics**
   - This agent reads topics on Reddit.
   - It identifies trending topics.
   - It then automatically creates a short-form video based on the content.
2. **Identify Top Quotes from Videos for Social Media**
   - This agent subscribes to your YouTube channel.
   - When you post a new video, it transcribes it.
   - It uses AI to identify the most impactful quotes to generate a summary.
   - Then, it writes a post to automatically publish to your social media.
These examples show just a glimpse of what you can achieve with AutoGPT! You can create customized workflows to build agents for any use case.
## License Overview
### 🛡️ Polyform Shield License
All code and content within the autogpt_platform folder is licensed under the Polyform Shield License. This new project is our in-development platform for building, deploying, and managing agents.
Read more about this effort
### 🦉 MIT License
All other portions of the AutoGPT repository (i.e., everything outside the autogpt_platform folder) are licensed under the MIT License. This includes the original stand-alone AutoGPT Agent, along with projects such as Forge, agbenchmark, and the AutoGPT Classic GUI.
We also publish additional work under the MIT License in other repositories, such as GravitasML, which is developed for and used in the AutoGPT Platform. See also our MIT-licensed Code Ability project.
## Mission
Our mission is to provide the tools so that you can focus on what matters:
- 🏗️ Building - Lay the foundation for something amazing.
- 🧪 Testing - Fine-tune your agent to perfection.
- 🤝 Delegating - Let AI work for you, and have your ideas come to life.
Be part of the revolution! AutoGPT is here to stay, at the forefront of AI innovation.
📖 Documentation | 🚀 Contributing
## 🤖 AutoGPT Classic
Below is information about the classic version of AutoGPT.
🛠️ Build your own Agent - Quickstart
### 🏗️ Forge
Forge your own agent! – Forge is a ready-to-go toolkit to build your own agent application. It handles most of the boilerplate code, letting you channel all your creativity into the things that set your agent apart. All tutorials are located here. Components from forge can also be used individually to speed up development and reduce boilerplate in your agent project.
🚀 Getting Started with Forge – This guide will walk you through the process of creating your own agent and using the benchmark and user interface.
📘 Learn More about Forge
### 🎯 Benchmark
Measure your agent's performance! The agbenchmark can be used with any agent that supports the agent protocol, and the integration with the project's CLI makes it even easier to use with AutoGPT and forge-based agents. The benchmark offers a stringent testing environment. Our framework allows for autonomous, objective performance evaluations, ensuring your agents are primed for real-world action.
📦 agbenchmark on PyPI | 📘 Learn More about the Benchmark
### 💻 UI
Makes agents easy to use! The frontend gives you a user-friendly interface to control and monitor your agents. It connects to agents through the agent protocol, ensuring compatibility with many agents from both inside and outside of our ecosystem.
The frontend works out-of-the-box with all agents in the repo. Just use the CLI to run your agent of choice!
📘 Learn More about the Frontend
### ⌨️ CLI
To make it as easy as possible to use all of the tools offered by the repository, a CLI is included at the root of the repo:

```shell
$ ./run
Usage: cli.py [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  agent      Commands to create, start and stop agents
  benchmark  Commands to start the benchmark and list tests and categories
  setup      Installs dependencies needed for your system.
```

Just clone the repo, install dependencies with `./run setup`, and you should be good to go!
## 🤔 Questions? Problems? Suggestions?
Get help - Discord 💬
To report a bug or request a feature, create a GitHub Issue. Please ensure someone else hasn't created an issue for the same topic.
## 🤝 Sister projects
### 🔄 Agent Protocol
To maintain a uniform standard and ensure seamless compatibility with many current and future applications, AutoGPT employs the agent protocol standard by the AI Engineer Foundation. This standardizes the communication pathways from your agent to the frontend and benchmark.