Mirror of https://github.com/simstudioai/sim.git (synced 2026-01-10 15:38:00 -05:00)
Compare commits
16 Commits
- 85cdca28f1
- 75963eb851
- ed7d7a7101
- d5c13b72e9
- e164e32c5b
- 7461ddf8f7
- f94258ef83
- 05e689bc60
- a3a5bf1d76
- e43e78fb48
- 1b0d304a87
- 94368eb1c2
- 062e2a2c40
- 746b87743a
- 9f2ff7e9cd
- be65bf795f
10 .github/CONTRIBUTING.md (vendored)
@@ -164,10 +164,14 @@ Access the application at [http://localhost:3000/](http://localhost:3000/)

 To use local models with Sim:

-1. Pull models using our helper script:
+1. Install Ollama and pull models:

 ```bash
-./apps/sim/scripts/ollama_docker.sh pull <model_name>
+# Install Ollama (if not already installed)
+curl -fsSL https://ollama.ai/install.sh | sh
+
+# Pull a model (e.g., gemma3:4b)
+ollama pull gemma3:4b
 ```

 2. Start Sim with local model support:

@@ -533,7 +537,7 @@ This visibility system ensures clean user interfaces while maintaining full flex

 ### Guidelines & Best Practices

-- **Code Style:** Follow the project's ESLint and Prettier configurations. Use meaningful variable names and small, focused functions.
+- **Code Style:** Follow the project's Biome configurations. Use meaningful variable names and small, focused functions.
 - **Documentation:** Clearly document the purpose, inputs, outputs, and any special behavior for your block/tool.
 - **Error Handling:** Implement robust error handling and provide user-friendly error messages.
 - **Parameter Visibility:** Always specify the appropriate visibility level for each parameter to ensure proper UI behavior and LLM integration.
24 README.md
@@ -59,27 +59,21 @@ docker compose -f docker-compose.prod.yml up -d

 Access the application at [http://localhost:3000/](http://localhost:3000/)

-#### Using Local Models
+#### Using Local Models with Ollama

-To use local models with Sim:
-
-1. Pull models using our helper script:
+Run Sim with local AI models using [Ollama](https://ollama.ai) - no external APIs required:

 ```bash
-./apps/sim/scripts/ollama_docker.sh pull <model_name>
+# Start with GPU support (automatically downloads gemma3:4b model)
+docker compose -f docker-compose.ollama.yml --profile setup up -d
+
+# For CPU-only systems:
+docker compose -f docker-compose.ollama.yml --profile cpu --profile setup up -d
 ```

-2. Start Sim with local model support:
+Wait for the model to download, then visit [http://localhost:3000](http://localhost:3000). Add more models with:

 ```bash
-# With NVIDIA GPU support
-docker compose --profile local-gpu -f docker-compose.ollama.yml up -d
-
-# Without GPU (CPU only)
-docker compose --profile local-cpu -f docker-compose.ollama.yml up -d
-
-# If hosting on a server, set OLLAMA_URL in docker-compose.prod.yml to the server's public IP (e.g. OLLAMA_URL=http://1.1.1.1:11434), then start again:
-docker compose -f docker-compose.prod.yml up -d
+docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.1:8b
 ```

 ### Option 3: Dev Containers
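Not part of the diff: a minimal sketch, assuming the default Ollama port (11434) and its standard `/api/tags` endpoint, of how you might verify from code that a model has finished downloading before pointing Sim at it:

```ts
// Hypothetical helper: check whether the local Ollama server already has a model.
// Assumes Node 18+ (global fetch) and Ollama's standard REST API.
async function hasModel(name: string): Promise<boolean> {
  const res = await fetch('http://localhost:11434/api/tags')
  if (!res.ok) throw new Error(`Ollama not reachable: ${res.status}`)
  const { models } = (await res.json()) as { models: { name: string }[] }
  return models.some((m) => m.name.startsWith(name))
}

hasModel('gemma3:4b').then((ok) => console.log(ok ? 'model ready' : 'still downloading'))
```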
@@ -61,7 +61,7 @@ The user prompt represents the primary input data for inference processing. This

 The Agent block supports multiple LLM providers through a unified inference interface. Available models include:

-**OpenAI Models**: GPT-4o, o1, o3, o4-mini, gpt-4.1 (API-based inference)
+**OpenAI Models**: GPT-5, GPT-4o, o1, o3, o4-mini, gpt-4.1 (API-based inference)
 **Anthropic Models**: Claude 3.7 Sonnet (API-based inference)
 **Google Models**: Gemini 2.5 Pro, Gemini 2.0 Flash (API-based inference)
 **Alternative Providers**: Groq, Cerebras, xAI, DeepSeek (API-based inference)
@@ -84,7 +84,7 @@ Different block types produce different output structures. Here's what you can e

 ```json
 {
   "content": "Evaluation summary",
-  "model": "gpt-4o",
+  "model": "gpt-5",
   "tokens": {
     "prompt": 120,
     "completion": 85,
@@ -50,7 +50,7 @@ The File Parser tool is particularly useful for scenarios where your agents need

 ## Usage Instructions

-Upload and extract contents from structured file formats including PDFs, CSV spreadsheets, and Word documents (DOCX). Upload files directly. Specialized parsers extract text and metadata from each format. You can upload multiple files at once and access them individually or as a combined document.
+Upload and extract contents from structured file formats including PDFs, CSV spreadsheets, and Word documents (DOCX). You can either provide a URL to a file or upload files directly. Specialized parsers extract text and metadata from each format. You can upload multiple files at once and access them individually or as a combined document.
@@ -38,6 +38,7 @@ With Hunter.io, you can:

In Sim, the Hunter.io integration enables your agents to programmatically search for and verify email addresses, discover companies, and enrich contact data using Hunter.io's API. This allows you to automate lead generation, contact enrichment, and email verification directly within your workflows. Your agents can leverage Hunter.io's tools to streamline outreach, keep your CRM up-to-date, and power intelligent automation scenarios for sales, recruiting, and more.
{/* MANUAL-CONTENT-END */}

## Usage Instructions

Search for email addresses, verify their deliverability, discover companies, and enrich contact data using Hunter.io's powerful email finding capabilities.
@@ -64,7 +64,7 @@ Search for similar content in a knowledge base using vector similarity

 | Parameter | Type | Required | Description |
 | --------- | ---- | -------- | ----------- |
 | `knowledgeBaseId` | string | Yes | ID of the knowledge base to search in |
-| `query` | string | Yes | Search query text |
+| `query` | string | No | Search query text \(optional when using tag filters\) |
 | `topK` | number | No | Number of most similar results to return \(1-100\) |
 | `tagFilters` | any | No | Array of tag filters with tagName and tagValue properties |
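As a hedged illustration of the now-optional `query`, here is what a tag-filter-only search input might look like. The IDs and tag names are hypothetical, and the diff specifies nothing about `tagFilters` beyond the `tagName`/`tagValue` properties:

```ts
// Hypothetical params for a knowledge-base search: no query text,
// results narrowed purely by tag filters, capped at 10 matches.
const searchParams = {
  knowledgeBaseId: 'kb_123', // placeholder ID
  topK: 10,
  tagFilters: [
    { tagName: 'department', tagValue: 'sales' },
    { tagName: 'year', tagValue: '2024' },
  ],
}
```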
@@ -29,9 +29,11 @@
   "mem0",
   "memory",
   "microsoft_excel",
+  "microsoft_planner",
   "microsoft_teams",
   "mistral_parse",
   "notion",
+  "onedrive",
   "openai",
   "outlook",
   "perplexity",
@@ -41,6 +43,7 @@
   "s3",
   "schedule",
   "serper",
+  "sharepoint",
   "slack",
   "stagehand",
   "stagehand_agent",
178 apps/docs/content/docs/tools/microsoft_planner.mdx (new file)

@@ -0,0 +1,178 @@
---
title: Microsoft Planner
description: Read and create tasks in Microsoft Planner
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="microsoft_planner"
  color="#E0E0E0"
  icon={true}
  iconSvg={`<svg className="block-icon" fill='currentColor' viewBox='-1 -1 27 27' xmlns='http://www.w3.org/2000/svg'>
    <defs>
      <linearGradient
        id='paint0_linear_3984_11038'
        x1='6.38724'
        y1='3.74167'
        x2='2.15779'
        y2='12.777'
        gradientUnits='userSpaceOnUse'
      >
        <stop stopColor='#8752E0' />
        <stop offset='1' stopColor='#541278' />
      </linearGradient>
      <linearGradient
        id='paint1_linear_3984_11038'
        x1='8.38032'
        y1='11.0696'
        x2='4.94062'
        y2='7.69244'
        gradientUnits='userSpaceOnUse'
      >
        <stop offset='0.12172' stopColor='#3D0D59' />
        <stop offset='1' stopColor='#7034B0' stopOpacity='0' />
      </linearGradient>
      <linearGradient
        id='paint2_linear_3984_11038'
        x1='18.3701'
        y1='-3.33385e-05'
        x2='9.85717'
        y2='20.4192'
        gradientUnits='userSpaceOnUse'
      >
        <stop stopColor='#DB45E0' />
        <stop offset='1' stopColor='#6C0F71' />
      </linearGradient>
      <linearGradient
        id='paint3_linear_3984_11038'
        x1='18.3701'
        y1='-3.33385e-05'
        x2='9.85717'
        y2='20.4192'
        gradientUnits='userSpaceOnUse'
      >
        <stop stopColor='#DB45E0' />
        <stop offset='0.677403' stopColor='#A829AE' />
        <stop offset='1' stopColor='#8F28B3' />
      </linearGradient>
      <linearGradient
        id='paint4_linear_3984_11038'
        x1='18.0002'
        y1='7.49958'
        x2='14.0004'
        y2='23.9988'
        gradientUnits='userSpaceOnUse'
      >
        <stop stopColor='#3DCBFF' />
        <stop offset='1' stopColor='#00479E' />
      </linearGradient>
      <linearGradient
        id='paint5_linear_3984_11038'
        x1='18.2164'
        y1='7.92626'
        x2='10.5237'
        y2='22.9363'
        gradientUnits='userSpaceOnUse'
      >
        <stop stopColor='#3DCBFF' />
        <stop offset='1' stopColor='#4A40D4' />
      </linearGradient>
    </defs>
    <path
      d='M8.25809 15.7412C7.22488 16.7744 5.54971 16.7744 4.5165 15.7412L0.774909 11.9996C-0.258303 10.9664 -0.258303 9.29129 0.774908 8.25809L4.5165 4.51655C5.54971 3.48335 7.22488 3.48335 8.25809 4.51655L11.9997 8.2581C13.0329 9.29129 13.0329 10.9664 11.9997 11.9996L8.25809 15.7412Z'
      fill='url(#paint0_linear_3984_11038)'
    />
    <path
      d='M8.25809 15.7412C7.22488 16.7744 5.54971 16.7744 4.5165 15.7412L0.774909 11.9996C-0.258303 10.9664 -0.258303 9.29129 0.774908 8.25809L4.5165 4.51655C5.54971 3.48335 7.22488 3.48335 8.25809 4.51655L11.9997 8.2581C13.0329 9.29129 13.0329 10.9664 11.9997 11.9996L8.25809 15.7412Z'
      fill='url(#paint1_linear_3984_11038)'
    />
    <path
      d='M0.774857 11.9999C1.80809 13.0331 3.48331 13.0331 4.51655 11.9999L15.7417 0.774926C16.7749 -0.258304 18.4501 -0.258309 19.4834 0.774914L23.225 4.51655C24.2583 5.54977 24.2583 7.22496 23.225 8.25819L11.9999 19.4832C10.9667 20.5164 9.29146 20.5164 8.25822 19.4832L0.774857 11.9999Z'
      fill='url(#paint2_linear_3984_11038)'
    />
    <path
      d='M0.774857 11.9999C1.80809 13.0331 3.48331 13.0331 4.51655 11.9999L15.7417 0.774926C16.7749 -0.258304 18.4501 -0.258309 19.4834 0.774914L23.225 4.51655C24.2583 5.54977 24.2583 7.22496 23.225 8.25819L11.9999 19.4832C10.9667 20.5164 9.29146 20.5164 8.25822 19.4832L0.774857 11.9999Z'
      fill='url(#paint3_linear_3984_11038)'
    />
    <path
      d='M4.51642 15.7413C5.54966 16.7746 7.22487 16.7746 8.25812 15.7413L15.7415 8.25803C16.7748 7.2248 18.45 7.2248 19.4832 8.25803L23.2249 11.9997C24.2582 13.0329 24.2582 14.7081 23.2249 15.7413L15.7415 23.2246C14.7083 24.2579 13.033 24.2579 11.9998 23.2246L4.51642 15.7413Z'
      fill='url(#paint4_linear_3984_11038)'
    />
    <path
      d='M4.51642 15.7413C5.54966 16.7746 7.22487 16.7746 8.25812 15.7413L15.7415 8.25803C16.7748 7.2248 18.45 7.2248 19.4832 8.25803L23.2249 11.9997C24.2582 13.0329 24.2582 14.7081 23.2249 15.7413L15.7415 23.2246C14.7083 24.2579 13.033 24.2579 11.9998 23.2246L4.51642 15.7413Z'
      fill='url(#paint5_linear_3984_11038)'
    />
  </svg>`}
/>

{/* MANUAL-CONTENT-START:intro */}
[Microsoft Planner](https://www.microsoft.com/en-us/microsoft-365/planner) is a task management tool that helps teams organize work visually using boards, tasks, and buckets. Integrated with Microsoft 365, it offers a simple, intuitive way to manage team projects, assign responsibilities, and track progress.

With Microsoft Planner, you can:

- **Create and manage tasks**: Add new tasks with due dates, priorities, and assigned users
- **Organize with buckets**: Group tasks by phase, status, or category to reflect your team's workflow
- **Visualize project status**: Use boards, charts, and filters to monitor workload and track progress
- **Stay integrated with Microsoft 365**: Seamlessly connect tasks with Teams, Outlook, and other Microsoft tools

In Sim, the Microsoft Planner integration allows your agents to programmatically create, read, and manage tasks as part of their workflows. Agents can generate new tasks based on incoming requests, retrieve task details to drive decisions, and track status across projects - all without human intervention. Whether you're building workflows for client onboarding, internal project tracking, or follow-up task generation, integrating Microsoft Planner with Sim gives your agents a structured way to coordinate work, automate task creation, and keep teams aligned.
{/* MANUAL-CONTENT-END */}

## Usage Instructions

Integrate Microsoft Planner functionality to manage tasks. Read all user tasks, tasks from a specific plan, or individual tasks, or create new tasks with properties such as title, description, due date, and assignees, using OAuth authentication.

## Tools

### `microsoft_planner_read_task`

Read tasks from Microsoft Planner - get all user tasks or all tasks from a specific plan

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Microsoft Planner API |
| `planId` | string | No | The ID of the plan to get tasks from \(if not provided, gets all user tasks\) |
| `taskId` | string | No | The ID of the task to get |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `task` | json | The Microsoft Planner task object, including details such as id, title, description, status, due date, and assignees. |
| `metadata` | json | Additional metadata about the operation, such as timestamps, request status, or other relevant information. |

### `microsoft_planner_create_task`

Create a new task in Microsoft Planner

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Microsoft Planner API |
| `planId` | string | Yes | The ID of the plan where the task will be created |
| `title` | string | Yes | The title of the task |
| `description` | string | No | The description of the task |
| `dueDateTime` | string | No | The due date and time for the task \(ISO 8601 format\) |
| `assigneeUserId` | string | No | The user ID to assign the task to |
| `bucketId` | string | No | The bucket ID to place the task in |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `task` | json | The Microsoft Planner task object, including details such as id, title, description, status, due date, and assignees. |
| `metadata` | json | Additional metadata about the operation, such as timestamps, request status, or other relevant information. |

## Notes

- Category: `tools`
- Type: `microsoft_planner`
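A hedged sketch of an input object for `microsoft_planner_create_task`, built strictly from the parameter table above; all IDs and the token are placeholders:

```ts
// Hypothetical create-task input; only accessToken, planId, and title are required.
const createTaskParams = {
  accessToken: '<oauth-access-token>', // obtained via the OAuth flow
  planId: '<plan-id>',
  title: 'Follow up with new client',
  description: 'Send onboarding documents and schedule the kickoff call',
  dueDateTime: '2025-01-31T17:00:00Z', // ISO 8601, per the parameter description
  assigneeUserId: '<user-id>', // optional
  bucketId: '<bucket-id>', // optional
}
```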
@@ -79,7 +79,7 @@ The Mistral Parse tool is particularly useful for scenarios where your agents need

 ## Usage Instructions

-Extract text and structure from PDF documents using Mistral's OCR API. Configure processing options and get the content in your preferred format. For URLs, they must be publicly accessible and point to a valid PDF file. Note: Google Drive, Dropbox, and other cloud storage links are not supported; use a direct download URL from a web server instead.
+Extract text and structure from PDF documents using Mistral's OCR API. Either enter a URL to a PDF document or upload a PDF file directly. Configure processing options and get the content in your preferred format. For URLs, they must be publicly accessible and point to a valid PDF file. Note: Google Drive, Dropbox, and other cloud storage links are not supported; use a direct download URL from a web server instead.
127 apps/docs/content/docs/tools/onedrive.mdx (new file)

@@ -0,0 +1,127 @@
---
title: OneDrive
description: Create, upload, and list files
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="onedrive"
  color="#E0E0E0"
  icon={true}
  iconSvg={`<svg className="block-icon" fill='currentColor' viewBox='0 0 32 32' xmlns='http://www.w3.org/2000/svg'>
    <g>
      <path
        d='M12.20245,11.19292l.00031-.0011,6.71765,4.02379,4.00293-1.68451.00018.00068A6.4768,6.4768,0,0,1,25.5,13c.14764,0,.29358.0067.43878.01639a10.00075,10.00075,0,0,0-18.041-3.01381C7.932,10.00215,7.9657,10,8,10A7.96073,7.96073,0,0,1,12.20245,11.19292Z'
        fill='#0364b8'
      />
      <path
        d='M12.20276,11.19182l-.00031.0011A7.96073,7.96073,0,0,0,8,10c-.0343,0-.06805.00215-.10223.00258A7.99676,7.99676,0,0,0,1.43732,22.57277l5.924-2.49292,2.63342-1.10819,5.86353-2.46746,3.06213-1.28859Z'
        fill='#0078d4'
      />
      <path
        d='M25.93878,13.01639C25.79358,13.0067,25.64764,13,25.5,13a6.4768,6.4768,0,0,0-2.57648.53178l-.00018-.00068-4.00293,1.68451,1.16077.69528L23.88611,18.19l1.66009.99438,5.67633,3.40007a6.5002,6.5002,0,0,0-5.28375-9.56805Z'
        fill='#1490df'
      />
      <path
        d='M25.5462,19.18437,23.88611,18.19l-3.80493-2.2791-1.16077-.69528L15.85828,16.5042,9.99475,18.97166,7.36133,20.07985l-5.924,2.49292A7.98889,7.98889,0,0,0,8,26H25.5a6.49837,6.49837,0,0,0,5.72253-3.41556Z'
        fill='#28a8ea'
      />
    </g>
  </svg>`}
/>

{/* MANUAL-CONTENT-START:intro */}
[OneDrive](https://onedrive.live.com) is Microsoft's cloud storage and file synchronization service that allows users to securely store, access, and share files across devices. Integrated deeply into the Microsoft 365 ecosystem, OneDrive supports seamless collaboration, version control, and real-time access to content across teams and organizations.

Learn how to integrate the OneDrive tool in Sim to automatically pull, manage, and organize your cloud files within your workflows. This tutorial walks you through connecting OneDrive, setting up file access, and using stored content to power automation. Ideal for syncing essential documents and media with your agents in real time.

With OneDrive, you can:

- **Store files securely in the cloud**: Upload and access documents, images, and other files from any device
- **Organize your content**: Create structured folders and manage file versions with ease
- **Collaborate in real time**: Share files, edit them simultaneously with others, and track changes
- **Access across devices**: Use OneDrive from desktop, mobile, and web platforms
- **Integrate with Microsoft 365**: Work seamlessly with Word, Excel, PowerPoint, and Teams
- **Control permissions**: Share files and folders with custom access settings and expiration controls

In Sim, the OneDrive integration enables your agents to directly interact with your cloud storage. Agents can upload new files to specific folders, retrieve and read existing files, and list folder contents to dynamically organize and access information. This integration allows your agents to incorporate file operations into intelligent workflows - automating document intake, content analysis, and structured storage management. By connecting Sim with OneDrive, you empower your agents to manage and use cloud documents programmatically, eliminating manual steps and enhancing automation with secure, real-time file access.
{/* MANUAL-CONTENT-END */}

## Usage Instructions

Integrate OneDrive functionality to manage files and folders. Upload new files, create new folders, and list contents of folders using OAuth authentication. Supports file operations with custom MIME types and folder organization.

## Tools

### `onedrive_upload`

Upload a file to OneDrive

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the OneDrive API |
| `fileName` | string | Yes | The name of the file to upload |
| `content` | string | Yes | The content of the file to upload |
| `folderSelector` | string | No | Select the folder to upload the file to |
| `folderId` | string | No | The ID of the folder to upload the file to \(internal use\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | json | The OneDrive file object, including details such as id, name, size, and more. |
| `files` | json | An array of OneDrive file objects, each containing details such as id, name, size, and more. |

### `onedrive_create_folder`

Create a new folder in OneDrive

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the OneDrive API |
| `folderName` | string | Yes | Name of the folder to create |
| `folderSelector` | string | No | Select the parent folder to create the folder in |
| `folderId` | string | No | ID of the parent folder \(internal use\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | json | The OneDrive file object, including details such as id, name, size, and more. |
| `files` | json | An array of OneDrive file objects, each containing details such as id, name, size, and more. |

### `onedrive_list`

List files and folders in OneDrive

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the OneDrive API |
| `folderSelector` | string | No | Select the folder to list files from |
| `folderId` | string | No | The ID of the folder to list files from \(internal use\) |
| `query` | string | No | A query to filter the files |
| `pageSize` | number | No | The number of files to return |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | json | The OneDrive file object, including details such as id, name, size, and more. |
| `files` | json | An array of OneDrive file objects, each containing details such as id, name, size, and more. |

## Notes

- Category: `tools`
- Type: `onedrive`
@@ -158,6 +158,10 @@ Send emails using Outlook

 | `to` | string | Yes | Recipient email address |
 | `subject` | string | Yes | Email subject |
 | `body` | string | Yes | Email body content |
+| `replyToMessageId` | string | No | Message ID to reply to \(for threading\) |
+| `conversationId` | string | No | Conversation ID for threading |
 | `cc` | string | No | CC recipients \(comma-separated\) |
 | `bcc` | string | No | BCC recipients \(comma-separated\) |

 #### Output
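A hedged example of a threaded reply using the new parameters, assembled only from the table above; IDs and addresses are placeholders:

```ts
// Hypothetical send input: replyToMessageId/conversationId enable threading,
// cc/bcc take comma-separated address lists.
const sendParams = {
  accessToken: '<oauth-access-token>',
  to: 'client@example.com',
  subject: 'Re: Project kickoff',
  body: 'Thanks - confirming Thursday at 10am.',
  replyToMessageId: '<message-id>',
  conversationId: '<conversation-id>',
  cc: 'pm@example.com,lead@example.com',
}
```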
135 apps/docs/content/docs/tools/sharepoint.mdx (new file)

@@ -0,0 +1,135 @@
---
title: Sharepoint
description: Read and create pages
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="sharepoint"
  color="#E0E0E0"
  icon={true}
  iconSvg={`<svg className="block-icon" fill='currentColor' viewBox='0 0 32 32' xmlns='http://www.w3.org/2000/svg'>
    <circle fill='#036C70' cx='16.31' cy='8.90' r='8.90' />
    <circle fill='#1A9BA1' cx='23.72' cy='17.05' r='8.15' />
    <circle fill='#37C6D0' cx='17.42' cy='24.83' r='6.30' />
    <path
      fill='#000000'
      opacity='0.1'
      d='M17.79,8.03v15.82c0,0.55-0.34,1.04-0.85,1.25c-0.16,0.07-0.34,0.10-0.51,0.10H11.13c-0.01-0.13-0.01-0.24-0.01-0.37c0-0.12,0-0.25,0.01-0.37c0.14-2.37,1.59-4.46,3.77-5.40v-1.38c-4.85-0.77-8.15-5.32-7.39-10.17c0.01-0.03,0.01-0.07,0.02-0.10c0.04-0.25,0.09-0.50,0.16-0.74h8.74c0.74,0,1.36,0.60,1.36,1.36z'
    />
    <path
      fill='#000000'
      opacity='0.2'
      d='M15.69,7.41H7.54c-0.82,4.84,2.43,9.43,7.27,10.25c0.15,0.02,0.29,0.05,0.44,0.06c-2.30,1.09-3.97,4.18-4.12,6.73c-0.01,0.12-0.02,0.25-0.01,0.37c0,0.13,0,0.24,0.01,0.37c0.01,0.25,0.05,0.50,0.10,0.74h4.47c0.55,0,1.04-0.34,1.25-0.85c0.07-0.16,0.10-0.34,0.10-0.51V8.77c0-0.75-0.61-1.36-1.36-1.36z'
    />
    <path
      fill='#000000'
      opacity='0.2'
      d='M15.69,7.41H7.54c-0.82,4.84,2.43,9.43,7.27,10.26c0.10,0.02,0.20,0.03,0.30,0.05c-2.22,1.17-3.83,4.26-3.97,6.75h4.56c0.75,0,1.35-0.61,1.36-1.36V8.77c0-0.75-0.61-1.36-1.36-1.36z'
    />
    <path
      fill='#000000'
      opacity='0.2'
      d='M14.95,7.41H7.54c-0.78,4.57,2.08,8.97,6.58,10.11c-1.84,2.43-2.27,5.61-2.58,7.22h3.82c0.75,0,1.35-0.61,1.36-1.36V8.77c0-0.75-0.61-1.36-1.36-1.36z'
    />
    <path
      fill='#008789'
      d='M1.36,7.41h13.58c0.75,0,1.36,0.61,1.36,1.36v13.58c0,0.75-0.61,1.36-1.36,1.36H1.36c-0.75,0-1.36-0.61-1.36-1.36V8.77C0,8.02,0.61,7.41,1.36,7.41z'
    />
    <path
      fill='#FFFFFF'
      d='M6.07,15.42c-0.32-0.21-0.58-0.49-0.78-0.82c-0.19-0.34-0.28-0.73-0.27-1.12c-0.02-0.53,0.16-1.05,0.50-1.46c0.36-0.41,0.82-0.71,1.34-0.87c0.59-0.19,1.21-0.29,1.83-0.28c0.82-0.03,1.63,0.08,2.41,0.34v1.71c-0.34-0.20-0.71-0.35-1.09-0.44c-0.42-0.10-0.84-0.15-1.27-0.15c-0.45-0.02-0.90,0.08-1.31,0.28c-0.31,0.14-0.52,0.44-0.52,0.79c0,0.21,0.08,0.41,0.22,0.56c0.17,0.18,0.37,0.32,0.59,0.42c0.25,0.12,0.62,0.29,1.11,0.49c0.05,0.02,0.11,0.04,0.16,0.06c0.49,0.19,0.96,0.42,1.40,0.69c0.34,0.21,0.62,0.49,0.83,0.83c0.21,0.39,0.31,0.82,0.30,1.26c0.02,0.54-0.14,1.08-0.47,1.52c-0.33,0.40-0.77,0.69-1.26,0.85c-0.58,0.18-1.19,0.27-1.80,0.26c-0.55,0-1.09-0.04-1.63-0.13c-0.45-0.07-0.90-0.20-1.32-0.39v-1.80c0.40,0.29,0.86,0.50,1.34,0.64c0.48,0.15,0.97,0.23,1.47,0.24c0.46,0.03,0.92-0.07,1.34-0.28c0.29-0.16,0.46-0.47,0.46-0.80c0-0.23-0.09-0.45-0.25-0.61c-0.20-0.20-0.44-0.36-0.69-0.48c-0.30-0.15-0.73-0.34-1.31-0.59C6.91,16.14,6.48,15.80,6.07,15.42z'
    />
  </svg>`}
/>

{/* MANUAL-CONTENT-START:intro */}
[SharePoint](https://www.microsoft.com/en-us/microsoft-365/sharepoint/collaboration) is a collaborative platform from Microsoft that enables users to build and manage internal websites, share documents, and organize team resources. It provides a powerful, flexible solution for creating digital workspaces and streamlining content management across organizations.

With SharePoint, you can:

- **Create team and communication sites**: Set up pages and portals to support collaboration, announcements, and content distribution
- **Organize and share content**: Store documents, manage files, and enable version control with secure sharing capabilities
- **Customize pages**: Add text parts to tailor each site to your team's needs
- **Improve discoverability**: Use metadata, search, and navigation tools to help users quickly find what they need
- **Collaborate securely**: Control access with robust permission settings and Microsoft 365 integration

In Sim, the SharePoint integration empowers your agents to create and access SharePoint sites and pages as part of their workflows. This enables automated document management, knowledge sharing, and workspace creation without manual effort. Agents can generate new project pages, upload or retrieve files, and organize resources dynamically, based on workflow inputs. By connecting Sim with SharePoint, you bring structured collaboration and content management into your automation flows - giving your agents the ability to coordinate team activities, surface key information, and maintain a single source of truth across your organization.
{/* MANUAL-CONTENT-END */}

## Usage Instructions

Integrate Sharepoint functionality to manage pages. Read and create pages, and list sites using OAuth authentication. Supports page operations with custom MIME types and folder organization.

## Tools

### `sharepoint_create_page`

Create a new page in a SharePoint site

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the SharePoint API |
| `siteId` | string | No | The ID of the SharePoint site \(internal use\) |
| `siteSelector` | string | No | Select the SharePoint site |
| `pageName` | string | Yes | The name of the page to create |
| `pageTitle` | string | No | The title of the page \(defaults to page name if not provided\) |
| `pageContent` | string | No | The content of the page |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `sites` | json | An array of SharePoint site objects, each containing details such as id, name, and more. |

### `sharepoint_read_page`

Read a specific page from a SharePoint site

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the SharePoint API |
| `siteSelector` | string | No | Select the SharePoint site |
| `siteId` | string | No | The ID of the SharePoint site \(internal use\) |
| `pageId` | string | No | The ID of the page to read |
| `pageName` | string | No | The name of the page to read \(alternative to pageId\) |
| `maxPages` | number | No | Maximum number of pages to return when listing all pages \(default: 10, max: 50\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `sites` | json | An array of SharePoint site objects, each containing details such as id, name, and more. |

### `sharepoint_list_sites`

List details of all SharePoint sites

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the SharePoint API |
| `siteSelector` | string | No | Select the SharePoint site |
| `groupId` | string | No | The group ID for accessing a group team site |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `sites` | json | An array of SharePoint site objects, each containing details such as id, name, and more. |

## Notes

- Category: `tools`
- Type: `sharepoint`
@@ -15,3 +15,6 @@ ENCRYPTION_KEY=your_encryption_key # Use `openssl rand -hex 32` to generate

 # RESEND_API_KEY= # Uncomment and add your key from https://resend.com to send actual emails
 # If left commented out, emails will be logged to console instead
+
+# Local AI Models (Optional)
+# OLLAMA_URL=http://localhost:11434 # URL for local Ollama server - uncomment if using local models
@@ -2,9 +2,12 @@

 import Image from 'next/image'
 import Link from 'next/link'
+import { useBrandConfig } from '@/lib/branding/branding'
 import { GridPattern } from '@/app/(landing)/components/grid-pattern'

 export default function AuthLayout({ children }: { children: React.ReactNode }) {
+  const brand = useBrandConfig()
+
   return (
     <main className='relative flex min-h-screen flex-col bg-[#0C0C0C] font-geist-sans text-white'>
       {/* Background pattern */}

@@ -21,7 +24,17 @@ export default function AuthLayout({ children }: { children: React.ReactNode })
       <div className='relative z-10 px-6 pt-9'>
         <div className='mx-auto max-w-7xl'>
           <Link href='/' className='inline-flex'>
-            <Image src='/sim.svg' alt='Sim Logo' width={42} height={42} />
+            {brand.logoUrl ? (
+              <img
+                src={brand.logoUrl}
+                alt={`${brand.name} Logo`}
+                width={42}
+                height={42}
+                className='h-[42px] w-[42px] object-contain'
+              />
+            ) : (
+              <Image src='/sim.svg' alt={`${brand.name} Logo`} width={42} height={42} />
+            )}
           </Link>
         </div>
       </div>
@@ -15,6 +15,7 @@ import {
   SheetTitle,
   SheetTrigger,
 } from '@/components/ui/sheet'
+import { useBrandConfig } from '@/lib/branding/branding'
 import { usePrefetchOnHover } from '@/app/(landing)/utils/prefetch'

 // --- Framer Motion Variants ---

@@ -165,6 +166,7 @@ export default function NavClient({
   const [isMobile, setIsMobile] = useState(initialIsMobile ?? false)
   const [isSheetOpen, setIsSheetOpen] = useState(false)
   const _router = useRouter()
+  const brand = useBrandConfig()

   useEffect(() => {
     setMounted(true)

@@ -199,7 +201,17 @@ export default function NavClient({
       <div className='flex flex-1 items-center'>
         <div className='inline-block'>
           <Link href='/' className='inline-flex'>
-            <Image src='/sim.svg' alt='Sim Logo' width={42} height={42} />
+            {brand.logoUrl ? (
+              <img
+                src={brand.logoUrl}
+                alt={`${brand.name} Logo`}
+                width={42}
+                height={42}
+                className='h-[42px] w-[42px] object-contain'
+              />
+            ) : (
+              <Image src='/sim.svg' alt={`${brand.name} Logo`} width={42} height={42} />
+            )}
           </Link>
         </div>
       </div>
@@ -6,8 +6,6 @@ import { createLogger } from '@/lib/logs/console/logger'
 import { db } from '@/db'
 import { account, user } from '@/db/schema'

-export const dynamic = 'force-dynamic'
-
 const logger = createLogger('OAuthConnectionsAPI')

 interface GoogleIdToken {
@@ -9,8 +9,6 @@ import { member } from '@/db/schema'

 const logger = createLogger('UnifiedBillingAPI')

-export const dynamic = 'force-dynamic'
-
 /**
  * Unified Billing Endpoint
  */
132 apps/sim/app/api/copilot/chat/file-utils.ts (new file)

@@ -0,0 +1,132 @@
export interface FileAttachment {
  id: string
  s3_key: string
  filename: string
  media_type: string
  size: number
}

export interface AnthropicMessageContent {
  type: 'text' | 'image' | 'document'
  text?: string
  source?: {
    type: 'base64'
    media_type: string
    data: string
  }
}

/**
 * Mapping of MIME types to Anthropic content types
 */
export const MIME_TYPE_MAPPING: Record<string, 'image' | 'document'> = {
  // Images
  'image/jpeg': 'image',
  'image/jpg': 'image',
  'image/png': 'image',
  'image/gif': 'image',
  'image/webp': 'image',
  'image/svg+xml': 'image',

  // Documents
  'application/pdf': 'document',
  'text/plain': 'document',
  'text/csv': 'document',
  'application/json': 'document',
  'application/xml': 'document',
  'text/xml': 'document',
  'text/html': 'document',
  'application/vnd.openxmlformats-officedocument.wordprocessingml.document': 'document', // .docx
  'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet': 'document', // .xlsx
  'application/vnd.openxmlformats-officedocument.presentationml.presentation': 'document', // .pptx
  'application/msword': 'document', // .doc
  'application/vnd.ms-excel': 'document', // .xls
  'application/vnd.ms-powerpoint': 'document', // .ppt
  'text/markdown': 'document',
  'application/rtf': 'document',
}

/**
 * Get the Anthropic content type for a given MIME type
 */
export function getAnthropicContentType(mimeType: string): 'image' | 'document' | null {
  return MIME_TYPE_MAPPING[mimeType.toLowerCase()] || null
}

/**
 * Check if a MIME type is supported by Anthropic
 */
export function isSupportedFileType(mimeType: string): boolean {
  return mimeType.toLowerCase() in MIME_TYPE_MAPPING
}

/**
 * Convert a file buffer to base64
 */
export function bufferToBase64(buffer: Buffer): string {
  return buffer.toString('base64')
}

/**
 * Create Anthropic message content from file data
 */
export function createAnthropicFileContent(
  fileBuffer: Buffer,
  mimeType: string
): AnthropicMessageContent | null {
  const contentType = getAnthropicContentType(mimeType)
  if (!contentType) {
    return null
  }

  return {
    type: contentType,
    source: {
      type: 'base64',
      media_type: mimeType,
      data: bufferToBase64(fileBuffer),
    },
  }
}

/**
 * Extract file extension from filename
 */
export function getFileExtension(filename: string): string {
  const lastDot = filename.lastIndexOf('.')
  return lastDot !== -1 ? filename.slice(lastDot + 1).toLowerCase() : ''
}

/**
 * Get MIME type from file extension (fallback if not provided)
 */
export function getMimeTypeFromExtension(extension: string): string {
  const extensionMimeMap: Record<string, string> = {
    // Images
    jpg: 'image/jpeg',
    jpeg: 'image/jpeg',
    png: 'image/png',
    gif: 'image/gif',
    webp: 'image/webp',
    svg: 'image/svg+xml',

    // Documents
    pdf: 'application/pdf',
    txt: 'text/plain',
    csv: 'text/csv',
    json: 'application/json',
    xml: 'application/xml',
    html: 'text/html',
    htm: 'text/html',
    docx: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
    xlsx: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
    pptx: 'application/vnd.openxmlformats-officedocument.presentationml.presentation',
    doc: 'application/msword',
    xls: 'application/vnd.ms-excel',
    ppt: 'application/vnd.ms-powerpoint',
    md: 'text/markdown',
    rtf: 'application/rtf',
  }

  return extensionMimeMap[extension.toLowerCase()] || 'application/octet-stream'
}
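A short usage sketch (not part of the file) showing how these helpers compose: resolve a MIME type from the filename, gate on support, then build the Anthropic content block from a buffer. The file name is a placeholder.

```ts
import { readFileSync } from 'fs'

// Assumed usage of the helpers above; 'report.pdf' is a placeholder path.
const filename = 'report.pdf'
const mime = getMimeTypeFromExtension(getFileExtension(filename)) // 'application/pdf'

if (isSupportedFileType(mime)) {
  const block = createAnthropicFileContent(readFileSync(filename), mime)
  // block => { type: 'document', source: { type: 'base64', media_type: 'application/pdf', data: '...' } }
}
```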
@@ -13,12 +13,25 @@ import { getCopilotModel } from '@/lib/copilot/config'
 import { TITLE_GENERATION_SYSTEM_PROMPT, TITLE_GENERATION_USER_PROMPT } from '@/lib/copilot/prompts'
 import { env } from '@/lib/env'
 import { createLogger } from '@/lib/logs/console/logger'
+import { downloadFile } from '@/lib/uploads'
+import { downloadFromS3WithConfig } from '@/lib/uploads/s3/s3-client'
+import { S3_COPILOT_CONFIG, USE_S3_STORAGE } from '@/lib/uploads/setup'
 import { db } from '@/db'
 import { copilotChats } from '@/db/schema'
 import { executeProviderRequest } from '@/providers'
+import { createAnthropicFileContent, isSupportedFileType } from './file-utils'

 const logger = createLogger('CopilotChatAPI')

+// Schema for file attachments
+const FileAttachmentSchema = z.object({
+  id: z.string(),
+  s3_key: z.string(),
+  filename: z.string(),
+  media_type: z.string(),
+  size: z.number(),
+})
+
 // Schema for chat messages
 const ChatMessageSchema = z.object({
   message: z.string().min(1, 'Message is required'),

@@ -29,6 +42,7 @@ const ChatMessageSchema = z.object({
   createNewChat: z.boolean().optional().default(false),
   stream: z.boolean().optional().default(true),
   implicitFeedback: z.string().optional(),
+  fileAttachments: z.array(FileAttachmentSchema).optional(),
 })

 // Sim Agent API configuration

@@ -145,6 +159,7 @@ export async function POST(req: NextRequest) {
     createNewChat,
     stream,
     implicitFeedback,
+    fileAttachments,
   } = ChatMessageSchema.parse(body)

   logger.info(`[${tracker.requestId}] Processing copilot chat request`, {

@@ -195,15 +210,91 @@ export async function POST(req: NextRequest) {
     }
   }

+  // Process file attachments if present
+  const processedFileContents: any[] = []
+  if (fileAttachments && fileAttachments.length > 0) {
+    logger.info(`[${tracker.requestId}] Processing ${fileAttachments.length} file attachments`)
+
+    for (const attachment of fileAttachments) {
+      try {
+        // Check if file type is supported
+        if (!isSupportedFileType(attachment.media_type)) {
+          logger.warn(`[${tracker.requestId}] Unsupported file type: ${attachment.media_type}`)
+          continue
+        }
+
+        // Download file from S3
+        logger.info(`[${tracker.requestId}] Downloading file: ${attachment.s3_key}`)
+        let fileBuffer: Buffer
+        if (USE_S3_STORAGE) {
+          fileBuffer = await downloadFromS3WithConfig(attachment.s3_key, S3_COPILOT_CONFIG)
+        } else {
+          // Fallback to generic downloadFile for other storage providers
+          fileBuffer = await downloadFile(attachment.s3_key)
+        }
+
+        // Convert to Anthropic format
+        const fileContent = createAnthropicFileContent(fileBuffer, attachment.media_type)
+        if (fileContent) {
+          processedFileContents.push(fileContent)
+          logger.info(
+            `[${tracker.requestId}] Processed file: ${attachment.filename} (${attachment.media_type})`
+          )
+        }
+      } catch (error) {
+        logger.error(
+          `[${tracker.requestId}] Failed to process file ${attachment.filename}:`,
+          error
+        )
+        // Continue processing other files
+      }
+    }
+  }
+
   // Build messages array for sim agent with conversation history
   const messages = []

-  // Add conversation history
+  // Add conversation history (need to rebuild these with file support if they had attachments)
   for (const msg of conversationHistory) {
-    messages.push({
-      role: msg.role,
-      content: msg.content,
-    })
+    if (msg.fileAttachments && msg.fileAttachments.length > 0) {
+      // This is a message with file attachments - rebuild with content array
+      const content: any[] = [{ type: 'text', text: msg.content }]
+
+      // Process file attachments for historical messages
+      for (const attachment of msg.fileAttachments) {
+        try {
+          if (isSupportedFileType(attachment.media_type)) {
+            let fileBuffer: Buffer
+            if (USE_S3_STORAGE) {
+              fileBuffer = await downloadFromS3WithConfig(attachment.s3_key, S3_COPILOT_CONFIG)
+            } else {
+              // Fallback to generic downloadFile for other storage providers
+              fileBuffer = await downloadFile(attachment.s3_key)
+            }
+            const fileContent = createAnthropicFileContent(fileBuffer, attachment.media_type)
+            if (fileContent) {
+              content.push(fileContent)
+            }
+          }
+        } catch (error) {
+          logger.error(
+            `[${tracker.requestId}] Failed to process historical file ${attachment.filename}:`,
+            error
+          )
+        }
+      }
+
+      messages.push({
+        role: msg.role,
+        content,
+      })
+    } else {
+      // Regular text-only message
+      messages.push({
+        role: msg.role,
+        content: msg.content,
+      })
+    }
   }

   // Add implicit feedback if provided

@@ -214,11 +305,27 @@ export async function POST(req: NextRequest) {
     })
   }

-  // Add current user message
-  messages.push({
-    role: 'user',
-    content: message,
-  })
+  // Add current user message with file attachments
+  if (processedFileContents.length > 0) {
+    // Message with files - use content array format
+    const content: any[] = [{ type: 'text', text: message }]
+
+    // Add file contents
+    for (const fileContent of processedFileContents) {
+      content.push(fileContent)
+    }
+
+    messages.push({
+      role: 'user',
+      content,
+    })
+  } else {
+    // Text-only message
+    messages.push({
+      role: 'user',
+      content: message,
+    })
+  }

   // Start title generation in parallel if this is a new chat with first message
   if (actualChatId && !currentChat?.title && conversationHistory.length === 0) {

@@ -270,6 +377,7 @@ export async function POST(req: NextRequest) {
     role: 'user',
     content: message,
     timestamp: new Date().toISOString(),
+    ...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
   }

   // Create a pass-through stream that captures the response

@@ -590,6 +698,7 @@ export async function POST(req: NextRequest) {
     role: 'user',
     content: message,
     timestamp: new Date().toISOString(),
+    ...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
   }

   const assistantMessage = {
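For reference, the message shape the route now produces when attachments are present: a text block followed by the processed file blocks, matching the `AnthropicMessageContent` type from `file-utils.ts` (values illustrative):

```ts
const userMessageWithFiles = {
  role: 'user',
  content: [
    { type: 'text', text: 'What does this contract say about renewal?' },
    {
      type: 'document',
      source: { type: 'base64', media_type: 'application/pdf', data: '<base64...>' },
    },
  ],
}
```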
@@ -24,6 +24,17 @@ const UpdateMessagesSchema = z.object({
       timestamp: z.string(),
       toolCalls: z.array(z.any()).optional(),
       contentBlocks: z.array(z.any()).optional(),
+      fileAttachments: z
+        .array(
+          z.object({
+            id: z.string(),
+            s3_key: z.string(),
+            filename: z.string(),
+            media_type: z.string(),
+            size: z.number(),
+          })
+        )
+        .optional(),
     })
   ),
 })
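A hypothetical message payload that would satisfy the extended schema; the field values are invented, and only the fields visible in the schema lines above are shown:

```ts
const updatedMessage = {
  timestamp: new Date().toISOString(),
  toolCalls: [],
  contentBlocks: [],
  fileAttachments: [
    {
      id: 'att_1',
      s3_key: 'user_42/3f8e1c2a-report.pdf', // key layout assumed from the presigned-URL change later in this diff
      filename: 'report.pdf',
      media_type: 'application/pdf',
      size: 482113,
    },
  ],
}
```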
@@ -354,7 +354,14 @@ describe('Copilot Methods API Route', () => {
       86400
     )
     expect(mockRedisGet).toHaveBeenCalledWith('tool_call:tool-call-123')
-    expect(mockToolRegistryExecute).toHaveBeenCalledWith('interrupt-tool', { key: 'value' })
+    expect(mockToolRegistryExecute).toHaveBeenCalledWith('interrupt-tool', {
+      key: 'value',
+      confirmationMessage: 'User approved',
+      fullData: {
+        message: 'User approved',
+        status: 'accepted',
+      },
+    })
   })

   it('should handle tool execution with interrupt - user rejection', async () => {

@@ -613,6 +620,10 @@ describe('Copilot Methods API Route', () => {
     expect(mockToolRegistryExecute).toHaveBeenCalledWith('no_op', {
       existing: 'param',
       confirmationMessage: 'Confirmation message',
+      fullData: {
+        message: 'Confirmation message',
+        status: 'accepted',
+      },
     })
   })
@@ -57,7 +57,7 @@ async function addToolToRedis(toolCallId: string): Promise<void> {
  */
 async function pollRedisForTool(
   toolCallId: string
-): Promise<{ status: NotificationStatus; message?: string } | null> {
+): Promise<{ status: NotificationStatus; message?: string; fullData?: any } | null> {
   const redis = getRedisClient()
   if (!redis) {
     logger.warn('pollRedisForTool: Redis client not available')

@@ -86,12 +86,14 @@ async function pollRedisForTool(

     let status: NotificationStatus | null = null
     let message: string | undefined
+    let fullData: any = null

     // Try to parse as JSON (new format), fallback to string (old format)
     try {
       const parsedData = JSON.parse(redisValue)
       status = parsedData.status as NotificationStatus
       message = parsedData.message || undefined
+      fullData = parsedData // Store the full parsed data
     } catch {
       // Fallback to old format (direct status string)
       status = redisValue as NotificationStatus

@@ -138,7 +140,7 @@ async function pollRedisForTool(
       })
     }

-    return { status, message }
+    return { status, message, fullData }
   }

   // Wait before next poll

@@ -163,9 +165,13 @@ async function pollRedisForTool(
  * Handle tool calls that require user interruption/approval
  * Returns { approved: boolean, rejected: boolean, error?: boolean, message?: string } to distinguish between rejection, timeout, and error
  */
-async function interruptHandler(
-  toolCallId: string
-): Promise<{ approved: boolean; rejected: boolean; error?: boolean; message?: string }> {
+async function interruptHandler(toolCallId: string): Promise<{
+  approved: boolean
+  rejected: boolean
+  error?: boolean
+  message?: string
+  fullData?: any
+}> {
   if (!toolCallId) {
     logger.error('interruptHandler: No tool call ID provided')
     return { approved: false, rejected: false, error: true, message: 'No tool call ID provided' }

@@ -185,31 +191,31 @@ async function interruptHandler(
     return { approved: false, rejected: false }
   }

-  const { status, message } = result
+  const { status, message, fullData } = result

   if (status === 'rejected') {
     logger.info('Tool execution rejected by user', { toolCallId, message })
-    return { approved: false, rejected: true, message }
+    return { approved: false, rejected: true, message, fullData }
   }

   if (status === 'accepted') {
     logger.info('Tool execution approved by user', { toolCallId, message })
-    return { approved: true, rejected: false, message }
+    return { approved: true, rejected: false, message, fullData }
   }

   if (status === 'error') {
     logger.error('Tool execution failed with error', { toolCallId, message })
-    return { approved: false, rejected: false, error: true, message }
+    return { approved: false, rejected: false, error: true, message, fullData }
   }

   if (status === 'background') {
     logger.info('Tool execution moved to background', { toolCallId, message })
-    return { approved: true, rejected: false, message }
+    return { approved: true, rejected: false, message, fullData }
   }

   if (status === 'success') {
     logger.info('Tool execution completed successfully', { toolCallId, message })
-    return { approved: true, rejected: false, message }
+    return { approved: true, rejected: false, message, fullData }
   }

   logger.warn('Unexpected tool call status', { toolCallId, status, message })

@@ -326,7 +332,7 @@ export async function POST(req: NextRequest) {
   })

   // Handle interrupt flow
-  const { approved, rejected, error, message } = await interruptHandler(toolCallId)
+  const { approved, rejected, error, message, fullData } = await interruptHandler(toolCallId)

   if (rejected) {
     logger.info(`[${requestId}] Tool execution rejected by user`, {

@@ -371,10 +377,13 @@ export async function POST(req: NextRequest) {
     message,
   })

-  // For noop tool, pass the confirmation message as a parameter
-  if (methodId === 'no_op' && message) {
+  // For tools that need confirmation data, pass the message and/or fullData as parameters
+  if (message) {
     params.confirmationMessage = message
   }
+  if (fullData) {
+    params.fullData = fullData
+  }

   // Execute the tool directly via registry
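A minimal sketch, assuming an ioredis client, of the producer side that `pollRedisForTool` now expects: the "new format" JSON value written under the `tool_call:<id>` key. The key name and the 86400s TTL mirror the test assertions earlier in this diff; the helper itself is hypothetical.

```ts
import Redis from 'ioredis'

// Hypothetical helper on the approving side (e.g., the UI's API handler).
async function recordToolDecision(
  redis: Redis,
  toolCallId: string,
  status: 'accepted' | 'rejected',
  message?: string
) {
  // JSON "new format": pollRedisForTool parses this and carries it along as fullData.
  const value = JSON.stringify({ status, message })
  await redis.set(`tool_call:${toolCallId}`, value, 'EX', 86400)
}
```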
@@ -9,9 +9,11 @@ import { getS3Client, sanitizeFilenameForMetadata } from '@/lib/uploads/s3/s3-cl
|
||||
import {
|
||||
BLOB_CHAT_CONFIG,
|
||||
BLOB_CONFIG,
|
||||
BLOB_COPILOT_CONFIG,
|
||||
BLOB_KB_CONFIG,
|
||||
S3_CHAT_CONFIG,
|
||||
S3_CONFIG,
|
||||
S3_COPILOT_CONFIG,
|
||||
S3_KB_CONFIG,
|
||||
} from '@/lib/uploads/setup'
import { createErrorResponse, createOptionsResponse } from '@/app/api/files/utils'

@@ -22,9 +24,11 @@ interface PresignedUrlRequest {
   fileName: string
   contentType: string
   fileSize: number
+  userId?: string
+  chatId?: string
 }

-type UploadType = 'general' | 'knowledge-base' | 'chat'
+type UploadType = 'general' | 'knowledge-base' | 'chat' | 'copilot'

 class PresignedUrlError extends Error {
   constructor(
@@ -58,7 +62,7 @@ export async function POST(request: NextRequest) {
     throw new ValidationError('Invalid JSON in request body')
   }

-  const { fileName, contentType, fileSize } = data
+  const { fileName, contentType, fileSize, userId, chatId } = data

   if (!fileName?.trim()) {
     throw new ValidationError('fileName is required and cannot be empty')
@@ -83,7 +87,16 @@
         ? 'knowledge-base'
         : uploadTypeParam === 'chat'
           ? 'chat'
-          : 'general'
+          : uploadTypeParam === 'copilot'
+            ? 'copilot'
+            : 'general'

+  // Validate copilot-specific requirements
+  if (uploadType === 'copilot') {
+    if (!userId?.trim()) {
+      throw new ValidationError('userId is required for copilot uploads')
+    }
+  }

   if (!isUsingCloudStorage()) {
     throw new StorageConfigError(
@@ -96,9 +109,9 @@

   switch (storageProvider) {
     case 's3':
-      return await handleS3PresignedUrl(fileName, contentType, fileSize, uploadType)
+      return await handleS3PresignedUrl(fileName, contentType, fileSize, uploadType, userId)
     case 'blob':
-      return await handleBlobPresignedUrl(fileName, contentType, fileSize, uploadType)
+      return await handleBlobPresignedUrl(fileName, contentType, fileSize, uploadType, userId)
     default:
       throw new StorageConfigError(`Unknown storage provider: ${storageProvider}`)
   }
@@ -126,7 +139,8 @@ async function handleS3PresignedUrl(
   fileName: string,
   contentType: string,
   fileSize: number,
-  uploadType: UploadType
+  uploadType: UploadType,
+  userId?: string
 ) {
   try {
     const config =
@@ -134,15 +148,26 @@ async function handleS3PresignedUrl(
         ? S3_KB_CONFIG
         : uploadType === 'chat'
           ? S3_CHAT_CONFIG
-          : S3_CONFIG
+          : uploadType === 'copilot'
+            ? S3_COPILOT_CONFIG
+            : S3_CONFIG

     if (!config.bucket || !config.region) {
       throw new StorageConfigError(`S3 configuration missing for ${uploadType} uploads`)
     }

     const safeFileName = fileName.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9.-]/g, '_')
-    const prefix = uploadType === 'knowledge-base' ? 'kb/' : uploadType === 'chat' ? 'chat/' : ''
-    const uniqueKey = `${prefix}${Date.now()}-${uuidv4()}-${safeFileName}`
+
+    let prefix = ''
+    if (uploadType === 'knowledge-base') {
+      prefix = 'kb/'
+    } else if (uploadType === 'chat') {
+      prefix = 'chat/'
+    } else if (uploadType === 'copilot') {
+      prefix = `${userId}/`
+    }
+
+    const uniqueKey = `${prefix}${uuidv4()}-${safeFileName}`

     const sanitizedOriginalName = sanitizeFilenameForMetadata(fileName)

@@ -155,6 +180,9 @@ async function handleS3PresignedUrl(
       metadata.purpose = 'knowledge-base'
     } else if (uploadType === 'chat') {
       metadata.purpose = 'chat'
+    } else if (uploadType === 'copilot') {
+      metadata.purpose = 'copilot'
+      metadata.userId = userId || ''
     }

     const command = new PutObjectCommand({
@@ -210,7 +238,8 @@ async function handleBlobPresignedUrl(
   fileName: string,
   contentType: string,
   fileSize: number,
-  uploadType: UploadType
+  uploadType: UploadType,
+  userId?: string
 ) {
   try {
     const config =
@@ -218,7 +247,9 @@ async function handleBlobPresignedUrl(
         ? BLOB_KB_CONFIG
         : uploadType === 'chat'
          ? BLOB_CHAT_CONFIG
-          : BLOB_CONFIG
+          : uploadType === 'copilot'
+            ? BLOB_COPILOT_CONFIG
+            : BLOB_CONFIG

     if (
       !config.accountName ||
@@ -229,8 +260,17 @@ async function handleBlobPresignedUrl(
     }

     const safeFileName = fileName.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9.-]/g, '_')
-    const prefix = uploadType === 'knowledge-base' ? 'kb/' : uploadType === 'chat' ? 'chat/' : ''
-    const uniqueKey = `${prefix}${Date.now()}-${uuidv4()}-${safeFileName}`
+
+    let prefix = ''
+    if (uploadType === 'knowledge-base') {
+      prefix = 'kb/'
+    } else if (uploadType === 'chat') {
+      prefix = 'chat/'
+    } else if (uploadType === 'copilot') {
+      prefix = `${userId}/`
+    }
+
+    const uniqueKey = `${prefix}${uuidv4()}-${safeFileName}`

     const blobServiceClient = getBlobServiceClient()
     const containerClient = blobServiceClient.getContainerClient(config.containerName)
@@ -282,6 +322,9 @@ async function handleBlobPresignedUrl(
       uploadHeaders['x-ms-meta-purpose'] = 'knowledge-base'
     } else if (uploadType === 'chat') {
       uploadHeaders['x-ms-meta-purpose'] = 'chat'
+    } else if (uploadType === 'copilot') {
+      uploadHeaders['x-ms-meta-purpose'] = 'copilot'
+      uploadHeaders['x-ms-meta-userid'] = encodeURIComponent(userId || '')
     }

     return NextResponse.json({
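Taken together, these hunks route `copilot` uploads to a dedicated bucket and scope storage keys by user (`${userId}/${uuid}-${fileName}`). A minimal sketch of how a client might consume this flow — the `?type=copilot` query parameter and the `presignedUrl`/`key` response fields are assumptions for illustration, not confirmed by this diff:

```ts
// Hypothetical client sketch for the copilot presigned-upload flow above.
// The endpoint path, query parameter, and response field names are assumptions.
interface PresignedUrlResponse {
  presignedUrl: string // assumed: URL to PUT the file bytes to
  key: string // assumed: storage key, e.g. `${userId}/${uuid}-${fileName}`
}

async function uploadCopilotFile(file: File, userId: string): Promise<string> {
  // 1. Ask the server for a presigned URL, passing the copilot-specific fields.
  const res = await fetch('/api/files/presigned?type=copilot', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      fileName: file.name,
      contentType: file.type,
      fileSize: file.size,
      userId, // required for copilot uploads per the validation above
    }),
  })
  if (!res.ok) throw new Error(`Presign failed: ${res.status}`)
  const { presignedUrl, key } = (await res.json()) as PresignedUrlResponse

  // 2. PUT the bytes directly to cloud storage using the presigned URL.
  const upload = await fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  })
  if (!upload.ok) throw new Error(`Upload failed: ${upload.status}`)

  return key
}
```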
@@ -13,8 +13,6 @@ import {
   getContentType,
 } from '@/app/api/files/utils'

-export const dynamic = 'force-dynamic'
-
 const logger = createLogger('FilesServeAPI')

 async function streamToBuffer(readableStream: NodeJS.ReadableStream): Promise<Buffer> {
@@ -58,7 +56,11 @@ export async function GET(
   if (isUsingCloudStorage() || isCloudPath) {
     // Extract the actual key (remove 's3/' or 'blob/' prefix if present)
     const cloudKey = isCloudPath ? path.slice(1).join('/') : fullPath
-    return await handleCloudProxy(cloudKey)
+
+    // Get bucket type from query parameter
+    const bucketType = request.nextUrl.searchParams.get('bucket')
+
+    return await handleCloudProxy(cloudKey, bucketType)
   }

   // Use local handler for local files
@@ -152,12 +154,37 @@ async function downloadKBFile(cloudKey: string): Promise<Buffer> {

 /**
  * Proxy cloud file through our server
  */
-async function handleCloudProxy(cloudKey: string): Promise<NextResponse> {
+async function handleCloudProxy(
+  cloudKey: string,
+  bucketType?: string | null
+): Promise<NextResponse> {
   try {
     // Check if this is a KB file (starts with 'kb/')
     const isKBFile = cloudKey.startsWith('kb/')

-    const fileBuffer = isKBFile ? await downloadKBFile(cloudKey) : await downloadFile(cloudKey)
+    let fileBuffer: Buffer
+
+    if (isKBFile) {
+      fileBuffer = await downloadKBFile(cloudKey)
+    } else if (bucketType === 'copilot') {
+      // Download from copilot-specific bucket
+      const storageProvider = getStorageProvider()
+
+      if (storageProvider === 's3') {
+        const { downloadFromS3WithConfig } = await import('@/lib/uploads/s3/s3-client')
+        const { S3_COPILOT_CONFIG } = await import('@/lib/uploads/setup')
+        fileBuffer = await downloadFromS3WithConfig(cloudKey, S3_COPILOT_CONFIG)
+      } else if (storageProvider === 'blob') {
+        // For Azure Blob, use the default downloadFile for now
+        // TODO: Add downloadFromBlobWithConfig when needed
+        fileBuffer = await downloadFile(cloudKey)
+      } else {
+        fileBuffer = await downloadFile(cloudKey)
+      }
+    } else {
+      // Default bucket
+      fileBuffer = await downloadFile(cloudKey)
+    }

     // Extract the original filename from the key (last part after last /)
     const originalFilename = cloudKey.split('/').pop() || 'download'
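The serve route now threads an optional `bucket` query parameter through to `handleCloudProxy`, so copilot files can be proxied from their own bucket. A hedged fetch sketch — the exact route path is inferred from the handler's `s3/`/`blob/` prefix handling and is an assumption:

```ts
// Hypothetical fetch against the serve route above; the route path is assumed.
async function fetchCopilotFile(cloudKey: string): Promise<Blob> {
  // Keys may contain '/' segments (e.g. `${userId}/${uuid}-name`), which the
  // catch-all route re-joins; `bucket=copilot` selects the copilot bucket branch.
  const res = await fetch(`/api/files/serve/s3/${cloudKey}?bucket=copilot`)
  if (!res.ok) throw new Error(`Serve failed: ${res.status}`)
  return res.blob()
}
```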
@@ -2,9 +2,6 @@ import { and, eq } from 'drizzle-orm'
 import { type NextRequest, NextResponse } from 'next/server'
 import { getSession } from '@/lib/auth'
 import { createLogger } from '@/lib/logs/console/logger'
-
-export const dynamic = 'force-dynamic'
-
 import { getUserEntityPermissions } from '@/lib/permissions/utils'
 import { db } from '@/db'
 import { workflow, workflowFolder } from '@/db/schema'

@@ -8,8 +8,6 @@ import { workflowFolder } from '@/db/schema'

 const logger = createLogger('FoldersAPI')

-export const dynamic = 'force-dynamic'
-
 // GET - Fetch folders for a workspace
 export async function GET(request: NextRequest) {
   try {
@@ -1,4 +1,4 @@
-import crypto from 'node:crypto'
+import { createHash, randomUUID } from 'crypto'
 import { eq, sql } from 'drizzle-orm'
 import { type NextRequest, NextResponse } from 'next/server'
 import { z } from 'zod'
@@ -22,7 +22,7 @@ export async function GET(
   req: NextRequest,
   { params }: { params: Promise<{ id: string; documentId: string; chunkId: string }> }
 ) {
-  const requestId = crypto.randomUUID().slice(0, 8)
+  const requestId = randomUUID().slice(0, 8)
   const { id: knowledgeBaseId, documentId, chunkId } = await params

   try {
@@ -70,7 +70,7 @@ export async function PUT(
   req: NextRequest,
   { params }: { params: Promise<{ id: string; documentId: string; chunkId: string }> }
 ) {
-  const requestId = crypto.randomUUID().slice(0, 8)
+  const requestId = randomUUID().slice(0, 8)
   const { id: knowledgeBaseId, documentId, chunkId } = await params

   try {
@@ -119,10 +119,7 @@ export async function PUT(
     updateData.contentLength = validatedData.content.length
     // Update token count estimation (rough approximation: 4 chars per token)
     updateData.tokenCount = Math.ceil(validatedData.content.length / 4)
-    updateData.chunkHash = crypto
-      .createHash('sha256')
-      .update(validatedData.content)
-      .digest('hex')
+    updateData.chunkHash = createHash('sha256').update(validatedData.content).digest('hex')
   }

   if (validatedData.enabled !== undefined) updateData.enabled = validatedData.enabled
@@ -166,7 +163,7 @@ export async function DELETE(
   req: NextRequest,
   { params }: { params: Promise<{ id: string; documentId: string; chunkId: string }> }
 ) {
-  const requestId = crypto.randomUUID().slice(0, 8)
+  const requestId = randomUUID().slice(0, 8)
   const { id: knowledgeBaseId, documentId, chunkId } = await params

   try {
@@ -3,7 +3,11 @@ import { and, eq, sql } from 'drizzle-orm'
 import { type NextRequest, NextResponse } from 'next/server'
 import { z } from 'zod'
 import { getSession } from '@/lib/auth'
-import { MAX_TAG_SLOTS, TAG_SLOTS } from '@/lib/constants/knowledge'
+import {
+  getMaxSlotsForFieldType,
+  getSlotsForFieldType,
+  SUPPORTED_FIELD_TYPES,
+} from '@/lib/constants/knowledge'
 import { createLogger } from '@/lib/logs/console/logger'
 import { checkKnowledgeBaseAccess, checkKnowledgeBaseWriteAccess } from '@/app/api/knowledge/utils'
 import { db } from '@/db'
@@ -14,17 +18,60 @@ export const dynamic = 'force-dynamic'
 const logger = createLogger('DocumentTagDefinitionsAPI')

 const TagDefinitionSchema = z.object({
-  tagSlot: z.enum(TAG_SLOTS as [string, ...string[]]),
+  tagSlot: z.string(), // Will be validated against field type slots
   displayName: z.string().min(1, 'Display name is required').max(100, 'Display name too long'),
-  fieldType: z.string().default('text'), // Currently only 'text', future: 'date', 'number', 'range'
+  fieldType: z.enum(SUPPORTED_FIELD_TYPES as [string, ...string[]]).default('text'),
+  // Optional: for editing existing definitions
+  _originalDisplayName: z.string().optional(),
 })

 const BulkTagDefinitionsSchema = z.object({
-  definitions: z
-    .array(TagDefinitionSchema)
-    .max(MAX_TAG_SLOTS, `Cannot define more than ${MAX_TAG_SLOTS} tags`),
+  definitions: z.array(TagDefinitionSchema),
 })

+// Helper function to get the next available slot for a knowledge base and field type
+async function getNextAvailableSlot(
+  knowledgeBaseId: string,
+  fieldType: string,
+  existingBySlot?: Map<string, any>
+): Promise<string | null> {
+  // Get available slots for this field type
+  const availableSlots = getSlotsForFieldType(fieldType)
+  let usedSlots: Set<string>
+
+  if (existingBySlot) {
+    // Use provided map if available (for performance in batch operations)
+    // Filter by field type
+    usedSlots = new Set(
+      Array.from(existingBySlot.entries())
+        .filter(([_, def]) => def.fieldType === fieldType)
+        .map(([slot, _]) => slot)
+    )
+  } else {
+    // Query database for existing tag definitions of the same field type
+    const existingDefinitions = await db
+      .select({ tagSlot: knowledgeBaseTagDefinitions.tagSlot })
+      .from(knowledgeBaseTagDefinitions)
+      .where(
+        and(
+          eq(knowledgeBaseTagDefinitions.knowledgeBaseId, knowledgeBaseId),
+          eq(knowledgeBaseTagDefinitions.fieldType, fieldType)
+        )
+      )
+
+    usedSlots = new Set(existingDefinitions.map((def) => def.tagSlot))
+  }
+
+  // Find the first available slot for this field type
+  for (const slot of availableSlots) {
+    if (!usedSlots.has(slot)) {
+      return slot
+    }
+  }
+
+  return null // No available slots for this field type
+}
+
 // Helper function to clean up unused tag definitions
 async function cleanupUnusedTagDefinitions(knowledgeBaseId: string, requestId: string) {
   try {
@@ -191,35 +238,93 @@ export async function POST(

     const validatedData = BulkTagDefinitionsSchema.parse(body)

-    // Validate no duplicate tag slots
-    const tagSlots = validatedData.definitions.map((def) => def.tagSlot)
-    const uniqueTagSlots = new Set(tagSlots)
-    if (tagSlots.length !== uniqueTagSlots.size) {
-      return NextResponse.json({ error: 'Duplicate tag slots not allowed' }, { status: 400 })
-    }
+    // Validate slots are valid for their field types
+    for (const definition of validatedData.definitions) {
+      const validSlots = getSlotsForFieldType(definition.fieldType)
+      if (validSlots.length === 0) {
+        return NextResponse.json(
+          { error: `Unsupported field type: ${definition.fieldType}` },
+          { status: 400 }
+        )
+      }
+
+      if (!validSlots.includes(definition.tagSlot)) {
+        return NextResponse.json(
+          {
+            error: `Invalid slot '${definition.tagSlot}' for field type '${definition.fieldType}'. Valid slots: ${validSlots.join(', ')}`,
+          },
+          { status: 400 }
+        )
+      }
+    }
+
+    // Validate no duplicate tag slots within the same field type
+    const slotsByFieldType = new Map<string, Set<string>>()
+    for (const definition of validatedData.definitions) {
+      if (!slotsByFieldType.has(definition.fieldType)) {
+        slotsByFieldType.set(definition.fieldType, new Set())
+      }
+      const slotsForType = slotsByFieldType.get(definition.fieldType)!
+      if (slotsForType.has(definition.tagSlot)) {
+        return NextResponse.json(
+          {
+            error: `Duplicate slot '${definition.tagSlot}' for field type '${definition.fieldType}'`,
+          },
+          { status: 400 }
+        )
+      }
+      slotsForType.add(definition.tagSlot)
+    }

     const now = new Date()
     const createdDefinitions: (typeof knowledgeBaseTagDefinitions.$inferSelect)[] = []

-    // Get existing definitions count before transaction for cleanup check
+    // Get existing definitions
     const existingDefinitions = await db
       .select()
       .from(knowledgeBaseTagDefinitions)
       .where(eq(knowledgeBaseTagDefinitions.knowledgeBaseId, knowledgeBaseId))

-    // Check if we're trying to create more tag definitions than available slots
-    const existingTagNames = new Set(existingDefinitions.map((def) => def.displayName))
-    const trulyNewTags = validatedData.definitions.filter(
-      (def) => !existingTagNames.has(def.displayName)
-    )
+    // Group by field type for validation
+    const existingByFieldType = new Map<string, number>()
+    for (const def of existingDefinitions) {
+      existingByFieldType.set(def.fieldType, (existingByFieldType.get(def.fieldType) || 0) + 1)
+    }

-    if (existingDefinitions.length + trulyNewTags.length > MAX_TAG_SLOTS) {
-      return NextResponse.json(
-        {
-          error: `Cannot create ${trulyNewTags.length} new tags. Knowledge base already has ${existingDefinitions.length} tag definitions. Maximum is ${MAX_TAG_SLOTS} total.`,
-        },
-        { status: 400 }
-      )
-    }
+    // Validate we don't exceed limits per field type
+    const newByFieldType = new Map<string, number>()
+    for (const definition of validatedData.definitions) {
+      // Skip validation for edit operations - they don't create new slots
+      if (definition._originalDisplayName) {
+        continue
+      }
+
+      const existingTagNames = new Set(
+        existingDefinitions
+          .filter((def) => def.fieldType === definition.fieldType)
+          .map((def) => def.displayName)
+      )
+
+      if (!existingTagNames.has(definition.displayName)) {
+        newByFieldType.set(
+          definition.fieldType,
+          (newByFieldType.get(definition.fieldType) || 0) + 1
+        )
+      }
+    }
+
+    for (const [fieldType, newCount] of newByFieldType.entries()) {
+      const existingCount = existingByFieldType.get(fieldType) || 0
+      const maxSlots = getMaxSlotsForFieldType(fieldType)
+
+      if (existingCount + newCount > maxSlots) {
+        return NextResponse.json(
+          {
+            error: `Cannot create ${newCount} new '${fieldType}' tags. Knowledge base already has ${existingCount} '${fieldType}' tag definitions. Maximum is ${maxSlots} per field type.`,
+          },
+          { status: 400 }
+        )
+      }
+    }

     // Use transaction to ensure consistency
@@ -228,30 +333,51 @@ export async function POST(
     const existingByName = new Map(existingDefinitions.map((def) => [def.displayName, def]))
     const existingBySlot = new Map(existingDefinitions.map((def) => [def.tagSlot, def]))

-    // Process each new definition
+    // Process each definition
     for (const definition of validatedData.definitions) {
+      if (definition._originalDisplayName) {
+        // This is an EDIT operation - find by original name and update
+        const originalDefinition = existingByName.get(definition._originalDisplayName)
+
+        if (originalDefinition) {
+          logger.info(
+            `[${requestId}] Editing tag definition: ${definition._originalDisplayName} -> ${definition.displayName} (slot ${originalDefinition.tagSlot})`
+          )
+
+          await tx
+            .update(knowledgeBaseTagDefinitions)
+            .set({
+              displayName: definition.displayName,
+              fieldType: definition.fieldType,
+              updatedAt: now,
+            })
+            .where(eq(knowledgeBaseTagDefinitions.id, originalDefinition.id))
+
+          createdDefinitions.push({
+            ...originalDefinition,
+            displayName: definition.displayName,
+            fieldType: definition.fieldType,
+            updatedAt: now,
+          })
+          continue
+        }
+        logger.warn(
+          `[${requestId}] Could not find original definition for: ${definition._originalDisplayName}`
+        )
+      }
+
+      // Regular create/update logic
       const existingByDisplayName = existingByName.get(definition.displayName)
       const existingByTagSlot = existingBySlot.get(definition.tagSlot)

       if (existingByDisplayName) {
-        // Update existing definition (same display name)
-        if (existingByDisplayName.tagSlot !== definition.tagSlot) {
-          // Slot is changing - check if target slot is available
-          if (existingByTagSlot && existingByTagSlot.id !== existingByDisplayName.id) {
-            // Target slot is occupied by a different definition - this is a conflict
-            // For now, keep the existing slot to avoid constraint violation
-            logger.warn(
-              `[${requestId}] Slot conflict for ${definition.displayName}: keeping existing slot ${existingByDisplayName.tagSlot}`
-            )
-            createdDefinitions.push(existingByDisplayName)
-            continue
-          }
-        }
+        // Display name exists - UPDATE operation
+        logger.info(
+          `[${requestId}] Updating existing tag definition: ${definition.displayName} (slot ${existingByDisplayName.tagSlot})`
+        )

         await tx
           .update(knowledgeBaseTagDefinitions)
           .set({
             tagSlot: definition.tagSlot,
             fieldType: definition.fieldType,
             updatedAt: now,
           })
@@ -259,33 +385,32 @@ export async function POST(

         createdDefinitions.push({
           ...existingByDisplayName,
           tagSlot: definition.tagSlot,
           fieldType: definition.fieldType,
           updatedAt: now,
         })
-      } else if (existingByTagSlot) {
-        // Slot is occupied by a different display name - update it
-        await tx
-          .update(knowledgeBaseTagDefinitions)
-          .set({
-            displayName: definition.displayName,
-            fieldType: definition.fieldType,
-            updatedAt: now,
-          })
-          .where(eq(knowledgeBaseTagDefinitions.id, existingByTagSlot.id))
-
-        createdDefinitions.push({
-          ...existingByTagSlot,
-          displayName: definition.displayName,
-          fieldType: definition.fieldType,
-          updatedAt: now,
-        })
       } else {
-        // Create new definition
+        // Display name doesn't exist - CREATE operation
+        const targetSlot = await getNextAvailableSlot(
+          knowledgeBaseId,
+          definition.fieldType,
+          existingBySlot
+        )
+
+        if (!targetSlot) {
+          logger.error(
+            `[${requestId}] No available slots for new tag definition: ${definition.displayName}`
+          )
+          continue
+        }
+
+        logger.info(
+          `[${requestId}] Creating new tag definition: ${definition.displayName} -> ${targetSlot}`
+        )
+
         const newDefinition = {
           id: randomUUID(),
           knowledgeBaseId,
-          tagSlot: definition.tagSlot,
+          tagSlot: targetSlot as any,
           displayName: definition.displayName,
           fieldType: definition.fieldType,
           createdAt: now,
@@ -293,7 +418,8 @@ export async function POST(
         }

         await tx.insert(knowledgeBaseTagDefinitions).values(newDefinition)
-        createdDefinitions.push(newDefinition)
+        existingBySlot.set(targetSlot as any, newDefinition)
+        createdDefinitions.push(newDefinition as any)
       }
     }
   })
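The new `getNextAvailableSlot` helper makes slot assignment purely positional per field type: it walks the configured slots in order and returns the first unused one. A standalone model of that allocation logic (the slot names are illustrative, since `getSlotsForFieldType` lives in `@/lib/constants/knowledge` and is not shown in this diff):

```ts
// Minimal standalone model of the slot-allocation logic above.
// Slot names are illustrative; the real ones come from getSlotsForFieldType.
const SLOTS_BY_FIELD_TYPE: Record<string, string[]> = {
  text: ['tag1', 'tag2', 'tag3', 'tag4', 'tag5', 'tag6', 'tag7'],
}

function nextAvailableSlot(fieldType: string, usedSlots: Set<string>): string | null {
  const available = SLOTS_BY_FIELD_TYPE[fieldType] ?? []
  for (const slot of available) {
    if (!usedSlots.has(slot)) return slot // first free slot wins
  }
  return null // all slots for this field type are taken
}

// Example: with tag1 and tag3 taken, the next 'text' slot is tag2.
console.log(nextAvailableSlot('text', new Set(['tag1', 'tag3']))) // "tag2"
```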
@@ -1,9 +1,9 @@
-import crypto from 'node:crypto'
+import { randomUUID } from 'crypto'
 import { and, desc, eq, inArray, isNull, sql } from 'drizzle-orm'
 import { type NextRequest, NextResponse } from 'next/server'
 import { z } from 'zod'
 import { getSession } from '@/lib/auth'
-import { TAG_SLOTS } from '@/lib/constants/knowledge'
+import { getSlotsForFieldType } from '@/lib/constants/knowledge'
 import { createLogger } from '@/lib/logs/console/logger'
 import { getUserId } from '@/app/api/auth/oauth/utils'
 import {
@@ -23,6 +23,48 @@ const PROCESSING_CONFIG = {
   delayBetweenDocuments: 500,
 }

+// Helper function to get the next available slot for a knowledge base and field type
+async function getNextAvailableSlot(
+  knowledgeBaseId: string,
+  fieldType: string,
+  existingBySlot?: Map<string, any>
+): Promise<string | null> {
+  let usedSlots: Set<string>
+
+  if (existingBySlot) {
+    // Use provided map if available (for performance in batch operations)
+    // Filter by field type
+    usedSlots = new Set(
+      Array.from(existingBySlot.entries())
+        .filter(([_, def]) => def.fieldType === fieldType)
+        .map(([slot, _]) => slot)
+    )
+  } else {
+    // Query database for existing tag definitions of the same field type
+    const existingDefinitions = await db
+      .select({ tagSlot: knowledgeBaseTagDefinitions.tagSlot })
+      .from(knowledgeBaseTagDefinitions)
+      .where(
+        and(
+          eq(knowledgeBaseTagDefinitions.knowledgeBaseId, knowledgeBaseId),
+          eq(knowledgeBaseTagDefinitions.fieldType, fieldType)
+        )
+      )
+
+    usedSlots = new Set(existingDefinitions.map((def) => def.tagSlot))
+  }
+
+  // Find the first available slot for this field type
+  const availableSlots = getSlotsForFieldType(fieldType)
+  for (const slot of availableSlots) {
+    if (!usedSlots.has(slot)) {
+      return slot
+    }
+  }
+
+  return null // No available slots for this field type
+}
+
 // Helper function to process structured document tags
 async function processDocumentTags(
   knowledgeBaseId: string,
@@ -31,8 +73,9 @@ async function processDocumentTags(
 ): Promise<Record<string, string | null>> {
   const result: Record<string, string | null> = {}

-  // Initialize all tag slots to null
-  TAG_SLOTS.forEach((slot) => {
+  // Initialize all text tag slots to null (only text type is supported currently)
+  const textSlots = getSlotsForFieldType('text')
+  textSlots.forEach((slot) => {
     result[slot] = null
   })

@@ -55,7 +98,7 @@ async function processDocumentTags(
     if (!tag.tagName?.trim() || !tag.value?.trim()) continue

     const tagName = tag.tagName.trim()
-    const fieldType = tag.fieldType || 'text'
+    const fieldType = tag.fieldType
     const value = tag.value.trim()

     let targetSlot: string | null = null
@@ -65,18 +108,13 @@ async function processDocumentTags(
     if (existingDef) {
       targetSlot = existingDef.tagSlot
     } else {
-      // Find next available slot
-      for (const slot of TAG_SLOTS) {
-        if (!existingBySlot.has(slot)) {
-          targetSlot = slot
-          break
-        }
-      }
+      // Find next available slot using the helper function
+      targetSlot = await getNextAvailableSlot(knowledgeBaseId, fieldType, existingBySlot)

       // Create new tag definition if we have a slot
       if (targetSlot) {
         const newDefinition = {
-          id: crypto.randomUUID(),
+          id: randomUUID(),
           knowledgeBaseId,
           tagSlot: targetSlot as any,
           displayName: tagName,
@@ -274,7 +312,7 @@ const BulkUpdateDocumentsSchema = z.object({
 })

 export async function GET(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
-  const requestId = crypto.randomUUID().slice(0, 8)
+  const requestId = randomUUID().slice(0, 8)
   const { id: knowledgeBaseId } = await params

   try {
@@ -385,7 +423,7 @@ export async function GET(req: NextRequest, { params }: { params: Promise<{ id:
 }

 export async function POST(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
-  const requestId = crypto.randomUUID().slice(0, 8)
+  const requestId = randomUUID().slice(0, 8)
   const { id: knowledgeBaseId } = await params

   try {
@@ -432,7 +470,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:

     const createdDocuments = await db.transaction(async (tx) => {
       const documentPromises = validatedData.documents.map(async (docData) => {
-        const documentId = crypto.randomUUID()
+        const documentId = randomUUID()
         const now = new Date()

         // Process documentTagsData if provided (for knowledge base block)
@@ -540,7 +578,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
   try {
     const validatedData = CreateDocumentSchema.parse(body)

-    const documentId = crypto.randomUUID()
+    const documentId = randomUUID()
     const now = new Date()

     // Process structured tag data if provided
84  apps/sim/app/api/knowledge/[id]/next-available-slot/route.ts  Normal file

@@ -0,0 +1,84 @@
import { randomUUID } from 'crypto'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { getMaxSlotsForFieldType, getSlotsForFieldType } from '@/lib/constants/knowledge'
import { createLogger } from '@/lib/logs/console/logger'
import { checkKnowledgeBaseAccess } from '@/app/api/knowledge/utils'
import { db } from '@/db'
import { knowledgeBaseTagDefinitions } from '@/db/schema'

const logger = createLogger('NextAvailableSlotAPI')

// GET /api/knowledge/[id]/next-available-slot - Get the next available tag slot for a knowledge base and field type
export async function GET(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
  const requestId = randomUUID().slice(0, 8)
  const { id: knowledgeBaseId } = await params
  const { searchParams } = new URL(req.url)
  const fieldType = searchParams.get('fieldType')

  if (!fieldType) {
    return NextResponse.json({ error: 'fieldType parameter is required' }, { status: 400 })
  }

  try {
    logger.info(
      `[${requestId}] Getting next available slot for knowledge base ${knowledgeBaseId}, fieldType: ${fieldType}`
    )

    const session = await getSession()
    if (!session?.user?.id) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
    }

    // Check if user has read access to the knowledge base
    const accessCheck = await checkKnowledgeBaseAccess(knowledgeBaseId, session.user.id)
    if (!accessCheck.hasAccess) {
      return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
    }

    // Get available slots for this field type
    const availableSlots = getSlotsForFieldType(fieldType)
    const maxSlots = getMaxSlotsForFieldType(fieldType)

    // Get existing tag definitions to find used slots for this field type
    const existingDefinitions = await db
      .select({ tagSlot: knowledgeBaseTagDefinitions.tagSlot })
      .from(knowledgeBaseTagDefinitions)
      .where(
        and(
          eq(knowledgeBaseTagDefinitions.knowledgeBaseId, knowledgeBaseId),
          eq(knowledgeBaseTagDefinitions.fieldType, fieldType)
        )
      )

    const usedSlots = new Set(existingDefinitions.map((def) => def.tagSlot as string))

    // Find the first available slot for this field type
    let nextAvailableSlot: string | null = null
    for (const slot of availableSlots) {
      if (!usedSlots.has(slot)) {
        nextAvailableSlot = slot
        break
      }
    }

    logger.info(
      `[${requestId}] Next available slot for fieldType ${fieldType}: ${nextAvailableSlot}`
    )

    return NextResponse.json({
      success: true,
      data: {
        nextAvailableSlot,
        fieldType,
        usedSlots: Array.from(usedSlots),
        totalSlots: maxSlots,
        availableSlots: maxSlots - usedSlots.size,
      },
    })
  } catch (error) {
    logger.error(`[${requestId}] Error getting next available slot`, error)
    return NextResponse.json({ error: 'Failed to get next available slot' }, { status: 500 })
  }
}
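A hedged sketch of a client call to this new route; the path follows the comment in the handler and the response shape mirrors its `NextResponse.json` payload:

```ts
// Sketch of calling the next-available-slot route shown above.
async function getNextSlot(knowledgeBaseId: string, fieldType: string) {
  const res = await fetch(
    `/api/knowledge/${knowledgeBaseId}/next-available-slot?fieldType=${encodeURIComponent(fieldType)}`
  )
  if (!res.ok) throw new Error(`Request failed: ${res.status}`)
  const { data } = await res.json()
  // data: { nextAvailableSlot, fieldType, usedSlots, totalSlots, availableSlots }
  return data.nextAvailableSlot as string | null
}
```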
@@ -41,7 +41,6 @@ function extractBlockExecutionsFromTraceSpans(traceSpans: any[]): any[] {
   return blockExecutions
 }

-export const dynamic = 'force-dynamic'
 export const revalidate = 0

 const QueryParamsSchema = z.object({

@@ -7,9 +7,6 @@ import {
   updateOrganizationSeats,
 } from '@/lib/billing/validation/seat-management'
 import { createLogger } from '@/lib/logs/console/logger'
-
-export const dynamic = 'force-dynamic'
-
 import { db } from '@/db'
 import { member, organization } from '@/db/schema'

@@ -7,8 +7,6 @@ import { member, permissions, user, workspace } from '@/db/schema'

 const logger = createLogger('OrganizationWorkspacesAPI')

-export const dynamic = 'force-dynamic'
-
 /**
  * GET /api/organizations/[id]/workspaces
  * Get workspaces related to the organization with optional filtering
52  apps/sim/app/api/providers/ollama/models/route.ts  Normal file

@@ -0,0 +1,52 @@
import { type NextRequest, NextResponse } from 'next/server'
import { env } from '@/lib/env'
import { createLogger } from '@/lib/logs/console/logger'
import type { ModelsObject } from '@/providers/ollama/types'

const logger = createLogger('OllamaModelsAPI')
const OLLAMA_HOST = env.OLLAMA_URL || 'http://localhost:11434'

export const dynamic = 'force-dynamic'

/**
 * Get available Ollama models
 */
export async function GET(request: NextRequest) {
  try {
    logger.info('Fetching Ollama models', {
      host: OLLAMA_HOST,
    })

    const response = await fetch(`${OLLAMA_HOST}/api/tags`, {
      headers: {
        'Content-Type': 'application/json',
      },
    })

    if (!response.ok) {
      logger.warn('Ollama service is not available', {
        status: response.status,
        statusText: response.statusText,
      })
      return NextResponse.json({ models: [] })
    }

    const data = (await response.json()) as ModelsObject
    const models = data.models.map((model) => model.name)

    logger.info('Successfully fetched Ollama models', {
      count: models.length,
      models,
    })

    return NextResponse.json({ models })
  } catch (error) {
    logger.error('Failed to fetch Ollama models', {
      error: error instanceof Error ? error.message : 'Unknown error',
      host: OLLAMA_HOST,
    })

    // Return empty array instead of error to avoid breaking the UI
    return NextResponse.json({ models: [] })
  }
}
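Because this handler deliberately falls back to `{ models: [] }` on any failure, callers can treat an empty list as the only degraded case rather than handling HTTP errors. A small consumption sketch under that assumption:

```ts
// Sketch of consuming the Ollama models route above. The route path follows
// from the file's location under app/api/providers/ollama/models.
async function listOllamaModels(): Promise<string[]> {
  const res = await fetch('/api/providers/ollama/models')
  const { models } = (await res.json()) as { models: string[] }
  if (models.length === 0) {
    console.warn('No Ollama models available (service may be down)')
  }
  return models
}
```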
@@ -7,7 +7,6 @@ import { templates } from '@/db/schema'

 const logger = createLogger('TemplateByIdAPI')

-export const dynamic = 'force-dynamic'
 export const revalidate = 0

 // GET /api/templates/[id] - Retrieve a single template by ID

@@ -9,7 +9,6 @@ import { templateStars, templates, workflow } from '@/db/schema'

 const logger = createLogger('TemplatesAPI')

-export const dynamic = 'force-dynamic'
 export const revalidate = 0

 // Function to sanitize sensitive data from workflow state
110  apps/sim/app/api/tools/microsoft_planner/tasks/route.ts  Normal file

@@ -0,0 +1,110 @@
import { randomUUID } from 'crypto'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console/logger'
import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { db } from '@/db'
import { account } from '@/db/schema'
import type { PlannerTask } from '@/tools/microsoft_planner/types'

const logger = createLogger('MicrosoftPlannerTasksAPI')

export async function GET(request: NextRequest) {
  const requestId = randomUUID().slice(0, 8)

  try {
    const session = await getSession()

    if (!session?.user?.id) {
      logger.warn(`[${requestId}] Unauthenticated request rejected`)
      return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
    }

    const { searchParams } = new URL(request.url)
    const credentialId = searchParams.get('credentialId')
    const planId = searchParams.get('planId')

    if (!credentialId) {
      logger.error(`[${requestId}] Missing credentialId parameter`)
      return NextResponse.json({ error: 'Credential ID is required' }, { status: 400 })
    }

    if (!planId) {
      logger.error(`[${requestId}] Missing planId parameter`)
      return NextResponse.json({ error: 'Plan ID is required' }, { status: 400 })
    }

    // Get the credential from the database
    const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1)

    if (!credentials.length) {
      logger.warn(`[${requestId}] Credential not found`, { credentialId })
      return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
    }

    const credential = credentials[0]

    // Check if the credential belongs to the user
    if (credential.userId !== session.user.id) {
      logger.warn(`[${requestId}] Unauthorized credential access attempt`, {
        credentialUserId: credential.userId,
        requestUserId: session.user.id,
      })
      return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
    }

    // Refresh access token if needed
    const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId)

    if (!accessToken) {
      logger.error(`[${requestId}] Failed to obtain valid access token`)
      return NextResponse.json({ error: 'Failed to obtain valid access token' }, { status: 401 })
    }

    // Fetch tasks directly from Microsoft Graph API
    const response = await fetch(`https://graph.microsoft.com/v1.0/planner/plans/${planId}/tasks`, {
      headers: {
        Authorization: `Bearer ${accessToken}`,
      },
    })

    if (!response.ok) {
      const errorText = await response.text()
      logger.error(`[${requestId}] Microsoft Graph API error:`, errorText)
      return NextResponse.json(
        { error: 'Failed to fetch tasks from Microsoft Graph' },
        { status: response.status }
      )
    }

    const data = await response.json()
    const tasks = data.value || []

    // Filter tasks to only include useful fields (matching our read_task tool)
    const filteredTasks = tasks.map((task: PlannerTask) => ({
      id: task.id,
      title: task.title,
      planId: task.planId,
      bucketId: task.bucketId,
      percentComplete: task.percentComplete,
      priority: task.priority,
      dueDateTime: task.dueDateTime,
      createdDateTime: task.createdDateTime,
      completedDateTime: task.completedDateTime,
      hasDescription: task.hasDescription,
      assignments: task.assignments ? Object.keys(task.assignments) : [],
    }))

    return NextResponse.json({
      tasks: filteredTasks,
      metadata: {
        planId,
        planUrl: `https://graph.microsoft.com/v1.0/planner/plans/${planId}`,
      },
    })
  } catch (error) {
    logger.error(`[${requestId}] Error fetching Microsoft Planner tasks:`, error)
    return NextResponse.json({ error: 'Failed to fetch tasks' }, { status: 500 })
  }
}
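A hedged client sketch for this route; the path follows from the file's location, and the summary type mirrors the `filteredTasks` mapping in the handler:

```ts
// Hypothetical client call for the Planner tasks route above.
interface PlannerTaskSummary {
  id: string
  title: string
  planId: string
  bucketId: string
  percentComplete: number
  priority: number
  dueDateTime: string | null
  createdDateTime: string
  completedDateTime: string | null
  hasDescription: boolean
  assignments: string[] // user IDs extracted from the assignments object
}

async function listPlannerTasks(credentialId: string, planId: string) {
  const params = new URLSearchParams({ credentialId, planId })
  const res = await fetch(`/api/tools/microsoft_planner/tasks?${params}`)
  if (!res.ok) throw new Error(`Planner tasks request failed: ${res.status}`)
  const { tasks } = (await res.json()) as { tasks: PlannerTaskSummary[] }
  return tasks
}
```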
83  apps/sim/app/api/tools/onedrive/folder/route.ts  Normal file

@@ -0,0 +1,83 @@
import { randomUUID } from 'crypto'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console/logger'
import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { db } from '@/db'
import { account } from '@/db/schema'

export const dynamic = 'force-dynamic'

const logger = createLogger('OneDriveFolderAPI')

/**
 * Get a single folder from Microsoft OneDrive
 */
export async function GET(request: NextRequest) {
  const requestId = randomUUID().slice(0, 8)

  try {
    const session = await getSession()
    if (!session?.user?.id) {
      return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
    }

    const { searchParams } = new URL(request.url)
    const credentialId = searchParams.get('credentialId')
    const fileId = searchParams.get('fileId')

    if (!credentialId || !fileId) {
      return NextResponse.json({ error: 'Credential ID and File ID are required' }, { status: 400 })
    }

    const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1)
    if (!credentials.length) {
      return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
    }

    const credential = credentials[0]
    if (credential.userId !== session.user.id) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
    }

    const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId)
    if (!accessToken) {
      return NextResponse.json({ error: 'Failed to obtain valid access token' }, { status: 401 })
    }

    const response = await fetch(
      `https://graph.microsoft.com/v1.0/me/drive/items/${fileId}?$select=id,name,folder,webUrl,createdDateTime,lastModifiedDateTime`,
      {
        headers: {
          Authorization: `Bearer ${accessToken}`,
        },
      }
    )

    if (!response.ok) {
      const errorData = await response.json().catch(() => ({ error: { message: 'Unknown error' } }))
      return NextResponse.json(
        { error: errorData.error?.message || 'Failed to fetch folder from OneDrive' },
        { status: response.status }
      )
    }

    const folder = await response.json()

    // Transform the response to match expected format
    const transformedFolder = {
      id: folder.id,
      name: folder.name,
      mimeType: 'application/vnd.microsoft.graph.folder',
      webViewLink: folder.webUrl,
      createdTime: folder.createdDateTime,
      modifiedTime: folder.lastModifiedDateTime,
    }

    return NextResponse.json({ file: transformedFolder }, { status: 200 })
  } catch (error) {
    logger.error(`[${requestId}] Error fetching folder from OneDrive`, error)
    return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
  }
}
89  apps/sim/app/api/tools/onedrive/folders/route.ts  Normal file

@@ -0,0 +1,89 @@
import { randomUUID } from 'crypto'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console/logger'
import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { db } from '@/db'
import { account } from '@/db/schema'

export const dynamic = 'force-dynamic'

const logger = createLogger('OneDriveFoldersAPI')

import type { MicrosoftGraphDriveItem } from '@/tools/onedrive/types'

/**
 * Get folders from Microsoft OneDrive
 */
export async function GET(request: NextRequest) {
  const requestId = randomUUID().slice(0, 8)

  try {
    const session = await getSession()
    if (!session?.user?.id) {
      return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
    }

    const { searchParams } = new URL(request.url)
    const credentialId = searchParams.get('credentialId')
    const query = searchParams.get('query') || ''

    if (!credentialId) {
      return NextResponse.json({ error: 'Credential ID is required' }, { status: 400 })
    }

    const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1)
    if (!credentials.length) {
      return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
    }

    const credential = credentials[0]
    if (credential.userId !== session.user.id) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
    }

    const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId)
    if (!accessToken) {
      return NextResponse.json({ error: 'Failed to obtain valid access token' }, { status: 401 })
    }

    // Build URL for OneDrive folders
    let url = `https://graph.microsoft.com/v1.0/me/drive/root/children?$filter=folder ne null&$select=id,name,folder,webUrl,createdDateTime,lastModifiedDateTime&$top=50`

    if (query) {
      url += `&$search="${encodeURIComponent(query)}"`
    }

    const response = await fetch(url, {
      headers: {
        Authorization: `Bearer ${accessToken}`,
      },
    })

    if (!response.ok) {
      const errorData = await response.json().catch(() => ({ error: { message: 'Unknown error' } }))
      return NextResponse.json(
        { error: errorData.error?.message || 'Failed to fetch folders from OneDrive' },
        { status: response.status }
      )
    }

    const data = await response.json()
    const folders = (data.value || [])
      .filter((item: MicrosoftGraphDriveItem) => item.folder) // Only folders
      .map((folder: MicrosoftGraphDriveItem) => ({
        id: folder.id,
        name: folder.name,
        mimeType: 'application/vnd.microsoft.graph.folder',
        webViewLink: folder.webUrl,
        createdTime: folder.createdDateTime,
        modifiedTime: folder.lastModifiedDateTime,
      }))

    return NextResponse.json({ files: folders }, { status: 200 })
  } catch (error) {
    logger.error(`[${requestId}] Error fetching folders from OneDrive`, error)
    return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
  }
}
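Both OneDrive routes normalize Graph's `driveItem` shape into a picker-friendly shape (`mimeType`, `webViewLink`, `createdTime`, `modifiedTime`). A hedged sketch of that normalized shape and a call to the list endpoint above:

```ts
// Normalized item shape returned by the OneDrive routes, plus a hypothetical
// client call to the folders list route above.
interface PickerItem {
  id: string
  name: string
  mimeType: string // always 'application/vnd.microsoft.graph.folder' here
  webViewLink: string
  createdTime: string
  modifiedTime: string
}

async function searchOneDriveFolders(credentialId: string, query = '') {
  const params = new URLSearchParams({ credentialId })
  if (query) params.set('query', query)
  const res = await fetch(`/api/tools/onedrive/folders?${params}`)
  if (!res.ok) throw new Error(`OneDrive folders request failed: ${res.status}`)
  const { files } = (await res.json()) as { files: PickerItem[] }
  return files
}
```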
105  apps/sim/app/api/tools/sharepoint/site/route.ts  Normal file

@@ -0,0 +1,105 @@
import { randomUUID } from 'crypto'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console/logger'
import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { db } from '@/db'
import { account } from '@/db/schema'

export const dynamic = 'force-dynamic'

const logger = createLogger('SharePointSiteAPI')

/**
 * Get a single SharePoint site from Microsoft Graph API
 */
export async function GET(request: NextRequest) {
  const requestId = randomUUID().slice(0, 8)

  try {
    const session = await getSession()
    if (!session?.user?.id) {
      return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
    }

    const { searchParams } = new URL(request.url)
    const credentialId = searchParams.get('credentialId')
    const siteId = searchParams.get('siteId')

    if (!credentialId || !siteId) {
      return NextResponse.json({ error: 'Credential ID and Site ID are required' }, { status: 400 })
    }

    const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1)
    if (!credentials.length) {
      return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
    }

    const credential = credentials[0]
    if (credential.userId !== session.user.id) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
    }

    const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId)
    if (!accessToken) {
      return NextResponse.json({ error: 'Failed to obtain valid access token' }, { status: 401 })
    }

    // Handle different ways to access SharePoint sites:
    // 1. Site ID: sites/{site-id}
    // 2. Root site: sites/root
    // 3. Hostname: sites/{hostname}
    // 4. Server-relative URL: sites/{hostname}:/{server-relative-path}
    // 5. Group team site: groups/{group-id}/sites/root

    let endpoint: string
    if (siteId === 'root') {
      endpoint = 'sites/root'
    } else if (siteId.includes(':')) {
      // Server-relative URL format
      endpoint = `sites/${siteId}`
    } else if (siteId.includes('groups/')) {
      // Group team site format
      endpoint = siteId
    } else {
      // Standard site ID or hostname
      endpoint = `sites/${siteId}`
    }

    const response = await fetch(
      `https://graph.microsoft.com/v1.0/${endpoint}?$select=id,name,displayName,webUrl,createdDateTime,lastModifiedDateTime`,
      {
        headers: {
          Authorization: `Bearer ${accessToken}`,
        },
      }
    )

    if (!response.ok) {
      const errorData = await response.json().catch(() => ({ error: { message: 'Unknown error' } }))
      return NextResponse.json(
        { error: errorData.error?.message || 'Failed to fetch site from SharePoint' },
        { status: response.status }
      )
    }

    const site = await response.json()

    // Transform the response to match expected format
    const transformedSite = {
      id: site.id,
      name: site.displayName || site.name,
      mimeType: 'application/vnd.microsoft.graph.site',
      webViewLink: site.webUrl,
      createdTime: site.createdDateTime,
      modifiedTime: site.lastModifiedDateTime,
    }

    logger.info(`[${requestId}] Successfully fetched SharePoint site: ${transformedSite.name}`)
    return NextResponse.json({ site: transformedSite }, { status: 200 })
  } catch (error) {
    logger.error(`[${requestId}] Error fetching site from SharePoint`, error)
    return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
  }
}
85  apps/sim/app/api/tools/sharepoint/sites/route.ts  Normal file

@@ -0,0 +1,85 @@
import { randomUUID } from 'crypto'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console/logger'
import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { db } from '@/db'
import { account } from '@/db/schema'
import type { SharepointSite } from '@/tools/sharepoint/types'

export const dynamic = 'force-dynamic'

const logger = createLogger('SharePointSitesAPI')

/**
 * Get SharePoint sites from Microsoft Graph API
 */
export async function GET(request: NextRequest) {
  const requestId = randomUUID().slice(0, 8)

  try {
    const session = await getSession()
    if (!session?.user?.id) {
      return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
    }

    const { searchParams } = new URL(request.url)
    const credentialId = searchParams.get('credentialId')
    const query = searchParams.get('query') || ''

    if (!credentialId) {
      return NextResponse.json({ error: 'Credential ID is required' }, { status: 400 })
    }

    const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1)
    if (!credentials.length) {
      return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
    }

    const credential = credentials[0]
    if (credential.userId !== session.user.id) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
    }

    const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId)
    if (!accessToken) {
      return NextResponse.json({ error: 'Failed to obtain valid access token' }, { status: 401 })
    }

    // Build URL for SharePoint sites
    // Use search=* to get all sites the user has access to, or search for specific query
    const searchQuery = query || '*'
    const url = `https://graph.microsoft.com/v1.0/sites?search=${encodeURIComponent(searchQuery)}&$select=id,name,displayName,webUrl,createdDateTime,lastModifiedDateTime&$top=50`

    const response = await fetch(url, {
      headers: {
        Authorization: `Bearer ${accessToken}`,
      },
    })

    if (!response.ok) {
      const errorData = await response.json().catch(() => ({ error: { message: 'Unknown error' } }))
      return NextResponse.json(
        { error: errorData.error?.message || 'Failed to fetch sites from SharePoint' },
        { status: response.status }
      )
    }

    const data = await response.json()
    const sites = (data.value || []).map((site: SharepointSite) => ({
      id: site.id,
      name: site.displayName || site.name,
      mimeType: 'application/vnd.microsoft.graph.site',
      webViewLink: site.webUrl,
      createdTime: site.createdDateTime,
      modifiedTime: site.lastModifiedDateTime,
    }))

    logger.info(`[${requestId}] Successfully fetched ${sites.length} SharePoint sites`)
    return NextResponse.json({ files: sites }, { status: 200 })
  } catch (error) {
    logger.error(`[${requestId}] Error fetching sites from SharePoint`, error)
    return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
  }
}
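The single-site route accepts several `siteId` forms (root, plain ID or hostname, server-relative URL, group team site path) and maps each to a Graph endpoint. A standalone restatement of that resolution logic, plus a hypothetical call to the sites search route above:

```ts
// Standalone model of the siteId-to-endpoint resolution used by the
// single-site route, and a hypothetical call to the sites search route.
function resolveSiteEndpoint(siteId: string): string {
  if (siteId === 'root') return 'sites/root' // tenant root site
  if (siteId.includes(':')) return `sites/${siteId}` // server-relative URL form
  if (siteId.includes('groups/')) return siteId // group team site path
  return `sites/${siteId}` // plain site ID or hostname
}

async function searchSharePointSites(credentialId: string, query = '') {
  const params = new URLSearchParams({ credentialId })
  if (query) params.set('query', query)
  const res = await fetch(`/api/tools/sharepoint/sites?${params}`)
  if (!res.ok) throw new Error(`SharePoint sites request failed: ${res.status}`)
  const { files } = await res.json() // sites are returned under `files`
  return files
}
```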
@@ -4,9 +4,6 @@ import { NextResponse } from 'next/server'
 import { z } from 'zod'
 import { getSession } from '@/lib/auth'
 import { createLogger } from '@/lib/logs/console/logger'
-
-export const dynamic = 'force-dynamic'
-
 import { db } from '@/db'
 import { settings } from '@/db/schema'
@@ -87,7 +87,7 @@ describe('Workflow Execution API Route', () => {
     }))

     vi.doMock('@/lib/workflows/db-helpers', () => ({
-      loadWorkflowFromNormalizedTables: vi.fn().mockResolvedValue({
+      loadDeployedWorkflowState: vi.fn().mockResolvedValue({
         blocks: {
           'starter-id': {
             id: 'starter-id',
@@ -121,7 +121,7 @@ describe('Workflow Execution API Route', () => {
         ],
         loops: {},
         parallels: {},
-        isFromNormalizedTables: true,
+        isFromNormalizedTables: false, // Changed to false since it's from deployed state
       }),
     }))

@@ -516,7 +516,7 @@ describe('Workflow Execution API Route', () => {
     }))

     vi.doMock('@/lib/workflows/db-helpers', () => ({
-      loadWorkflowFromNormalizedTables: vi.fn().mockResolvedValue({
+      loadDeployedWorkflowState: vi.fn().mockResolvedValue({
        blocks: {
          'starter-id': {
            id: 'starter-id',
@@ -550,7 +550,7 @@ describe('Workflow Execution API Route', () => {
         ],
         loops: {},
         parallels: {},
-        isFromNormalizedTables: true,
+        isFromNormalizedTables: false, // Changed to false since it's from deployed state
       }),
     }))
@@ -9,7 +9,7 @@ import { createLogger } from '@/lib/logs/console/logger'
 import { LoggingSession } from '@/lib/logs/execution/logging-session'
 import { buildTraceSpans } from '@/lib/logs/execution/trace-spans/trace-spans'
 import { decryptSecret } from '@/lib/utils'
-import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/db-helpers'
+import { loadDeployedWorkflowState } from '@/lib/workflows/db-helpers'
 import {
   createHttpResponseFromBlock,
   updateWorkflowRunCounts,
@@ -111,20 +111,13 @@ async function executeWorkflow(workflow: any, requestId: string, input?: any): P
   runningExecutions.add(executionKey)
   logger.info(`[${requestId}] Starting workflow execution: ${workflowId}`)

-  // Load workflow data from normalized tables
-  logger.debug(`[${requestId}] Loading workflow ${workflowId} from normalized tables`)
-  const normalizedData = await loadWorkflowFromNormalizedTables(workflowId)
+  // Load workflow data from deployed state for API executions
+  const deployedData = await loadDeployedWorkflowState(workflowId)

-  if (!normalizedData) {
-    throw new Error(
-      `Workflow ${workflowId} has no normalized data available. Ensure the workflow is properly saved to normalized tables.`
-    )
-  }
-
-  // Use normalized data as primary source
-  const { blocks, edges, loops, parallels } = normalizedData
-  logger.info(`[${requestId}] Using normalized tables for workflow execution: ${workflowId}`)
-  logger.debug(`[${requestId}] Normalized data loaded:`, {
+  // Use deployed data as primary source for API executions
+  const { blocks, edges, loops, parallels } = deployedData
+  logger.info(`[${requestId}] Using deployed state for workflow execution: ${workflowId}`)
+  logger.debug(`[${requestId}] Deployed data loaded:`, {
     blocksCount: Object.keys(blocks || {}).length,
     edgesCount: (edges || []).length,
     loopsCount: Object.keys(loops || {}).length,
@@ -3,9 +3,6 @@ import { and, desc, eq, isNull } from 'drizzle-orm'
 import { NextResponse } from 'next/server'
 import { getSession } from '@/lib/auth'
 import { createLogger } from '@/lib/logs/console/logger'
-
-export const dynamic = 'force-dynamic'
-
 import { db } from '@/db'
 import { permissions, workflow, workflowBlocks, workspace } from '@/db/schema'
@@ -70,6 +70,16 @@ export async function POST(request: NextRequest) {
     // Note: This endpoint is stateless, so we need to get this from the request
     const currentWorkflowState = (body as any).currentWorkflowState

+    // Ensure currentWorkflowState has all required properties with proper defaults if provided
+    if (currentWorkflowState) {
+      if (!currentWorkflowState.loops) {
+        currentWorkflowState.loops = {}
+      }
+      if (!currentWorkflowState.parallels) {
+        currentWorkflowState.parallels = {}
+      }
+    }
+
     logger.info(`[${requestId}] Creating diff from YAML`, {
       contentLength: yamlContent.length,
       hasDiffAnalysis: !!diffAnalysis,
@@ -24,8 +24,8 @@ const MergeDiffRequestSchema = z.object({
  proposedState: z.object({
    blocks: z.record(z.any()),
    edges: z.array(z.any()),
    loops: z.record(z.any()),
    parallels: z.record(z.any()),
    loops: z.record(z.any()).optional(),
    parallels: z.record(z.any()).optional(),
  }),
  diffAnalysis: z.any().optional(),
  metadata: z.object({
@@ -50,6 +50,14 @@ export async function POST(request: NextRequest) {
    const body = await request.json()
    const { existingDiff, yamlContent, diffAnalysis, options } = MergeDiffRequestSchema.parse(body)

    // Ensure existingDiff.proposedState has all required properties with proper defaults
    if (!existingDiff.proposedState.loops) {
      existingDiff.proposedState.loops = {}
    }
    if (!existingDiff.proposedState.parallels) {
      existingDiff.proposedState.parallels = {}
    }

    logger.info(`[${requestId}] Merging diff from YAML`, {
      contentLength: yamlContent.length,
      existingBlockCount: Object.keys(existingDiff.proposedState.blocks).length,

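Both this endpoint and the diff-create endpoint above apply the same post-parse normalization. A minimal sketch, assuming plain zod (the schema name and fields below are illustrative): declaring `.default({})` on the record fields lets the parser supply the empty objects, making the manual `if (!state.loops)` checks unnecessary.

```typescript
import { z } from 'zod'

// Sketch only: zod applies the empty-object default at parse time.
const ProposedStateSchema = z.object({
  blocks: z.record(z.any()),
  edges: z.array(z.any()),
  loops: z.record(z.any()).default({}),
  parallels: z.record(z.any()).default({}),
})

// `loops` and `parallels` are always plain objects after parsing.
const state = ProposedStateSchema.parse({ blocks: {}, edges: [] })
console.log(state.loops, state.parallels) // {} {}
```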
@@ -1,8 +1,10 @@
import { Analytics } from '@vercel/analytics/next'
import { SpeedInsights } from '@vercel/speed-insights/next'
import { GeistSans } from 'geist/font/sans'
import type { Metadata, Viewport } from 'next'
import { PublicEnvScript } from 'next-runtime-env'
import { BrandedLayout } from '@/components/branded-layout'
import { generateBrandedMetadata, generateStructuredData } from '@/lib/branding/metadata'
import { env } from '@/lib/env'
import { isHosted } from '@/lib/environment'
import { createLogger } from '@/lib/logs/console/logger'
import { getAssetUrl } from '@/lib/utils'
@@ -51,149 +53,20 @@ export const viewport: Viewport = {
  userScalable: false,
}

export const metadata: Metadata = {
  title: {
    template: '',
    default: 'Sim',
  },
  description:
    'Build and deploy AI agents using our Figma-like canvas. Build, write evals, and deploy AI agent workflows that automate workflows and streamline your business processes.',
  applicationName: 'Sim',
  authors: [{ name: 'Sim' }],
  generator: 'Next.js',
  keywords: [
    'AI agent',
    'AI agent builder',
    'AI agent workflow',
    'AI workflow automation',
    'visual workflow editor',
    'AI agents',
    'workflow canvas',
    'intelligent automation',
    'AI tools',
    'workflow designer',
    'artificial intelligence',
    'business automation',
    'AI agent workflows',
    'visual programming',
  ],
  referrer: 'origin-when-cross-origin',
  creator: 'Sim',
  publisher: 'Sim',
  metadataBase: new URL('https://sim.ai'),
  alternates: {
    canonical: '/',
    languages: {
      'en-US': '/en-US',
    },
  },
  robots: {
    index: true,
    follow: true,
    googleBot: {
      index: true,
      follow: true,
      'max-image-preview': 'large',
      'max-video-preview': -1,
      'max-snippet': -1,
    },
  },
  openGraph: {
    type: 'website',
    locale: 'en_US',
    url: 'https://sim.ai',
    title: 'Sim',
    description:
      'Build and deploy AI agents using our Figma-like canvas. Build, write evals, and deploy AI agent workflows that automate workflows and streamline your business processes.',
    siteName: 'Sim',
    images: [
      {
        url: getAssetUrl('social/facebook.png'),
        width: 1200,
        height: 630,
        alt: 'Sim',
      },
    ],
  },
  twitter: {
    card: 'summary_large_image',
    title: 'Sim',
    description:
      'Build and deploy AI agents using our Figma-like canvas. Build, write evals, and deploy AI agent workflows that automate workflows and streamline your business processes.',
    images: [getAssetUrl('social/twitter.png')],
    creator: '@simstudioai',
    site: '@simstudioai',
  },
  manifest: '/favicon/site.webmanifest',
  icons: {
    icon: [
      { url: '/favicon/favicon-16x16.png', sizes: '16x16', type: 'image/png' },
      { url: '/favicon/favicon-32x32.png', sizes: '32x32', type: 'image/png' },
      {
        url: '/favicon/favicon-192x192.png',
        sizes: '192x192',
        type: 'image/png',
      },
      {
        url: '/favicon/favicon-512x512.png',
        sizes: '512x512',
        type: 'image/png',
      },
      { url: '/sim.png', sizes: 'any', type: 'image/png' },
    ],
    apple: '/favicon/apple-touch-icon.png',
    shortcut: '/favicon/favicon.ico',
  },
  appleWebApp: {
    capable: true,
    statusBarStyle: 'default',
    title: 'Sim',
  },
  formatDetection: {
    telephone: false,
  },
  category: 'technology',
  other: {
    'apple-mobile-web-app-capable': 'yes',
    'mobile-web-app-capable': 'yes',
    'msapplication-TileColor': '#ffffff',
    'msapplication-config': '/favicon/browserconfig.xml',
  },
}
// Generate dynamic metadata based on brand configuration
export const metadata: Metadata = generateBrandedMetadata()

export default function RootLayout({ children }: { children: React.ReactNode }) {
  const structuredData = generateStructuredData()

  return (
    <html lang='en' suppressHydrationWarning className={GeistSans.className}>
    <html lang='en' suppressHydrationWarning>
      <head>
        {/* Structured Data for SEO */}
        <script
          type='application/ld+json'
          dangerouslySetInnerHTML={{
            __html: JSON.stringify({
              '@context': 'https://schema.org',
              '@type': 'SoftwareApplication',
              name: 'Sim',
              description:
                'Build and deploy AI agents using our Figma-like canvas. Build, write evals, and deploy AI agent workflows that automate workflows and streamline your business processes.',
              url: 'https://sim.ai',
              applicationCategory: 'BusinessApplication',
              operatingSystem: 'Web Browser',
              offers: {
                '@type': 'Offer',
                category: 'SaaS',
              },
              creator: {
                '@type': 'Organization',
                name: 'Sim',
                url: 'https://sim.ai',
              },
              featureList: [
                'Visual AI Agent Builder',
                'Workflow Canvas Interface',
                'AI Agent Automation',
                'Custom AI Workflows',
              ],
            }),
            __html: JSON.stringify(structuredData),
          }}
        />

@@ -226,24 +99,26 @@ export default function RootLayout({ children }: { children: React.ReactNode })
        <PublicEnvScript />

        {/* RB2B Script - Only load on hosted version */}
        {/* {isHosted && env.NEXT_PUBLIC_RB2B_KEY && (
        {isHosted && env.NEXT_PUBLIC_RB2B_KEY && (
          <script
            dangerouslySetInnerHTML={{
              __html: `!function () {var reb2b = window.reb2b = window.reb2b || [];if (reb2b.invoked) return;reb2b.invoked = true;reb2b.methods = ["identify", "collect"];reb2b.factory = function (method) {return function () {var args = Array.prototype.slice.call(arguments);args.unshift(method);reb2b.push(args);return reb2b;};};for (var i = 0; i < reb2b.methods.length; i++) {var key = reb2b.methods[i];reb2b[key] = reb2b.factory(key);}reb2b.load = function (key) {var script = document.createElement("script");script.type = "text/javascript";script.async = true;script.src = "https://b2bjsstore.s3.us-west-2.amazonaws.com/b/" + key + "/${env.NEXT_PUBLIC_RB2B_KEY}.js.gz";var first = document.getElementsByTagName("script")[0];first.parentNode.insertBefore(script, first);};reb2b.SNIPPET_VERSION = "1.0.1";reb2b.load("${env.NEXT_PUBLIC_RB2B_KEY}");}();`,
            }}
          />
        )} */}
        )}
      </head>
      <body suppressHydrationWarning>
        <ZoomPrevention />
        <TelemetryConsentDialog />
        {children}
        {isHosted && (
          <>
            <SpeedInsights />
            <Analytics />
          </>
        )}
        <BrandedLayout>
          <ZoomPrevention />
          <TelemetryConsentDialog />
          {children}
          {isHosted && (
            <>
              <SpeedInsights />
              <Analytics />
            </>
          )}
        </BrandedLayout>
      </body>
    </html>
  )

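The hard-coded metadata block above is replaced by a single call to `generateBrandedMetadata()`. Its real implementation lives in `@/lib/branding/metadata`; the sketch below only illustrates the pattern of deriving Next.js `Metadata` from a brand config and is not the actual code (the `BrandConfig` shape is assumed):

```typescript
import type { Metadata } from 'next'

// Hypothetical brand-config shape, for illustration only.
interface BrandConfig {
  name: string
  baseUrl: string
}

export function generateBrandedMetadata(brand: BrandConfig, overrides: Metadata = {}): Metadata {
  return {
    title: { template: `%s | ${brand.name}`, default: brand.name },
    applicationName: brand.name,
    metadataBase: new URL(brand.baseUrl),
    openGraph: { siteName: brand.name, url: brand.baseUrl },
    ...overrides,
  }
}
```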
29  apps/sim/app/manifest.ts  (new file)
@@ -0,0 +1,29 @@
import type { MetadataRoute } from 'next'
import { getBrandConfig } from '@/lib/branding/branding'

export default function manifest(): MetadataRoute.Manifest {
  const brand = getBrandConfig()

  return {
    name: brand.name,
    short_name: brand.name,
    description:
      'Build and deploy AI agents using our Figma-like canvas. Build, write evals, and deploy AI agent workflows that automate workflows and streamline your business processes.',
    start_url: '/',
    display: 'standalone',
    background_color: brand.primaryColor || '#ffffff',
    theme_color: brand.primaryColor || '#ffffff',
    icons: [
      {
        src: '/favicon/android-chrome-192x192.png',
        sizes: '192x192',
        type: 'image/png',
      },
      {
        src: '/favicon/android-chrome-512x512.png',
        sizes: '512x512',
        type: 'image/png',
      },
    ],
  }
}
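With the App Router, a `manifest.ts` under `app/` is compiled by Next.js and served at `/manifest.webmanifest`, so the PWA manifest now follows the brand config instead of a static file; this is presumably also why the hard-coded `manifest: '/favicon/site.webmanifest'` entry disappears along with the rest of the static metadata in the layout above.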
@@ -4,6 +4,7 @@ import { Suspense, useEffect, useState } from 'react'
import { CheckCircle, Heart, Info, Loader2, XCircle } from 'lucide-react'
import { useSearchParams } from 'next/navigation'
import { Button, Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui'
import { useBrandConfig } from '@/lib/branding/branding'

interface UnsubscribeData {
  success: boolean
@@ -26,6 +27,7 @@ function UnsubscribeContent() {
  const [error, setError] = useState<string | null>(null)
  const [processing, setProcessing] = useState(false)
  const [unsubscribed, setUnsubscribed] = useState(false)
  const brand = useBrandConfig()

  const email = searchParams.get('email')
  const token = searchParams.get('token')
@@ -160,7 +162,7 @@ function UnsubscribeContent() {
              <Button
                onClick={() =>
                  window.open(
                    'mailto:help@sim.ai?subject=Unsubscribe%20Help&body=Hi%2C%20I%20need%20help%20unsubscribing%20from%20emails.%20My%20unsubscribe%20link%20is%20not%20working.',
                    `mailto:${brand.supportEmail}?subject=Unsubscribe%20Help&body=Hi%2C%20I%20need%20help%20unsubscribing%20from%20emails.%20My%20unsubscribe%20link%20is%20not%20working.`,
                    '_blank'
                  )
                }
@@ -176,8 +178,8 @@ function UnsubscribeContent() {
            <div className='mt-4 text-center'>
              <p className='text-muted-foreground text-xs'>
                Need immediate help? Email us at{' '}
                <a href='mailto:help@sim.ai' className='text-primary hover:underline'>
                  help@sim.ai
                <a href={`mailto:${brand.supportEmail}`} className='text-primary hover:underline'>
                  {brand.supportEmail}
                </a>
              </p>
            </div>
@@ -222,7 +224,7 @@ function UnsubscribeContent() {
              <Button
                onClick={() =>
                  window.open(
                    'mailto:help@sim.ai?subject=Account%20Help&body=Hi%2C%20I%20need%20help%20with%20my%20account%20emails.',
                    `mailto:${brand.supportEmail}?subject=Account%20Help&body=Hi%2C%20I%20need%20help%20with%20my%20account%20emails.`,
                    '_blank'
                  )
                }
@@ -256,8 +258,8 @@ function UnsubscribeContent() {
            <p className='text-muted-foreground text-sm'>
              If you change your mind, you can always update your email preferences in your account
              settings or contact us at{' '}
              <a href='mailto:help@sim.ai' className='text-primary hover:underline'>
                help@sim.ai
              <a href={`mailto:${brand.supportEmail}`} className='text-primary hover:underline'>
                {brand.supportEmail}
              </a>
            </p>
          </CardContent>
@@ -369,8 +371,8 @@ function UnsubscribeContent() {

          <p className='text-center text-muted-foreground text-xs'>
            Questions? Contact us at{' '}
            <a href='mailto:help@sim.ai' className='text-primary hover:underline'>
              help@sim.ai
            <a href={`mailto:${brand.supportEmail}`} className='text-primary hover:underline'>
              {brand.supportEmail}
            </a>
          </p>
        </div>

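The repeated mailto strings above swap a hard-coded address for `brand.supportEmail` while keeping the pre-encoded subject and body. A small sketch of the same link built with `encodeURIComponent` (the helper name is illustrative, not code from this diff):

```typescript
// Sketch only: build a support mailto link from the brand config.
function supportMailto(supportEmail: string, subject: string, body: string): string {
  return `mailto:${supportEmail}?subject=${encodeURIComponent(subject)}&body=${encodeURIComponent(body)}`
}

// Usage, mirroring the buttons above:
// window.open(supportMailto(brand.supportEmail, 'Unsubscribe Help', 'My unsubscribe link is not working.'), '_blank')
```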
@@ -1,7 +1,7 @@
'use client'

import { useState } from 'react'
import { ChevronDown, Plus, X } from 'lucide-react'
import { ChevronDown, Info, Plus, X } from 'lucide-react'
import {
  Badge,
  Button,
@@ -20,9 +20,14 @@ import {
  SelectItem,
  SelectTrigger,
  SelectValue,
  Tooltip,
  TooltipContent,
  TooltipProvider,
  TooltipTrigger,
} from '@/components/ui'
import { MAX_TAG_SLOTS, TAG_SLOTS, type TagSlot } from '@/lib/constants/knowledge'
import { MAX_TAG_SLOTS, type TagSlot } from '@/lib/constants/knowledge'
import { useKnowledgeBaseTagDefinitions } from '@/hooks/use-knowledge-base-tag-definitions'
import { useNextAvailableSlot } from '@/hooks/use-next-available-slot'
import { type TagDefinitionInput, useTagDefinitions } from '@/hooks/use-tag-definitions'

export interface DocumentTag {
@@ -52,6 +57,7 @@ export function DocumentTagEntry({
  // Use different hooks based on whether we have a documentId
  const documentTagHook = useTagDefinitions(knowledgeBaseId, documentId)
  const kbTagHook = useKnowledgeBaseTagDefinitions(knowledgeBaseId)
  const { getNextAvailableSlot: getServerNextSlot } = useNextAvailableSlot(knowledgeBaseId)

  // Use the document-level hook since we have documentId
  const { saveTagDefinitions } = documentTagHook
@@ -66,17 +72,6 @@
    value: '',
  })

  const getNextAvailableSlot = (): DocumentTag['slot'] => {
    // Check which slots are used at the KB level (tag definitions)
    const usedSlots = new Set(kbTagDefinitions.map((def) => def.tagSlot))
    for (const slot of TAG_SLOTS) {
      if (!usedSlots.has(slot)) {
        return slot
      }
    }
    return TAG_SLOTS[0] // Fallback to first slot if all are used
  }

  const handleRemoveTag = async (index: number) => {
    const updatedTags = tags.filter((_, i) => i !== index)
    onTagsChange(updatedTags)
@@ -117,15 +112,36 @@

  // Save tag from modal
  const saveTagFromModal = async () => {
    if (!editForm.displayName.trim()) return
    if (!editForm.displayName.trim() || !editForm.value.trim()) return

    try {
      // Calculate slot once at the beginning
      const targetSlot =
        editingTagIndex !== null ? tags[editingTagIndex].slot : getNextAvailableSlot()
      let targetSlot: string

      if (editingTagIndex !== null) {
        // Editing existing tag - use existing slot
        // EDIT MODE: Editing existing tag - use existing slot
        targetSlot = tags[editingTagIndex].slot
      } else {
        // CREATE MODE: Check if using existing definition or creating new one
        const existingDefinition = kbTagDefinitions.find(
          (def) => def.displayName.toLowerCase() === editForm.displayName.toLowerCase()
        )

        if (existingDefinition) {
          // Using existing definition - use its slot
          targetSlot = existingDefinition.tagSlot
        } else {
          // Creating new definition - get next available slot from server
          const serverSlot = await getServerNextSlot(editForm.fieldType)
          if (!serverSlot) {
            throw new Error(`No available slots for new tag of type '${editForm.fieldType}'`)
          }
          targetSlot = serverSlot
        }
      }

      // Update the tags array
      if (editingTagIndex !== null) {
        // Editing existing tag
        const updatedTags = [...tags]
        updatedTags[editingTagIndex] = {
          ...updatedTags[editingTagIndex],
@@ -135,7 +151,7 @@
        }
        onTagsChange(updatedTags)
      } else {
        // Creating new tag - use calculated slot
        // Creating new tag
        const newTag: DocumentTag = {
          slot: targetSlot,
          displayName: editForm.displayName,
@@ -146,25 +162,60 @@
        onTagsChange(newTags)
      }

      // Auto-save tag definition if it's a new name
      const existingDefinition = kbTagDefinitions.find(
        (def) => def.displayName.toLowerCase() === editForm.displayName.toLowerCase()
      )
      // Handle tag definition creation/update based on edit mode
      if (editingTagIndex !== null) {
        // EDIT MODE: Always update existing definition, never create new slots
        const currentTag = tags[editingTagIndex]
        const currentDefinition = kbTagDefinitions.find(
          (def) => def.displayName.toLowerCase() === currentTag.displayName.toLowerCase()
        )

      if (!existingDefinition) {
        // Use the same slot for both tag and definition
        const newDefinition: TagDefinitionInput = {
          displayName: editForm.displayName,
          fieldType: editForm.fieldType,
          tagSlot: targetSlot as TagSlot,
        }
        if (currentDefinition) {
          const updatedDefinition: TagDefinitionInput = {
            displayName: editForm.displayName,
            fieldType: currentDefinition.fieldType, // Keep existing field type (can't change in edit mode)
            tagSlot: currentDefinition.tagSlot, // Keep existing slot
            _originalDisplayName: currentTag.displayName, // Tell server which definition to update
          }

        if (saveTagDefinitions) {
          await saveTagDefinitions([newDefinition])
        } else {
          throw new Error('Cannot save tag definitions without a document ID')
          if (saveTagDefinitions) {
            await saveTagDefinitions([updatedDefinition])
          } else {
            throw new Error('Cannot save tag definitions without a document ID')
          }
          await refreshTagDefinitions()

          // Update the document tag's display name
          const updatedTags = [...tags]
          updatedTags[editingTagIndex] = {
            ...currentTag,
            displayName: editForm.displayName,
            fieldType: currentDefinition.fieldType,
          }
          onTagsChange(updatedTags)
        }
        await refreshTagDefinitions()
      } else {
        // CREATE MODE: Adding new tag
        const existingDefinition = kbTagDefinitions.find(
          (def) => def.displayName.toLowerCase() === editForm.displayName.toLowerCase()
        )

        if (!existingDefinition) {
          // Create new definition
          const newDefinition: TagDefinitionInput = {
            displayName: editForm.displayName,
            fieldType: editForm.fieldType,
            tagSlot: targetSlot as TagSlot,
          }

          if (saveTagDefinitions) {
            await saveTagDefinitions([newDefinition])
          } else {
            throw new Error('Cannot save tag definitions without a document ID')
          }
          await refreshTagDefinitions()
        }
        // If existingDefinition exists, use it (no server update needed)
      }

      // Save the actual document tags if onSave is provided
@@ -194,13 +245,24 @@
      }

      setModalOpen(false)
    } catch (error) {}
    } catch (error) {
      console.error('Error saving tag:', error)
    }
  }

  // Filter available tag definitions (exclude already used ones)
  const availableDefinitions = kbTagDefinitions.filter(
    (def) => !tags.some((tag) => tag.displayName.toLowerCase() === def.displayName.toLowerCase())
  )
  // Filter available tag definitions based on context
  const availableDefinitions = kbTagDefinitions.filter((def) => {
    if (editingTagIndex !== null) {
      // When editing, exclude only other used tag names (not the current one being edited)
      return !tags.some(
        (tag, index) =>
          index !== editingTagIndex &&
          tag.displayName.toLowerCase() === def.displayName.toLowerCase()
      )
    }
    // When creating new, exclude all already used tag names
    return !tags.some((tag) => tag.displayName.toLowerCase() === def.displayName.toLowerCase())
  })

  return (
    <div className='space-y-4'>
@@ -244,7 +306,7 @@
          variant='outline'
          size='sm'
          onClick={openNewTagModal}
          disabled={disabled || tags.length >= MAX_TAG_SLOTS}
          disabled={disabled}
          className='gap-1 border-dashed text-muted-foreground hover:text-foreground'
        >
          <Plus className='h-4 w-4' />
@@ -274,7 +336,24 @@
          <div className='space-y-4'>
            {/* Tag Name */}
            <div className='space-y-2'>
              <Label htmlFor='tag-name'>Tag Name</Label>
              <div className='flex items-center gap-2'>
                <Label htmlFor='tag-name'>Tag Name</Label>
                {editingTagIndex !== null && (
                  <TooltipProvider>
                    <Tooltip>
                      <TooltipTrigger asChild>
                        <Info className='h-4 w-4 cursor-help text-muted-foreground' />
                      </TooltipTrigger>
                      <TooltipContent>
                        <p className='text-sm'>
                          Changing this tag name will update it for all documents in this knowledge
                          base
                        </p>
                      </TooltipContent>
                    </Tooltip>
                  </TooltipProvider>
                )}
              </div>
              <div className='flex gap-2'>
                <Input
                  id='tag-name'
@@ -283,7 +362,7 @@
                  placeholder='Enter tag name'
                  className='flex-1'
                />
                {availableDefinitions.length > 0 && (
                {editingTagIndex === null && availableDefinitions.length > 0 && (
                  <DropdownMenu>
                    <DropdownMenuTrigger asChild>
                      <Button variant='outline' size='sm'>
@@ -317,6 +396,7 @@
              <Select
                value={editForm.fieldType}
                onValueChange={(value) => setEditForm({ ...editForm, fieldType: value })}
                disabled={editingTagIndex !== null} // Disable in edit mode
              >
                <SelectTrigger>
                  <SelectValue />
@@ -339,12 +419,60 @@
              </div>
            </div>

            {/* Show warning when at max slots in create mode */}
            {editingTagIndex === null && kbTagDefinitions.length >= MAX_TAG_SLOTS && (
              <div className='rounded-md border border-amber-200 bg-amber-50 p-3'>
                <div className='flex items-center gap-2 text-amber-800 text-sm'>
                  <span className='font-medium'>Maximum tag definitions reached</span>
                </div>
                <p className='mt-1 text-amber-700 text-xs'>
                  You can still use existing tag definitions from the dropdown, but cannot create new
                  ones.
                </p>
              </div>
            )}

            <div className='flex justify-end gap-2 pt-4'>
              <Button variant='outline' onClick={() => setModalOpen(false)}>
                Cancel
              </Button>
              <Button onClick={saveTagFromModal} disabled={!editForm.displayName.trim()}>
                {editingTagIndex !== null ? 'Save Changes' : 'Add Tag'}
              <Button
                onClick={saveTagFromModal}
                disabled={(() => {
                  if (!editForm.displayName.trim()) return true

                  // In edit mode, always allow
                  if (editingTagIndex !== null) return false

                  // In create mode, check if we're creating a new definition at max slots
                  const existingDefinition = kbTagDefinitions.find(
                    (def) => def.displayName.toLowerCase() === editForm.displayName.toLowerCase()
                  )

                  // If using existing definition, allow
                  if (existingDefinition) return false

                  // If creating new definition and at max slots, disable
                  return kbTagDefinitions.length >= MAX_TAG_SLOTS
                })()}
              >
                {(() => {
                  if (editingTagIndex !== null) {
                    return 'Save Changes'
                  }

                  const existingDefinition = kbTagDefinitions.find(
                    (def) => def.displayName.toLowerCase() === editForm.displayName.toLowerCase()
                  )

                  if (existingDefinition) {
                    return 'Use Existing Tag'
                  }
                  if (kbTagDefinitions.length >= MAX_TAG_SLOTS) {
                    return 'Max Tags Reached'
                  }
                  return 'Create New Tag'
                })()}
              </Button>
            </div>
          </DialogContent>

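The diff above removes the client-side `getNextAvailableSlot` scan in favor of `useNextAvailableSlot`, which asks the server and can account for per-field-type slot pools and concurrent edits. A rough sketch of what such a server-side counterpart might look like; every name here (`SLOT_POOLS`, `nextAvailableSlot`) is hypothetical:

```typescript
// Hypothetical server-side helper, for illustration only: pick the first
// slot of the requested field type that no existing definition occupies,
// or null when the pool is exhausted.
const SLOT_POOLS: Record<string, string[]> = {
  text: ['tag1', 'tag2', 'tag3', 'tag4', 'tag5', 'tag6', 'tag7'],
}

function nextAvailableSlot(
  fieldType: string,
  existing: Array<{ tagSlot: string }>
): string | null {
  const used = new Set(existing.map((def) => def.tagSlot))
  return SLOT_POOLS[fieldType]?.find((slot) => !used.has(slot)) ?? null
}
```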
@@ -37,7 +37,7 @@ export function ExportControls({ disabled = false }: ExportControlsProps) {
    }
  }

  const handleExportYaml = () => {
  const handleExportYaml = async () => {
    if (!currentWorkflow || !activeWorkflowId) {
      logger.warn('No active workflow to export')
      return
@@ -45,7 +45,7 @@ export function ExportControls({ disabled = false }: ExportControlsProps) {

    setIsExporting(true)
    try {
      const yamlContent = getYaml()
      const yamlContent = await getYaml()
      const filename = `${currentWorkflow.name.replace(/[^a-z0-9]/gi, '-')}.yaml`

      downloadFile(yamlContent, filename, 'text/yaml')

@@ -1,7 +1,17 @@
'use client'

import { type FC, memo, useEffect, useMemo, useRef, useState } from 'react'
import { Check, Clipboard, Loader2, RotateCcw, ThumbsDown, ThumbsUp, X } from 'lucide-react'
import {
  Check,
  Clipboard,
  FileText,
  Image,
  Loader2,
  RotateCcw,
  ThumbsDown,
  ThumbsUp,
  X,
} from 'lucide-react'
import { InlineToolCall } from '@/lib/copilot/tools/inline-tool-call'
import { createLogger } from '@/lib/logs/console/logger'
import { usePreviewStore } from '@/stores/copilot/preview-store'
@@ -38,6 +48,107 @@ const StreamingIndicator = memo(() => (

StreamingIndicator.displayName = 'StreamingIndicator'

// File attachment display component
interface FileAttachmentDisplayProps {
  fileAttachments: any[]
}

const FileAttachmentDisplay = memo(({ fileAttachments }: FileAttachmentDisplayProps) => {
  // Cache for file URLs to avoid re-fetching on every render
  const [fileUrls, setFileUrls] = useState<Record<string, string>>({})

  const formatFileSize = (bytes: number) => {
    if (bytes === 0) return '0 B'
    const k = 1024
    const sizes = ['B', 'KB', 'MB', 'GB']
    const i = Math.floor(Math.log(bytes) / Math.log(k))
    return `${Math.round((bytes / k ** i) * 10) / 10} ${sizes[i]}`
  }

  const getFileIcon = (mediaType: string) => {
    if (mediaType.startsWith('image/')) {
      return <Image className='h-5 w-5 text-muted-foreground' />
    }
    if (mediaType.includes('pdf')) {
      return <FileText className='h-5 w-5 text-red-500' />
    }
    if (mediaType.includes('text') || mediaType.includes('json') || mediaType.includes('xml')) {
      return <FileText className='h-5 w-5 text-blue-500' />
    }
    return <FileText className='h-5 w-5 text-muted-foreground' />
  }

  const getFileUrl = (file: any) => {
    const cacheKey = file.s3_key
    if (fileUrls[cacheKey]) {
      return fileUrls[cacheKey]
    }

    // Generate URL only once and cache it
    const url = `/api/files/serve/s3/${encodeURIComponent(file.s3_key)}?bucket=copilot`
    setFileUrls((prev) => ({ ...prev, [cacheKey]: url }))
    return url
  }

  const handleFileClick = (file: any) => {
    // Use cached URL or generate it
    const serveUrl = getFileUrl(file)

    // Open the file in a new tab
    window.open(serveUrl, '_blank')
  }

  const isImageFile = (mediaType: string) => {
    return mediaType.startsWith('image/')
  }

  return (
    <>
      {fileAttachments.map((file) => (
        <div
          key={file.id}
          className='group relative h-16 w-16 cursor-pointer overflow-hidden rounded-md border border-border/50 bg-muted/20 transition-all hover:bg-muted/40'
          onClick={() => handleFileClick(file)}
          title={`${file.filename} (${formatFileSize(file.size)})`}
        >
          {isImageFile(file.media_type) ? (
            // For images, show actual thumbnail
            <img
              src={getFileUrl(file)}
              alt={file.filename}
              className='h-full w-full object-cover'
              onError={(e) => {
                // If image fails to load, replace with icon
                const target = e.target as HTMLImageElement
                target.style.display = 'none'
                const parent = target.parentElement
                if (parent) {
                  const iconContainer = document.createElement('div')
                  iconContainer.className =
                    'flex items-center justify-center w-full h-full bg-background/50'
                  iconContainer.innerHTML =
                    '<svg class="h-5 w-5 text-muted-foreground" xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><rect width="18" height="18" x="3" y="3" rx="2" ry="2"/><circle cx="9" cy="9" r="2"/><path d="m21 15-3.086-3.086a2 2 0 0 0-2.828 0L6 21"/></svg>'
                  parent.appendChild(iconContainer)
                }
              }}
            />
          ) : (
            // For other files, show icon centered
            <div className='flex h-full w-full items-center justify-center bg-background/50'>
              {getFileIcon(file.media_type)}
            </div>
          )}

          {/* Hover overlay effect */}
          <div className='pointer-events-none absolute inset-0 bg-black/10 opacity-0 transition-opacity group-hover:opacity-100' />
        </div>
      ))}
    </>
  )
})

FileAttachmentDisplay.displayName = 'FileAttachmentDisplay'

// Smooth streaming text component with typewriter effect
interface SmoothStreamingTextProps {
  content: string
@@ -481,8 +592,18 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
    if (isUser) {
      return (
        <div className='w-full py-2'>
          {/* File attachments displayed above the message, completely separate from message box width */}
          {message.fileAttachments && message.fileAttachments.length > 0 && (
            <div className='mb-1 flex justify-end'>
              <div className='flex flex-wrap gap-1.5'>
                <FileAttachmentDisplay fileAttachments={message.fileAttachments} />
              </div>
            </div>
          )}

          <div className='flex justify-end'>
            <div className='max-w-[80%]'>
              {/* Message content in purple box */}
              <div
                className='rounded-[10px] px-3 py-2'
                style={{ backgroundColor: 'rgba(128, 47, 255, 0.08)' }}
@@ -491,6 +612,8 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
                <WordWrap text={message.content} />
              </div>
            </div>

            {/* Checkpoints below message */}
            {hasCheckpoints && (
              <div className='mt-1 flex justify-end'>
                {showRestoreConfirmation ? (

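One caveat in `FileAttachmentDisplay` above: `getFileUrl` runs during render and writes to state via `setFileUrls`, which React can flag as an update-during-render. Since the URL is a pure function of `s3_key`, a `useMemo` (or a plain function with no state at all) achieves the same caching; the hook below is an illustrative alternative, not the component's actual code:

```typescript
import { useMemo } from 'react'

// Sketch only: derive the serve URL instead of caching it in state.
function useFileServeUrl(s3Key: string, bucket = 'copilot'): string {
  return useMemo(
    () => `/api/files/serve/s3/${encodeURIComponent(s3Key)}?bucket=${bucket}`,
    [s3Key, bucket]
  )
}
```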
@@ -8,13 +8,43 @@ import {
  useRef,
  useState,
} from 'react'
import { ArrowUp, Loader2, MessageCircle, Package, X } from 'lucide-react'
import {
  ArrowUp,
  FileText,
  Image,
  Loader2,
  MessageCircle,
  Package,
  Paperclip,
  X,
} from 'lucide-react'
import { Button } from '@/components/ui/button'
import { Textarea } from '@/components/ui/textarea'
import { useSession } from '@/lib/auth-client'
import { cn } from '@/lib/utils'
import { useCopilotStore } from '@/stores/copilot/store'

export interface MessageFileAttachment {
  id: string
  s3_key: string
  filename: string
  media_type: string
  size: number
}

interface AttachedFile {
  id: string
  name: string
  size: number
  type: string
  path: string
  key?: string // Add key field to store the actual S3 key
  uploading: boolean
  previewUrl?: string // For local preview of images before upload
}

interface UserInputProps {
  onSubmit: (message: string) => void
  onSubmit: (message: string, fileAttachments?: MessageFileAttachment[]) => void
  onAbort?: () => void
  disabled?: boolean
  isLoading?: boolean
@@ -49,7 +79,15 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
    ref
  ) => {
    const [internalMessage, setInternalMessage] = useState('')
    const [attachedFiles, setAttachedFiles] = useState<AttachedFile[]>([])
    // Drag and drop state
    const [isDragging, setIsDragging] = useState(false)
    const [dragCounter, setDragCounter] = useState(0)
    const textareaRef = useRef<HTMLTextAreaElement>(null)
    const fileInputRef = useRef<HTMLInputElement>(null)

    const { data: session } = useSession()
    const { currentChat, workflowId } = useCopilotStore()

    // Expose focus method to parent
    useImperativeHandle(
@@ -76,17 +114,190 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
      }
    }, [message])

    // Cleanup preview URLs on unmount
    useEffect(() => {
      return () => {
        attachedFiles.forEach((f) => {
          if (f.previewUrl) {
            URL.revokeObjectURL(f.previewUrl)
          }
        })
      }
    }, [])

    // Drag and drop handlers
    const handleDragEnter = (e: React.DragEvent) => {
      e.preventDefault()
      e.stopPropagation()
      setDragCounter((prev) => {
        const newCount = prev + 1
        if (newCount === 1) {
          setIsDragging(true)
        }
        return newCount
      })
    }

    const handleDragLeave = (e: React.DragEvent) => {
      e.preventDefault()
      e.stopPropagation()
      setDragCounter((prev) => {
        const newCount = prev - 1
        if (newCount === 0) {
          setIsDragging(false)
        }
        return newCount
      })
    }

    const handleDragOver = (e: React.DragEvent) => {
      e.preventDefault()
      e.stopPropagation()
      // Add visual feedback for valid drop zone
      e.dataTransfer.dropEffect = 'copy'
    }

    const handleDrop = async (e: React.DragEvent) => {
      e.preventDefault()
      e.stopPropagation()
      setIsDragging(false)
      setDragCounter(0)

      if (e.dataTransfer.files && e.dataTransfer.files.length > 0) {
        await processFiles(e.dataTransfer.files)
      }
    }

    // Process dropped or selected files
    const processFiles = async (fileList: FileList) => {
      const userId = session?.user?.id

      if (!userId) {
        console.error('User ID not available for file upload')
        return
      }

      // Process files one by one
      for (const file of Array.from(fileList)) {
        // Create a preview URL for images
        let previewUrl: string | undefined
        if (file.type.startsWith('image/')) {
          previewUrl = URL.createObjectURL(file)
        }

        // Create a temporary file entry with uploading state
        const tempFile: AttachedFile = {
          id: crypto.randomUUID(),
          name: file.name,
          size: file.size,
          type: file.type,
          path: '',
          uploading: true,
          previewUrl,
        }

        setAttachedFiles((prev) => [...prev, tempFile])

        try {
          // Request presigned URL
          const presignedResponse = await fetch('/api/files/presigned?type=copilot', {
            method: 'POST',
            headers: {
              'Content-Type': 'application/json',
            },
            body: JSON.stringify({
              fileName: file.name,
              contentType: file.type,
              fileSize: file.size,
              userId,
            }),
          })

          if (!presignedResponse.ok) {
            throw new Error('Failed to get presigned URL')
          }

          const presignedData = await presignedResponse.json()

          // Upload file to S3
          console.log('Uploading to S3:', presignedData.presignedUrl)
          const uploadResponse = await fetch(presignedData.presignedUrl, {
            method: 'PUT',
            headers: {
              'Content-Type': file.type,
            },
            body: file,
          })

          console.log('S3 Upload response status:', uploadResponse.status)

          if (!uploadResponse.ok) {
            const errorText = await uploadResponse.text()
            console.error('S3 Upload failed:', errorText)
            throw new Error(`Failed to upload file: ${uploadResponse.status} ${errorText}`)
          }

          // Update file entry with success
          setAttachedFiles((prev) =>
            prev.map((f) =>
              f.id === tempFile.id
                ? {
                    ...f,
                    path: presignedData.fileInfo.path,
                    key: presignedData.fileInfo.key, // Store the actual S3 key
                    uploading: false,
                  }
                : f
            )
          )
        } catch (error) {
          console.error('File upload failed:', error)
          // Remove failed upload
          setAttachedFiles((prev) => prev.filter((f) => f.id !== tempFile.id))
        }
      }
    }

    const handleSubmit = () => {
      const trimmedMessage = message.trim()
      if (!trimmedMessage || disabled || isLoading) return

      onSubmit(trimmedMessage)
      // Clear the message after submit
      // Check for failed uploads and show user feedback
      const failedUploads = attachedFiles.filter((f) => !f.uploading && !f.key)
      if (failedUploads.length > 0) {
        console.error(
          'Some files failed to upload:',
          failedUploads.map((f) => f.name)
        )
      }

      // Convert attached files to the format expected by the API
      const fileAttachments = attachedFiles
        .filter((f) => !f.uploading && f.key) // Only include successfully uploaded files with keys
        .map((f) => ({
          id: f.id,
          s3_key: f.key!, // Use the actual S3 key stored from the upload response
          filename: f.name,
          media_type: f.type,
          size: f.size,
        }))

      onSubmit(trimmedMessage, fileAttachments)

      // Clean up preview URLs before clearing
      attachedFiles.forEach((f) => {
        if (f.previewUrl) {
          URL.revokeObjectURL(f.previewUrl)
        }
      })

      // Clear the message and files after submit
      if (controlledValue !== undefined) {
        onControlledChange?.('')
      } else {
        setInternalMessage('')
      }
      setAttachedFiles([])
    }

    const handleAbort = () => {
@@ -111,6 +322,67 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
      }
    }

    const handleFileSelect = () => {
      fileInputRef.current?.click()
    }

    const handleFileChange = async (e: React.ChangeEvent<HTMLInputElement>) => {
      const files = e.target.files
      if (!files || files.length === 0) return

      await processFiles(files)

      // Clear the input
      if (fileInputRef.current) {
        fileInputRef.current.value = ''
      }
    }

    const removeFile = (fileId: string) => {
      // Clean up preview URL if it exists
      const file = attachedFiles.find((f) => f.id === fileId)
      if (file?.previewUrl) {
        URL.revokeObjectURL(file.previewUrl)
      }
      setAttachedFiles((prev) => prev.filter((f) => f.id !== fileId))
    }

    const handleFileClick = (file: AttachedFile) => {
      // If file has been uploaded and has an S3 key, open the S3 URL
      if (file.key) {
        const serveUrl = `/api/files/serve/s3/${encodeURIComponent(file.key)}?bucket=copilot`
        window.open(serveUrl, '_blank')
      } else if (file.previewUrl) {
        // If file hasn't been uploaded yet but has a preview URL, open that
        window.open(file.previewUrl, '_blank')
      }
    }

    const formatFileSize = (bytes: number) => {
      if (bytes === 0) return '0 Bytes'
      const k = 1024
      const sizes = ['Bytes', 'KB', 'MB', 'GB']
      const i = Math.floor(Math.log(bytes) / Math.log(k))
      return `${Math.round((bytes / k ** i) * 100) / 100} ${sizes[i]}`
    }

    const isImageFile = (type: string) => {
      return type.startsWith('image/')
    }

    const getFileIcon = (mediaType: string) => {
      if (mediaType.startsWith('image/')) {
        return <Image className='h-5 w-5 text-muted-foreground' />
      }
      if (mediaType.includes('pdf')) {
        return <FileText className='h-5 w-5 text-red-500' />
      }
      if (mediaType.includes('text') || mediaType.includes('json') || mediaType.includes('xml')) {
        return <FileText className='h-5 w-5 text-blue-500' />
      }
      return <FileText className='h-5 w-5 text-muted-foreground' />
    }

    const canSubmit = message.trim().length > 0 && !disabled && !isLoading
    const showAbortButton = isLoading && onAbort

@@ -130,23 +402,93 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(

    return (
      <div className={cn('relative flex-none pb-4', className)}>
        <div className='rounded-[8px] border border-[#E5E5E5] bg-[#FFFFFF] p-2 shadow-xs dark:border-[#414141] dark:bg-[#202020]'>
        <div
          className={cn(
            'rounded-[8px] border border-[#E5E5E5] bg-[#FFFFFF] p-2 shadow-xs transition-all duration-200 dark:border-[#414141] dark:bg-[#202020]',
            isDragging &&
              'border-[#802FFF] bg-purple-50/50 dark:border-[#802FFF] dark:bg-purple-950/20'
          )}
          onDragEnter={handleDragEnter}
          onDragLeave={handleDragLeave}
          onDragOver={handleDragOver}
          onDrop={handleDrop}
        >
          {/* Attached Files Display with Thumbnails */}
          {attachedFiles.length > 0 && (
            <div className='mb-2 flex flex-wrap gap-1.5'>
              {attachedFiles.map((file) => (
                <div
                  key={file.id}
                  className='group relative h-16 w-16 cursor-pointer overflow-hidden rounded-md border border-border/50 bg-muted/20 transition-all hover:bg-muted/40'
                  title={`${file.name} (${formatFileSize(file.size)})`}
                  onClick={() => handleFileClick(file)}
                >
                  {isImageFile(file.type) && file.previewUrl ? (
                    // For images, show actual thumbnail
                    <img
                      src={file.previewUrl}
                      alt={file.name}
                      className='h-full w-full object-cover'
                    />
                  ) : isImageFile(file.type) && file.key ? (
                    // For uploaded images without preview URL, use S3 URL
                    <img
                      src={`/api/files/serve/s3/${encodeURIComponent(file.key)}?bucket=copilot`}
                      alt={file.name}
                      className='h-full w-full object-cover'
                    />
                  ) : (
                    // For other files, show icon centered
                    <div className='flex h-full w-full items-center justify-center bg-background/50'>
                      {getFileIcon(file.type)}
                    </div>
                  )}

                  {/* Loading overlay */}
                  {file.uploading && (
                    <div className='absolute inset-0 flex items-center justify-center bg-black/50'>
                      <Loader2 className='h-4 w-4 animate-spin text-white' />
                    </div>
                  )}

                  {/* Remove button */}
                  {!file.uploading && (
                    <Button
                      variant='ghost'
                      size='icon'
                      onClick={(e) => {
                        e.stopPropagation()
                        removeFile(file.id)
                      }}
                      className='absolute top-0.5 right-0.5 h-5 w-5 bg-black/50 text-white opacity-0 transition-opacity hover:bg-black/70 group-hover:opacity-100'
                    >
                      <X className='h-3 w-3' />
                    </Button>
                  )}

                  {/* Hover overlay effect */}
                  <div className='pointer-events-none absolute inset-0 bg-black/10 opacity-0 transition-opacity group-hover:opacity-100' />
                </div>
              ))}
            </div>
          )}

          {/* Textarea Field */}
          <Textarea
            ref={textareaRef}
            value={message}
            onChange={handleInputChange}
            onKeyDown={handleKeyDown}
            placeholder={placeholder}
            placeholder={isDragging ? 'Drop files here...' : placeholder}
            disabled={disabled}
            rows={1}
            className='mb-2 min-h-[32px] w-full resize-none overflow-hidden border-0 bg-transparent px-[2px] py-1 text-muted-foreground focus-visible:ring-0 focus-visible:ring-offset-0'
            style={{ height: 'auto' }}
          />

          {/* Bottom Row: Mode Selector + Send Button */}
          {/* Bottom Row: Mode Selector + Attach Button + Send Button */}
          <div className='flex items-center justify-between'>
            {/* Mode Selector Tag */}
            {/* Left side: Mode Selector */}
            <Button
              variant='ghost'
              size='sm'
@@ -158,36 +500,61 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
              <span className='capitalize'>{mode}</span>
            </Button>

            {/* Send Button */}
            {showAbortButton ? (
            {/* Right side: Attach Button + Send Button */}
            <div className='flex items-center gap-1'>
              {/* Attach Button */}
              <Button
                onClick={handleAbort}
                disabled={isAborting}
                variant='ghost'
                size='icon'
                className='h-6 w-6 rounded-full bg-red-500 text-white transition-all duration-200 hover:bg-red-600'
                title='Stop generation'
                onClick={handleFileSelect}
                disabled={disabled || isLoading}
                className='h-6 w-6 text-muted-foreground hover:text-foreground'
                title='Attach file'
              >
                {isAborting ? (
                  <Loader2 className='h-3 w-3 animate-spin' />
                ) : (
                  <X className='h-3 w-3' />
                )}
                <Paperclip className='h-3 w-3' />
              </Button>
            ) : (
              <Button
                onClick={handleSubmit}
                disabled={!canSubmit}
                size='icon'
                className='h-6 w-6 rounded-full bg-[#802FFF] text-white shadow-[0_0_0_0_#802FFF] transition-all duration-200 hover:bg-[#7028E6] hover:shadow-[0_0_0_4px_rgba(127,47,255,0.15)]'
              >
                {isLoading ? (
                  <Loader2 className='h-3 w-3 animate-spin' />
                ) : (
                  <ArrowUp className='h-3 w-3' />
                )}
              </Button>
            )}

              {/* Send Button */}
              {showAbortButton ? (
                <Button
                  onClick={handleAbort}
                  disabled={isAborting}
                  size='icon'
                  className='h-6 w-6 rounded-full bg-red-500 text-white transition-all duration-200 hover:bg-red-600'
                  title='Stop generation'
                >
                  {isAborting ? (
                    <Loader2 className='h-3 w-3 animate-spin' />
                  ) : (
                    <X className='h-3 w-3' />
                  )}
                </Button>
              ) : (
                <Button
                  onClick={handleSubmit}
                  disabled={!canSubmit}
                  size='icon'
                  className='h-6 w-6 rounded-full bg-[#802FFF] text-white shadow-[0_0_0_0_#802FFF] transition-all duration-200 hover:bg-[#7028E6] hover:shadow-[0_0_0_4px_rgba(127,47,255,0.15)]'
                >
                  {isLoading ? (
                    <Loader2 className='h-3 w-3 animate-spin' />
                  ) : (
                    <ArrowUp className='h-3 w-3' />
                  )}
                </Button>
              )}
            </div>
          </div>

          {/* Hidden File Input */}
          <input
            ref={fileInputRef}
            type='file'
            onChange={handleFileChange}
            className='hidden'
            accept='.pdf,.doc,.docx,.txt,.md,.png,.jpg,.jpeg,.gif,.svg'
            multiple
          />
        </div>
      </div>
    )

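The upload path added above is a two-step presigned flow: POST the file's metadata to `/api/files/presigned?type=copilot` to obtain a presigned URL, then PUT the raw bytes straight to S3. Condensed into one function (the response field names mirror this diff; they are not a documented API):

```typescript
// Sketch only: two-step presigned upload, returning the S3 key that the
// message later carries as `s3_key`.
async function uploadViaPresignedUrl(file: File, userId: string): Promise<string> {
  const res = await fetch('/api/files/presigned?type=copilot', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      fileName: file.name,
      contentType: file.type,
      fileSize: file.size,
      userId,
    }),
  })
  if (!res.ok) throw new Error('Failed to get presigned URL')
  const { presignedUrl, fileInfo } = await res.json()

  const put = await fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  })
  if (!put.ok) throw new Error(`Failed to upload file: ${put.status}`)

  return fileInfo.key
}
```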
@@ -12,7 +12,10 @@ import {
  CopilotWelcome,
  UserInput,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components'
import type { UserInputRef } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/user-input'
import type {
  MessageFileAttachment,
  UserInputRef,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/user-input'
import { COPILOT_TOOL_IDS } from '@/stores/copilot/constants'
import { usePreviewStore } from '@/stores/copilot/preview-store'
import { useCopilotStore } from '@/stores/copilot/store'
@@ -251,12 +254,16 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref

  // Handle message submission
  const handleSubmit = useCallback(
    async (query: string) => {
    async (query: string, fileAttachments?: MessageFileAttachment[]) => {
      if (!query || isSendingMessage || !activeWorkflowId) return

      try {
        await sendMessage(query, { stream: true })
        logger.info('Sent message:', query)
        await sendMessage(query, { stream: true, fileAttachments })
        logger.info(
          'Sent message:',
          query,
          fileAttachments ? `with ${fileAttachments.length} attachments` : ''
        )
      } catch (error) {
        logger.error('Failed to send message:', error)
      }

@@ -24,6 +24,7 @@ import {
  parseProvider,
} from '@/lib/oauth'
import { OAuthRequiredModal } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/workflow-block/components/sub-block/components/credential-selector/components/oauth-required-modal'
import type { PlannerTask } from '@/tools/microsoft_planner/types'

const logger = createLogger('MicrosoftFileSelector')

@@ -40,6 +41,9 @@ export interface MicrosoftFileInfo {
  owners?: { displayName: string; emailAddress: string }[]
}

// Union type for items that can be displayed in the file selector
type SelectableItem = MicrosoftFileInfo | PlannerTask

interface MicrosoftFileSelectorProps {
  value: string
  onChange: (value: string, fileInfo?: MicrosoftFileInfo) => void
@@ -50,6 +54,7 @@ interface MicrosoftFileSelectorProps {
  serviceId?: string
  showPreview?: boolean
  onFileInfoChange?: (fileInfo: MicrosoftFileInfo | null) => void
  planId?: string
}

export function MicrosoftFileSelector({
@@ -62,6 +67,7 @@ export function MicrosoftFileSelector({
  serviceId,
  showPreview = true,
  onFileInfoChange,
  planId,
}: MicrosoftFileSelectorProps) {
  const [open, setOpen] = useState(false)
  const [credentials, setCredentials] = useState<Credential[]>([])
@@ -77,6 +83,11 @@
  const [credentialsLoaded, setCredentialsLoaded] = useState(false)
  const initialFetchRef = useRef(false)

  // Handle Microsoft Planner task selection
  const [plannerTasks, setPlannerTasks] = useState<PlannerTask[]>([])
  const [isLoadingTasks, setIsLoadingTasks] = useState(false)
  const [selectedTask, setSelectedTask] = useState<PlannerTask | null>(null)

  // Determine the appropriate service ID based on provider and scopes
  const getServiceId = (): string => {
    if (serviceId) return serviceId
@@ -128,7 +139,7 @@
    }
  }, [provider, getProviderId, selectedCredentialId])

  // Fetch available Excel files for the selected credential
  // Fetch available files for the selected credential
  const fetchAvailableFiles = useCallback(async () => {
    if (!selectedCredentialId) return

@@ -143,7 +154,17 @@
      queryParams.append('query', searchQuery.trim())
    }

    const response = await fetch(`/api/auth/oauth/microsoft/files?${queryParams.toString()}`)
    // Route to correct endpoint based on service
    let endpoint: string
    if (serviceId === 'onedrive') {
      endpoint = `/api/tools/onedrive/folders?${queryParams.toString()}`
    } else if (serviceId === 'sharepoint') {
      endpoint = `/api/tools/sharepoint/sites?${queryParams.toString()}`
    } else {
      endpoint = `/api/auth/oauth/microsoft/files?${queryParams.toString()}`
    }

    const response = await fetch(endpoint)

    if (response.ok) {
      const data = await response.json()
@@ -160,7 +181,7 @@
    } finally {
      setIsLoadingFiles(false)
    }
  }, [selectedCredentialId, searchQuery])
  }, [selectedCredentialId, searchQuery, serviceId])

  // Fetch a single file by ID when we have a selectedFileId but no metadata
  const fetchFileById = useCallback(
@@ -175,7 +196,22 @@
        fileId: fileId,
      })

      const response = await fetch(`/api/auth/oauth/microsoft/file?${queryParams.toString()}`)
      // Route to correct endpoint based on service
      let endpoint: string
      if (serviceId === 'onedrive') {
        endpoint = `/api/tools/onedrive/folder?${queryParams.toString()}`
      } else if (serviceId === 'sharepoint') {
        // Change from fileId to siteId for SharePoint
        const sharepointParams = new URLSearchParams({
          credentialId: selectedCredentialId,
          siteId: fileId, // Use siteId instead of fileId
        })
        endpoint = `/api/tools/sharepoint/site?${sharepointParams.toString()}`
      } else {
        endpoint = `/api/auth/oauth/microsoft/file?${queryParams.toString()}`
      }

      const response = await fetch(endpoint)

      if (response.ok) {
        const data = await response.json()

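The two `serviceId` branches above repeat the same endpoint routing in both fetch paths. A lookup-table sketch of the list path (a hypothetical refactor, not code from this diff):

```typescript
// Sketch only: map serviceId to its listing endpoint, falling back to the
// generic Microsoft files route.
const LIST_ENDPOINTS: Record<string, string> = {
  onedrive: '/api/tools/onedrive/folders',
  sharepoint: '/api/tools/sharepoint/sites',
}

function listEndpoint(serviceId: string | undefined, params: URLSearchParams): string {
  const base = (serviceId && LIST_ENDPOINTS[serviceId]) || '/api/auth/oauth/microsoft/files'
  return `${base}?${params.toString()}`
}
```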
@@ -204,9 +240,77 @@
        setIsLoadingSelectedFile(false)
      }
    },
    [selectedCredentialId, onFileInfoChange]
    [selectedCredentialId, onFileInfoChange, serviceId]
  )

  // Fetch Microsoft Planner tasks when planId and credentials are available
  const fetchPlannerTasks = useCallback(async () => {
    if (!selectedCredentialId || !planId || serviceId !== 'microsoft-planner') {
      logger.info('Skipping task fetch - missing requirements:', {
        selectedCredentialId: !!selectedCredentialId,
        planId: !!planId,
        serviceId,
      })
      return
    }

    logger.info('Fetching Planner tasks with:', {
      credentialId: selectedCredentialId,
      planId,
      serviceId,
    })

    setIsLoadingTasks(true)
    try {
      const queryParams = new URLSearchParams({
        credentialId: selectedCredentialId,
        planId: planId,
      })

      const url = `/api/tools/microsoft_planner/tasks?${queryParams.toString()}`
      logger.info('Calling API endpoint:', url)

      const response = await fetch(url)

      if (response.ok) {
        const data = await response.json()
        logger.info('Received task data:', data)
        const tasks = data.tasks || []

        // Transform tasks to match file info format for consistency
        const transformedTasks = tasks.map((task: PlannerTask) => ({
          id: task.id,
          name: task.title,
          mimeType: 'planner/task',
          webViewLink: `https://tasks.office.com/planner/task/${task.id}`,
          modifiedTime: task.createdDateTime,
          createdTime: task.createdDateTime,
          planId: task.planId,
          bucketId: task.bucketId,
          percentComplete: task.percentComplete,
          priority: task.priority,
          dueDateTime: task.dueDateTime,
        }))

        logger.info('Transformed tasks:', transformedTasks)
        setPlannerTasks(transformedTasks)
      } else {
        const errorText = await response.text()
        logger.error('API response not ok:', {
          status: response.status,
          statusText: response.statusText,
          errorText,
        })
        setPlannerTasks([])
      }
    } catch (error) {
      logger.error('Network/fetch error:', error)
      setPlannerTasks([])
    } finally {
      setIsLoadingTasks(false)
    }
  }, [selectedCredentialId, planId, serviceId])

  // Fetch credentials on initial mount
  useEffect(() => {
    if (!initialFetchRef.current) {
@@ -233,6 +337,35 @@
    }
  }, [searchQuery, selectedCredentialId, fetchAvailableFiles])

  // Fetch planner tasks when credentials and planId change
  useEffect(() => {
    if (serviceId === 'microsoft-planner' && selectedCredentialId && planId) {
      fetchPlannerTasks()
    }
  }, [selectedCredentialId, planId, serviceId, fetchPlannerTasks])

  // Handle task selection for planner
  const handleTaskSelect = (task: PlannerTask) => {
    const taskId = task.id || ''
    // Convert PlannerTask to MicrosoftFileInfo format for compatibility
    const taskAsFileInfo: MicrosoftFileInfo = {
      id: taskId,
      name: task.title,
      mimeType: 'planner/task',
      webViewLink: `https://tasks.office.com/planner/task/${taskId}`,
      createdTime: task.createdDateTime,
      modifiedTime: task.createdDateTime,
    }

    setSelectedFileId(taskId)
    setSelectedFile(taskAsFileInfo)
    setSelectedTask(task)
    onChange(taskId, taskAsFileInfo)
    onFileInfoChange?.(taskAsFileInfo)
    setOpen(false)
    setSearchQuery('')
  }

  // Keep internal selectedFileId in sync with the value prop
  useEffect(() => {
    if (value !== selectedFileId) {
@@ -276,7 +409,10 @@
      selectedCredentialId &&
      credentialsLoaded &&
      !selectedFile &&
      !isLoadingSelectedFile
      !isLoadingSelectedFile &&
      serviceId !== 'microsoft-planner' &&
      serviceId !== 'sharepoint' &&
      serviceId !== 'onedrive'
    ) {
      fetchFileById(value)
    }
@@ -287,6 +423,7 @@
    selectedFile,
    isLoadingSelectedFile,
    fetchFileById,
    serviceId,
  ])

  // Handle selecting a file from the available files
@@ -324,6 +461,22 @@
      return <ExternalLink className='h-4 w-4' />
    }

    // Handle OneDrive specifically by checking serviceId
    if (baseProvider === 'microsoft' && serviceId === 'onedrive') {
      const onedriveService = baseProviderConfig.services.onedrive
      if (onedriveService) {
        return onedriveService.icon({ className: 'h-4 w-4' })
      }
    }

    // Handle SharePoint specifically by checking serviceId
    if (baseProvider === 'microsoft' && serviceId === 'sharepoint') {
      const sharepointService = baseProviderConfig.services.sharepoint
      if (sharepointService) {
        return sharepointService.icon({ className: 'h-4 w-4' })
      }
    }

    // For compound providers, find the specific service
    if (providerName.includes('-')) {
      for (const service of Object.values(baseProviderConfig.services)) {
@@ -383,6 +536,9 @@
    if (file.mimeType === 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet') {
      return <MicrosoftExcelIcon className={`${iconSize} text-green-600`} />
    }
    if (file.mimeType === 'planner/task') {
      return getProviderIcon(provider)
    }
    // if (file.mimeType === 'application/vnd.openxmlformats-officedocument.wordprocessingml.document') {
    //   return <FileIcon className={`${iconSize} text-blue-600`} />
    // }
@@ -397,6 +553,55 @@
    setSearchQuery(query)
  }

  const getFileTypeTitleCase = () => {
    if (serviceId === 'onedrive') return 'Folders'
    if (serviceId === 'sharepoint') return 'Sites'
    if (serviceId === 'microsoft-planner') return 'Tasks'
    return 'Excel Files'
  }

  const getSearchPlaceholder = () => {
    if (serviceId === 'onedrive') return 'Search OneDrive folders...'
    if (serviceId === 'sharepoint') return 'Search SharePoint sites...'
    if (serviceId === 'microsoft-planner') return 'Search tasks...'
    return 'Search Excel files...'
  }

  const getEmptyStateText = () => {
    if (serviceId === 'onedrive') {
      return {
        title: 'No folders found.',
        description: 'No folders were found in your OneDrive.',
      }
    }
    if (serviceId === 'sharepoint') {
      return {
        title: 'No sites found.',
|
||||
description: 'No SharePoint sites were found.',
|
||||
}
|
||||
}
|
||||
if (serviceId === 'microsoft-planner') {
|
||||
return {
|
||||
title: 'No tasks found.',
|
||||
description: 'No tasks were found in this plan.',
|
||||
}
|
||||
}
|
||||
return {
|
||||
title: 'No Excel files found.',
|
||||
description: 'No .xlsx files were found in your OneDrive.',
|
||||
}
|
||||
}
|
||||
|
||||
// Filter tasks based on search query for planner
|
||||
const filteredTasks: SelectableItem[] =
|
||||
serviceId === 'microsoft-planner'
|
||||
? plannerTasks.filter((task) => {
|
||||
const title = task.title || ''
|
||||
const query = searchQuery || ''
|
||||
return title.toLowerCase().includes(query.toLowerCase())
|
||||
})
|
||||
: availableFiles
|
||||
|
||||
return (
|
||||
<>
|
||||
<div className='space-y-2'>
|
||||
@@ -405,7 +610,7 @@ export function MicrosoftFileSelector({
|
||||
onOpenChange={(isOpen) => {
|
||||
setOpen(isOpen)
|
||||
if (!isOpen) {
|
||||
setSearchQuery('') // Clear search when popover closes
|
||||
setSearchQuery('')
|
||||
}
|
||||
}}
|
||||
>
|
||||
@@ -415,7 +620,7 @@ export function MicrosoftFileSelector({
|
||||
role='combobox'
|
||||
aria-expanded={open}
|
||||
className='h-10 w-full min-w-0 justify-between'
|
||||
disabled={disabled}
|
||||
disabled={disabled || (serviceId === 'microsoft-planner' && !planId)}
|
||||
>
|
||||
<div className='flex min-w-0 items-center gap-2 overflow-hidden'>
|
||||
{selectedFile ? (
|
||||
@@ -463,10 +668,10 @@ export function MicrosoftFileSelector({
|
||||
)}
|
||||
|
||||
<Command>
|
||||
<CommandInput placeholder='Search Excel files...' onValueChange={handleSearch} />
|
||||
<CommandInput placeholder={getSearchPlaceholder()} onValueChange={handleSearch} />
|
||||
<CommandList>
|
||||
<CommandEmpty>
|
||||
{isLoading || isLoadingFiles ? (
|
||||
{isLoading || isLoadingFiles || isLoadingTasks ? (
|
||||
<div className='flex items-center justify-center p-4'>
|
||||
<RefreshCw className='h-4 w-4 animate-spin' />
|
||||
<span className='ml-2'>Loading...</span>
|
||||
@@ -478,11 +683,18 @@ export function MicrosoftFileSelector({
|
||||
Connect a {getProviderName(provider)} account to continue.
|
||||
</p>
|
||||
</div>
|
||||
) : availableFiles.length === 0 ? (
|
||||
) : serviceId === 'microsoft-planner' && !planId ? (
|
||||
<div className='p-4 text-center'>
|
||||
<p className='font-medium text-sm'>No Excel files found.</p>
|
||||
<p className='font-medium text-sm'>Plan ID required.</p>
|
||||
<p className='text-muted-foreground text-xs'>
|
||||
No .xlsx files were found in your OneDrive.
|
||||
Please enter a Plan ID first to see tasks.
|
||||
</p>
|
||||
</div>
|
||||
) : filteredTasks.length === 0 ? (
|
||||
<div className='p-4 text-center'>
|
||||
<p className='font-medium text-sm'>{getEmptyStateText().title}</p>
|
||||
<p className='text-muted-foreground text-xs'>
|
||||
{getEmptyStateText().description}
|
||||
</p>
|
||||
</div>
|
||||
) : null}
|
||||
@@ -510,32 +722,58 @@ export function MicrosoftFileSelector({
|
||||
</CommandGroup>
|
||||
)}
|
||||
|
||||
{/* Available Excel files - only show if we have credentials and files */}
|
||||
{credentials.length > 0 && selectedCredentialId && availableFiles.length > 0 && (
|
||||
{/* Available files/tasks - only show if we have credentials and items */}
|
||||
{credentials.length > 0 && selectedCredentialId && filteredTasks.length > 0 && (
|
||||
<CommandGroup>
|
||||
<div className='px-2 py-1.5 font-medium text-muted-foreground text-xs'>
|
||||
Excel Files
|
||||
{getFileTypeTitleCase()}
|
||||
</div>
|
||||
{availableFiles.map((file) => (
|
||||
<CommandItem
|
||||
key={file.id}
|
||||
value={`file-${file.id}-${file.name}`}
|
||||
onSelect={() => handleFileSelect(file)}
|
||||
>
|
||||
<div className='flex items-center gap-2 overflow-hidden'>
|
||||
{getFileIcon(file, 'sm')}
|
||||
<div className='min-w-0 flex-1'>
|
||||
<span className='truncate font-normal'>{file.name}</span>
|
||||
{file.modifiedTime && (
|
||||
<div className='text-muted-foreground text-xs'>
|
||||
Modified {new Date(file.modifiedTime).toLocaleDateString()}
|
||||
</div>
|
||||
{filteredTasks.map((item) => {
|
||||
const isPlanner = serviceId === 'microsoft-planner'
|
||||
const isPlannerTask = isPlanner && 'title' in item
|
||||
const plannerTask = item as PlannerTask
|
||||
const fileInfo = item as MicrosoftFileInfo
|
||||
|
||||
const displayName = isPlannerTask ? plannerTask.title : fileInfo.name
|
||||
const dateField = isPlannerTask
|
||||
? plannerTask.createdDateTime
|
||||
: fileInfo.createdTime
|
||||
|
||||
return (
|
||||
<CommandItem
|
||||
key={item.id}
|
||||
value={`file-${item.id}-${displayName}`}
|
||||
onSelect={() =>
|
||||
isPlannerTask
|
||||
? handleTaskSelect(plannerTask)
|
||||
: handleFileSelect(fileInfo)
|
||||
}
|
||||
>
|
||||
<div className='flex items-center gap-2 overflow-hidden'>
|
||||
{getFileIcon(
|
||||
isPlannerTask
|
||||
? {
|
||||
...fileInfo,
|
||||
id: plannerTask.id || '',
|
||||
name: plannerTask.title,
|
||||
mimeType: 'planner/task',
|
||||
}
|
||||
: fileInfo,
|
||||
'sm'
|
||||
)}
|
||||
<div className='min-w-0 flex-1'>
|
||||
<span className='truncate font-normal'>{displayName}</span>
|
||||
{dateField && (
|
||||
<div className='text-muted-foreground text-xs'>
|
||||
Modified {new Date(dateField).toLocaleDateString()}
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
{file.id === selectedFileId && <Check className='ml-auto h-4 w-4' />}
|
||||
</CommandItem>
|
||||
))}
|
||||
{item.id === selectedFileId && <Check className='ml-auto h-4 w-4' />}
|
||||
</CommandItem>
|
||||
)
|
||||
})}
|
||||
</CommandGroup>
|
||||
)}
|
||||
|
||||
@@ -589,7 +827,13 @@ export function MicrosoftFileSelector({
|
||||
className='flex items-center gap-1 text-primary text-xs hover:underline'
|
||||
onClick={(e) => e.stopPropagation()}
|
||||
>
|
||||
<span>Open in OneDrive</span>
|
||||
<span>
|
||||
{serviceId === 'microsoft-planner'
|
||||
? 'Open in Planner'
|
||||
: serviceId === 'sharepoint'
|
||||
? 'Open in SharePoint'
|
||||
: 'Open in OneDrive'}
|
||||
</span>
|
||||
<ExternalLink className='h-3 w-3' />
|
||||
</a>
|
||||
) : (
|
||||
@@ -600,7 +844,9 @@ export function MicrosoftFileSelector({
|
||||
className='flex items-center gap-1 text-primary text-xs hover:underline'
|
||||
onClick={(e) => e.stopPropagation()}
|
||||
>
|
||||
<span>Open in OneDrive</span>
|
||||
<span>
|
||||
{serviceId === 'sharepoint' ? 'Open in SharePoint' : 'Open in OneDrive'}
|
||||
</span>
|
||||
<ExternalLink className='h-3 w-3' />
|
||||
</a>
|
||||
)}
|
||||
|
||||
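The selector above talks to a new tasks route. As a quick reference, this is a minimal sketch of calling that route directly, assuming the `/api/tools/microsoft_planner/tasks` endpoint accepts the same `credentialId` and `planId` query parameters the component builds; the `PlannerTask` shape is inferred from the fields the transform reads, not copied from the API code.

```ts
// Sketch only: route path, query params, and PlannerTask fields are
// inferred from the selector code above, not from the API implementation.
interface PlannerTask {
  id?: string
  title: string
  planId?: string
  bucketId?: string
  percentComplete?: number
  priority?: number
  createdDateTime?: string
  dueDateTime?: string
}

async function listPlannerTasks(credentialId: string, planId: string): Promise<PlannerTask[]> {
  const queryParams = new URLSearchParams({ credentialId, planId })
  const response = await fetch(`/api/tools/microsoft_planner/tasks?${queryParams.toString()}`)
  if (!response.ok) {
    throw new Error(`Task fetch failed: ${response.status} ${response.statusText}`)
  }
  const data = await response.json()
  return data.tasks ?? []
}
```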
@@ -68,8 +68,12 @@ export function FileSelectorInput({
  const isDiscord = provider === 'discord'
  const isMicrosoftTeams = provider === 'microsoft-teams'
  const isMicrosoftExcel = provider === 'microsoft-excel'
+  const isMicrosoftWord = provider === 'microsoft-word'
+  const isMicrosoftOneDrive = provider === 'microsoft' && subBlock.serviceId === 'onedrive'
  const isGoogleCalendar = subBlock.provider === 'google-calendar'
  const isWealthbox = provider === 'wealthbox'
+  const isMicrosoftSharePoint = provider === 'microsoft' && subBlock.serviceId === 'sharepoint'
+  const isMicrosoftPlanner = provider === 'microsoft-planner'
  // For Confluence and Jira, we need the domain and credentials
  const domain = isConfluence || isJira ? (getValue(blockId, 'domain') as string) || '' : ''
  // For Discord, we need the bot token and server ID
@@ -94,6 +98,8 @@ export function FileSelectorInput({
      setSelectedCalendarId(value)
    } else if (isWealthbox) {
      setSelectedWealthboxItemId(value)
+    } else if (isMicrosoftSharePoint) {
+      setSelectedFileId(value)
    } else {
      setSelectedFileId(value)
    }
@@ -111,6 +117,8 @@ export function FileSelectorInput({
      setSelectedCalendarId(value)
    } else if (isWealthbox) {
      setSelectedWealthboxItemId(value)
+    } else if (isMicrosoftSharePoint) {
+      setSelectedFileId(value)
    } else {
      setSelectedFileId(value)
    }
@@ -125,6 +133,7 @@ export function FileSelectorInput({
    isMicrosoftTeams,
    isGoogleCalendar,
    isWealthbox,
+    isMicrosoftSharePoint,
    isPreview,
    previewValue,
  ])
@@ -325,6 +334,141 @@ export function FileSelectorInput({
    )
  }

+  // Handle Microsoft Word selector
+  if (isMicrosoftWord) {
+    // Get credential using the same pattern as other tools
+    const credential = (getValue(blockId, 'credential') as string) || ''
+
+    return (
+      <TooltipProvider>
+        <Tooltip>
+          <TooltipTrigger asChild>
+            <div className='w-full'>
+              <MicrosoftFileSelector
+                value={selectedFileId}
+                onChange={handleFileChange}
+                provider='microsoft-word'
+                requiredScopes={subBlock.requiredScopes || []}
+                serviceId={subBlock.serviceId}
+                label={subBlock.placeholder || 'Select Microsoft Word document'}
+                disabled={disabled || !credential}
+                showPreview={true}
+                onFileInfoChange={setFileInfo as (info: MicrosoftFileInfo | null) => void}
+              />
+            </div>
+          </TooltipTrigger>
+          {!credential && (
+            <TooltipContent side='top'>
+              <p>Please select Microsoft Word credentials first</p>
+            </TooltipContent>
+          )}
+        </Tooltip>
+      </TooltipProvider>
+    )
+  }
+
+  // Handle Microsoft OneDrive selector
+  if (isMicrosoftOneDrive) {
+    const credential = (getValue(blockId, 'credential') as string) || ''
+
+    return (
+      <TooltipProvider>
+        <Tooltip>
+          <TooltipTrigger asChild>
+            <div className='w-full'>
+              <MicrosoftFileSelector
+                value={selectedFileId}
+                onChange={handleFileChange}
+                provider='microsoft'
+                requiredScopes={subBlock.requiredScopes || []}
+                serviceId={subBlock.serviceId}
+                label={subBlock.placeholder || 'Select OneDrive folder'}
+                disabled={disabled || !credential}
+                showPreview={true}
+                onFileInfoChange={setFileInfo as (info: MicrosoftFileInfo | null) => void}
+              />
+            </div>
+          </TooltipTrigger>
+          {!credential && (
+            <TooltipContent side='top'>
+              <p>Please select Microsoft credentials first</p>
+            </TooltipContent>
+          )}
+        </Tooltip>
+      </TooltipProvider>
+    )
+  }
+
+  // Handle Microsoft SharePoint selector
+  if (isMicrosoftSharePoint) {
+    const credential = (getValue(blockId, 'credential') as string) || ''
+
+    return (
+      <TooltipProvider>
+        <Tooltip>
+          <TooltipTrigger asChild>
+            <div className='w-full'>
+              <MicrosoftFileSelector
+                value={selectedFileId}
+                onChange={handleFileChange}
+                provider='microsoft'
+                requiredScopes={subBlock.requiredScopes || []}
+                serviceId={subBlock.serviceId}
+                label={subBlock.placeholder || 'Select SharePoint site'}
+                disabled={disabled || !credential}
+                showPreview={true}
+                onFileInfoChange={setFileInfo as (info: MicrosoftFileInfo | null) => void}
+              />
+            </div>
+          </TooltipTrigger>
+          {!credential && (
+            <TooltipContent side='top'>
+              <p>Please select SharePoint credentials first</p>
+            </TooltipContent>
+          )}
+        </Tooltip>
+      </TooltipProvider>
+    )
+  }
+
+  // Handle Microsoft Planner task selector
+  if (isMicrosoftPlanner) {
+    const credential = (getValue(blockId, 'credential') as string) || ''
+    const planId = (getValue(blockId, 'planId') as string) || ''
+
+    return (
+      <TooltipProvider>
+        <Tooltip>
+          <TooltipTrigger asChild>
+            <div className='w-full'>
+              <MicrosoftFileSelector
+                value={selectedFileId}
+                onChange={handleFileChange}
+                provider='microsoft-planner'
+                requiredScopes={subBlock.requiredScopes || []}
+                serviceId='microsoft-planner'
+                label={subBlock.placeholder || 'Select task'}
+                disabled={disabled || !credential || !planId}
+                showPreview={true}
+                onFileInfoChange={setFileInfo as (info: MicrosoftFileInfo | null) => void}
+                planId={planId}
+              />
+            </div>
+          </TooltipTrigger>
+          {!credential ? (
+            <TooltipContent side='top'>
+              <p>Please select Microsoft Planner credentials first</p>
+            </TooltipContent>
+          ) : !planId ? (
+            <TooltipContent side='top'>
+              <p>Please enter a Plan ID first</p>
+            </TooltipContent>
+          ) : null}
+        </Tooltip>
+      </TooltipProvider>
+    )
+  }
+
  // Handle Microsoft Teams selector
  if (isMicrosoftTeams) {
    // Get credential using the same pattern as other tools
@@ -70,7 +70,10 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
  const currentWorkflow = useCurrentWorkflow()
  const currentBlock = currentWorkflow.getBlockById(id)

-  const isEnabled = currentBlock?.enabled ?? true
+  // In preview mode, use the blockState provided; otherwise use current workflow state
+  const isEnabled = data.isPreview
+    ? (data.blockState?.enabled ?? true)
+    : (currentBlock?.enabled ?? true)

  // Get diff status from the block itself (set by diff engine)
  const diffStatus =
@@ -405,33 +408,37 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
      // If there's no condition, the block should be shown
      if (!block.condition) return true

+      // If condition is a function, call it to get the actual condition object
+      const actualCondition =
+        typeof block.condition === 'function' ? block.condition() : block.condition
+
      // Get the values of the fields this block depends on from the appropriate state
-      const fieldValue = stateToUse[block.condition.field]?.value
-      const andFieldValue = block.condition.and
-        ? stateToUse[block.condition.and.field]?.value
+      const fieldValue = stateToUse[actualCondition.field]?.value
+      const andFieldValue = actualCondition.and
+        ? stateToUse[actualCondition.and.field]?.value
        : undefined

      // Check if the condition value is an array
-      const isValueMatch = Array.isArray(block.condition.value)
+      const isValueMatch = Array.isArray(actualCondition.value)
        ? fieldValue != null &&
-          (block.condition.not
-            ? !block.condition.value.includes(fieldValue as string | number | boolean)
-            : block.condition.value.includes(fieldValue as string | number | boolean))
-        : block.condition.not
-          ? fieldValue !== block.condition.value
-          : fieldValue === block.condition.value
+          (actualCondition.not
+            ? !actualCondition.value.includes(fieldValue as string | number | boolean)
+            : actualCondition.value.includes(fieldValue as string | number | boolean))
+        : actualCondition.not
+          ? fieldValue !== actualCondition.value
+          : fieldValue === actualCondition.value

      // Check both conditions if 'and' is present
      const isAndValueMatch =
-        !block.condition.and ||
-        (Array.isArray(block.condition.and.value)
+        !actualCondition.and ||
+        (Array.isArray(actualCondition.and.value)
          ? andFieldValue != null &&
-            (block.condition.and.not
-              ? !block.condition.and.value.includes(andFieldValue as string | number | boolean)
-              : block.condition.and.value.includes(andFieldValue as string | number | boolean))
-          : block.condition.and.not
-            ? andFieldValue !== block.condition.and.value
-            : andFieldValue === block.condition.and.value)
+            (actualCondition.and.not
+              ? !actualCondition.and.value.includes(andFieldValue as string | number | boolean)
+              : actualCondition.and.value.includes(andFieldValue as string | number | boolean))
+          : actualCondition.and.not
+            ? andFieldValue !== actualCondition.and.value
+            : andFieldValue === actualCondition.and.value)

      return isValueMatch && isAndValueMatch
    })
@@ -7,10 +7,14 @@ import { BookOpen, Building2, LibraryBig, ScrollText, Search, Shapes, Workflow }
 import { useParams, useRouter } from 'next/navigation'
 import { Dialog, DialogOverlay, DialogPortal, DialogTitle } from '@/components/ui/dialog'
 import { Input } from '@/components/ui/input'
+import { useBrandConfig } from '@/lib/branding/branding'
 import { cn } from '@/lib/utils'
+import {
+  TemplateCard,
+  TemplateCardSkeleton,
+} from '@/app/workspace/[workspaceId]/templates/components/template-card'
+import { getKeyboardShortcutText } from '@/app/workspace/[workspaceId]/w/hooks/use-keyboard-shortcuts'
 import { getAllBlocks } from '@/blocks'
-import { TemplateCard, TemplateCardSkeleton } from '../../../templates/components/template-card'
-import { getKeyboardShortcutText } from '../../hooks/use-keyboard-shortcuts'
 import { type NavigationSection, useSearchNavigation } from './hooks/use-search-navigation'

 interface SearchModalProps {
@@ -100,6 +104,7 @@ export function SearchModal({
  const params = useParams()
  const router = useRouter()
  const workspaceId = params.workspaceId as string
+  const brand = useBrandConfig()

  // Local state for templates to handle star changes
  const [localTemplates, setLocalTemplates] = useState<TemplateData[]>(templates)
@@ -182,7 +187,7 @@ export function SearchModal({
        id: 'docs',
        name: 'Docs',
        icon: BookOpen,
-        href: 'https://docs.simstudio.ai/',
+        href: brand.documentationUrl || 'https://docs.sim.ai/',
      },
    ],
    [workspaceId]
@@ -12,6 +12,12 @@ import {
  MODELS_WITH_TEMPERATURE_SUPPORT,
  providers,
} from '@/providers/utils'

+// Get current Ollama models dynamically
+const getCurrentOllamaModels = () => {
+  return useOllamaStore.getState().models
+}
+
+import { useOllamaStore } from '@/stores/ollama/store'
import type { ToolResponse } from '@/tools/types'

@@ -213,14 +219,18 @@ Create a system prompt appropriately detailed for the request, using clear langu
        password: true,
        connectionDroppable: false,
        required: true,
-        // Hide API key for all hosted models when running on hosted version
+        // Hide API key for hosted models and Ollama models
        condition: isHosted
          ? {
              field: 'model',
              value: getHostedModels(),
              not: true, // Show for all models EXCEPT those listed
            }
-          : undefined, // Show for all models in non-hosted environments
+          : () => ({
+              field: 'model',
+              value: getCurrentOllamaModels(),
+              not: true, // Show for all models EXCEPT Ollama models
+            }),
      },
      {
        id: 'azureEndpoint',
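The switch from `undefined` to a function-valued `condition` is what keeps the Ollama list current: an object literal would capture whatever the store held at module load, while the thunk re-reads the store on every visibility check. A rough sketch of the difference, assuming the Zustand-style `useOllamaStore` shown in the diff:

```ts
// Hypothetical illustration - not part of the diff.
// Object form: `value` is a snapshot taken once at module load,
// so models pulled later would never hide the API key field.
const staleCondition = {
  field: 'model',
  value: useOllamaStore.getState().models,
  not: true,
}

// Function form: re-evaluated each time visibility is computed,
// so it always sees the store's current model list.
const liveCondition = () => ({
  field: 'model',
  value: useOllamaStore.getState().models,
  not: true,
})
```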
238 apps/sim/blocks/blocks/microsoft_planner.ts Normal file
@@ -0,0 +1,238 @@
import { MicrosoftPlannerIcon } from '@/components/icons'
import type { BlockConfig } from '@/blocks/types'
import type { MicrosoftPlannerResponse } from '@/tools/microsoft_planner/types'

interface MicrosoftPlannerBlockParams {
  credential: string
  accessToken?: string
  planId?: string
  taskId?: string
  title?: string
  description?: string
  dueDateTime?: string
  assigneeUserId?: string
  bucketId?: string
  [key: string]: string | number | boolean | undefined
}

export const MicrosoftPlannerBlock: BlockConfig<MicrosoftPlannerResponse> = {
  type: 'microsoft_planner',
  name: 'Microsoft Planner',
  description: 'Read and create tasks in Microsoft Planner',
  longDescription:
    'Integrate Microsoft Planner functionality to manage tasks. Read all user tasks, tasks from specific plans, individual tasks, or create new tasks with various properties like title, description, due date, and assignees using OAuth authentication.',
  docsLink: 'https://docs.sim.ai/tools/microsoft_planner',
  category: 'tools',
  bgColor: '#E0E0E0',
  icon: MicrosoftPlannerIcon,
  subBlocks: [
    {
      id: 'operation',
      title: 'Operation',
      type: 'dropdown',
      layout: 'full',
      options: [
        { label: 'Read Task', id: 'read_task' },
        { label: 'Create Task', id: 'create_task' },
      ],
    },
    {
      id: 'credential',
      title: 'Microsoft Account',
      type: 'oauth-input',
      layout: 'full',
      provider: 'microsoft-planner',
      serviceId: 'microsoft-planner',
      requiredScopes: [
        'openid',
        'profile',
        'email',
        'Group.ReadWrite.All',
        'Group.Read.All',
        'Tasks.ReadWrite',
        'offline_access',
      ],
      placeholder: 'Select Microsoft account',
    },
    {
      id: 'planId',
      title: 'Plan ID',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Enter the plan ID',
      condition: { field: 'operation', value: ['create_task', 'read_task'] },
    },
    {
      id: 'taskId',
      title: 'Task ID',
      type: 'file-selector',
      layout: 'full',
      placeholder: 'Select a task',
      provider: 'microsoft-planner',
      condition: { field: 'operation', value: ['read_task'] },
      mode: 'basic',
    },

    // Advanced mode
    {
      id: 'taskId',
      title: 'Manual Task ID',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Enter the task ID',
      condition: { field: 'operation', value: ['read_task'] },
      mode: 'advanced',
    },

    {
      id: 'title',
      title: 'Task Title',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Enter the task title',
      condition: { field: 'operation', value: ['create_task'] },
    },
    {
      id: 'description',
      title: 'Description',
      type: 'long-input',
      layout: 'full',
      placeholder: 'Enter task description (optional)',
      condition: { field: 'operation', value: ['create_task'] },
    },
    {
      id: 'dueDateTime',
      title: 'Due Date',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Enter due date in ISO 8601 format (e.g., 2024-12-31T23:59:59Z)',
      condition: { field: 'operation', value: ['create_task'] },
    },
    {
      id: 'assigneeUserId',
      title: 'Assignee User ID',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Enter the user ID to assign this task to (optional)',
      condition: { field: 'operation', value: ['create_task'] },
    },
    {
      id: 'bucketId',
      title: 'Bucket ID',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Enter the bucket ID to organize the task (optional)',
      condition: { field: 'operation', value: ['create_task'] },
    },
  ],
  tools: {
    access: ['microsoft_planner_read_task', 'microsoft_planner_create_task'],
    config: {
      tool: (params) => {
        switch (params.operation) {
          case 'read_task':
            return 'microsoft_planner_read_task'
          case 'create_task':
            return 'microsoft_planner_create_task'
          default:
            throw new Error(`Invalid Microsoft Planner operation: ${params.operation}`)
        }
      },
      params: (params) => {
        const {
          credential,
          operation,
          planId,
          taskId,
          title,
          description,
          dueDateTime,
          assigneeUserId,
          bucketId,
          ...rest
        } = params

        const baseParams = {
          ...rest,
          credential,
        }

        // For read operations
        if (operation === 'read_task') {
          const readParams: MicrosoftPlannerBlockParams = { ...baseParams }

          // If taskId is provided, add it (highest priority - get specific task)
          if (taskId?.trim()) {
            readParams.taskId = taskId.trim()
          }
          // If no taskId but planId is provided, add planId (get tasks from plan)
          else if (planId?.trim()) {
            readParams.planId = planId.trim()
          }
          // If neither, get all user tasks (baseParams only)

          return readParams
        }

        // For create operation
        if (operation === 'create_task') {
          if (!planId?.trim()) {
            throw new Error('Plan ID is required to create a task.')
          }
          if (!title?.trim()) {
            throw new Error('Task title is required to create a task.')
          }

          const createParams: MicrosoftPlannerBlockParams = {
            ...baseParams,
            planId: planId.trim(),
            title: title.trim(),
          }

          if (description?.trim()) {
            createParams.description = description.trim()
          }

          if (dueDateTime?.trim()) {
            createParams.dueDateTime = dueDateTime.trim()
          }

          if (assigneeUserId?.trim()) {
            createParams.assigneeUserId = assigneeUserId.trim()
          }

          if (bucketId?.trim()) {
            createParams.bucketId = bucketId.trim()
          }

          return createParams
        }

        return baseParams
      },
    },
  },
  inputs: {
    operation: { type: 'string', description: 'Operation to perform' },
    credential: { type: 'string', description: 'Microsoft account credential' },
    planId: { type: 'string', description: 'Plan ID' },
    taskId: { type: 'string', description: 'Task ID' },
    title: { type: 'string', description: 'Task title' },
    description: { type: 'string', description: 'Task description' },
    dueDateTime: { type: 'string', description: 'Due date' },
    assigneeUserId: { type: 'string', description: 'Assignee user ID' },
    bucketId: { type: 'string', description: 'Bucket ID' },
  },
  outputs: {
    task: {
      type: 'json',
      description:
        'The Microsoft Planner task object, including details such as id, title, description, status, due date, and assignees.',
    },
    metadata: {
      type: 'json',
      description:
        'Additional metadata about the operation, such as timestamps, request status, or other relevant information.',
    },
  },
}
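To make the read/create routing concrete, here is a hypothetical call into the block's `params` mapper. The input values are invented, but the behavior (trimming, `taskId` taking priority over `planId`, blank optional fields dropped) follows the code above.

```ts
// Hypothetical inputs - illustrates the mapping only.
const createParams = MicrosoftPlannerBlock.tools.config?.params?.({
  operation: 'create_task',
  credential: 'cred_123',
  planId: ' plan_abc ',
  title: ' Ship the release ',
  description: '',
})
// => { credential: 'cred_123', planId: 'plan_abc', title: 'Ship the release' }
//    (description trims to '' and is omitted)

const readParams = MicrosoftPlannerBlock.tools.config?.params?.({
  operation: 'read_task',
  credential: 'cred_123',
  planId: 'plan_abc',
  taskId: 'task_42',
})
// => { credential: 'cred_123', taskId: 'task_42' } - taskId wins over planId
```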
235 apps/sim/blocks/blocks/onedrive.ts Normal file
@@ -0,0 +1,235 @@
import { MicrosoftOneDriveIcon } from '@/components/icons'
import type { BlockConfig } from '@/blocks/types'
import type { OneDriveResponse } from '@/tools/onedrive/types'

export const OneDriveBlock: BlockConfig<OneDriveResponse> = {
  type: 'onedrive',
  name: 'OneDrive',
  description: 'Create, upload, and list files',
  longDescription:
    'Integrate OneDrive functionality to manage files and folders. Upload new files, create new folders, and list contents of folders using OAuth authentication. Supports file operations with custom MIME types and folder organization.',
  docsLink: 'https://docs.sim.ai/tools/onedrive',
  category: 'tools',
  bgColor: '#E0E0E0',
  icon: MicrosoftOneDriveIcon,
  subBlocks: [
    // Operation selector
    {
      id: 'operation',
      title: 'Operation',
      type: 'dropdown',
      layout: 'full',
      options: [
        { label: 'Create Folder', id: 'create_folder' },
        { label: 'Upload File', id: 'upload' },
        { label: 'List Files', id: 'list' },
      ],
    },
    // One Drive Credentials
    {
      id: 'credential',
      title: 'Microsoft Account',
      type: 'oauth-input',
      layout: 'full',
      provider: 'onedrive',
      serviceId: 'onedrive',
      requiredScopes: [
        'openid',
        'profile',
        'email',
        'Files.Read',
        'Files.ReadWrite',
        'offline_access',
      ],
      placeholder: 'Select Microsoft account',
    },
    // Upload Fields
    {
      id: 'fileName',
      title: 'File Name',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Name of the file',
      condition: { field: 'operation', value: 'upload' },
    },
    {
      id: 'content',
      title: 'Content',
      type: 'long-input',
      layout: 'full',
      placeholder: 'Content to upload to the file',
      condition: { field: 'operation', value: 'upload' },
    },

    {
      id: 'folderSelector',
      title: 'Select Parent Folder',
      type: 'file-selector',
      layout: 'full',
      provider: 'microsoft',
      serviceId: 'onedrive',
      requiredScopes: [
        'openid',
        'profile',
        'email',
        'Files.Read',
        'Files.ReadWrite',
        'offline_access',
      ],
      mimeType: 'application/vnd.microsoft.graph.folder',
      placeholder: 'Select a parent folder',
      mode: 'basic',
      condition: { field: 'operation', value: 'upload' },
    },
    {
      id: 'manualFolderId',
      title: 'Parent Folder ID',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Enter parent folder ID (leave empty for root folder)',
      mode: 'advanced',
      condition: { field: 'operation', value: 'upload' },
    },
    {
      id: 'folderName',
      title: 'Folder Name',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Name for the new folder',
      condition: { field: 'operation', value: 'create_folder' },
    },
    {
      id: 'folderSelector',
      title: 'Select Parent Folder',
      type: 'file-selector',
      layout: 'full',
      provider: 'microsoft',
      serviceId: 'onedrive',
      requiredScopes: [
        'openid',
        'profile',
        'email',
        'Files.Read',
        'Files.ReadWrite',
        'offline_access',
      ],
      mimeType: 'application/vnd.microsoft.graph.folder',
      placeholder: 'Select a parent folder',
      mode: 'basic',
      condition: { field: 'operation', value: 'create_folder' },
    },
    // Manual Folder ID input (advanced mode)
    {
      id: 'manualFolderId',
      title: 'Parent Folder ID',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Enter parent folder ID (leave empty for root folder)',
      mode: 'advanced',
      condition: { field: 'operation', value: 'create_folder' },
    },
    // List Fields - Folder Selector (basic mode)
    {
      id: 'folderSelector',
      title: 'Select Folder',
      type: 'file-selector',
      layout: 'full',
      provider: 'microsoft',
      serviceId: 'onedrive',
      requiredScopes: [
        'openid',
        'profile',
        'email',
        'Files.Read',
        'Files.ReadWrite',
        'offline_access',
      ],
      mimeType: 'application/vnd.microsoft.graph.folder',
      placeholder: 'Select a folder to list files from',
      mode: 'basic',
      condition: { field: 'operation', value: 'list' },
    },
    // Manual Folder ID input (advanced mode)
    {
      id: 'manualFolderId',
      title: 'Folder ID',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Enter folder ID (leave empty for root folder)',
      mode: 'advanced',
      condition: { field: 'operation', value: 'list' },
    },
    {
      id: 'query',
      title: 'Search Query',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Search for specific files (e.g., name contains "report")',
      condition: { field: 'operation', value: 'list' },
    },
    {
      id: 'pageSize',
      title: 'Results Per Page',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Number of results (default: 100, max: 1000)',
      condition: { field: 'operation', value: 'list' },
    },
  ],
  tools: {
    access: ['onedrive_upload', 'onedrive_create_folder', 'onedrive_list'],
    config: {
      tool: (params) => {
        switch (params.operation) {
          case 'upload':
            return 'onedrive_upload'
          case 'create_folder':
            return 'onedrive_create_folder'
          case 'list':
            return 'onedrive_list'
          default:
            throw new Error(`Invalid OneDrive operation: ${params.operation}`)
        }
      },
      params: (params) => {
        const { credential, folderSelector, manualFolderId, mimeType, ...rest } = params

        // Use folderSelector if provided, otherwise use manualFolderId
        const effectiveFolderId = (folderSelector || manualFolderId || '').trim()

        return {
          accessToken: credential,
          folderId: effectiveFolderId,
          pageSize: rest.pageSize ? Number.parseInt(rest.pageSize as string, 10) : undefined,
          mimeType: mimeType,
          ...rest,
        }
      },
    },
  },
  inputs: {
    operation: { type: 'string', description: 'Operation to perform' },
    credential: { type: 'string', description: 'Microsoft account credential' },
    // Upload and Create Folder operation inputs
    fileName: { type: 'string', description: 'File name' },
    content: { type: 'string', description: 'File content' },
    // Get Content operation inputs
    // fileId: { type: 'string', required: false },
    // List operation inputs
    folderSelector: { type: 'string', description: 'Folder selector' },
    manualFolderId: { type: 'string', description: 'Manual folder ID' },
    query: { type: 'string', description: 'Search query' },
    pageSize: { type: 'number', description: 'Results per page' },
  },
  outputs: {
    file: {
      type: 'json',
      description: 'The OneDrive file object, including details such as id, name, size, and more.',
    },
    files: {
      type: 'json',
      description:
        'An array of OneDrive file objects, each containing details such as id, name, size, and more.',
    },
  },
}
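The `params` mapper encodes the basic/advanced split: the picker-driven `folderSelector` takes precedence over the hand-typed `manualFolderId`, and an empty result falls through to the drive root. A hypothetical call, for illustration only; note that because `...rest` is spread last, a raw string `pageSize` left in `rest` would overwrite the parsed number, which may or may not be intended.

```ts
// Hypothetical inputs - both folder fields set, so the picker value wins.
const listParams = OneDriveBlock.tools.config?.params?.({
  operation: 'list',
  credential: 'cred_123',
  folderSelector: 'folder_from_picker',
  manualFolderId: 'folder_typed_by_hand',
})
// => { accessToken: 'cred_123', folderId: 'folder_from_picker', operation: 'list', ... }
```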
158 apps/sim/blocks/blocks/sharepoint.ts Normal file
@@ -0,0 +1,158 @@
import { MicrosoftSharepointIcon } from '@/components/icons'
import type { BlockConfig } from '@/blocks/types'
import type { SharepointResponse } from '@/tools/sharepoint/types'

export const SharepointBlock: BlockConfig<SharepointResponse> = {
  type: 'sharepoint',
  name: 'Sharepoint',
  description: 'Read and create pages',
  longDescription:
    'Integrate Sharepoint functionality to manage pages. Read and create pages, and list sites using OAuth authentication. Supports page operations with custom MIME types and folder organization.',
  docsLink: 'https://docs.sim.ai/tools/sharepoint',
  category: 'tools',
  bgColor: '#E0E0E0',
  icon: MicrosoftSharepointIcon,
  subBlocks: [
    // Operation selector
    {
      id: 'operation',
      title: 'Operation',
      type: 'dropdown',
      layout: 'full',
      options: [
        { label: 'Create Page', id: 'create_page' },
        { label: 'Read Page', id: 'read_page' },
        { label: 'List Sites', id: 'list_sites' },
      ],
    },
    // Sharepoint Credentials
    {
      id: 'credential',
      title: 'Microsoft Account',
      type: 'oauth-input',
      layout: 'full',
      provider: 'sharepoint',
      serviceId: 'sharepoint',
      requiredScopes: [
        'openid',
        'profile',
        'email',
        'Files.Read',
        'Files.ReadWrite',
        'offline_access',
      ],
      placeholder: 'Select Microsoft account',
    },

    {
      id: 'siteSelector',
      title: 'Select Site',
      type: 'file-selector',
      layout: 'full',
      provider: 'microsoft',
      serviceId: 'sharepoint',
      requiredScopes: [
        'openid',
        'profile',
        'email',
        'Files.Read',
        'Files.ReadWrite',
        'offline_access',
      ],
      mimeType: 'application/vnd.microsoft.graph.folder',
      placeholder: 'Select a site',
      mode: 'basic',
      condition: { field: 'operation', value: ['create_page', 'read_page', 'list_sites'] },
    },

    {
      id: 'pageName',
      title: 'Page Name',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Name of the page',
      condition: { field: 'operation', value: ['create_page', 'read_page'] },
    },

    {
      id: 'pageId',
      title: 'Page ID',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Page ID (alternative to page name)',
      condition: { field: 'operation', value: 'read_page' },
      mode: 'advanced',
    },

    {
      id: 'pageContent',
      title: 'Page Content',
      type: 'long-input',
      layout: 'full',
      placeholder: 'Content of the page',
      condition: { field: 'operation', value: 'create_page' },
    },

    {
      id: 'manualSiteId',
      title: 'Site ID',
      type: 'short-input',
      layout: 'full',
      placeholder: 'Enter site ID (leave empty for root site)',
      mode: 'advanced',
      condition: { field: 'operation', value: 'create_page' },
    },
  ],
  tools: {
    access: ['sharepoint_create_page', 'sharepoint_read_page', 'sharepoint_list_sites'],
    config: {
      tool: (params) => {
        switch (params.operation) {
          case 'create_page':
            return 'sharepoint_create_page'
          case 'read_page':
            return 'sharepoint_read_page'
          case 'list_sites':
            return 'sharepoint_list_sites'
          default:
            throw new Error(`Invalid Sharepoint operation: ${params.operation}`)
        }
      },
      params: (params) => {
        const { credential, siteSelector, manualSiteId, mimeType, ...rest } = params

        // Use siteSelector if provided, otherwise use manualSiteId
        const effectiveSiteId = (siteSelector || manualSiteId || '').trim()

        return {
          accessToken: credential,
          siteId: effectiveSiteId,
          pageSize: rest.pageSize ? Number.parseInt(rest.pageSize as string, 10) : undefined,
          mimeType: mimeType,
          ...rest,
        }
      },
    },
  },
  inputs: {
    operation: { type: 'string', description: 'Operation to perform' },
    credential: { type: 'string', description: 'Microsoft account credential' },
    // Create Page operation inputs
    pageName: { type: 'string', description: 'Page name' },
    pageContent: { type: 'string', description: 'Page content' },
    pageTitle: { type: 'string', description: 'Page title' },
    // Read Page operation inputs
    pageId: { type: 'string', description: 'Page ID' },
    // List operation inputs
    siteSelector: { type: 'string', description: 'Site selector' },
    manualSiteId: { type: 'string', description: 'Manual site ID' },
    pageSize: { type: 'number', description: 'Results per page' },
  },
  outputs: {
    sites: {
      type: 'json',
      description:
        'An array of SharePoint site objects, each containing details such as id, name, and more.',
    },
  },
}
@@ -36,9 +36,11 @@ import { LinkupBlock } from '@/blocks/blocks/linkup'
 import { Mem0Block } from '@/blocks/blocks/mem0'
 import { MemoryBlock } from '@/blocks/blocks/memory'
 import { MicrosoftExcelBlock } from '@/blocks/blocks/microsoft_excel'
+import { MicrosoftPlannerBlock } from '@/blocks/blocks/microsoft_planner'
 import { MicrosoftTeamsBlock } from '@/blocks/blocks/microsoft_teams'
 import { MistralParseBlock } from '@/blocks/blocks/mistral_parse'
 import { NotionBlock } from '@/blocks/blocks/notion'
+import { OneDriveBlock } from '@/blocks/blocks/onedrive'
 import { OpenAIBlock } from '@/blocks/blocks/openai'
 import { OutlookBlock } from '@/blocks/blocks/outlook'
 import { PerplexityBlock } from '@/blocks/blocks/perplexity'
@@ -50,6 +52,7 @@ import { RouterBlock } from '@/blocks/blocks/router'
 import { S3Block } from '@/blocks/blocks/s3'
 import { ScheduleBlock } from '@/blocks/blocks/schedule'
 import { SerperBlock } from '@/blocks/blocks/serper'
+import { SharepointBlock } from '@/blocks/blocks/sharepoint'
 import { SlackBlock } from '@/blocks/blocks/slack'
 import { StagehandBlock } from '@/blocks/blocks/stagehand'
 import { StagehandAgentBlock } from '@/blocks/blocks/stagehand_agent'
@@ -105,11 +108,13 @@ export const registry: Record<string, BlockConfig> = {
  linkup: LinkupBlock,
  mem0: Mem0Block,
  microsoft_excel: MicrosoftExcelBlock,
+  microsoft_planner: MicrosoftPlannerBlock,
  microsoft_teams: MicrosoftTeamsBlock,
  mistral_parse: MistralParseBlock,
  notion: NotionBlock,
  openai: OpenAIBlock,
  outlook: OutlookBlock,
+  onedrive: OneDriveBlock,
  perplexity: PerplexityBlock,
  pinecone: PineconeBlock,
  qdrant: QdrantBlock,
@@ -120,6 +125,7 @@ export const registry: Record<string, BlockConfig> = {
  schedule: ScheduleBlock,
  s3: S3Block,
  serper: SerperBlock,
+  sharepoint: SharepointBlock,
  stagehand: StagehandBlock,
  stagehand_agent: StagehandAgentBlock,
  slack: SlackBlock,
@@ -118,16 +118,27 @@ export interface SubBlockConfig {
  hidden?: boolean
  description?: string
  value?: (params: Record<string, any>) => string
-  condition?: {
-    field: string
-    value: string | number | boolean | Array<string | number | boolean>
-    not?: boolean
-    and?: {
-      field: string
-      value: string | number | boolean | Array<string | number | boolean> | undefined
-      not?: boolean
-    }
-  }
+  condition?:
+    | {
+        field: string
+        value: string | number | boolean | Array<string | number | boolean>
+        not?: boolean
+        and?: {
+          field: string
+          value: string | number | boolean | Array<string | number | boolean> | undefined
+          not?: boolean
+        }
+      }
+    | (() => {
+        field: string
+        value: string | number | boolean | Array<string | number | boolean>
+        not?: boolean
+        and?: {
+          field: string
+          value: string | number | boolean | Array<string | number | boolean> | undefined
+          not?: boolean
+        }
+      })
  // Props specific to 'code' sub-block type
  language?: 'javascript' | 'json'
  generationType?: GenerationType
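Every consumer of this union now has to normalize the condition before reading `field` or `value`, which is exactly what the WorkflowBlock change earlier in this diff does inline. A minimal sketch of the pattern, with the condition shape abbreviated:

```ts
// Sketch: mirrors the `typeof block.condition === 'function'` check
// added to WorkflowBlock above.
type ConditionObject = {
  field: string
  value: string | number | boolean | Array<string | number | boolean>
  not?: boolean
}

function resolveCondition(
  condition: ConditionObject | (() => ConditionObject)
): ConditionObject {
  // Function-valued conditions are evaluated lazily, at check time.
  return typeof condition === 'function' ? condition() : condition
}
```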
55 apps/sim/components/branded-layout.tsx Normal file
@@ -0,0 +1,55 @@
'use client'

import { useEffect } from 'react'
import { generateBrandCSS, getBrandConfig } from '@/lib/branding/branding'

interface BrandedLayoutProps {
  children: React.ReactNode
}

export function BrandedLayout({ children }: BrandedLayoutProps) {
  useEffect(() => {
    const config = getBrandConfig()

    // Update document title
    if (config.name !== 'Sim') {
      document.title = config.name
    }

    // Update favicon
    if (config.faviconUrl) {
      const faviconLink = document.querySelector("link[rel*='icon']") as HTMLLinkElement
      if (faviconLink) {
        faviconLink.href = config.faviconUrl
      }
    }

    // Inject brand CSS
    const brandStyleId = 'brand-styles'
    let brandStyleElement = document.getElementById(brandStyleId) as HTMLStyleElement

    if (!brandStyleElement) {
      brandStyleElement = document.createElement('style')
      brandStyleElement.id = brandStyleId
      document.head.appendChild(brandStyleElement)
    }

    brandStyleElement.textContent = generateBrandCSS(config)

    // Load custom CSS if provided
    if (config.customCssUrl) {
      const customCssId = 'custom-brand-css'
      let customCssLink = document.getElementById(customCssId) as HTMLLinkElement

      if (!customCssLink) {
        customCssLink = document.createElement('link')
        customCssLink.id = customCssId
        customCssLink.rel = 'stylesheet'
        customCssLink.href = config.customCssUrl
        document.head.appendChild(customCssLink)
      }
    }
  }, [])

  return <>{children}</>
}
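Several files in this diff read fields off `getBrandConfig()`. Piecing those reads together gives roughly the shape below; this interface is inferred from usage in the diff, not copied from `@/lib/branding/branding`, so treat it as a sketch.

```ts
// Inferred from usage across this diff - the real BrandConfig may differ.
interface BrandConfigSketch {
  name: string // document title, email copy, <Img alt>
  logoUrl?: string // fallback chain in email logos
  faviconUrl?: string // swapped into link[rel*='icon']
  customCssUrl?: string // injected as a stylesheet link
  supportEmail: string // mailto: targets in the email footer
  documentationUrl?: string // Docs entry in the search modal
}
```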
@@ -11,6 +11,7 @@ import {
  Section,
  Text,
} from '@react-email/components'
+import { getBrandConfig } from '@/lib/branding/branding'

interface WorkspaceInvitation {
  workspaceId: string
@@ -57,6 +58,7 @@ export const BatchInvitationEmail = ({
  workspaceInvitations = [],
  acceptUrl,
}: BatchInvitationEmailProps) => {
+  const brand = getBrandConfig()
  const hasWorkspaces = workspaceInvitations.length > 0

  return (
@@ -69,7 +71,13 @@ export const BatchInvitationEmail = ({
      <Body style={main}>
        <Container style={container}>
          <Section style={logoContainer}>
-            <Img src='https://sim.ai/logo.png' width='120' height='36' alt='Sim' style={logo} />
+            <Img
+              src={brand.logoUrl || 'https://sim.ai/logo.png'}
+              width='120'
+              height='36'
+              alt={brand.name}
+              style={logo}
+            />
          </Section>

          <Heading style={h1}>You're invited to join {organizationName}!</Heading>
@@ -1,4 +1,5 @@
 import { Container, Img, Link, Section, Text } from '@react-email/components'
+import { getBrandConfig } from '@/lib/branding/branding'
 import { env } from '@/lib/env'
 import { getAssetUrl } from '@/lib/utils'

@@ -16,6 +17,8 @@ export const EmailFooter = ({
  baseUrl = env.NEXT_PUBLIC_APP_URL || 'https://sim.ai',
  unsubscribe,
}: EmailFooterProps) => {
+  const brand = getBrandConfig()
+
  return (
    <Container>
      <Section style={{ maxWidth: '580px', margin: '0 auto', padding: '20px 0' }}>
@@ -62,11 +65,11 @@ export const EmailFooter = ({
            margin: '8px 0 0 0',
          }}
        >
-          © {new Date().getFullYear()} Sim, All Rights Reserved
+          © {new Date().getFullYear()} {brand.name}, All Rights Reserved
          <br />
          If you have any questions, please contact us at{' '}
          <a
-            href='mailto:help@sim.ai'
+            href={`mailto:${brand.supportEmail}`}
            style={{
              color: '#706a7b !important',
              textDecoration: 'underline',
@@ -74,7 +77,7 @@ export const EmailFooter = ({
              fontFamily: 'HelveticaNeue, Helvetica, Arial, sans-serif',
            }}
          >
-            help@sim.ai
+            {brand.supportEmail}
          </a>
        </Text>
        <table cellPadding={0} cellSpacing={0} style={{ width: '100%', marginTop: '4px' }}>
@@ -118,7 +121,7 @@ export const EmailFooter = ({
            href={
              unsubscribe?.unsubscribeToken && unsubscribe?.email
                ? `${baseUrl}/unsubscribe?token=${unsubscribe.unsubscribeToken}&email=${encodeURIComponent(unsubscribe.email)}`
-                : `mailto:help@sim.ai?subject=Unsubscribe%20Request&body=Please%20unsubscribe%20me%20from%20all%20emails.`
+                : `mailto:${brand.supportEmail}?subject=Unsubscribe%20Request&body=Please%20unsubscribe%20me%20from%20all%20emails.`
            }
            style={{
              color: '#706a7b !important',
@@ -12,6 +12,7 @@ import {
  Text,
} from '@react-email/components'
import { format } from 'date-fns'
+import { getBrandConfig } from '@/lib/branding/branding'
import { env } from '@/lib/env'
import { getAssetUrl } from '@/lib/utils'
import { baseStyles } from './base-styles'
@@ -34,6 +35,8 @@ export const InvitationEmail = ({
  invitedEmail = '',
  updatedDate = new Date(),
}: InvitationEmailProps) => {
+  const brand = getBrandConfig()
+
  // Extract invitation ID or token from inviteLink if present
  let enhancedLink = inviteLink

@@ -60,9 +63,9 @@ export const InvitationEmail = ({
          <Row>
            <Column style={{ textAlign: 'center' }}>
              <Img
-                src={getAssetUrl('static/sim.png')}
+                src={brand.logoUrl || getAssetUrl('static/sim.png')}
                width='114'
-                alt='Sim'
+                alt={brand.name}
                style={{
                  margin: '0 auto',
                }}
@@ -10,6 +10,7 @@ import {
  Section,
  Text,
} from '@react-email/components'
+import { getBrandConfig } from '@/lib/branding/branding'
import { env } from '@/lib/env'
import { getAssetUrl } from '@/lib/utils'
import { baseStyles } from './base-styles'
@@ -24,18 +25,18 @@ interface OTPVerificationEmailProps {

const baseUrl = env.NEXT_PUBLIC_APP_URL || 'https://sim.ai'

-const getSubjectByType = (type: string, chatTitle?: string) => {
+const getSubjectByType = (type: string, brandName: string, chatTitle?: string) => {
  switch (type) {
    case 'sign-in':
-      return 'Sign in to Sim'
+      return `Sign in to ${brandName}`
    case 'email-verification':
-      return 'Verify your email for Sim'
+      return `Verify your email for ${brandName}`
    case 'forget-password':
-      return 'Reset your Sim password'
+      return `Reset your ${brandName} password`
    case 'chat-access':
      return `Verification code for ${chatTitle || 'Chat'}`
    default:
-      return 'Verification code for Sim'
+      return `Verification code for ${brandName}`
  }
}

@@ -45,17 +46,19 @@ export const OTPVerificationEmail = ({
  type = 'email-verification',
  chatTitle,
}: OTPVerificationEmailProps) => {
+  const brand = getBrandConfig()
+
  // Get a message based on the type
  const getMessage = () => {
    switch (type) {
      case 'sign-in':
-        return 'Sign in to Sim'
+        return `Sign in to ${brand.name}`
      case 'forget-password':
-        return 'Reset your password for Sim'
+        return `Reset your password for ${brand.name}`
      case 'chat-access':
        return `Access ${chatTitle || 'the chat'}`
      default:
-        return 'Welcome to Sim'
+        return `Welcome to ${brand.name}`
    }
  }

@@ -63,15 +66,15 @@ export const OTPVerificationEmail = ({
    <Html>
      <Head />
      <Body style={baseStyles.main}>
-        <Preview>{getSubjectByType(type, chatTitle)}</Preview>
+        <Preview>{getSubjectByType(type, brand.name, chatTitle)}</Preview>
        <Container style={baseStyles.container}>
          <Section style={{ padding: '30px 0', textAlign: 'center' }}>
            <Row>
              <Column style={{ textAlign: 'center' }}>
                <Img
-                  src={getAssetUrl('static/sim.png')}
+                  src={brand.logoUrl || getAssetUrl('static/sim.png')}
                  width='114'
-                  alt='Sim'
+                  alt={brand.name}
                  style={{
                    margin: '0 auto',
                  }}
@@ -12,6 +12,7 @@ import {
  Text,
} from '@react-email/components'
import { format } from 'date-fns'
+import { getBrandConfig } from '@/lib/branding/branding'
import { env } from '@/lib/env'
import { getAssetUrl } from '@/lib/utils'
import { baseStyles } from './base-styles'
@@ -30,19 +31,21 @@ export const ResetPasswordEmail = ({
  resetLink = '',
  updatedDate = new Date(),
}: ResetPasswordEmailProps) => {
+  const brand = getBrandConfig()
+
  return (
    <Html>
      <Head />
      <Body style={baseStyles.main}>
-        <Preview>Reset your Sim password</Preview>
+        <Preview>Reset your {brand.name} password</Preview>
        <Container style={baseStyles.container}>
          <Section style={{ padding: '30px 0', textAlign: 'center' }}>
            <Row>
              <Column style={{ textAlign: 'center' }}>
                <Img
-                  src={getAssetUrl('static/sim.png')}
+                  src={brand.logoUrl || getAssetUrl('static/sim.png')}
                  width='114'
-                  alt='Sim'
+                  alt={brand.name}
                  style={{
                    margin: '0 auto',
                  }}
@@ -62,8 +65,8 @@ export const ResetPasswordEmail = ({
          <Section style={baseStyles.content}>
            <Text style={baseStyles.paragraph}>Hello {username},</Text>
            <Text style={baseStyles.paragraph}>
-              You recently requested to reset your password for your Sim account. Use the button
-              below to reset it. This password reset is only valid for the next 24 hours.
+              You recently requested to reset your password for your {brand.name} account. Use the
+              button below to reset it. This password reset is only valid for the next 24 hours.
            </Text>
            <Link href={resetLink} style={{ textDecoration: 'none' }}>
              <Text style={baseStyles.button}>Reset Your Password</Text>
@@ -75,7 +78,7 @@ export const ResetPasswordEmail = ({
            <Text style={baseStyles.paragraph}>
              Best regards,
              <br />
-              The Sim Team
+              The {brand.name} Team
            </Text>
            <Text
              style={{
@@ -11,6 +11,7 @@ import {
|
||||
Section,
|
||||
Text,
|
||||
} from '@react-email/components'
|
||||
import { getBrandConfig } from '@/lib/branding/branding'
|
||||
import { env } from '@/lib/env'
|
||||
import { getAssetUrl } from '@/lib/utils'
|
||||
import { baseStyles } from './base-styles'
|
||||
@@ -29,6 +30,8 @@ export const WorkspaceInvitationEmail = ({
|
||||
inviterName = 'Someone',
|
||||
invitationLink = '',
|
||||
}: WorkspaceInvitationEmailProps) => {
|
||||
const brand = getBrandConfig()
|
||||
|
||||
// Extract token from the link to ensure we're using the correct format
|
||||
let enhancedLink = invitationLink
|
||||
|
||||
@@ -55,9 +58,9 @@ export const WorkspaceInvitationEmail = ({
|
||||
<Row>
|
||||
<Column style={{ textAlign: 'center' }}>
|
||||
<Img
|
||||
src={getAssetUrl('static/sim.png')}
|
||||
src={brand.logoUrl || getAssetUrl('static/sim.png')}
|
||||
width='114'
|
||||
alt='Sim'
|
||||
alt={brand.name}
|
||||
style={{
|
||||
margin: '0 auto',
|
||||
}}
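
For orientation, a minimal sketch of rendering one of these branded templates; the import path and the `otp` prop are assumptions, and `render` is the helper re-exported by `@react-email/components`:

```tsx
import { render } from '@react-email/components'
// Hypothetical import path; the template's actual location may differ.
import { OTPVerificationEmail } from '@/components/emails/otp-verification-email'

async function renderBrandedOtp(): Promise<string> {
  // With NEXT_PUBLIC_BRAND_NAME=Acme set, getBrandConfig() inside the template
  // resolves brand.name to 'Acme', so getMessage() yields 'Sign in to Acme'.
  return render(OTPVerificationEmail({ otp: '123456', type: 'sign-in' }))
}
```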

@@ -3181,3 +3181,166 @@ export function HunterIOIcon(props: SVGProps<SVGSVGElement>) {
</svg>
)
}

export function MicrosoftOneDriveIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} fill='currentColor' viewBox='0 0 32 32' xmlns='http://www.w3.org/2000/svg'>
<g>
<path
d='M12.20245,11.19292l.00031-.0011,6.71765,4.02379,4.00293-1.68451.00018.00068A6.4768,6.4768,0,0,1,25.5,13c.14764,0,.29358.0067.43878.01639a10.00075,10.00075,0,0,0-18.041-3.01381C7.932,10.00215,7.9657,10,8,10A7.96073,7.96073,0,0,1,12.20245,11.19292Z'
fill='#0364b8'
/>
<path
d='M12.20276,11.19182l-.00031.0011A7.96073,7.96073,0,0,0,8,10c-.0343,0-.06805.00215-.10223.00258A7.99676,7.99676,0,0,0,1.43732,22.57277l5.924-2.49292,2.63342-1.10819,5.86353-2.46746,3.06213-1.28859Z'
fill='#0078d4'
/>
<path
d='M25.93878,13.01639C25.79358,13.0067,25.64764,13,25.5,13a6.4768,6.4768,0,0,0-2.57648.53178l-.00018-.00068-4.00293,1.68451,1.16077.69528L23.88611,18.19l1.66009.99438,5.67633,3.40007a6.5002,6.5002,0,0,0-5.28375-9.56805Z'
fill='#1490df'
/>
<path
d='M25.5462,19.18437,23.88611,18.19l-3.80493-2.2791-1.16077-.69528L15.85828,16.5042,9.99475,18.97166,7.36133,20.07985l-5.924,2.49292A7.98889,7.98889,0,0,0,8,26H25.5a6.49837,6.49837,0,0,0,5.72253-3.41556Z'
fill='#28a8ea'
/>
</g>
</svg>
)
}

export function MicrosoftSharepointIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} fill='currentColor' viewBox='0 0 32 32' xmlns='http://www.w3.org/2000/svg'>
<circle fill='#036C70' cx='16.31' cy='8.90' r='8.90' />
<circle fill='#1A9BA1' cx='23.72' cy='17.05' r='8.15' />
<circle fill='#37C6D0' cx='17.42' cy='24.83' r='6.30' />
<path
fill='#000000'
opacity='0.1'
d='M17.79,8.03v15.82c0,0.55-0.34,1.04-0.85,1.25c-0.16,0.07-0.34,0.10-0.51,0.10H11.13c-0.01-0.13-0.01-0.24-0.01-0.37c0-0.12,0-0.25,0.01-0.37c0.14-2.37,1.59-4.46,3.77-5.40v-1.38c-4.85-0.77-8.15-5.32-7.39-10.17c0.01-0.03,0.01-0.07,0.02-0.10c0.04-0.25,0.09-0.50,0.16-0.74h8.74c0.74,0,1.36,0.60,1.36,1.36z'
/>
<path
fill='#000000'
opacity='0.2'
d='M15.69,7.41H7.54c-0.82,4.84,2.43,9.43,7.27,10.25c0.15,0.02,0.29,0.05,0.44,0.06c-2.30,1.09-3.97,4.18-4.12,6.73c-0.01,0.12-0.02,0.25-0.01,0.37c0,0.13,0,0.24,0.01,0.37c0.01,0.25,0.05,0.50,0.10,0.74h4.47c0.55,0,1.04-0.34,1.25-0.85c0.07-0.16,0.10-0.34,0.10-0.51V8.77c0-0.75-0.61-1.36-1.36-1.36z'
/>
<path
fill='#000000'
opacity='0.2'
d='M15.69,7.41H7.54c-0.82,4.84,2.43,9.43,7.27,10.26c0.10,0.02,0.20,0.03,0.30,0.05c-2.22,1.17-3.83,4.26-3.97,6.75h4.56c0.75,0,1.35-0.61,1.36-1.36V8.77c0-0.75-0.61-1.36-1.36-1.36z'
/>
<path
fill='#000000'
opacity='0.2'
d='M14.95,7.41H7.54c-0.78,4.57,2.08,8.97,6.58,10.11c-1.84,2.43-2.27,5.61-2.58,7.22h3.82c0.75,0,1.35-0.61,1.36-1.36V8.77c0-0.75-0.61-1.36-1.36-1.36z'
/>
<path
fill='#008789'
d='M1.36,7.41h13.58c0.75,0,1.36,0.61,1.36,1.36v13.58c0,0.75-0.61,1.36-1.36,1.36H1.36c-0.75,0-1.36-0.61-1.36-1.36V8.77C0,8.02,0.61,7.41,1.36,7.41z'
/>
<path
fill='#FFFFFF'
d='M6.07,15.42c-0.32-0.21-0.58-0.49-0.78-0.82c-0.19-0.34-0.28-0.73-0.27-1.12c-0.02-0.53,0.16-1.05,0.50-1.46c0.36-0.41,0.82-0.71,1.34-0.87c0.59-0.19,1.21-0.29,1.83-0.28c0.82-0.03,1.63,0.08,2.41,0.34v1.71c-0.34-0.20-0.71-0.35-1.09-0.44c-0.42-0.10-0.84-0.15-1.27-0.15c-0.45-0.02-0.90,0.08-1.31,0.28c-0.31,0.14-0.52,0.44-0.52,0.79c0,0.21,0.08,0.41,0.22,0.56c0.17,0.18,0.37,0.32,0.59,0.42c0.25,0.12,0.62,0.29,1.11,0.49c0.05,0.02,0.11,0.04,0.16,0.06c0.49,0.19,0.96,0.42,1.40,0.69c0.34,0.21,0.62,0.49,0.83,0.83c0.21,0.39,0.31,0.82,0.30,1.26c0.02,0.54-0.14,1.08-0.47,1.52c-0.33,0.40-0.77,0.69-1.26,0.85c-0.58,0.18-1.19,0.27-1.80,0.26c-0.55,0-1.09-0.04-1.63-0.13c-0.45-0.07-0.90-0.20-1.32-0.39v-1.80c0.40,0.29,0.86,0.50,1.34,0.64c0.48,0.15,0.97,0.23,1.47,0.24c0.46,0.03,0.92-0.07,1.34-0.28c0.29-0.16,0.46-0.47,0.46-0.80c0-0.23-0.09-0.45-0.25-0.61c-0.20-0.20-0.44-0.36-0.69-0.48c-0.30-0.15-0.73-0.34-1.31-0.59C6.91,16.14,6.48,15.80,6.07,15.42z'
/>
</svg>
)
}

export function MicrosoftPlannerIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} fill='currentColor' viewBox='-1 -1 27 27' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient
id='paint0_linear_3984_11038'
x1='6.38724'
y1='3.74167'
x2='2.15779'
y2='12.777'
gradientUnits='userSpaceOnUse'
>
<stop stopColor='#8752E0' />
<stop offset='1' stopColor='#541278' />
</linearGradient>
<linearGradient
id='paint1_linear_3984_11038'
x1='8.38032'
y1='11.0696'
x2='4.94062'
y2='7.69244'
gradientUnits='userSpaceOnUse'
>
<stop offset='0.12172' stopColor='#3D0D59' />
<stop offset='1' stopColor='#7034B0' stopOpacity='0' />
</linearGradient>
<linearGradient
id='paint2_linear_3984_11038'
x1='18.3701'
y1='-3.33385e-05'
x2='9.85717'
y2='20.4192'
gradientUnits='userSpaceOnUse'
>
<stop stopColor='#DB45E0' />
<stop offset='1' stopColor='#6C0F71' />
</linearGradient>
<linearGradient
id='paint3_linear_3984_11038'
x1='18.3701'
y1='-3.33385e-05'
x2='9.85717'
y2='20.4192'
gradientUnits='userSpaceOnUse'
>
<stop stopColor='#DB45E0' />
<stop offset='0.677403' stopColor='#A829AE' />
<stop offset='1' stopColor='#8F28B3' />
</linearGradient>
<linearGradient
id='paint4_linear_3984_11038'
x1='18.0002'
y1='7.49958'
x2='14.0004'
y2='23.9988'
gradientUnits='userSpaceOnUse'
>
<stop stopColor='#3DCBFF' />
<stop offset='1' stopColor='#00479E' />
</linearGradient>
<linearGradient
id='paint5_linear_3984_11038'
x1='18.2164'
y1='7.92626'
x2='10.5237'
y2='22.9363'
gradientUnits='userSpaceOnUse'
>
<stop stopColor='#3DCBFF' />
<stop offset='1' stopColor='#4A40D4' />
</linearGradient>
</defs>
<path
d='M8.25809 15.7412C7.22488 16.7744 5.54971 16.7744 4.5165 15.7412L0.774909 11.9996C-0.258303 10.9664 -0.258303 9.29129 0.774908 8.25809L4.5165 4.51655C5.54971 3.48335 7.22488 3.48335 8.25809 4.51655L11.9997 8.2581C13.0329 9.29129 13.0329 10.9664 11.9997 11.9996L8.25809 15.7412Z'
fill='url(#paint0_linear_3984_11038)'
/>
<path
d='M8.25809 15.7412C7.22488 16.7744 5.54971 16.7744 4.5165 15.7412L0.774909 11.9996C-0.258303 10.9664 -0.258303 9.29129 0.774908 8.25809L4.5165 4.51655C5.54971 3.48335 7.22488 3.48335 8.25809 4.51655L11.9997 8.2581C13.0329 9.29129 13.0329 10.9664 11.9997 11.9996L8.25809 15.7412Z'
fill='url(#paint1_linear_3984_11038)'
/>
<path
d='M0.774857 11.9999C1.80809 13.0331 3.48331 13.0331 4.51655 11.9999L15.7417 0.774926C16.7749 -0.258304 18.4501 -0.258309 19.4834 0.774914L23.225 4.51655C24.2583 5.54977 24.2583 7.22496 23.225 8.25819L11.9999 19.4832C10.9667 20.5164 9.29146 20.5164 8.25822 19.4832L0.774857 11.9999Z'
fill='url(#paint2_linear_3984_11038)'
/>
<path
d='M0.774857 11.9999C1.80809 13.0331 3.48331 13.0331 4.51655 11.9999L15.7417 0.774926C16.7749 -0.258304 18.4501 -0.258309 19.4834 0.774914L23.225 4.51655C24.2583 5.54977 24.2583 7.22496 23.225 8.25819L11.9999 19.4832C10.9667 20.5164 9.29146 20.5164 8.25822 19.4832L0.774857 11.9999Z'
fill='url(#paint3_linear_3984_11038)'
/>
<path
d='M4.51642 15.7413C5.54966 16.7746 7.22487 16.7746 8.25812 15.7413L15.7415 8.25803C16.7748 7.2248 18.45 7.2248 19.4832 8.25803L23.2249 11.9997C24.2582 13.0329 24.2582 14.7081 23.2249 15.7413L15.7415 23.2246C14.7083 24.2579 13.033 24.2579 11.9998 23.2246L4.51642 15.7413Z'
fill='url(#paint4_linear_3984_11038)'
/>
<path
d='M4.51642 15.7413C5.54966 16.7746 7.22487 16.7746 8.25812 15.7413L15.7415 8.25803C16.7748 7.2248 18.45 7.2248 19.4832 8.25803L23.2249 11.9997C24.2582 13.0329 24.2582 14.7081 23.2249 15.7413L15.7415 23.2246C14.7083 24.2579 13.033 24.2579 11.9998 23.2246L4.51642 15.7413Z'
fill='url(#paint5_linear_3984_11038)'
/>
</svg>
)
}
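
The new icons accept standard SVG props, so sizing and color flow through like the rest of the icon set; a minimal consumer sketch (the import path is assumed):

```tsx
// Import path is illustrative; the icons live in the shared icons module.
import {
  MicrosoftOneDriveIcon,
  MicrosoftPlannerIcon,
  MicrosoftSharepointIcon,
} from '@/components/icons'

export function MicrosoftServiceRow() {
  return (
    <div style={{ display: 'flex', gap: 8 }}>
      <MicrosoftOneDriveIcon width={24} height={24} />
      <MicrosoftSharepointIcon width={24} height={24} />
      <MicrosoftPlannerIcon width={24} height={24} />
    </div>
  )
}
```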

apps/sim/db/migrations/0067_safe_bushwacker.sql (1 addition, Normal file)
@@ -0,0 +1 @@
CREATE UNIQUE INDEX "kb_tag_definitions_kb_display_name_idx" ON "knowledge_base_tag_definitions" USING btree ("knowledge_base_id","display_name");

apps/sim/db/migrations/meta/0067_snapshot.json (5850 additions, Normal file)
File diff suppressed because it is too large
@@ -463,6 +463,13 @@
"when": 1754352106989,
"tag": "0066_talented_mentor",
"breakpoints": true
},
{
"idx": 67,
"version": "7",
"when": 1754424644234,
"tag": "0067_safe_bushwacker",
"breakpoints": true
}
]
}

@@ -822,6 +822,11 @@ export const knowledgeBaseTagDefinitions = pgTable(
table.knowledgeBaseId,
table.tagSlot
),
// Ensure unique display name per knowledge base
kbDisplayNameIdx: uniqueIndex('kb_tag_definitions_kb_display_name_idx').on(
table.knowledgeBaseId,
table.displayName
),
// Index for querying by knowledge base
kbIdIdx: index('kb_tag_definitions_kb_id_idx').on(table.knowledgeBaseId),
})
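
The new `uniqueIndex` makes duplicate display names within one knowledge base a database-level error; a sketch of the effect (column values are illustrative, and the insert shape assumes the table's usual required columns):

```typescript
import { db } from '@/db'
import { knowledgeBaseTagDefinitions } from '@/db/schema'

async function demoUniqueDisplayName() {
  await db.insert(knowledgeBaseTagDefinitions).values({
    id: 'tagdef_1', // illustrative; the real required columns may differ
    knowledgeBaseId: 'kb_123',
    tagSlot: 'tag1',
    displayName: 'Priority',
    fieldType: 'text',
  })

  // A second insert for kb_123 with displayName 'Priority' now fails with a
  // unique-constraint violation, even if it targets a different tagSlot.
}
```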

@@ -58,7 +58,7 @@ export class InputResolver {

/**
* Evaluates if a sub-block should be active based on its condition
* @param condition - The condition to evaluate
* @param condition - The condition to evaluate (can be static object or function)
* @param currentValues - Current values of all inputs
* @returns True if the sub-block should be active
*/
@@ -70,37 +70,46 @@ export class InputResolver {
not?: boolean
and?: { field: string; value: any; not?: boolean }
}
| (() => {
field: string
value: any
not?: boolean
and?: { field: string; value: any; not?: boolean }
})
| undefined,
currentValues: Record<string, any>
): boolean {
if (!condition) return true

// If condition is a function, call it to get the actual condition object
const actualCondition = typeof condition === 'function' ? condition() : condition

// Get the field value
const fieldValue = currentValues[condition.field]
const fieldValue = currentValues[actualCondition.field]

// Check if the condition value is an array
const isValueMatch = Array.isArray(condition.value)
const isValueMatch = Array.isArray(actualCondition.value)
? fieldValue != null &&
(condition.not
? !condition.value.includes(fieldValue)
: condition.value.includes(fieldValue))
: condition.not
? fieldValue !== condition.value
: fieldValue === condition.value
(actualCondition.not
? !actualCondition.value.includes(fieldValue)
: actualCondition.value.includes(fieldValue))
: actualCondition.not
? fieldValue !== actualCondition.value
: fieldValue === actualCondition.value

// Check both conditions if 'and' is present
const isAndValueMatch =
!condition.and ||
!actualCondition.and ||
(() => {
const andFieldValue = currentValues[condition.and!.field]
return Array.isArray(condition.and!.value)
const andFieldValue = currentValues[actualCondition.and!.field]
return Array.isArray(actualCondition.and!.value)
? andFieldValue != null &&
(condition.and!.not
? !condition.and!.value.includes(andFieldValue)
: condition.and!.value.includes(andFieldValue))
: condition.and!.not
? andFieldValue !== condition.and!.value
: andFieldValue === condition.and!.value
(actualCondition.and!.not
? !actualCondition.and!.value.includes(andFieldValue)
: actualCondition.and!.value.includes(andFieldValue))
: actualCondition.and!.not
? andFieldValue !== actualCondition.and!.value
: andFieldValue === actualCondition.and!.value
})()

return isValueMatch && isAndValueMatch
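
In short, the resolver now accepts either a condition object or a thunk that produces one; a sketch of the two shapes (`evaluateCondition` stands in for the resolver's private method):

```typescript
// Both shapes describe "active when operation is create or update".
const staticCondition = { field: 'operation', value: ['create', 'update'] }
const lazyCondition = () => ({ field: 'operation', value: ['create', 'update'] })

// The function form is unwrapped first (typeof condition === 'function'),
// after which both evaluate identically against the current input values:
// evaluateCondition(staticCondition, { operation: 'create' }) // true
// evaluateCondition(lazyCondition, { operation: 'delete' })   // false
```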

apps/sim/hooks/use-next-available-slot.ts (112 additions, Normal file)
@@ -0,0 +1,112 @@
import { useCallback, useState } from 'react'
import { createLogger } from '@/lib/logs/console/logger'

const logger = createLogger('useNextAvailableSlot')

interface NextAvailableSlotResponse {
success: boolean
data?: {
nextAvailableSlot: string | null
fieldType: string
usedSlots: string[]
totalSlots: number
availableSlots: number
}
error?: string
}

export function useNextAvailableSlot(knowledgeBaseId: string | null) {
const [isLoading, setIsLoading] = useState(false)
const [error, setError] = useState<string | null>(null)

const getNextAvailableSlot = useCallback(
async (fieldType: string): Promise<string | null> => {
if (!knowledgeBaseId) {
setError('Knowledge base ID is required')
return null
}

setIsLoading(true)
setError(null)

try {
const url = new URL(
`/api/knowledge/${knowledgeBaseId}/next-available-slot`,
window.location.origin
)
url.searchParams.set('fieldType', fieldType)

const response = await fetch(url.toString())

if (!response.ok) {
throw new Error(`Failed to get next available slot: ${response.statusText}`)
}

const data: NextAvailableSlotResponse = await response.json()

if (!data.success) {
throw new Error(data.error || 'Failed to get next available slot')
}

return data.data?.nextAvailableSlot || null
} catch (err) {
const errorMessage = err instanceof Error ? err.message : 'Unknown error'
logger.error('Error getting next available slot:', err)
setError(errorMessage)
return null
} finally {
setIsLoading(false)
}
},
[knowledgeBaseId]
)

const getSlotInfo = useCallback(
async (fieldType: string) => {
if (!knowledgeBaseId) {
setError('Knowledge base ID is required')
return null
}

setIsLoading(true)
setError(null)

try {
const url = new URL(
`/api/knowledge/${knowledgeBaseId}/next-available-slot`,
window.location.origin
)
url.searchParams.set('fieldType', fieldType)

const response = await fetch(url.toString())

if (!response.ok) {
throw new Error(`Failed to get slot info: ${response.statusText}`)
}

const data: NextAvailableSlotResponse = await response.json()

if (!data.success) {
throw new Error(data.error || 'Failed to get slot info')
}

return data.data || null
} catch (err) {
const errorMessage = err instanceof Error ? err.message : 'Unknown error'
logger.error('Error getting slot info:', err)
setError(errorMessage)
return null
} finally {
setIsLoading(false)
}
},
[knowledgeBaseId]
)

return {
getNextAvailableSlot,
getSlotInfo,
isLoading,
error,
}
}
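
A minimal consumer sketch for the hook; the component and handler names are illustrative:

```tsx
import { useNextAvailableSlot } from '@/hooks/use-next-available-slot'

function AddTagDefinitionButton({ knowledgeBaseId }: { knowledgeBaseId: string }) {
  const { getNextAvailableSlot, isLoading, error } = useNextAvailableSlot(knowledgeBaseId)

  const handleAdd = async () => {
    const slot = await getNextAvailableSlot('text') // e.g. 'tag3', or null when all slots are taken
    if (slot) {
      // create the new tag definition against the returned slot
    }
  }

  return (
    <button onClick={handleAdd} disabled={isLoading}>
      {error ?? 'Add tag'}
    </button>
  )
}
```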
@@ -19,6 +19,8 @@ export interface TagDefinitionInput {
tagSlot: TagSlot
displayName: string
fieldType: string
// Optional: for editing existing definitions
_originalDisplayName?: string
}

/**

@@ -441,6 +441,29 @@ export const auth = betterAuth({
pkce: true,
redirectURI: `${env.NEXT_PUBLIC_APP_URL}/api/auth/oauth2/callback/microsoft-excel`,
},
{
providerId: 'microsoft-planner',
clientId: env.MICROSOFT_CLIENT_ID as string,
clientSecret: env.MICROSOFT_CLIENT_SECRET as string,
authorizationUrl: 'https://login.microsoftonline.com/common/oauth2/v2.0/authorize',
tokenUrl: 'https://login.microsoftonline.com/common/oauth2/v2.0/token',
userInfoUrl: 'https://graph.microsoft.com/v1.0/me',
scopes: [
'openid',
'profile',
'email',
'Group.ReadWrite.All',
'Group.Read.All',
'Tasks.ReadWrite',
'offline_access',
],
responseType: 'code',
accessType: 'offline',
authentication: 'basic',
prompt: 'consent',
pkce: true,
redirectURI: `${env.NEXT_PUBLIC_APP_URL}/api/auth/oauth2/callback/microsoft-planner`,
},

{
providerId: 'outlook',
@@ -467,6 +490,45 @@ export const auth = betterAuth({
redirectURI: `${env.NEXT_PUBLIC_APP_URL}/api/auth/oauth2/callback/outlook`,
},

{
providerId: 'onedrive',
clientId: env.MICROSOFT_CLIENT_ID as string,
clientSecret: env.MICROSOFT_CLIENT_SECRET as string,
authorizationUrl: 'https://login.microsoftonline.com/common/oauth2/v2.0/authorize',
tokenUrl: 'https://login.microsoftonline.com/common/oauth2/v2.0/token',
userInfoUrl: 'https://graph.microsoft.com/v1.0/me',
scopes: ['openid', 'profile', 'email', 'Files.Read', 'Files.ReadWrite', 'offline_access'],
responseType: 'code',
accessType: 'offline',
authentication: 'basic',
prompt: 'consent',
pkce: true,
redirectURI: `${env.NEXT_PUBLIC_APP_URL}/api/auth/oauth2/callback/onedrive`,
},

{
providerId: 'sharepoint',
clientId: env.MICROSOFT_CLIENT_ID as string,
clientSecret: env.MICROSOFT_CLIENT_SECRET as string,
authorizationUrl: 'https://login.microsoftonline.com/common/oauth2/v2.0/authorize',
tokenUrl: 'https://login.microsoftonline.com/common/oauth2/v2.0/token',
userInfoUrl: 'https://graph.microsoft.com/v1.0/me',
scopes: [
'openid',
'profile',
'email',
'Sites.Read.All',
'Sites.ReadWrite.All',
'offline_access',
],
responseType: 'code',
accessType: 'offline',
authentication: 'basic',
prompt: 'consent',
pkce: true,
redirectURI: `${env.NEXT_PUBLIC_APP_URL}/api/auth/oauth2/callback/sharepoint`,
},

{
providerId: 'wealthbox',
clientId: env.WEALTHBOX_CLIENT_ID as string,
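
With these registered as generic OAuth providers, a client initiates a connection by provider ID; a sketch based on Better Auth's generic OAuth client plugin (the `authClient` import path, and the exact client call, are assumptions):

```typescript
import { authClient } from '@/lib/auth-client' // assumed client module

async function connectOneDrive() {
  // Starts the authorization-code + PKCE flow configured above; the user lands
  // back on /workspace after /api/auth/oauth2/callback/onedrive completes.
  await authClient.signIn.oauth2({ providerId: 'onedrive', callbackURL: '/workspace' })
}
```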

apps/sim/lib/branding/branding.ts (81 additions, Normal file)
@@ -0,0 +1,81 @@
import { getEnv } from '@/lib/env'

export interface BrandConfig {
name: string
logoUrl?: string
faviconUrl?: string
primaryColor?: string
secondaryColor?: string
accentColor?: string
customCssUrl?: string
hideBranding?: boolean
footerText?: string
supportEmail?: string
supportUrl?: string
documentationUrl?: string
termsUrl?: string
privacyUrl?: string
}

/**
* Default brand configuration values
*/
const defaultConfig: BrandConfig = {
name: 'Sim',
logoUrl: undefined,
faviconUrl: '/favicon/favicon.ico',
primaryColor: '#000000',
secondaryColor: '#6366f1',
accentColor: '#f59e0b',
customCssUrl: undefined,
hideBranding: false,
footerText: undefined,
supportEmail: 'help@sim.ai',
supportUrl: undefined,
documentationUrl: undefined,
termsUrl: undefined,
privacyUrl: undefined,
}

/**
* Get branding configuration from environment variables
* Supports runtime configuration via Docker/Kubernetes
*/
export const getBrandConfig = (): BrandConfig => {
return {
name: getEnv('NEXT_PUBLIC_BRAND_NAME') || defaultConfig.name,
logoUrl: getEnv('NEXT_PUBLIC_BRAND_LOGO_URL') || defaultConfig.logoUrl,
faviconUrl: getEnv('NEXT_PUBLIC_BRAND_FAVICON_URL') || defaultConfig.faviconUrl,
primaryColor: getEnv('NEXT_PUBLIC_BRAND_PRIMARY_COLOR') || defaultConfig.primaryColor,
secondaryColor: getEnv('NEXT_PUBLIC_BRAND_SECONDARY_COLOR') || defaultConfig.secondaryColor,
accentColor: getEnv('NEXT_PUBLIC_BRAND_ACCENT_COLOR') || defaultConfig.accentColor,
customCssUrl: getEnv('NEXT_PUBLIC_CUSTOM_CSS_URL') || defaultConfig.customCssUrl,
hideBranding: getEnv('NEXT_PUBLIC_HIDE_BRANDING') === 'true',
footerText: getEnv('NEXT_PUBLIC_CUSTOM_FOOTER_TEXT') || defaultConfig.footerText,
supportEmail: getEnv('NEXT_PUBLIC_SUPPORT_EMAIL') || defaultConfig.supportEmail,
supportUrl: getEnv('NEXT_PUBLIC_SUPPORT_URL') || defaultConfig.supportUrl,
documentationUrl: getEnv('NEXT_PUBLIC_DOCUMENTATION_URL') || defaultConfig.documentationUrl,
termsUrl: getEnv('NEXT_PUBLIC_TERMS_URL') || defaultConfig.termsUrl,
privacyUrl: getEnv('NEXT_PUBLIC_PRIVACY_URL') || defaultConfig.privacyUrl,
}
}

/**
* Generate CSS custom properties for brand colors
*/
export const generateBrandCSS = (config: BrandConfig): string => {
return `
:root {
--brand-primary: ${config.primaryColor};
--brand-secondary: ${config.secondaryColor};
--brand-accent: ${config.accentColor};
}
`
}

/**
* Hook to use brand configuration in React components
*/
export const useBrandConfig = () => {
return getBrandConfig()
}
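
Resolution is a straightforward env-over-default merge; a sketch of the resulting values:

```typescript
import { generateBrandCSS, getBrandConfig } from '@/lib/branding/branding'

const brand = getBrandConfig()
// With no NEXT_PUBLIC_BRAND_* vars set: brand.name === 'Sim',
// brand.primaryColor === '#000000', and so on from defaultConfig.
// With NEXT_PUBLIC_BRAND_NAME=Acme set at runtime: brand.name === 'Acme'.

const css = generateBrandCSS(brand)
// ':root { --brand-primary: ...; --brand-secondary: ...; --brand-accent: ...; }'
// ready to inject into a <style> tag or a stylesheet response.
```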

apps/sim/lib/branding/metadata.ts (155 additions, Normal file)
@@ -0,0 +1,155 @@
import type { Metadata } from 'next'
import { getBrandConfig } from '@/lib/branding/branding'
import { env } from '@/lib/env'
import { getAssetUrl } from '@/lib/utils'

/**
* Generate dynamic metadata based on brand configuration
*/
export function generateBrandedMetadata(override: Partial<Metadata> = {}): Metadata {
const brand = getBrandConfig()

const defaultTitle = brand.name
const defaultDescription = `Build and deploy AI agents using our Figma-like canvas. Build, write evals, and deploy AI agent workflows that automate workflows and streamline your business processes.`

return {
title: {
template: `%s | ${brand.name}`,
default: defaultTitle,
},
description: defaultDescription,
applicationName: brand.name,
authors: [{ name: brand.name }],
generator: 'Next.js',
keywords: [
'AI agent',
'AI agent builder',
'AI agent workflow',
'AI workflow automation',
'visual workflow editor',
'AI agents',
'workflow canvas',
'intelligent automation',
'AI tools',
'workflow designer',
'artificial intelligence',
'business automation',
'AI agent workflows',
'visual programming',
],
referrer: 'origin-when-cross-origin',
creator: brand.name,
publisher: brand.name,
metadataBase: env.NEXT_PUBLIC_APP_URL
? new URL(env.NEXT_PUBLIC_APP_URL)
: new URL('https://sim.ai'),
alternates: {
canonical: '/',
languages: {
'en-US': '/en-US',
},
},
robots: {
index: true,
follow: true,
googleBot: {
index: true,
follow: true,
'max-image-preview': 'large',
'max-video-preview': -1,
'max-snippet': -1,
},
},
openGraph: {
type: 'website',
locale: 'en_US',
url: env.NEXT_PUBLIC_APP_URL || 'https://sim.ai',
title: defaultTitle,
description: defaultDescription,
siteName: brand.name,
images: [
{
url: brand.logoUrl || getAssetUrl('social/facebook.png'),
width: 1200,
height: 630,
alt: brand.name,
},
],
},
twitter: {
card: 'summary_large_image',
title: defaultTitle,
description: defaultDescription,
images: [brand.logoUrl || getAssetUrl('social/twitter.png')],
creator: '@simstudioai',
site: '@simstudioai',
},
manifest: '/manifest.webmanifest',
icons: {
icon: [
{ url: '/favicon/favicon-16x16.png', sizes: '16x16', type: 'image/png' },
{ url: '/favicon/favicon-32x32.png', sizes: '32x32', type: 'image/png' },
{
url: '/favicon/favicon-192x192.png',
sizes: '192x192',
type: 'image/png',
},
{
url: '/favicon/favicon-512x512.png',
sizes: '512x512',
type: 'image/png',
},
{ url: brand.faviconUrl || '/sim.png', sizes: 'any', type: 'image/png' },
],
apple: '/favicon/apple-touch-icon.png',
shortcut: brand.faviconUrl || '/favicon/favicon.ico',
},
appleWebApp: {
capable: true,
statusBarStyle: 'default',
title: brand.name,
},
formatDetection: {
telephone: false,
},
category: 'technology',
other: {
'apple-mobile-web-app-capable': 'yes',
'mobile-web-app-capable': 'yes',
'msapplication-TileColor': brand.primaryColor || '#ffffff',
'msapplication-config': '/favicon/browserconfig.xml',
},
...override,
}
}

/**
* Generate static structured data for SEO
*/
export function generateStructuredData() {
return {
'@context': 'https://schema.org',
'@type': 'SoftwareApplication',
name: 'Sim',
description:
'Build and deploy AI agents using our Figma-like canvas. Build, write evals, and deploy AI agent workflows that automate workflows and streamline your business processes.',
url: 'https://sim.ai',
applicationCategory: 'BusinessApplication',
operatingSystem: 'Web Browser',
offers: {
'@type': 'Offer',
category: 'SaaS',
},
creator: {
'@type': 'Organization',
name: 'Sim',
url: 'https://sim.ai',
},
featureList: [
'Visual AI Agent Builder',
'Workflow Canvas Interface',
'AI Agent Automation',
'Custom AI Workflows',
],
}
}
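
The usual Next.js wiring for this helper is a one-liner in a layout; a sketch showing the `Partial<Metadata>` override merge:

```typescript
// app/layout.tsx (path illustrative)
import { generateBrandedMetadata } from '@/lib/branding/metadata'

export const metadata = generateBrandedMetadata({
  description: 'Page-specific description', // overrides the branded default
})
```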

@@ -2,14 +2,52 @@
* Knowledge base and document constants
*/

// Maximum number of tag slots allowed per knowledge base
export const MAX_TAG_SLOTS = 7
// Tag slot configuration by field type
// Each field type maps to specific database columns
export const TAG_SLOT_CONFIG = {
text: {
slots: ['tag1', 'tag2', 'tag3', 'tag4', 'tag5', 'tag6', 'tag7'] as const,
maxSlots: 7,
},
// Future field types would be added here with their own database columns
// date: {
// slots: ['tag8', 'tag9'] as const,
// maxSlots: 2,
// },
// number: {
// slots: ['tag10', 'tag11'] as const,
// maxSlots: 2,
// },
} as const

// Tag slot names (derived from MAX_TAG_SLOTS)
export const TAG_SLOTS = Array.from({ length: MAX_TAG_SLOTS }, (_, i) => `tag${i + 1}`) as [
string,
...string[],
]
// Currently supported field types
export const SUPPORTED_FIELD_TYPES = Object.keys(TAG_SLOT_CONFIG) as Array<
keyof typeof TAG_SLOT_CONFIG
>

// All tag slots (for backward compatibility)
export const TAG_SLOTS = TAG_SLOT_CONFIG.text.slots

// Maximum number of tag slots for text type (for backward compatibility)
export const MAX_TAG_SLOTS = TAG_SLOT_CONFIG.text.maxSlots

// Type for tag slot names
export type TagSlot = (typeof TAG_SLOTS)[number]

// Helper function to get available slots for a field type
export function getSlotsForFieldType(fieldType: string): readonly string[] {
const config = TAG_SLOT_CONFIG[fieldType as keyof typeof TAG_SLOT_CONFIG]
if (!config) {
return [] // Return empty array for unsupported field types - system will naturally handle this
}
return config.slots
}

// Helper function to get max slots for a field type
export function getMaxSlotsForFieldType(fieldType: string): number {
const config = TAG_SLOT_CONFIG[fieldType as keyof typeof TAG_SLOT_CONFIG]
if (!config) {
return 0 // Return 0 for unsupported field types
}
return config.maxSlots
}
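
A sketch of the helpers' behavior for supported and unsupported field types (the import path is assumed):

```typescript
import {
  getMaxSlotsForFieldType,
  getSlotsForFieldType,
} from '@/lib/knowledge/consts' // illustrative path

getSlotsForFieldType('text')    // ['tag1', 'tag2', ..., 'tag7']
getMaxSlotsForFieldType('text') // 7
getSlotsForFieldType('date')    // [] until a 'date' entry joins TAG_SLOT_CONFIG
getMaxSlotsForFieldType('date') // 0
```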

@@ -37,6 +37,17 @@ export interface CopilotChat {
updatedAt: Date
}

/**
* File attachment interface for message requests
*/
export interface MessageFileAttachment {
id: string
s3_key: string
filename: string
media_type: string
size: number
}

/**
* Request interface for sending messages
*/
@@ -49,6 +60,7 @@ export interface SendMessageRequest {
createNewChat?: boolean
stream?: boolean
implicitFeedback?: string
fileAttachments?: MessageFileAttachment[]
abortSignal?: AbortSignal
}
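
A sketch of a request carrying the new attachment field; the `message` property is assumed from the elided part of the interface, and the values are illustrative:

```typescript
import type { SendMessageRequest } from '@/lib/copilot/types' // path assumed

const request: SendMessageRequest = {
  message: 'Summarize the attached spec', // assumed field from the elided hunk
  stream: true,
  fileAttachments: [
    {
      id: 'att_1',
      s3_key: 'copilot/uploads/spec.pdf',
      filename: 'spec.pdf',
      media_type: 'application/pdf',
      size: 48213,
    },
  ],
}
```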

apps/sim/lib/copilot/tools/client-tools/get-user-workflow.ts (286 additions, Normal file)
@@ -0,0 +1,286 @@
/**
* Get User Workflow Tool - Client-side implementation
*/

import { BaseTool } from '@/lib/copilot/tools/base-tool'
import type {
CopilotToolCall,
ToolExecuteResult,
ToolExecutionOptions,
ToolMetadata,
} from '@/lib/copilot/tools/types'
import { createLogger } from '@/lib/logs/console/logger'
import { useWorkflowDiffStore } from '@/stores/workflow-diff/store'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'

interface GetUserWorkflowParams {
workflowId?: string
includeMetadata?: boolean
}

export class GetUserWorkflowTool extends BaseTool {
static readonly id = 'get_user_workflow'

metadata: ToolMetadata = {
id: GetUserWorkflowTool.id,
displayConfig: {
states: {
executing: {
displayName: 'Analyzing your workflow',
icon: 'spinner',
},
accepted: {
displayName: 'Analyzing your workflow',
icon: 'spinner',
},
success: {
displayName: 'Workflow analyzed',
icon: 'workflow',
},
rejected: {
displayName: 'Skipped workflow analysis',
icon: 'skip',
},
errored: {
displayName: 'Failed to analyze workflow',
icon: 'error',
},
aborted: {
displayName: 'Aborted workflow analysis',
icon: 'abort',
},
},
},
schema: {
name: GetUserWorkflowTool.id,
description: 'Get the current workflow state as JSON',
parameters: {
type: 'object',
properties: {
workflowId: {
type: 'string',
description:
'The ID of the workflow to fetch (optional, uses active workflow if not provided)',
},
includeMetadata: {
type: 'boolean',
description: 'Whether to include workflow metadata',
},
},
required: [],
},
},
requiresInterrupt: false, // Client tools handle their own interrupts
stateMessages: {
success: 'Successfully retrieved workflow',
error: 'Failed to retrieve workflow',
rejected: 'User chose to skip workflow retrieval',
},
}

/**
* Execute the tool - fetch the workflow from stores and write to Redis
*/
async execute(
toolCall: CopilotToolCall,
options?: ToolExecutionOptions
): Promise<ToolExecuteResult> {
const logger = createLogger('GetUserWorkflowTool')

logger.info('Starting client tool execution', {
toolCallId: toolCall.id,
toolName: toolCall.name,
})

try {
// Parse parameters
const rawParams = toolCall.parameters || toolCall.input || {}
const params = rawParams as GetUserWorkflowParams

// Get workflow ID - use provided or active workflow
let workflowId = params.workflowId
if (!workflowId) {
const { activeWorkflowId } = useWorkflowRegistry.getState()
if (!activeWorkflowId) {
options?.onStateChange?.('errored')
return {
success: false,
error: 'No active workflow found',
}
}
workflowId = activeWorkflowId
}

logger.info('Fetching user workflow from stores', {
workflowId,
includeMetadata: params.includeMetadata,
})

// Try to get workflow from diff/preview store first, then main store
let workflowState: any = null

// Check diff store first
const diffStore = useWorkflowDiffStore.getState()
if (diffStore.diffWorkflow && Object.keys(diffStore.diffWorkflow.blocks || {}).length > 0) {
workflowState = diffStore.diffWorkflow
logger.info('Using workflow from diff/preview store', { workflowId })
} else {
// Get the actual workflow state from the workflow store
const workflowStore = useWorkflowStore.getState()
const fullWorkflowState = workflowStore.getWorkflowState()

if (!fullWorkflowState || !fullWorkflowState.blocks) {
// Fallback to workflow registry metadata if no workflow state
const workflowRegistry = useWorkflowRegistry.getState()
const workflow = workflowRegistry.workflows[workflowId]

if (!workflow) {
options?.onStateChange?.('errored')
return {
success: false,
error: `Workflow ${workflowId} not found in any store`,
}
}

logger.warn('No workflow state found, using workflow metadata only', { workflowId })
workflowState = workflow
} else {
workflowState = fullWorkflowState
logger.info('Using workflow state from workflow store', {
workflowId,
blockCount: Object.keys(fullWorkflowState.blocks || {}).length,
})
}
}

// Ensure workflow state has all required properties with proper defaults
if (workflowState) {
if (!workflowState.loops) {
workflowState.loops = {}
}
if (!workflowState.parallels) {
workflowState.parallels = {}
}
if (!workflowState.edges) {
workflowState.edges = []
}
if (!workflowState.blocks) {
workflowState.blocks = {}
}
}

logger.info('Validating workflow state', {
workflowId,
hasWorkflowState: !!workflowState,
hasBlocks: !!workflowState?.blocks,
workflowStateType: typeof workflowState,
})

if (!workflowState || !workflowState.blocks) {
logger.error('Workflow state validation failed', {
workflowId,
workflowState: workflowState,
hasBlocks: !!workflowState?.blocks,
})
options?.onStateChange?.('errored')
return {
success: false,
error: 'Workflow state is empty or invalid',
}
}

// Include metadata if requested and available
if (params.includeMetadata && workflowState.metadata) {
// Metadata is already included in the workflow state
}

logger.info('Successfully fetched user workflow from stores', {
workflowId,
blockCount: Object.keys(workflowState.blocks || {}).length,
fromDiffStore:
!!diffStore.diffWorkflow && Object.keys(diffStore.diffWorkflow.blocks || {}).length > 0,
})

logger.info('About to stringify workflow state', {
workflowId,
workflowStateKeys: Object.keys(workflowState),
})

// Convert workflow state to JSON string
let workflowJson: string
try {
workflowJson = JSON.stringify(workflowState, null, 2)
logger.info('Successfully stringified workflow state', {
workflowId,
jsonLength: workflowJson.length,
})
} catch (stringifyError) {
logger.error('Error stringifying workflow state', {
workflowId,
error: stringifyError,
})
options?.onStateChange?.('errored')
return {
success: false,
error: `Failed to convert workflow to JSON: ${stringifyError instanceof Error ? stringifyError.message : 'Unknown error'}`,
}
}
logger.info('About to notify server with workflow data', {
workflowId,
toolCallId: toolCall.id,
dataLength: workflowJson.length,
})

// Notify server of success with structured data containing userWorkflow
const structuredData = JSON.stringify({
userWorkflow: workflowJson,
})

logger.info('Calling notify with structured data', {
toolCallId: toolCall.id,
structuredDataLength: structuredData.length,
})

await this.notify(toolCall.id, 'success', structuredData)

logger.info('Successfully notified server of success', {
toolCallId: toolCall.id,
})

options?.onStateChange?.('success')

return {
success: true,
data: workflowJson, // Return the same data that goes to Redis
}
} catch (error: any) {
logger.error('Error in client tool execution:', {
toolCallId: toolCall.id,
error: error,
stack: error instanceof Error ? error.stack : undefined,
message: error instanceof Error ? error.message : String(error),
})

try {
// Notify server of error
await this.notify(toolCall.id, 'errored', error.message || 'Failed to fetch workflow')
logger.info('Successfully notified server of error', {
toolCallId: toolCall.id,
})
} catch (notifyError) {
logger.error('Failed to notify server of error:', {
toolCallId: toolCall.id,
notifyError: notifyError,
})
}

options?.onStateChange?.('errored')

return {
success: false,
error: error.message || 'Failed to fetch workflow',
}
}
}
}

@@ -8,6 +8,7 @@
* It also provides metadata for server-side tools for display purposes
*/

import { GetUserWorkflowTool } from '@/lib/copilot/tools/client-tools/get-user-workflow'
import { RunWorkflowTool } from '@/lib/copilot/tools/client-tools/run-workflow'
import { SERVER_TOOL_METADATA } from '@/lib/copilot/tools/server-tools/definitions'
import type { Tool, ToolMetadata } from '@/lib/copilot/tools/types'
@@ -112,6 +113,7 @@ export class ToolRegistry {
private registerDefaultTools(): void {
// Register actual client tool implementations
this.register(new RunWorkflowTool())
this.register(new GetUserWorkflowTool())
}
}

@@ -273,11 +273,16 @@ async function applyOperationsToYaml(
return yamlDump(workflowData)
}

import { eq } from 'drizzle-orm'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/db-helpers'
import { db } from '@/db'
import { workflow as workflowTable } from '@/db/schema'
import { BaseCopilotTool } from '../base'

interface EditWorkflowParams {
operations: EditWorkflowOperation[]
workflowId: string
currentUserWorkflow?: string // Optional current workflow JSON - if not provided, will fetch from DB
}

interface EditWorkflowResult {
@@ -297,28 +302,110 @@ class EditWorkflowTool extends BaseCopilotTool<EditWorkflowParams, EditWorkflowR
// Export the tool instance
export const editWorkflowTool = new EditWorkflowTool()

/**
* Get user workflow from database - backend function for edit workflow
*/
async function getUserWorkflow(workflowId: string): Promise<string> {
logger.info('Fetching workflow from database', { workflowId })

// Fetch workflow from database
const [workflowRecord] = await db
.select()
.from(workflowTable)
.where(eq(workflowTable.id, workflowId))
.limit(1)

if (!workflowRecord) {
throw new Error(`Workflow ${workflowId} not found in database`)
}

// Try to load from normalized tables first, fallback to JSON blob
let workflowState: any = null
const subBlockValues: Record<string, Record<string, any>> = {}

const normalizedData = await loadWorkflowFromNormalizedTables(workflowId)
if (normalizedData) {
workflowState = {
blocks: normalizedData.blocks,
edges: normalizedData.edges,
loops: normalizedData.loops,
parallels: normalizedData.parallels,
}

// Extract subblock values from normalized data
Object.entries(normalizedData.blocks).forEach(([blockId, block]) => {
subBlockValues[blockId] = {}
Object.entries((block as any).subBlocks || {}).forEach(([subBlockId, subBlock]) => {
if ((subBlock as any).value !== undefined) {
subBlockValues[blockId][subBlockId] = (subBlock as any).value
}
})
})
} else if (workflowRecord.state) {
// Fallback to JSON blob
const jsonState = workflowRecord.state as any
workflowState = {
blocks: jsonState.blocks || {},
edges: jsonState.edges || [],
loops: jsonState.loops || {},
parallels: jsonState.parallels || {},
}
// For JSON blob, subblock values are embedded in the block state
Object.entries((workflowState.blocks as any) || {}).forEach(([blockId, block]) => {
subBlockValues[blockId] = {}
Object.entries((block as any).subBlocks || {}).forEach(([subBlockId, subBlock]) => {
if ((subBlock as any).value !== undefined) {
subBlockValues[blockId][subBlockId] = (subBlock as any).value
}
})
})
}

if (!workflowState || !workflowState.blocks) {
throw new Error('Workflow state is empty or invalid')
}

logger.info('Successfully fetched workflow from database', {
workflowId,
blockCount: Object.keys(workflowState.blocks).length,
})

// Return the raw JSON workflow state
return JSON.stringify(workflowState, null, 2)
}

// Implementation function
async function editWorkflow(params: EditWorkflowParams): Promise<EditWorkflowResult> {
const { operations, workflowId } = params
const { operations, workflowId, currentUserWorkflow } = params

logger.info('Processing targeted update request', {
workflowId,
operationCount: operations.length,
hasCurrentUserWorkflow: !!currentUserWorkflow,
})

// Get current workflow state as JSON
const { getUserWorkflowTool } = await import('./get-user-workflow')
// Get current workflow state - use provided currentUserWorkflow or fetch from DB
let workflowStateJson: string

const getUserWorkflowResult = await getUserWorkflowTool.execute({
workflowId: workflowId,
includeMetadata: false,
})

if (!getUserWorkflowResult.success || !getUserWorkflowResult.data) {
throw new Error('Failed to get current workflow state')
if (currentUserWorkflow) {
logger.info('Using provided currentUserWorkflow for edits', {
workflowId,
jsonLength: currentUserWorkflow.length,
})
workflowStateJson = currentUserWorkflow
} else {
logger.info('No currentUserWorkflow provided, fetching from database', {
workflowId,
})
workflowStateJson = await getUserWorkflow(workflowId)
}

const workflowStateJson = getUserWorkflowResult.data
// Also get the DB version for diff calculation if we're using a different current workflow
let dbWorkflowStateJson: string = workflowStateJson
if (currentUserWorkflow) {
logger.info('Fetching DB workflow for diff calculation', { workflowId })
dbWorkflowStateJson = await getUserWorkflow(workflowId)
}

logger.info('Retrieved current workflow state', {
jsonLength: workflowStateJson.length,
@@ -328,6 +415,20 @@ async function editWorkflow(params: EditWorkflowParams): Promise<EditWorkflowRes
// Parse the JSON to get the workflow state object
const workflowState = JSON.parse(workflowStateJson)

// Ensure workflow state has all required properties with proper defaults
if (!workflowState.loops) {
workflowState.loops = {}
}
if (!workflowState.parallels) {
workflowState.parallels = {}
}
if (!workflowState.edges) {
workflowState.edges = []
}
if (!workflowState.blocks) {
workflowState.blocks = {}
}

// Extract subblock values from the workflow state (same logic as get-user-workflow.ts)
const subBlockValues: Record<string, Record<string, any>> = {}
Object.entries(workflowState.blocks || {}).forEach(([blockId, block]) => {

@@ -1,90 +1,96 @@
import { eq } from 'drizzle-orm'
import { createLogger } from '@/lib/logs/console/logger'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/db-helpers'
import { db } from '@/db'
import { workflow as workflowTable } from '@/db/schema'
import { BaseCopilotTool } from '../base'

interface GetUserWorkflowParams {
workflowId: string
workflowId?: string
includeMetadata?: boolean
confirmationMessage?: string
fullData?: any
}

class GetUserWorkflowTool extends BaseCopilotTool<GetUserWorkflowParams, string> {
readonly id = 'get_user_workflow'
readonly displayName = 'Analyzing your workflow'
readonly requiresInterrupt = true // This triggers automatic Redis polling

protected async executeImpl(params: GetUserWorkflowParams): Promise<string> {
return getUserWorkflow(params)
const logger = createLogger('GetUserWorkflow')

logger.info('Server tool received params', {
hasFullData: !!params.fullData,
hasConfirmationMessage: !!params.confirmationMessage,
fullDataType: typeof params.fullData,
fullDataKeys: params.fullData ? Object.keys(params.fullData) : null,
confirmationMessageLength: params.confirmationMessage?.length || 0,
})

// Extract the workflow data from fullData or confirmationMessage
let workflowData: string | null = null

if (params.fullData?.userWorkflow) {
// New format: fullData contains structured data with userWorkflow field
workflowData = params.fullData.userWorkflow
logger.info('Using workflow data from fullData.userWorkflow', {
dataLength: workflowData?.length || 0,
})
} else if (params.confirmationMessage) {
// The confirmationMessage might contain the structured JSON data
logger.info('Attempting to parse confirmationMessage as structured data', {
messageLength: params.confirmationMessage.length,
messagePreview: params.confirmationMessage.substring(0, 100),
})

try {
// Try to parse the confirmation message as structured data
const parsedMessage = JSON.parse(params.confirmationMessage)
if (parsedMessage?.userWorkflow) {
workflowData = parsedMessage.userWorkflow
logger.info('Successfully extracted userWorkflow from confirmationMessage', {
dataLength: workflowData?.length || 0,
})
} else {
// Fallback: treat the entire message as workflow data
workflowData = params.confirmationMessage
logger.info('Using confirmationMessage directly as workflow data', {
dataLength: workflowData.length,
})
}
} catch (parseError) {
// If parsing fails, use the message directly
workflowData = params.confirmationMessage
logger.info('Failed to parse confirmationMessage, using directly', {
dataLength: workflowData.length,
parseError: parseError instanceof Error ? parseError.message : 'Unknown error',
})
}
} else {
throw new Error('No workflow data received from client tool')
}

if (!workflowData) {
throw new Error('No workflow data available')
}

try {
// Parse the workflow data to validate it's valid JSON
const workflowState = JSON.parse(workflowData)

if (!workflowState || !workflowState.blocks) {
throw new Error('Invalid workflow state received from client tool')
}

logger.info('Successfully parsed and validated workflow data', {
blockCount: Object.keys(workflowState.blocks).length,
})

// Return the workflow data as properly formatted JSON string
return JSON.stringify(workflowState, null, 2)
} catch (error) {
logger.error('Failed to parse workflow data from client tool', { error })
throw new Error('Invalid workflow data format received from client tool')
}
}
}

// Export the tool instance
export const getUserWorkflowTool = new GetUserWorkflowTool()

// Implementation function
async function getUserWorkflow(params: GetUserWorkflowParams): Promise<string> {
const logger = createLogger('GetUserWorkflow')
const { workflowId, includeMetadata = false } = params

logger.info('Fetching user workflow', { workflowId })

// Fetch workflow from database
const [workflowRecord] = await db
.select()
.from(workflowTable)
.where(eq(workflowTable.id, workflowId))
.limit(1)

if (!workflowRecord) {
throw new Error(`Workflow ${workflowId} not found`)
}

// Try to load from normalized tables first, fallback to JSON blob
let workflowState: any = null
const subBlockValues: Record<string, Record<string, any>> = {}

const normalizedData = await loadWorkflowFromNormalizedTables(workflowId)
if (normalizedData) {
workflowState = {
blocks: normalizedData.blocks,
edges: normalizedData.edges,
loops: normalizedData.loops,
parallels: normalizedData.parallels,
}

// Extract subblock values from normalized data
Object.entries(normalizedData.blocks).forEach(([blockId, block]) => {
subBlockValues[blockId] = {}
Object.entries((block as any).subBlocks || {}).forEach(([subBlockId, subBlock]) => {
if ((subBlock as any).value !== undefined) {
subBlockValues[blockId][subBlockId] = (subBlock as any).value
}
})
})
} else if (workflowRecord.state) {
// Fallback to JSON blob
workflowState = workflowRecord.state as any
// For JSON blob, subblock values are embedded in the block state
Object.entries((workflowState.blocks as any) || {}).forEach(([blockId, block]) => {
subBlockValues[blockId] = {}
Object.entries((block as any).subBlocks || {}).forEach(([subBlockId, subBlock]) => {
if ((subBlock as any).value !== undefined) {
subBlockValues[blockId][subBlockId] = (subBlock as any).value
}
})
})
}

if (!workflowState || !workflowState.blocks) {
throw new Error('Workflow state is empty or invalid')
}

logger.info('Successfully fetched user workflow as JSON', {
workflowId,
blockCount: Object.keys(workflowState.blocks).length,
})

// Return the raw JSON workflow state
return JSON.stringify(workflowState, null, 2)
}
|
||||
|
||||
@@ -96,6 +96,7 @@ export const env = createEnv({
S3_LOGS_BUCKET_NAME: z.string().optional(), // S3 bucket for storing logs
S3_KB_BUCKET_NAME: z.string().optional(), // S3 bucket for knowledge base files
S3_CHAT_BUCKET_NAME: z.string().optional(), // S3 bucket for chat logos
S3_COPILOT_BUCKET_NAME: z.string().optional(), // S3 bucket for copilot files

// Cloud Storage - Azure Blob
AZURE_ACCOUNT_NAME: z.string().optional(), // Azure storage account name
@@ -104,10 +105,23 @@ export const env = createEnv({
AZURE_STORAGE_CONTAINER_NAME: z.string().optional(), // Azure container for general files
AZURE_STORAGE_KB_CONTAINER_NAME: z.string().optional(), // Azure container for knowledge base files
AZURE_STORAGE_CHAT_CONTAINER_NAME: z.string().optional(), // Azure container for chat logos
AZURE_STORAGE_COPILOT_CONTAINER_NAME: z.string().optional(), // Azure container for copilot files

// Data Retention
FREE_PLAN_LOG_RETENTION_DAYS: z.string().optional(), // Log retention days for free plan users

// Rate Limiting Configuration
RATE_LIMIT_WINDOW_MS: z.string().optional().default('60000'), // Rate limit window duration in milliseconds (default: 1 minute)
MANUAL_EXECUTION_LIMIT: z.string().optional().default('999999'), // Manual execution bypass value (effectively unlimited)
RATE_LIMIT_FREE_SYNC: z.string().optional().default('10'), // Free tier sync API executions per minute
RATE_LIMIT_FREE_ASYNC: z.string().optional().default('50'), // Free tier async API executions per minute
RATE_LIMIT_PRO_SYNC: z.string().optional().default('25'), // Pro tier sync API executions per minute
RATE_LIMIT_PRO_ASYNC: z.string().optional().default('200'), // Pro tier async API executions per minute
RATE_LIMIT_TEAM_SYNC: z.string().optional().default('75'), // Team tier sync API executions per minute
RATE_LIMIT_TEAM_ASYNC: z.string().optional().default('500'), // Team tier async API executions per minute
RATE_LIMIT_ENTERPRISE_SYNC: z.string().optional().default('150'), // Enterprise tier sync API executions per minute
RATE_LIMIT_ENTERPRISE_ASYNC: z.string().optional().default('1000'), // Enterprise tier async API executions per minute

// Real-time Communication
SOCKET_SERVER_URL: z.string().url().optional(), // WebSocket server URL for real-time features
SOCKET_PORT: z.number().optional(), // Port for WebSocket server
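The rate-limit values above are declared as strings with string defaults, so any consumer has to coerce them to numbers. How Sim consumes them is not shown in this diff; the sketch below is an assumption that simply parses and groups the values by tier.

```ts
// Hypothetical sketch: parse the string-typed env values into numeric
// per-tier limits (requests per RATE_LIMIT_WINDOW_MS window).
const toInt = (v: string | undefined, fallback: number): number =>
  Number.parseInt(v ?? '', 10) || fallback

const rateLimits = {
  windowMs: toInt(process.env.RATE_LIMIT_WINDOW_MS, 60_000),
  free: { sync: toInt(process.env.RATE_LIMIT_FREE_SYNC, 10), async: toInt(process.env.RATE_LIMIT_FREE_ASYNC, 50) },
  pro: { sync: toInt(process.env.RATE_LIMIT_PRO_SYNC, 25), async: toInt(process.env.RATE_LIMIT_PRO_ASYNC, 200) },
  team: { sync: toInt(process.env.RATE_LIMIT_TEAM_SYNC, 75), async: toInt(process.env.RATE_LIMIT_TEAM_ASYNC, 500) },
  enterprise: { sync: toInt(process.env.RATE_LIMIT_ENTERPRISE_SYNC, 150), async: toInt(process.env.RATE_LIMIT_ENTERPRISE_ASYNC, 1000) },
} as const
```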
@@ -168,6 +182,22 @@ export const env = createEnv({
NEXT_PUBLIC_RB2B_KEY: z.string().optional(), // RB2B tracking key for B2B analytics
NEXT_PUBLIC_GOOGLE_API_KEY: z.string().optional(), // Google API key for client-side API calls
NEXT_PUBLIC_GOOGLE_PROJECT_NUMBER: z.string().optional(), // Google project number for Drive picker

// UI Branding & Whitelabeling
NEXT_PUBLIC_BRAND_NAME: z.string().optional(), // Custom brand name (defaults to "Sim")
NEXT_PUBLIC_BRAND_LOGO_URL: z.string().url().optional(), // Custom logo URL
NEXT_PUBLIC_BRAND_FAVICON_URL: z.string().url().optional(), // Custom favicon URL
NEXT_PUBLIC_BRAND_PRIMARY_COLOR: z.string().optional(), // Primary brand color (hex)
NEXT_PUBLIC_BRAND_SECONDARY_COLOR: z.string().optional(), // Secondary brand color (hex)
NEXT_PUBLIC_BRAND_ACCENT_COLOR: z.string().optional(), // Accent brand color (hex)
NEXT_PUBLIC_CUSTOM_CSS_URL: z.string().url().optional(), // Custom CSS stylesheet URL
NEXT_PUBLIC_HIDE_BRANDING: z.string().optional(), // Hide "Powered by" branding
NEXT_PUBLIC_CUSTOM_FOOTER_TEXT: z.string().optional(), // Custom footer text
NEXT_PUBLIC_SUPPORT_EMAIL: z.string().email().optional(), // Custom support email
NEXT_PUBLIC_SUPPORT_URL: z.string().url().optional(), // Custom support URL
NEXT_PUBLIC_DOCUMENTATION_URL: z.string().url().optional(), // Custom documentation URL
NEXT_PUBLIC_TERMS_URL: z.string().url().optional(), // Custom terms of service URL
NEXT_PUBLIC_PRIVACY_URL: z.string().url().optional(), // Custom privacy policy URL
},

// Variables available on both server and client
@@ -186,6 +216,20 @@ export const env = createEnv({
NEXT_PUBLIC_GOOGLE_API_KEY: process.env.NEXT_PUBLIC_GOOGLE_API_KEY,
NEXT_PUBLIC_GOOGLE_PROJECT_NUMBER: process.env.NEXT_PUBLIC_GOOGLE_PROJECT_NUMBER,
NEXT_PUBLIC_SOCKET_URL: process.env.NEXT_PUBLIC_SOCKET_URL,
NEXT_PUBLIC_BRAND_NAME: process.env.NEXT_PUBLIC_BRAND_NAME,
NEXT_PUBLIC_BRAND_LOGO_URL: process.env.NEXT_PUBLIC_BRAND_LOGO_URL,
NEXT_PUBLIC_BRAND_FAVICON_URL: process.env.NEXT_PUBLIC_BRAND_FAVICON_URL,
NEXT_PUBLIC_BRAND_PRIMARY_COLOR: process.env.NEXT_PUBLIC_BRAND_PRIMARY_COLOR,
NEXT_PUBLIC_BRAND_SECONDARY_COLOR: process.env.NEXT_PUBLIC_BRAND_SECONDARY_COLOR,
NEXT_PUBLIC_BRAND_ACCENT_COLOR: process.env.NEXT_PUBLIC_BRAND_ACCENT_COLOR,
NEXT_PUBLIC_CUSTOM_CSS_URL: process.env.NEXT_PUBLIC_CUSTOM_CSS_URL,
NEXT_PUBLIC_HIDE_BRANDING: process.env.NEXT_PUBLIC_HIDE_BRANDING,
NEXT_PUBLIC_CUSTOM_FOOTER_TEXT: process.env.NEXT_PUBLIC_CUSTOM_FOOTER_TEXT,
NEXT_PUBLIC_SUPPORT_EMAIL: process.env.NEXT_PUBLIC_SUPPORT_EMAIL,
NEXT_PUBLIC_SUPPORT_URL: process.env.NEXT_PUBLIC_SUPPORT_URL,
NEXT_PUBLIC_DOCUMENTATION_URL: process.env.NEXT_PUBLIC_DOCUMENTATION_URL,
NEXT_PUBLIC_TERMS_URL: process.env.NEXT_PUBLIC_TERMS_URL,
NEXT_PUBLIC_PRIVACY_URL: process.env.NEXT_PUBLIC_PRIVACY_URL,
NODE_ENV: process.env.NODE_ENV,
NEXT_TELEMETRY_DISABLED: process.env.NEXT_TELEMETRY_DISABLED,
},

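The whitelabeling variables are all optional, with only `NEXT_PUBLIC_BRAND_NAME` documenting a default. A minimal sketch of how a client component might resolve them; the consuming component is not part of this diff, and the string-`'true'` flag convention is an assumption:

```ts
// Hypothetical sketch: resolve branding settings with safe fallbacks.
const branding = {
  name: process.env.NEXT_PUBLIC_BRAND_NAME ?? 'Sim', // documented default
  logoUrl: process.env.NEXT_PUBLIC_BRAND_LOGO_URL ?? null,
  faviconUrl: process.env.NEXT_PUBLIC_BRAND_FAVICON_URL ?? null,
  primaryColor: process.env.NEXT_PUBLIC_BRAND_PRIMARY_COLOR ?? null, // hex string when set
  hideBranding: process.env.NEXT_PUBLIC_HIDE_BRANDING === 'true', // string flag, assumption
  supportEmail: process.env.NEXT_PUBLIC_SUPPORT_EMAIL ?? null,
}
```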
@@ -14,6 +14,9 @@ import {
LinearIcon,
MicrosoftExcelIcon,
MicrosoftIcon,
MicrosoftOneDriveIcon,
MicrosoftPlannerIcon,
MicrosoftSharepointIcon,
MicrosoftTeamsIcon,
NotionIcon,
OutlookIcon,
@@ -62,12 +65,14 @@ export type OAuthService =
| 'discord'
| 'microsoft-excel'
| 'microsoft-teams'
| 'microsoft-planner'
| 'sharepoint'
| 'outlook'
| 'linear'
| 'slack'
| 'reddit'
| 'wealthbox'

| 'onedrive'
export interface OAuthProviderConfig {
id: OAuthProvider
name: string
@@ -159,6 +164,23 @@ export const OAUTH_PROVIDERS: Record<string, OAuthProviderConfig> = {
baseProviderIcon: (props) => MicrosoftIcon(props),
scopes: ['openid', 'profile', 'email', 'Files.Read', 'Files.ReadWrite', 'offline_access'],
},
'microsoft-planner': {
id: 'microsoft-planner',
name: 'Microsoft Planner',
description: 'Connect to Microsoft Planner and manage tasks.',
providerId: 'microsoft-planner',
icon: (props) => MicrosoftPlannerIcon(props),
baseProviderIcon: (props) => MicrosoftIcon(props),
scopes: [
'openid',
'profile',
'email',
'Group.ReadWrite.All',
'Group.Read.All',
'Tasks.ReadWrite',
'offline_access',
],
},
'microsoft-teams': {
id: 'microsoft-teams',
name: 'Microsoft Teams',
@@ -201,6 +223,31 @@ export const OAUTH_PROVIDERS: Record<string, OAuthProviderConfig> = {
'offline_access',
],
},
onedrive: {
id: 'onedrive',
name: 'OneDrive',
description: 'Connect to OneDrive and manage files.',
providerId: 'onedrive',
icon: (props) => MicrosoftOneDriveIcon(props),
baseProviderIcon: (props) => MicrosoftIcon(props),
scopes: ['openid', 'profile', 'email', 'Files.Read', 'Files.ReadWrite', 'offline_access'],
},
sharepoint: {
id: 'sharepoint',
name: 'SharePoint',
description: 'Connect to SharePoint and manage sites.',
providerId: 'sharepoint',
icon: (props) => MicrosoftSharepointIcon(props),
baseProviderIcon: (props) => MicrosoftIcon(props),
scopes: [
'openid',
'profile',
'email',
'Sites.Read.All',
'Sites.ReadWrite.All',
'offline_access',
],
},
},
defaultService: 'microsoft',
},
@@ -472,6 +519,12 @@ export function getServiceIdFromScopes(provider: OAuthProvider, scopes: string[]
return 'microsoft-teams'
} else if (provider === 'outlook') {
return 'outlook'
} else if (provider === 'sharepoint') {
return 'sharepoint'
} else if (provider === 'microsoft-planner') {
return 'microsoft-planner'
} else if (provider === 'onedrive') {
return 'onedrive'
} else if (provider === 'github') {
return 'github'
} else if (provider === 'supabase') {
@@ -543,6 +596,18 @@ export function parseProvider(provider: OAuthProvider): ProviderConfig {
featureType: 'outlook',
}
}
if (provider === 'onedrive') {
return {
baseProvider: 'microsoft',
featureType: 'onedrive',
}
}
if (provider === 'sharepoint') {
return {
baseProvider: 'microsoft',
featureType: 'sharepoint',
}
}

// Handle compound providers (e.g., 'google-email' -> { baseProvider: 'google', featureType: 'email' })
const [base, feature] = provider.split('-')
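Read together, `getServiceIdFromScopes` and `parseProvider` special-case the Microsoft family before falling back to the generic `'-'` split. Expected mappings based on the branches above (illustrative only; the import path is an assumption):

```ts
import { parseProvider } from '@/lib/oauth' // import path is an assumption

// All the Microsoft feature providers share the same base provider:
parseProvider('onedrive')   // -> { baseProvider: 'microsoft', featureType: 'onedrive' }
parseProvider('sharepoint') // -> { baseProvider: 'microsoft', featureType: 'sharepoint' }

// Compound providers still go through the generic split:
parseProvider('google-email') // -> { baseProvider: 'google', featureType: 'email' }
```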
@@ -712,6 +777,30 @@ function getProviderAuthConfig(provider: string): ProviderAuthConfig {
useBasicAuth: false,
}
}
case 'onedrive': {
const { clientId, clientSecret } = getCredentials(
env.MICROSOFT_CLIENT_ID,
env.MICROSOFT_CLIENT_SECRET
)
return {
tokenEndpoint: 'https://login.microsoftonline.com/common/oauth2/v2.0/token',
clientId,
clientSecret,
useBasicAuth: false,
}
}
case 'sharepoint': {
const { clientId, clientSecret } = getCredentials(
env.MICROSOFT_CLIENT_ID,
env.MICROSOFT_CLIENT_SECRET
)
return {
tokenEndpoint: 'https://login.microsoftonline.com/common/oauth2/v2.0/token',
clientId,
clientSecret,
useBasicAuth: false,
}
}
case 'linear': {
const { clientId, clientSecret } = getCredentials(
env.LINEAR_CLIENT_ID,

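Both new cases reuse the shared Microsoft app credentials and the common-tenant token endpoint. As a sketch of how a config like this is typically exercised, here is a standard OAuth 2.0 refresh-token exchange against that endpoint; the `refreshAccessToken` helper is hypothetical and not part of this diff.

```ts
// Hypothetical sketch: exchange a refresh token using a ProviderAuthConfig-like object.
async function refreshAccessToken(
  config: { tokenEndpoint: string; clientId: string; clientSecret: string },
  refreshToken: string
): Promise<{ access_token: string; expires_in: number; refresh_token?: string }> {
  const res = await fetch(config.tokenEndpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'refresh_token',
      refresh_token: refreshToken,
      client_id: config.clientId,
      client_secret: config.clientSecret,
    }),
  })
  if (!res.ok) throw new Error(`Token refresh failed: ${res.status}`)
  return res.json()
}
```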
@@ -229,6 +229,30 @@ export async function downloadFromS3(key: string) {
})
}

/**
* Download a file from S3 with custom bucket configuration
* @param key S3 object key
* @param customConfig Custom S3 configuration
* @returns File buffer
*/
export async function downloadFromS3WithConfig(key: string, customConfig: CustomS3Config) {
const command = new GetObjectCommand({
Bucket: customConfig.bucket,
Key: key,
})

const response = await getS3Client().send(command)
const stream = response.Body as any

// Convert stream to buffer
return new Promise<Buffer>((resolve, reject) => {
const chunks: Buffer[] = []
stream.on('data', (chunk: Buffer) => chunks.push(chunk))
stream.on('end', () => resolve(Buffer.concat(chunks)))
stream.on('error', reject)
})
}

/**
* Delete a file from S3
* @param key S3 object key

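`downloadFromS3WithConfig` buffers the whole object in memory, which fits the small knowledge-base and copilot files it targets. A hypothetical call against the copilot bucket configuration added elsewhere in this changeset; the import path, key, and config shape are assumptions:

```ts
import { downloadFromS3WithConfig } from '@/lib/uploads/s3/s3-client' // path is an assumption

async function fetchCopilotFile(key: string): Promise<Buffer> {
  return downloadFromS3WithConfig(key, {
    bucket: process.env.S3_COPILOT_BUCKET_NAME ?? '',
    region: process.env.AWS_REGION ?? 'us-east-1',
  })
}
```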
@@ -69,9 +69,15 @@ if (typeof process !== 'undefined') {
if (USE_BLOB_STORAGE && env.AZURE_STORAGE_KB_CONTAINER_NAME) {
logger.info(`Azure Blob knowledge base container: ${env.AZURE_STORAGE_KB_CONTAINER_NAME}`)
}
if (USE_BLOB_STORAGE && env.AZURE_STORAGE_COPILOT_CONTAINER_NAME) {
logger.info(`Azure Blob copilot container: ${env.AZURE_STORAGE_COPILOT_CONTAINER_NAME}`)
}
if (USE_S3_STORAGE && env.S3_KB_BUCKET_NAME) {
logger.info(`S3 knowledge base bucket: ${env.S3_KB_BUCKET_NAME}`)
}
if (USE_S3_STORAGE && env.S3_COPILOT_BUCKET_NAME) {
logger.info(`S3 copilot bucket: ${env.S3_COPILOT_BUCKET_NAME}`)
}
}

export default ensureUploadsDirectory

@@ -60,6 +60,18 @@ export const BLOB_CHAT_CONFIG = {
containerName: env.AZURE_STORAGE_CHAT_CONTAINER_NAME || '',
}

export const S3_COPILOT_CONFIG = {
bucket: env.S3_COPILOT_BUCKET_NAME || '',
region: env.AWS_REGION || '',
}

export const BLOB_COPILOT_CONFIG = {
accountName: env.AZURE_ACCOUNT_NAME || '',
accountKey: env.AZURE_ACCOUNT_KEY || '',
connectionString: env.AZURE_CONNECTION_STRING || '',
containerName: env.AZURE_STORAGE_COPILOT_CONTAINER_NAME || '',
}

export async function ensureUploadsDirectory() {
if (USE_S3_STORAGE) {
logger.info('Using S3 storage, skipping local uploads directory creation')

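With both S3 and Azure Blob copilot targets defined, callers presumably branch on the same storage flags that drive the logging above. A self-contained sketch of that selection; the `pickCopilotTarget` helper and its return shape are assumptions:

```ts
type CopilotTarget =
  | { kind: 's3'; bucket: string; region: string }
  | { kind: 'blob'; containerName: string }
  | { kind: 'local' }

// Hypothetical sketch: mirror the USE_S3_STORAGE / USE_BLOB_STORAGE branching.
function pickCopilotTarget(useS3: boolean, useBlob: boolean, env: NodeJS.ProcessEnv): CopilotTarget {
  if (useS3) return { kind: 's3', bucket: env.S3_COPILOT_BUCKET_NAME ?? '', region: env.AWS_REGION ?? '' }
  if (useBlob) return { kind: 'blob', containerName: env.AZURE_STORAGE_COPILOT_CONTAINER_NAME ?? '' }
  return { kind: 'local' } // fall back to the local uploads directory
}
```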
@@ -1,8 +1,8 @@
import { eq } from 'drizzle-orm'
import { createLogger } from '@/lib/logs/console/logger'
import { db } from '@/db'
import { workflowBlocks, workflowEdges, workflowSubflows } from '@/db/schema'
import type { LoopConfig, WorkflowState } from '@/stores/workflows/workflow/types'
import { workflow, workflowBlocks, workflowEdges, workflowSubflows } from '@/db/schema'
import type { WorkflowState } from '@/stores/workflows/workflow/types'
import { SUBFLOW_TYPES } from '@/stores/workflows/workflow/types'

const logger = createLogger('WorkflowDBHelpers')
@@ -12,7 +12,49 @@ export interface NormalizedWorkflowData {
edges: any[]
loops: Record<string, any>
parallels: Record<string, any>
isFromNormalizedTables: true // Flag to indicate this came from new tables
isFromNormalizedTables: boolean // Flag to indicate source (true = normalized tables, false = deployed state)
}

/**
* Load deployed workflow state for execution
* Returns deployed state if available, otherwise throws error
*/
export async function loadDeployedWorkflowState(
workflowId: string
): Promise<NormalizedWorkflowData> {
try {
// First check if workflow is deployed and get deployed state
const [workflowResult] = await db
.select({
isDeployed: workflow.isDeployed,
deployedState: workflow.deployedState,
})
.from(workflow)
.where(eq(workflow.id, workflowId))
.limit(1)

if (!workflowResult) {
throw new Error(`Workflow ${workflowId} not found`)
}

if (!workflowResult.isDeployed || !workflowResult.deployedState) {
throw new Error(`Workflow ${workflowId} is not deployed or has no deployed state`)
}

const deployedState = workflowResult.deployedState as any

// Convert deployed state to normalized format
return {
blocks: deployedState.blocks || {},
edges: deployedState.edges || [],
loops: deployedState.loops || {},
parallels: deployedState.parallels || {},
isFromNormalizedTables: false, // Flag to indicate this came from deployed state
}
} catch (error) {
logger.error(`Error loading deployed workflow state ${workflowId}:`, error)
throw error
}
}

/**
@@ -88,7 +130,6 @@ export async function loadWorkflowFromNormalizedTables(
const config = subflow.config || {}

if (subflow.type === SUBFLOW_TYPES.LOOP) {
const loopConfig = config as LoopConfig
loops[subflow.id] = {
id: subflow.id,
...config,
@@ -126,7 +167,7 @@ export async function saveWorkflowToNormalizedTables(
): Promise<{ success: boolean; jsonBlob?: any; error?: string }> {
try {
// Start a transaction
const result = await db.transaction(async (tx) => {
await db.transaction(async (tx) => {
// Clear existing data for this workflow
await Promise.all([
tx.delete(workflowBlocks).where(eq(workflowBlocks.workflowId, workflowId)),

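`loadDeployedWorkflowState` lets executors pin a run to the deployed snapshot, with `isFromNormalizedTables: false` marking the source. A hypothetical caller that prefers the snapshot and falls back to the live tables; the fallback policy and import path are assumptions, not something this diff establishes:

```ts
import {
  loadDeployedWorkflowState,
  loadWorkflowFromNormalizedTables,
} from '@/lib/workflows/db-helpers' // path is an assumption

async function getExecutableWorkflow(workflowId: string) {
  try {
    // Deployed snapshot: isFromNormalizedTables === false
    return await loadDeployedWorkflowState(workflowId)
  } catch {
    // Not deployed (or no snapshot): use the live normalized tables instead
    return loadWorkflowFromNormalizedTables(workflowId)
  }
}
```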
@@ -77,6 +77,54 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
toolUsageControl: true,
},
},
{
id: 'gpt-5',
pricing: {
input: 1.25,
cachedInput: 0.125,
output: 10.0,
updatedAt: '2025-08-07',
},
capabilities: {
toolUsageControl: true,
},
},
{
id: 'gpt-5-mini',
pricing: {
input: 0.25,
cachedInput: 0.025,
output: 2.0,
updatedAt: '2025-08-07',
},
capabilities: {
toolUsageControl: true,
},
},
{
id: 'gpt-5-nano',
pricing: {
input: 0.05,
cachedInput: 0.005,
output: 0.4,
updatedAt: '2025-08-07',
},
capabilities: {
toolUsageControl: true,
},
},
{
id: 'gpt-5-chat-latest',
pricing: {
input: 1.25,
cachedInput: 0.125,
output: 10.0,
updatedAt: '2025-08-07',
},
capabilities: {
toolUsageControl: true,
},
},
{
id: 'o1',
pricing: {
@@ -175,6 +223,54 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
toolUsageControl: true,
},
},
{
id: 'azure/gpt-5',
pricing: {
input: 1.25,
cachedInput: 0.125,
output: 10.0,
updatedAt: '2025-08-07',
},
capabilities: {
toolUsageControl: true,
},
},
{
id: 'azure/gpt-5-mini',
pricing: {
input: 0.25,
cachedInput: 0.025,
output: 2.0,
updatedAt: '2025-08-07',
},
capabilities: {
toolUsageControl: true,
},
},
{
id: 'azure/gpt-5-nano',
pricing: {
input: 0.05,
cachedInput: 0.005,
output: 0.4,
updatedAt: '2025-08-07',
},
capabilities: {
toolUsageControl: true,
},
},
{
id: 'azure/gpt-5-chat-latest',
pricing: {
input: 1.25,
cachedInput: 0.125,
output: 10.0,
updatedAt: '2025-08-07',
},
capabilities: {
toolUsageControl: true,
},
},
{
id: 'azure/o3',
pricing: {

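The pricing entries appear to be USD per one million tokens with a discounted `cachedInput` rate; the unit is not stated in this diff, so treat that as an assumption. Under that assumption, a cost estimate looks like:

```ts
interface ModelPricing { input: number; cachedInput?: number; output: number }

// Sketch: cost in USD, assuming prices are per 1M tokens.
function estimateCostUSD(
  p: ModelPricing,
  promptTokens: number,
  completionTokens: number,
  cachedTokens = 0
): number {
  const fresh = (promptTokens - cachedTokens) * p.input
  const cached = cachedTokens * (p.cachedInput ?? p.input)
  return (fresh + cached + completionTokens * p.output) / 1_000_000
}

// gpt-5-mini: 10k prompt tokens (2k of them cached) + 1k completion ≈ $0.00405
estimateCostUSD({ input: 0.25, cachedInput: 0.025, output: 2.0 }, 10_000, 1_000, 2_000)
```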
@@ -1,6 +1,7 @@
import OpenAI from 'openai'
import { env } from '@/lib/env'
import { createLogger } from '@/lib/logs/console/logger'
import type { StreamingExecution } from '@/executor/types'
import type { ModelsObject } from '@/providers/ollama/types'
import type {
ProviderConfig,
@@ -8,12 +9,57 @@ import type {
ProviderResponse,
TimeSegment,
} from '@/providers/types'
import {
prepareToolExecution,
prepareToolsWithUsageControl,
trackForcedToolUsage,
} from '@/providers/utils'
import { useOllamaStore } from '@/stores/ollama/store'
import { executeTool } from '@/tools'

const logger = createLogger('OllamaProvider')
const OLLAMA_HOST = env.OLLAMA_URL || 'http://localhost:11434'

/**
* Helper function to convert an Ollama stream to a standard ReadableStream
* and collect completion metrics
*/
function createReadableStreamFromOllamaStream(
ollamaStream: any,
onComplete?: (content: string, usage?: any) => void
): ReadableStream {
let fullContent = ''
let usageData: any = null

return new ReadableStream({
async start(controller) {
try {
for await (const chunk of ollamaStream) {
// Check for usage data in the final chunk
if (chunk.usage) {
usageData = chunk.usage
}

const content = chunk.choices[0]?.delta?.content || ''
if (content) {
fullContent += content
controller.enqueue(new TextEncoder().encode(content))
}
}

// Once stream is complete, call the completion callback with the final content and usage
if (onComplete) {
onComplete(fullContent, usageData)
}

controller.close()
} catch (error) {
controller.error(error)
}
},
})
}

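The helper above bridges the OpenAI-SDK async iterator to a web `ReadableStream` of UTF-8 bytes, deferring final totals to the `onComplete` callback. A minimal consumer sketch; the `printStream` name is hypothetical:

```ts
// Hypothetical sketch: drain the ReadableStream produced by the helper,
// decoding each enqueued Uint8Array chunk back to text.
async function printStream(stream: ReadableStream<Uint8Array>): Promise<void> {
  const reader = stream.getReader()
  const decoder = new TextDecoder()
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    process.stdout.write(decoder.decode(value, { stream: true }))
  }
}
```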
export const ollamaProvider: ProviderConfig = {
id: 'ollama',
name: 'Ollama',
@@ -46,91 +92,238 @@ export const ollamaProvider: ProviderConfig = {
}
},

executeRequest: async (request: ProviderRequest): Promise<ProviderResponse> => {
executeRequest: async (
request: ProviderRequest
): Promise<ProviderResponse | StreamingExecution> => {
logger.info('Preparing Ollama request', {
model: request.model,
hasSystemPrompt: !!request.systemPrompt,
hasMessages: !!request.context,
hasMessages: !!request.messages?.length,
hasTools: !!request.tools?.length,
toolCount: request.tools?.length || 0,
hasResponseFormat: !!request.responseFormat,
stream: !!request.stream,
})

const startTime = Date.now()
// Create Ollama client using OpenAI-compatible API
const ollama = new OpenAI({
apiKey: 'empty',
baseURL: `${OLLAMA_HOST}/v1`,
})

// Start with an empty array for all messages
const allMessages = []

// Add system prompt if present
if (request.systemPrompt) {
allMessages.push({
role: 'system',
content: request.systemPrompt,
})
}

// Add context if present
if (request.context) {
allMessages.push({
role: 'user',
content: request.context,
})
}

// Add remaining messages
if (request.messages) {
allMessages.push(...request.messages)
}

// Transform tools to OpenAI format if provided
const tools = request.tools?.length
? request.tools.map((tool) => ({
type: 'function',
function: {
name: tool.id,
description: tool.description,
parameters: tool.parameters,
},
}))
: undefined

// Build the request payload
const payload: any = {
model: request.model,
messages: allMessages,
}

// Add optional parameters
if (request.temperature !== undefined) payload.temperature = request.temperature
if (request.maxTokens !== undefined) payload.max_tokens = request.maxTokens

// Add response format for structured output if specified
if (request.responseFormat) {
// Use OpenAI's JSON schema format (Ollama supports this)
payload.response_format = {
type: 'json_schema',
json_schema: {
name: request.responseFormat.name || 'response_schema',
schema: request.responseFormat.schema || request.responseFormat,
strict: request.responseFormat.strict !== false,
},
}

logger.info('Added JSON schema response format to Ollama request')
}

// Handle tools and tool usage control
let preparedTools: ReturnType<typeof prepareToolsWithUsageControl> | null = null

if (tools?.length) {
preparedTools = prepareToolsWithUsageControl(tools, request.tools, logger, 'ollama')
const { tools: filteredTools, toolChoice } = preparedTools

if (filteredTools?.length && toolChoice) {
payload.tools = filteredTools
// Ollama supports 'auto' but not forced tool selection - convert 'force' to 'auto'
payload.tool_choice = typeof toolChoice === 'string' ? toolChoice : 'auto'

logger.info('Ollama request configuration:', {
toolCount: filteredTools.length,
toolChoice: payload.tool_choice,
model: request.model,
})
}
}

// Start execution timer for the entire provider execution
const providerStartTime = Date.now()
const providerStartTimeISO = new Date(providerStartTime).toISOString()

try {
// Prepare messages array
const ollama = new OpenAI({
apiKey: 'empty',
baseURL: `${OLLAMA_HOST}/v1`,
})
// Check if we can stream directly (no tools required)
if (request.stream && (!tools || tools.length === 0)) {
logger.info('Using streaming response for Ollama request')

// Start with an empty array for all messages
const allMessages = []

// Add system prompt if present
if (request.systemPrompt) {
allMessages.push({ role: 'system', content: request.systemPrompt })
}

// Add context if present
if (request.context) {
allMessages.push({ role: 'user', content: request.context })
}

// Add remaining messages
if (request.messages) {
allMessages.push(...request.messages)
}

// Build the basic payload
const payload: any = {
model: request.model,
messages: allMessages,
stream: false,
}

// Add optional parameters
if (request.temperature !== undefined) payload.temperature = request.temperature
if (request.maxTokens !== undefined) payload.max_tokens = request.maxTokens

// Transform tools to OpenAI format if provided
const tools = request.tools?.length
? request.tools.map((tool) => ({
type: 'function',
function: {
name: tool.id,
description: tool.description,
parameters: tool.parameters,
},
}))
: undefined

// Handle tools and tool usage control
if (tools?.length) {
// Filter out any tools with usageControl='none', but ignore 'force' since Ollama doesn't support it
const filteredTools = tools.filter((tool) => {
const toolId = tool.function?.name
const toolConfig = request.tools?.find((t) => t.id === toolId)
// Only filter out 'none', treat 'force' as 'auto'
return toolConfig?.usageControl !== 'none'
// Create a streaming request with token usage tracking
const streamResponse = await ollama.chat.completions.create({
...payload,
stream: true,
stream_options: { include_usage: true },
})

if (filteredTools?.length) {
payload.tools = filteredTools
// Always use 'auto' for Ollama, regardless of the tool_choice setting
payload.tool_choice = 'auto'
// Start collecting token usage from the stream
const tokenUsage = {
prompt: 0,
completion: 0,
total: 0,
}

logger.info('Ollama request configuration:', {
toolCount: filteredTools.length,
toolChoice: 'auto', // Ollama always uses auto
model: request.model,
})
// Create a StreamingExecution response with a callback to update content and tokens
const streamingResult = {
stream: createReadableStreamFromOllamaStream(streamResponse, (content, usage) => {
// Update the execution data with the final content and token usage
streamingResult.execution.output.content = content

// Clean up the response content
if (content) {
streamingResult.execution.output.content = content
.replace(/```json\n?|\n?```/g, '')
.trim()
}

// Update the timing information with the actual completion time
const streamEndTime = Date.now()
const streamEndTimeISO = new Date(streamEndTime).toISOString()

if (streamingResult.execution.output.providerTiming) {
streamingResult.execution.output.providerTiming.endTime = streamEndTimeISO
streamingResult.execution.output.providerTiming.duration =
streamEndTime - providerStartTime

// Update the time segment as well
if (streamingResult.execution.output.providerTiming.timeSegments?.[0]) {
streamingResult.execution.output.providerTiming.timeSegments[0].endTime =
streamEndTime
streamingResult.execution.output.providerTiming.timeSegments[0].duration =
streamEndTime - providerStartTime
}
}

// Update token usage if available from the stream
if (usage) {
const newTokens = {
prompt: usage.prompt_tokens || tokenUsage.prompt,
completion: usage.completion_tokens || tokenUsage.completion,
total: usage.total_tokens || tokenUsage.total,
}

streamingResult.execution.output.tokens = newTokens
}
}),
execution: {
success: true,
output: {
content: '', // Will be filled by the stream completion callback
model: request.model,
tokens: tokenUsage,
toolCalls: undefined,
providerTiming: {
startTime: providerStartTimeISO,
endTime: new Date().toISOString(),
duration: Date.now() - providerStartTime,
timeSegments: [
{
type: 'model',
name: 'Streaming response',
startTime: providerStartTime,
endTime: Date.now(),
duration: Date.now() - providerStartTime,
},
],
},
},
logs: [], // No block logs for direct streaming
metadata: {
startTime: providerStartTimeISO,
endTime: new Date().toISOString(),
duration: Date.now() - providerStartTime,
},
},
} as StreamingExecution

// Return the streaming execution object
return streamingResult as StreamingExecution
}

// Make the initial API request
const initialCallTime = Date.now()

// Track the original tool_choice for forced tool tracking
const originalToolChoice = payload.tool_choice

// Track forced tools and their usage
const forcedTools = preparedTools?.forcedTools || []
let usedForcedTools: string[] = []

// Helper function to check for forced tool usage in responses
const checkForForcedToolUsage = (
response: any,
toolChoice: string | { type: string; function?: { name: string }; name?: string; any?: any }
) => {
if (typeof toolChoice === 'object' && response.choices[0]?.message?.tool_calls) {
const toolCallsResponse = response.choices[0].message.tool_calls
const result = trackForcedToolUsage(
toolCallsResponse,
toolChoice,
logger,
'ollama',
forcedTools,
usedForcedTools
)
hasUsedForcedTool = result.hasUsedForcedTool
usedForcedTools = result.usedForcedTools
}
}

let currentResponse = await ollama.chat.completions.create(payload)
const firstResponseTime = Date.now() - startTime
const firstResponseTime = Date.now() - initialCallTime

let content = currentResponse.choices[0]?.message?.content || ''

@@ -140,6 +333,7 @@ export const ollamaProvider: ProviderConfig = {
content = content.trim()
}

// Collect token information
const tokens = {
prompt: currentResponse.usage?.prompt_tokens || 0,
completion: currentResponse.usage?.completion_tokens || 0,
@@ -155,201 +349,307 @@ export const ollamaProvider: ProviderConfig = {
let modelTime = firstResponseTime
let toolsTime = 0

// Track if a forced tool has been used
let hasUsedForcedTool = false

// Track each model and tool call segment with timestamps
const timeSegments: TimeSegment[] = [
{
type: 'model',
name: 'Initial response',
startTime: startTime,
endTime: startTime + firstResponseTime,
startTime: initialCallTime,
endTime: initialCallTime + firstResponseTime,
duration: firstResponseTime,
},
]

try {
while (iterationCount < MAX_ITERATIONS) {
// Check for tool calls
const toolCallsInResponse = currentResponse.choices[0]?.message?.tool_calls
if (!toolCallsInResponse || toolCallsInResponse.length === 0) {
break
}
// Check if a forced tool was used in the first response
checkForForcedToolUsage(currentResponse, originalToolChoice)

// Track time for tool calls in this batch
const toolsStartTime = Date.now()

// Process each tool call
for (const toolCall of toolCallsInResponse) {
try {
const toolName = toolCall.function.name
const toolArgs = JSON.parse(toolCall.function.arguments)

// Get the tool from the tools registry
const tool = request.tools?.find((t) => t.id === toolName)
if (!tool) continue

// Execute the tool
const toolCallStartTime = Date.now()

// Only merge actual tool parameters for logging
const toolParams = {
...tool.params,
...toolArgs,
}

// Add system parameters for execution
const executionParams = {
...toolParams,
...(request.workflowId
? {
_context: {
workflowId: request.workflowId,
...(request.chatId ? { chatId: request.chatId } : {}),
},
}
: {}),
...(request.environmentVariables ? { envVars: request.environmentVariables } : {}),
}

const result = await executeTool(toolName, executionParams, true)
const toolCallEndTime = Date.now()
const toolCallDuration = toolCallEndTime - toolCallStartTime

// Add to time segments for both success and failure
timeSegments.push({
type: 'tool',
name: toolName,
startTime: toolCallStartTime,
endTime: toolCallEndTime,
duration: toolCallDuration,
})

// Prepare result content for the LLM
let resultContent: any
if (result.success) {
toolResults.push(result.output)
resultContent = result.output
} else {
// Include error information so LLM can respond appropriately
resultContent = {
error: true,
message: result.error || 'Tool execution failed',
tool: toolName,
}
}

toolCalls.push({
name: toolName,
arguments: toolParams,
startTime: new Date(toolCallStartTime).toISOString(),
endTime: new Date(toolCallEndTime).toISOString(),
duration: toolCallDuration,
result: resultContent,
success: result.success,
})

// Add the tool call and result to messages (both success and failure)
currentMessages.push({
role: 'assistant',
content: null,
tool_calls: [
{
id: toolCall.id,
type: 'function',
function: {
name: toolName,
arguments: toolCall.function.arguments,
},
},
],
})

currentMessages.push({
role: 'tool',
tool_call_id: toolCall.id,
content: JSON.stringify(resultContent),
})
} catch (error) {
logger.error('Error processing tool call:', { error })
}
}

// Calculate tool call time for this iteration
const thisToolsTime = Date.now() - toolsStartTime
toolsTime += thisToolsTime

// Make the next request with updated messages
const nextPayload = {
...payload,
messages: currentMessages,
}

// Time the next model call
const nextModelStartTime = Date.now()

// Make the next request
currentResponse = await ollama.chat.completions.create(nextPayload)

const nextModelEndTime = Date.now()
const thisModelTime = nextModelEndTime - nextModelStartTime

// Add to time segments
timeSegments.push({
type: 'model',
name: `Model response (iteration ${iterationCount + 1})`,
startTime: nextModelStartTime,
endTime: nextModelEndTime,
duration: thisModelTime,
})

// Add to model time
modelTime += thisModelTime

// Update content if we have a text response
if (currentResponse.choices[0]?.message?.content) {
content = currentResponse.choices[0].message.content
// Clean up the response content
content = content.replace(/```json\n?|\n?```/g, '')
content = content.trim()
}

// Update token counts
if (currentResponse.usage) {
tokens.prompt += currentResponse.usage.prompt_tokens || 0
tokens.completion += currentResponse.usage.completion_tokens || 0
tokens.total += currentResponse.usage.total_tokens || 0
}

iterationCount++
while (iterationCount < MAX_ITERATIONS) {
// Check for tool calls
const toolCallsInResponse = currentResponse.choices[0]?.message?.tool_calls
if (!toolCallsInResponse || toolCallsInResponse.length === 0) {
break
}
} catch (error) {
logger.error('Error in Ollama request:', { error })

logger.info(
`Processing ${toolCallsInResponse.length} tool calls (iteration ${iterationCount + 1}/${MAX_ITERATIONS})`
)

// Track time for tool calls in this batch
const toolsStartTime = Date.now()

// Process each tool call
for (const toolCall of toolCallsInResponse) {
try {
const toolName = toolCall.function.name
const toolArgs = JSON.parse(toolCall.function.arguments)

// Get the tool from the tools registry
const tool = request.tools?.find((t) => t.id === toolName)
if (!tool) continue

// Execute the tool
const toolCallStartTime = Date.now()

const { toolParams, executionParams } = prepareToolExecution(tool, toolArgs, request)
const result = await executeTool(toolName, executionParams, true)
const toolCallEndTime = Date.now()
const toolCallDuration = toolCallEndTime - toolCallStartTime

// Add to time segments for both success and failure
timeSegments.push({
type: 'tool',
name: toolName,
startTime: toolCallStartTime,
endTime: toolCallEndTime,
duration: toolCallDuration,
})

// Prepare result content for the LLM
let resultContent: any
if (result.success) {
toolResults.push(result.output)
resultContent = result.output
} else {
// Include error information so LLM can respond appropriately
resultContent = {
error: true,
message: result.error || 'Tool execution failed',
tool: toolName,
}
}

toolCalls.push({
name: toolName,
arguments: toolParams,
startTime: new Date(toolCallStartTime).toISOString(),
endTime: new Date(toolCallEndTime).toISOString(),
duration: toolCallDuration,
result: resultContent,
success: result.success,
})

// Add the tool call and result to messages (both success and failure)
currentMessages.push({
role: 'assistant',
content: null,
tool_calls: [
{
id: toolCall.id,
type: 'function',
function: {
name: toolName,
arguments: toolCall.function.arguments,
},
},
],
})

currentMessages.push({
role: 'tool',
tool_call_id: toolCall.id,
content: JSON.stringify(resultContent),
})
} catch (error) {
logger.error('Error processing tool call:', {
error,
toolName: toolCall?.function?.name,
})
}
}

// Calculate tool call time for this iteration
const thisToolsTime = Date.now() - toolsStartTime
toolsTime += thisToolsTime

// Make the next request with updated messages
const nextPayload = {
...payload,
messages: currentMessages,
}

// Update tool_choice based on which forced tools have been used
if (typeof originalToolChoice === 'object' && hasUsedForcedTool && forcedTools.length > 0) {
// If we have remaining forced tools, get the next one to force
const remainingTools = forcedTools.filter((tool) => !usedForcedTools.includes(tool))

if (remainingTools.length > 0) {
// Ollama doesn't support forced tool selection, so we keep using 'auto'
nextPayload.tool_choice = 'auto'
logger.info(`Ollama doesn't support forced tools, using auto for: ${remainingTools[0]}`)
} else {
// All forced tools have been used, continue with auto
nextPayload.tool_choice = 'auto'
logger.info('All forced tools have been used, continuing with auto tool_choice')
}
}

// Time the next model call
const nextModelStartTime = Date.now()

// Make the next request
currentResponse = await ollama.chat.completions.create(nextPayload)

// Check if any forced tools were used in this response
checkForForcedToolUsage(currentResponse, nextPayload.tool_choice)

const nextModelEndTime = Date.now()
const thisModelTime = nextModelEndTime - nextModelStartTime

// Add to time segments
timeSegments.push({
type: 'model',
name: `Model response (iteration ${iterationCount + 1})`,
startTime: nextModelStartTime,
endTime: nextModelEndTime,
duration: thisModelTime,
})

// Add to model time
modelTime += thisModelTime

// Update content if we have a text response
if (currentResponse.choices[0]?.message?.content) {
content = currentResponse.choices[0].message.content
// Clean up the response content
content = content.replace(/```json\n?|\n?```/g, '')
content = content.trim()
}

// Update token counts
if (currentResponse.usage) {
tokens.prompt += currentResponse.usage.prompt_tokens || 0
tokens.completion += currentResponse.usage.completion_tokens || 0
tokens.total += currentResponse.usage.total_tokens || 0
}

iterationCount++
}

const endTime = Date.now()
// After all tool processing complete, if streaming was requested and we have messages, use streaming for the final response
if (request.stream && iterationCount > 0) {
logger.info('Using streaming for final response after tool calls')

const streamingPayload = {
...payload,
messages: currentMessages,
tool_choice: 'auto', // Always use 'auto' for the streaming response after tool calls
stream: true,
stream_options: { include_usage: true },
}

const streamResponse = await ollama.chat.completions.create(streamingPayload)

// Create the StreamingExecution object with all collected data
const streamingResult = {
stream: createReadableStreamFromOllamaStream(streamResponse, (content, usage) => {
// Update the execution data with the final content and token usage
streamingResult.execution.output.content = content

// Clean up the response content
if (content) {
streamingResult.execution.output.content = content
.replace(/```json\n?|\n?```/g, '')
.trim()
}

// Update token usage if available from the stream
if (usage) {
const newTokens = {
prompt: usage.prompt_tokens || tokens.prompt,
completion: usage.completion_tokens || tokens.completion,
total: usage.total_tokens || tokens.total,
}

streamingResult.execution.output.tokens = newTokens
}
}),
execution: {
success: true,
output: {
content: '', // Will be filled by the callback
model: request.model,
tokens: {
prompt: tokens.prompt,
completion: tokens.completion,
total: tokens.total,
},
toolCalls:
toolCalls.length > 0
? {
list: toolCalls,
count: toolCalls.length,
}
: undefined,
providerTiming: {
startTime: providerStartTimeISO,
endTime: new Date().toISOString(),
duration: Date.now() - providerStartTime,
modelTime: modelTime,
toolsTime: toolsTime,
firstResponseTime: firstResponseTime,
iterations: iterationCount + 1,
timeSegments: timeSegments,
},
},
logs: [], // No block logs at provider level
metadata: {
startTime: providerStartTimeISO,
endTime: new Date().toISOString(),
duration: Date.now() - providerStartTime,
},
},
} as StreamingExecution

// Return the streaming execution object
return streamingResult as StreamingExecution
}

// Calculate overall timing
const providerEndTime = Date.now()
const providerEndTimeISO = new Date(providerEndTime).toISOString()
const totalDuration = providerEndTime - providerStartTime

return {
content: content,
content,
model: request.model,
tokens,
toolCalls: toolCalls.length > 0 ? toolCalls : undefined,
toolResults: toolResults.length > 0 ? toolResults : undefined,
timing: {
startTime: new Date(startTime).toISOString(),
endTime: new Date(endTime).toISOString(),
duration: endTime - startTime,
startTime: providerStartTimeISO,
endTime: providerEndTimeISO,
duration: totalDuration,
modelTime: modelTime,
toolsTime: toolsTime,
firstResponseTime: firstResponseTime,
iterations: iterationCount + 1,
timeSegments,
timeSegments: timeSegments,
},
}
} catch (error) {
logger.error('Error in Ollama request', {
error: error instanceof Error ? error.message : 'Unknown error',
model: request.model,
// Include timing information even for errors
const providerEndTime = Date.now()
const providerEndTimeISO = new Date(providerEndTime).toISOString()
const totalDuration = providerEndTime - providerStartTime

logger.error('Error in Ollama request:', {
error,
duration: totalDuration,
})
throw error

// Create a new error with timing information
const enhancedError = new Error(error instanceof Error ? error.message : String(error))
// @ts-ignore - Adding timing property to the error
enhancedError.timing = {
startTime: providerStartTimeISO,
endTime: providerEndTimeISO,
duration: totalDuration,
}

throw enhancedError
}
},
}

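With this change `executeRequest` returns either a plain `ProviderResponse` or a `StreamingExecution` whose final content and token counts are only filled in once the stream is drained. A hypothetical caller sketch; the `'stream' in result` discriminator and the import paths are assumptions:

```ts
import type { StreamingExecution } from '@/executor/types'
import type { ProviderRequest, ProviderResponse } from '@/providers/types'
import { ollamaProvider } from '@/providers/ollama' // path is an assumption

async function runOllama(request: ProviderRequest): Promise<string> {
  const result = await ollamaProvider.executeRequest(request)
  if ('stream' in result) {
    const streaming = result as StreamingExecution
    const reader = streaming.stream.getReader()
    while (!(await reader.read()).done) {
      // consume chunks; the onComplete callback fills execution.output
    }
    return streaming.execution.output.content
  }
  return (result as ProviderResponse).content
}
```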
@@ -27,6 +27,7 @@ import { openaiProvider } from '@/providers/openai'
import type { ProviderConfig, ProviderId, ProviderToolConfig } from '@/providers/types'
import { xAIProvider } from '@/providers/xai'
import { useCustomToolsStore } from '@/stores/custom-tools/store'
import { useOllamaStore } from '@/stores/ollama/store'

const logger = createLogger('ProviderUtils')

@@ -548,6 +549,12 @@ export function getApiKey(provider: string, model: string, userProvidedKey?: str
// If user provided a key, use it as a fallback
const hasUserKey = !!userProvidedKey

// Ollama models don't require API keys - they run locally
const isOllamaModel = provider === 'ollama' || useOllamaStore.getState().models.includes(model)
if (isOllamaModel) {
return 'empty' // Ollama uses 'empty' as a placeholder API key
}

// Use server key rotation for all OpenAI models and Anthropic's Claude models on the hosted platform
const isOpenAIModel = provider === 'openai'
const isClaudeModel = provider === 'anthropic'

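The early return above means Ollama never participates in key rotation or user-key fallback. Illustrative expectations for this branch; the non-Ollama paths depend on hosted-platform configuration and are not shown here:

```ts
import { getApiKey } from '@/providers/utils' // path is an assumption

getApiKey('ollama', 'gemma3:4b')               // -> 'empty' (placeholder, local model)
getApiKey('ollama', 'llama3.1:8b', 'user-key') // -> 'empty' even when a user key is provided
```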
@@ -1,25 +0,0 @@
#!/bin/bash
set -e

# Check that at least one argument is provided. If not, display the usage help.
if [ "$#" -eq 0 ]; then
echo "Usage: $(basename "$0") <ollama command> [args...]"
echo "Example: $(basename "$0") ps # This will run 'ollama ps' inside the container"
exit 1
fi

# Start a detached container from the ollama/ollama image,
# mounting the host's ~/.ollama directory directly into the container.
# Here we mount it to /root/.ollama, assuming that's where the image expects it.
CONTAINER_ID=$(docker run -d -v ~/.ollama:/root/.ollama -p 11434:11434 ollama/ollama
)

# Define a cleanup function to stop the container regardless of how the script exits.
cleanup() {
docker stop "$CONTAINER_ID" >/dev/null
}
trap cleanup EXIT

# Execute the command provided by the user within the running container.
# The command runs as: "ollama <user-arguments>"
docker exec -it "$CONTAINER_ID" ollama "$@"