Compare commits

...

154 Commits

Author SHA1 Message Date
Waleed Latif
e107363ea7 v0.3.35: migrations, custom email address support 2025-08-21 12:36:51 -07:00
Waleed Latif
7e364a7977 fix(emails): remove unused useCustomFromFormat param (#1082)
* fix(mailer): remove unused useCustomFormat

* bun.lock changes
2025-08-21 12:09:03 -07:00
Waleed Latif
35a37d8b45 fix(acs): added FROM_EMAIL_ADDRESS envvar for ACS (#1081)
* fix: clear Docker build cache to use correct Next.js version

* fix(mailer): add FROM_EMAIL_ADDRESS envvar for ACS

* bun.lock

* added tests
2025-08-21 11:57:44 -07:00
Vikhyath Mondreti
2b52d88cee fix(migrations): add missing migration for document table (#1080)
* fix(migrations): add missing migration for document table

* add newline at end of file
2025-08-21 11:48:54 -07:00
Waleed Latif
abad3620a3 fix(build): clear docker build cache to use correct Next.js version 2025-08-21 01:43:45 -07:00
Waleed Latif
a37c6bc812 fix(build): clear docker build cache to use correct Next.js version (#1075)
* fix: clear Docker build cache to use correct Next.js version

- Changed GitHub Actions cache scope from build-v2 to build-v3
- This should force a fresh build without cached Next.js 15.5.0 layers
- Reverted to ^15.3.2 version format that worked on main branch

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* run install

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-21 01:38:47 -07:00
Waleed Latif
cd1bd95952 fix(nextjs): downgrade nextjs due to known issue with bun commonjs module bundling (#1073) 2025-08-21 01:24:06 -07:00
Waleed Latif
4c9fdbe7fb fix(nextjs): downgrade nextjs due to known issue with bun commonjs module bundling (#1073) 2025-08-21 01:23:10 -07:00
Waleed Latif
2c47cf4161 v0.3.34: azure-openai options, billing fixes, mistral OCR via Azure, start block input format changes 2025-08-20 21:05:48 -07:00
Waleed Latif
db1cf8a6db fix(placeholder): fix starter block placeholder (#1071) 2025-08-20 21:01:37 -07:00
Vikhyath Mondreti
c6912095f7 fix placeholder text 2025-08-20 20:38:15 -07:00
Waleed Latif
154d9eef6a fix(gpt-5): fix chat-completions api (#1070) 2025-08-20 20:36:12 -07:00
Emir Karabeg
c2ded1f3e1 fix(theme-provider): preventing flash on page load (#1067)
* fix(theme-provider): preventing flash on page load

* consolidated themes to use NextJS theme logic

* improvement: optimized latency
2025-08-20 20:20:23 -07:00
Waleed Latif
ff43528d35 fix(gpt-5): fixed verbosity and reasoning params (#1069)
* fix(gpt-5): fixed verbosity and reasoning params

* fixed dropdown

* default values for verbosity and reasoning effort

* cleanup

* use default value in dropdown
2025-08-20 20:18:02 -07:00
Vikhyath Mondreti
692ba69864 fix type 2025-08-20 20:00:41 -07:00
Adam Gough
cb7ce8659b fix(msverify): changed consent for microsoft (#1057)
* changed consent

* changed excel error message and default sheets

* changed variable res for excel

---------

Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
2025-08-20 19:54:51 -07:00
Vikhyath Mondreti
5caef3a37d fix(input-format): first time execution bug (#1068) 2025-08-20 19:52:04 -07:00
Waleed Latif
a6888da124 fix(semantics): fix incorrect imports (#1066)
* fix(semantics): fix incorrect import

* fixed all incorrect imports
2025-08-20 19:02:52 -07:00
Vikhyath Mondreti
07b0597f4f improvement(trigger): upgrade import path for trigger (#1065) 2025-08-20 18:41:13 -07:00
Vikhyath Mondreti
71e2994f9d improvement(trigger): upgrade trigger (#1063) 2025-08-20 18:33:01 -07:00
Vikhyath Mondreti
9973b2c165 Merge branch 'staging' of github.com:simstudioai/sim into staging 2025-08-20 18:26:08 -07:00
Vikhyath Mondreti
d9e5777538 use personal access token 2025-08-20 18:24:17 -07:00
Waleed Latif
dd74267313 feat(nextjs): upgrade nextjs to 15.5 (#1062) 2025-08-20 18:22:35 -07:00
Vikhyath Mondreti
1db72dc823 pin version 2025-08-20 18:13:15 -07:00
Vikhyath Mondreti
da707fa491 improvement(gh-action): add gh action to deploy to correct environment for trigger.dev (#1060)
* improvement(gh-action): add gh action to deploy to correct environment for trigger.dev

* add dep installation

* change away from pull request target
2025-08-20 18:10:43 -07:00
Vikhyath Mondreti
9ffaf305bd feat(input-format): add value field to test input formats (#1059)
* feat(input-format): add value field to test input formats

* fix lint

* fix typing issue

* change to dropdown for boolean
2025-08-20 18:03:47 -07:00
Waleed Latif
26e6286fda fix(billing): fix team plan upgrade (#1053) 2025-08-20 17:05:35 -07:00
Waleed Latif
c795fc83aa feat(azure-openai): allow usage of azure-openai for knowledgebase uploads and wand generation (#1056)
* feat(azure-openai): allow usage of azure-openai for knowledgebase uploads

* feat(azure-openai): added azure-openai for kb and wand

* added embeddings utils, added the ability to use mistral through Azure

* fix(oauth): gdrive picker race condition, token route cleanup

* fix test

* feat(mailer): consolidated all emailing to mailer service, added support for Azure ACS (#1054)

* feat(mailer): consolidated all emailing to mailer service, added support for Azure ACS

* fix batch invitation email template

* cleanup

* improvement(emails): add help template instead of doing it inline

* remove fallback version

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
2025-08-20 17:04:52 -07:00
Waleed Latif
cea42f5135 improvement(gpt-5): added reasoning level and verbosity to gpt-5 models (#1058) 2025-08-20 17:04:39 -07:00
Waleed Latif
6fd6f921dc feat(mailer): consolidated all emailing to mailer service, added support for Azure ACS (#1054)
* feat(mailer): consolidated all emailing to mailer service, added support for Azure ACS

* fix batch invitation email template

* cleanup

* improvement(emails): add help template instead of doing it inline
2025-08-20 16:02:49 -07:00
Vikhyath Mondreti
7530fb9a4e Merge pull request #1055 from simstudioai/fix/picker-race-cond
fix(oauth): gdrive picker race condition, token route cleanup
2025-08-20 15:03:57 -07:00
Vikhyath Mondreti
9a5b035822 fix test 2025-08-20 13:55:54 -07:00
Vikhyath Mondreti
0c0b6bf967 fix(oauth): gdrive picker race condition, token route cleanup 2025-08-20 12:33:46 -07:00
Vikhyath Mondreti
5d74db53ff v0.3.33: update copilot docs 2025-08-20 09:56:09 -07:00
Siddharth Ganesan
b39bdfd55e feat(copilot-docs): update readme and docs with local hosting instructions (#1043)
* Docs

* Lint
2025-08-20 09:47:50 -07:00
Waleed Latif
6b185be9a4 v0.3.32: loop block max increase, url-encoded API calls, subflow logs, new supabase tools 2025-08-20 00:36:46 -07:00
Waleed Latif
214a0358b6 fix(billing): fix upgrade to team plan (#1045) 2025-08-20 00:28:07 -07:00
Waleed Latif
bbb5e53e43 improvement(supabase): add supabase upsert tool, insert/replace on PK conflict (#1038) 2025-08-19 21:21:09 -07:00
Waleed Latif
79e932fed9 feat(logs): added sub-workflow logs, updated trace spans UI, fix scroll behavior in workflow registry sidebar (#1037)
* added sub-workflow logs

* indent input/output in trace spans display

* better color scheme for workflow logs

* scroll behavior in sidebar updated

* cleanup

* fixed failing tests
2025-08-19 21:21:09 -07:00
Vikhyath Mondreti
9ad36c0e34 fix(oauth-block): race condition for rendering credential selectors and other subblocks + gdrive fixes (#1029)
* fix(oauth-block): race condition for rendering credential selectors and other subblocks

* fix import

* add dependsOn field to track cross-subblock deps

* remove redundant check

* remove redundant checks

* remove misleading comment

* fix

* fix jira

* fix

* fix

* confluence

* fix triggers

* fix

* fix

* make trigger creds collab supported

* fix for backwards compat

* fix trigger modal
2025-08-19 21:21:09 -07:00
Waleed Latif
2771c688ff improvement(supabase): added more verbose error logging for supabase operations (#1035)
* improvement(supabase): added more verbose error logging for supabase operations

* updated docs
2025-08-19 21:21:09 -07:00
Waleed Latif
d58ceb4bce improvement(api): add native support for form-urlencoded inputs into API block (#1033) 2025-08-19 21:21:09 -07:00
Waleed Latif
69773c3174 improvement(console): increase console max entries for larger workflows (#1032)
* improvement(console): increase console max entries for larger workflows

* increase safety limit for infinite loops
2025-08-19 21:21:09 -07:00
Waleed Latif
1619d63f2a v0.3.31: webhook fixes, advanced mode parameter filtering, credentials fixes, UI/UX improvements 2025-08-19 01:01:45 -07:00
Waleed Latif
9aa1fe8037 fix(logger): fixed logger to show prod server-side logs (#1027) 2025-08-19 00:44:24 -07:00
Emir Karabeg
1b7c111c46 Update README.md (#1026)
* Update README.md

* Update README.md
2025-08-18 23:10:18 -07:00
Siddharth Ganesan
bdfb56b262 fix(copilot): streaming (#1023)
* Fix 1

* Fix

* Bugfix

* Make thinking streaming smoother

* Better autoscroll, still not great

* Updates

* Updates

* Updates

* Restore checkpoint logic

* Fix aborts

* Checkpoint ui

* Lint

* Fix empty file
2025-08-18 22:48:56 -07:00
Emir Karabeg
4a7de31eee uploaded brandbook (#1024) 2025-08-18 22:04:55 -07:00
Waleed Latif
adfe56c720 improvement(logger): restore server-side logs in prod (#1022) 2025-08-18 21:01:38 -07:00
Emir Karabeg
72e3efa875 improvement(settings): ui/ux (#1021)
* completed general

* completed environment

* completed account; updated general and environment

* fixed skeleton

* finished credentials

* finished privacy; adjusted all colors and styling

* added reset password

* refactor: team and subscription

* finalized subscription settings

* fixed copilot key UI
2025-08-18 20:57:29 -07:00
Vikhyath Mondreti
b40fa3aa6e fix(picker-ui): picker UI confusing when credential not set + Microsoft OAuth Fixes (#1016)
* fix(picker-ui): picker UI confusing when credential not set

* remove comments

* remove chevron down

* fix collaboration oauth

* fix jira"

* fix

* fix ms excel selector

* fix selectors for MS blocks

* fix ms selectors

* fix

* fix ms onedrive and sharepoint

* fix to grey out dropdowns

* fix background fetches

* fix planner

* fix confluence

* fix

* fix confluence realtime sharing

* fix outlook folder selector

* check outlook folder

* make shared hook

---------

Co-authored-by: waleedlatif1 <walif6@gmail.com>
2025-08-18 20:21:23 -07:00
Waleed Latif
f924edde3a improvement(console): redact api keys from console store (#1020) 2025-08-18 16:36:33 -07:00
Waleed Latif
073030bfaa improvement(serializer): filter out advanced mode fields when executing in basic mode, persist the values but don't include them in serialized block for execution (#1018)
* improvement(serializer): filter out advanced mode fields when executing in basic mode, persist the values but don't include them in serialized block for execution

* fix serializer exclusion logic
2025-08-18 16:34:53 -07:00
Siddharth Ganesan
871f4e8e18 fix(copilot): env key validation (#1017)
* Fix v1

* Use env var

* Lint

* Fix env key validation

* Remove logger

* Fix agent url

* Fix tests
2025-08-18 16:00:56 -07:00
Siddharth Ganesan
091343a132 fix(copilot): fix origin (#1015)
* Fix v1

* Use env var

* Lint
2025-08-18 13:57:31 -07:00
Waleed Latif
63c66bfc31 fix(webhook): pin webhook URL when creating/saving generic webhook trigger (#1014)
* fix(webhook): pin webhook URL when creating a new generic webhook trigger

* change instructions copy

* remove unrelated scripts

* added optional API key for webhooks, validation tests

* remove extraneous logs
2025-08-18 13:39:49 -07:00
Waleed Latif
445ca78395 fix(export): swap upload & download icons (#1013) 2025-08-18 10:22:55 -07:00
Waleed Latif
d75cc1ed84 v0.3.30: duplication, control bar fixes 2025-08-18 08:57:26 -07:00
Waleed Latif
5a8a703ecb fix(duplicate): fixed detached state on duplication (#1011) 2025-08-18 08:51:18 -07:00
Waleed Latif
6f64188b8d fix(control-bar): fix icons styling in disabled state (#1010) 2025-08-18 08:22:06 -07:00
Vikhyath Mondreti
60a9a25553 Merge pull request #1009 from simstudioai/staging
update migration file for note-keeping purposes
2025-08-18 01:59:02 -07:00
Vikhyath Mondreti
52fa388f81 update migration file for note-keeping purposes 2025-08-18 01:56:34 -07:00
Vikhyath Mondreti
5c56cbd558 Merge pull request #1008 from simstudioai/staging
reduce batch size to prevent timeouts
2025-08-18 01:11:49 -07:00
Vikhyath Mondreti
dc19525a6f reduce batch size to prevent timeouts 2025-08-18 01:10:47 -07:00
Vikhyath Mondreti
3873f44875 Merge pull request #1007 from simstudioai/staging
syntax issue in migration
2025-08-18 00:59:53 -07:00
Vikhyath Mondreti
09b95f41ea syntax issue in migration 2025-08-18 00:58:09 -07:00
Vikhyath Mondreti
af60ccd188 fix: migration mem issues bypass
fix: migration mem issues bypass
2025-08-18 00:50:20 -07:00
Vikhyath Mondreti
eb75afd115 make logs migration batched to prevent mem issues (#1005) 2025-08-18 00:42:38 -07:00
Waleed Latif
fdb8256468 fix(subflow): remove all edges when removing a block from a subflow (#1003) 2025-08-18 00:21:26 -07:00
Vikhyath Mondreti
570c07bf2a Merge pull request #1004 from simstudioai/staging
v0.3.29: copilot fixes, remove block from subflow, code cleanups
2025-08-18 00:18:44 -07:00
Adam Gough
5c16e7d390 fix(subflow): add ability to remove block from subflow and refactor to consolidate subflow code (#983)
* added logic to remove blocks from subflows

* refactored logic into just subflow-node

* bun run lint

* added subflow test

* added a safety check for data.parentId

* added state update logic

* bun run lint

* removed old logic

* removed any

* added tests

* added type safety

* removed test script

* type safety

---------

Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
Co-authored-by: waleedlatif1 <walif6@gmail.com>
2025-08-17 22:25:31 -07:00
Waleed Latif
bd38062705 fix(workflow-error): allow users to delete workflows with invalid configs/state (#1000)
* fix(workflow-error): allow users to delete workflows with invalid configs/state

* cleanup
2025-08-17 22:23:41 -07:00
Siddharth Ganesan
d7fd4a9618 feat(copilot): diff improvements (#1002)
* Fix abort

* Cred updates

* Updates

* Fix sheet id showing up in diff view

* Update diff view

* Text overflow

* Optimistic accept

* Serialization catching

* Depth 0 fix

* Fix icons

* Updates

* Lint
2025-08-16 15:09:48 -07:00
Vikhyath Mondreti
d972bab206 fix(logs-sidebar): remove message and fix race condition for quickly switching b/w logs (#1001) 2025-08-16 15:05:39 -07:00
Vikhyath Mondreti
f254d70624 improvement(logs): cleanup code (#999) 2025-08-16 13:44:00 -07:00
Waleed Latif
8748e1d5f9 improvement(db): remove deprecated 'state' column from workflow table (#994)
* improvement(db): remove deprecated 'state' column from workflow table

* removed extraneous logs

* update sockets envvar
2025-08-16 13:04:49 -07:00
Siddharth Ganesan
133a32e6d3 Fix abort (#998) 2025-08-16 11:10:09 -07:00
Waleed Latif
97b6bcc43d v0.3.28: autolayout, export, copilot, kb ui improvements 2025-08-16 09:12:17 -07:00
Waleed Latif
42917ce641 fix(agent): stringify input into user prompt for agent (#984) 2025-08-15 19:36:49 -07:00
Waleed Latif
5f6d219223 fix(kb-ui): fixed upload files modal ui, processing ui to match the rest of the kb (#991)
* fix(kb-ui): fixed upload files modal, processing ui to match the rest of the kb

* more ui fixes

* ack PR comments

* fix help modal
2025-08-15 19:35:50 -07:00
Siddharth Ganesan
bab74307f4 fix(ishosted): make ishosted true on staging (#993)
* Add staging to ishosted

* www
2025-08-15 18:36:32 -07:00
Siddharth Ganesan
16aaa37dad improvement(agent): enable autolayout, export, copilot (#992)
* Enable autolayout, export, and copilot in dev

* Updates
2025-08-15 18:29:34 -07:00
Siddharth Ganesan
c6166a9483 feat(copilot): generate agent api key (#989)
* Add skeleton copilot to settings modal and add migration for copilot api keys

* Add hash index on encrypted key

* Security 1

* Remove sim agent api key

* Fix api key stuff

* Auth

* Status code handling

* Update env key

* Copilot api key ui

* Update copilot costs

* Add copilot stats

* Lint

* Remove logs

* Remove migrations

* Remove another migration

* Updates

* Hide if hosted

* Fix test

* Lint

* Lint

* Fixes

* Lint

---------

Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
2025-08-15 18:05:54 -07:00
Waleed Latif
0258a1b4ce fix(loading): fix workflow detached on first load (#987) 2025-08-15 17:26:47 -07:00
Vikhyath Mondreti
4d4aefa346 fix(envvar): clear separation between server-side and client-side billing envvar (#988) 2025-08-15 16:41:02 -07:00
Vikhyath Mondreti
a0cf003abf Merge pull request #986 from simstudioai/staging
attempt to fix build issues (#985)
2025-08-15 15:22:26 -07:00
Vikhyath Mondreti
2e027dd77d attempt to fix build issues (#985) 2025-08-15 15:21:34 -07:00
Vikhyath Mondreti
6133db53d0 v0.3.27: oauth/webhook fixes, whitelabel fixes, code cleanups
v0.3.27: oauth/webhook fixes, whitelabel fixes, code cleanups
2025-08-15 13:33:55 -07:00
Waleed Latif
03bb437e09 fix(chat-deploy): fixed chat-deploy (#981) 2025-08-15 13:07:54 -07:00
Vikhyath Mondreti
9f02f88bf5 fix(oauth): webhook + oauthblocks in workflow (#979)
* fix(oauth): webhook + oauthblocks in workflow

* propagate workflow id

* requireWorkflowId for internal can be false
2025-08-15 13:07:46 -07:00
Waleed Latif
7a1711282e improvement/function: remove unused function execution logic in favor of vm, update turborepo (#980)
* improvement(function): remove freestyle in favor of vm exec

* update imports

* remove unused test suite

* update turborepo
2025-08-15 12:51:27 -07:00
Waleed Latif
58613888b0 improvement(redirects): move redirects to middleware, push to login if no session and workspace if session exists, remove telemetry consent dialog (#976)
* improvement(redirects): move redirects to middleware, push to login if no session and workspace if session exists

* remove telemetry consent dialog

* remove migrations

* rerun migrations
2025-08-15 12:36:34 -07:00
Waleed Latif
f1fe2f52cc improvement(billing): add billing enforcement for webhook executions, consolidate helpers (#975)
* fix(billing): client-side envvar for billing

* remove unrelated files

* fix(billing): add billing enforcement for webhook executions, consolidate implementation

* cleanup

* add back server envvar
2025-08-15 12:28:34 -07:00
Waleed Latif
7d05999a70 fix(force-dynamic): revert force-dynamic for the 38 routes that we previously added it to (#971) 2025-08-15 12:05:51 -07:00
Siddharth Ganesan
bf07240cfa Fix user message color (#978) 2025-08-15 11:59:28 -07:00
Siddharth Ganesan
0c7a8efc8d feat(copilot): add depths (#974)
* Checkpoint

* can edit names and types

* Add reasoning and thinking

* Update agent max

* Max mode v1

* Add best practices

* Todo list shows up

* Todolist works

* Updates to todo

* Updates

* Updates

* Checkpoint

* Yaml export updates

* Updates

* Checkpoint fr

* Fix diff view on new workflow

* Subflow autolayout fix v1

* Autolayout fixes 2

* Gdrive list files

* Get oauth credential (email)

* Gdrive file picker

* Gdrive file access prompt

* Api request

* Copilot ui for some tool calls

* Updates

* Fix overflow

* Openai

* Streaming

* Checkpoint

* Update

* Openai responses api

* Depth skeleton

* Depth tooltips

* Mode selector tool tips

* Update ui

* Update ordering

* Lint

* Remove migrations

* Add migrations back

* Lint

* Fix isdev

* Fix tests

* Comments

---------

Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
2025-08-15 11:37:58 -07:00
Vikhyath Mondreti
f081f5a73c Revert 1a7de84 except tag dropdown changes (keep apps/sim/components/ui/tag-dropdown.tsx) (#972) 2025-08-15 00:37:16 -07:00
Waleed Latif
72c07e8ad2 fix(whitelabel): fix privacy policy & terms, remove unused/unnecessary envvars for whitelabeling (#969)
* fix(whitelabel): fix privacy policy & terms for whitelabeling

* remove unused hide branding url

* removed support email envvar, remove landing page except for hosted version

* remove unnecessary comments

* removed primary, secondary, accent color envvars and standardized usage of brand colors in css file

* fix primaryColor reference

* fix invalid css
2025-08-14 20:03:01 -07:00
Vikhyath Mondreti
e1f04f42f8 v0.3.26: fix billing, bubble up workflow block errors, credentials security improvements
v0.3.26: fix billing, bubble up workflow block errors, credentials security improvements
2025-08-14 14:17:25 -05:00
Vikhyath Mondreti
fd9e61f85a improvement(credentials-security): use clear credentials sharing helper, fix google sheets block url split bug (#968)
* improvement(credentials-sharing-security): cleanup and reuse helper to determine credential access

* few more routes

* fix google sheets block

* fix test mocks

* fix calendar route
2025-08-14 14:13:18 -05:00
Waleed Latif
f1934fe76b fix(billing): separate client side and server side envvars for billing (#966) 2025-08-14 11:29:02 -07:00
Vikhyath Mondreti
ac41bf8c17 Revert "fix(workflow-block): revert change bubbling up error for workflow block" (#965)
* Revert "fix(workflow-block): revert change bubbling up error for workflow blo…"

This reverts commit 9f0993ed57.

* revert test changes
2025-08-14 12:18:47 -05:00
Vikhyath Mondreti
56ffb538a0 Merge pull request #964 from simstudioai/staging
v0.3.25: oauth credentials sharing mechanism, workflow block error handling changes
2025-08-14 02:36:19 -05:00
Vikhyath Mondreti
2e8f051e58 fix workflow block test 2025-08-14 02:28:17 -05:00
Vikhyath Mondreti
9f0993ed57 fix(workflow-block): revert change bubbling up error for workflow block (#963) 2025-08-14 02:18:18 -05:00
Waleed Latif
472a22cc94 improvement(helm): added template for external db secret (#957) 2025-08-13 21:21:46 -07:00
Waleed Latif
da04ea0e9f fix(subflows): added change detection for parallels, updated deploy and status schemas to match parallel/loop (#956) 2025-08-13 21:18:07 -07:00
Waleed Latif
d4f412af92 fix(api): fix api post and get without stringifying (#955) 2025-08-13 18:49:22 -05:00
Siddharth Ganesan
70fa628a2a improvement(uploads): add multipart upload + batching + retries (#938)
* File upload retries + multipart uploads

* Lint

* FIle uploads

* File uploads 2

* Lint

* Fix file uploads

* Add auth to file upload routes

* Lint
2025-08-13 15:18:14 -07:00
Vikhyath Mondreti
b159d63fbb improvement(oauth): credentials sharing for workflows (#939)
* improvement(oauth): credential UX while sharing workflows

* fix tests

* address greptile comments

* fix linear, jira, folder selectors

* fix routes

* fix linear

* jira fix attempt

* jira fix attempt

* jira fixes

* fix

* fix

* fix jira

* fix selector disable behaviour

* minor fixes

* clear selectors correctly

* fix project selector jira

* fix gdrive

* fix labels dropdown

* fix webhook realtime collab

* fix

* fix webhooks persistence

* fix folders route

* fix lint

* test webhook intermittent error

* fix

* fix display
2025-08-13 16:51:46 -05:00
Adam Gough
5dfe9330bb added file for microsoft verification (#946)
Co-authored-by: Adam Gough <adamgough@Adams-MacBook-Pro.local>
2025-08-13 12:18:31 -05:00
Waleed Latif
4107948554 Merge pull request #954 from simstudioai/staging
fix
2025-08-12 21:12:18 -07:00
Vikhyath Mondreti
7ebc87564d fix(double-read): API Block (#950)
* fix(double-read-http): double reading body json

* fix

* fix tests
2025-08-12 23:08:31 -05:00
Vikhyath Mondreti
8aa0ed19f1 Revert "fix(api): fix api block (#951)" (#953)
This reverts commit 8016af60f4.
2025-08-12 23:05:08 -05:00
Waleed Latif
f7573fadb1 v0.3.24: api block fixes 2025-08-12 20:35:07 -07:00
Waleed Latif
8016af60f4 fix(api): fix api block (#951) 2025-08-12 20:31:41 -07:00
Vikhyath Mondreti
8fccd5c20d Merge pull request #948 from simstudioai/staging
v0.3.24: revert redis session management change
2025-08-12 17:56:16 -05:00
Vikhyath Mondreti
8de06b63d1 Revert "improvement(performance): use redis for session data (#934)" (#947)
This reverts commit 3c7b3e1a4b.
2025-08-12 17:30:21 -05:00
Vikhyath Mondreti
1c818b2e3e v0.3.23: multiplayer variables, api key fixes, kb improvements, triggers fixes
v0.3.23: multiplayer variables, api key fixes, kb improvements, triggers fixes
2025-08-12 15:23:09 -05:00
Vikhyath Mondreti
1a7de84c7a fix(tag-dropdown): last char dropped bug (#945) 2025-08-12 11:48:34 -05:00
Waleed Latif
a2dea384a4 fix(kb): kb-level deletion should reflect in doc level kb tags sidebar registry (#944) 2025-08-12 09:26:28 -07:00
Waleed Latif
1c3e923f1b fix(kb-ui): fixed tags hover effect (#942) 2025-08-12 08:49:19 -07:00
Waleed Latif
e1d5e38528 fix(chunks): instantaneous search + server side searching instead of client-side (#940)
* fix(chunks): instantaneous search + server side searching instead of client-side

* add knowledge tags component to sidebar, replace old knowledge tags UI

* add types, remove extraneous comments

* added knowledge-base level tag definitions viewer, ability to create/delete slots in sidebar and respective routes

* ui

* fix stale tag issue

* use logger
2025-08-12 01:53:47 -07:00
Waleed Latif
3c7b3e1a4b improvement(performance): use redis for session data (#934) 2025-08-11 22:42:22 -05:00
Waleed Latif
bc455d5bf4 feat(variables): multiplayer variables through sockets, persist server side (#933)
* feat(variables): multiplayer variables through sockets, persist server side

* remove extraneous comments

* breakout variables handler in sockets
2025-08-11 18:32:21 -05:00
Waleed Latif
2a333c7cf7 fix(kb): added proper pagination for documents in kb (#937) 2025-08-11 14:16:15 -07:00
Adam Gough
41cc0cdadc fix(webhooks): fixed all webhook structures (#935)
* fix for variable format + trig

* fixed slack variable

* microsoft teams working

* fixed outlook, plus added other minor documentation changes and fixed subblock

* removed discord webhook logic

* added airtable logic

* bun run lint

* test

* test again

* test again 2

* test again 3

* test again 4

* test again 4

* test again 4

* bun run lint

* test 5

* test 6

* test 7

* test 7

* test 7

* test 7

* test 7

* test 7

* test 8

* test 9

* test 9

* test 9

* test 10

* test 10

* bun run lint, plus github fixed

* removed some debug statements #935

* testing resolver removing

* testing trig

---------

Co-authored-by: Adam Gough <adamgough@Adams-MacBook-Pro.local>
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
2025-08-11 12:50:55 -07:00
Waleed Latif
70aeb0c298 fix(sidebar-ui): fix small ui bug to close gap when creating new workflow (#932) 2025-08-10 18:33:01 -07:00
Emir Karabeg
83f113984d feat(usage-indicator): added ability to see current usage (#925)
* feat(usage-indicator): added ability to see current usage

* feat(billing): added billing enabled flag for usage indicator, enforcement of billing usage

---------

Co-authored-by: waleedlatif1 <walif6@gmail.com>
2025-08-10 17:20:53 -07:00
Waleed Latif
56ede1c980 improvement(tools): removed transformError, isInternalRoute, directExecution (#928)
* standardized response format for transformError

* removed transformError, moved error handling to executeTool for all different error formats

* remove isInternalRoute, make it implicit in executeTool

* removed directExecution, everything on the server nothing on the client

* fix supabase

* fix(tag-dropdown): fix values for parallel & loop blocks (#929)

* fix(search-modal): add parallel and loop blocks to search modal

* reordered tool params

* update docs
2025-08-10 17:19:46 -07:00
Waleed Latif
df16382a19 improvement(subflow): consolidated parallel/loop tags and collaborativeUpdate (#931)
* fix(console): fix typo

* improvement(subflows): consolidated subflows tags
2025-08-10 17:19:21 -07:00
Waleed Latif
e271ed86b6 improvement(console): added iteration info to console entry for parallel/loop (#930) 2025-08-10 16:27:39 -07:00
Waleed Latif
785b86a32e fix(tag-dropdown): fix values for parallel & loop blocks (#929) 2025-08-10 11:55:56 -07:00
Waleed Latif
e5e8082de4 fix(workflow-block): improvements to pulsing effect, active execution state, and running workflow blocks in parallel (#927)
* fix: same child workflow executing in parallel with workflow block

* fixed run button prematurely showing completion before child workflows completed

* prevent child workflows from touching the activeBlocks & layer logic in the parent executor

* surface child workflow errors to main workflow

* ack PR comments
2025-08-09 16:57:56 -07:00
Waleed Latif
8a08afd733 improvement(control-bar): standardize styling across all control bar buttons (#926) 2025-08-09 12:32:37 -07:00
Vikhyath Mondreti
ebb25469ab fix(apikeys): pinned api key to track API key a workflow is deployed with (#924)
* fix(apikeys): pinned api key to track API key a workflow is deployed with

* remove deprecated behaviour tests
2025-08-09 01:37:27 -05:00
Waleed Latif
a2040322e7 fix(chat): fix chat attachments style in dark mode (#923) 2025-08-08 20:12:30 -07:00
Waleed Latif
a8be7e9fb3 fix(help): fix email for help route (#922) 2025-08-08 20:06:19 -07:00
Waleed Latif
aedf5e70b0 v0.3.22: handle files, trigger mode, email validation, tag dropdown types (#919)
* feat(execution-filesystem): system to pass files between blocks  (#866)

* feat(files): pass files between blocks

* presigned URL for downloads

* Remove latest migration before merge

* starter block file upload wasn't getting logged

* checkpoint in human readable form

* checkpoint files / file type outputs

* file downloads working for block outputs

* checkpoint file download

* fix type issues

* replace FileReference interface with simpler user file interface

* show files in the tag dropdown for start block

* more migration to simple url object, reduce presigned time to 5 min

* Remove migration 0065_parallel_nightmare and related files

- Deleted apps/sim/db/migrations/0065_parallel_nightmare.sql
- Deleted apps/sim/db/migrations/meta/0065_snapshot.json
- Removed 0065 entry from apps/sim/db/migrations/meta/_journal.json

Preparing for merge with origin/staging and migration regeneration

* add migration files

* fix tests

* Update apps/sim/lib/uploads/setup.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update apps/sim/lib/workflows/execution-file-storage.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update apps/sim/lib/workflows/execution-file-storage.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* cleanup types

* fix lint

* fix logs typing for file refs

* open download in new tab

* fixed

* Update apps/sim/tools/index.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix file block

* cleanup unused code

* fix bugs

* remove hacky file id logic

* fix drag and drop

* fix tests

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* feat(trigger-mode): added trigger-mode to workflow_blocks table (#902)

* fix(schedules-perms): use regular perm system to view/edit schedule info (#901)

* fix(schedules-perms): use regular perm system to view schedule info

* fix perms

* improve logging

* feat(webhooks): deprecate singular webhook block + add trigger mode to blocks (#903)

* feat(triggers): added new trigger mode for blocks, added socket event, ran migrations

* Rename old trigger/ directory to background/

* cleaned up, ensured that we display active webhook at the block-level

* fix submenu in tag dropdown

* keyboard nav on tag dropdown submenu

* feat(triggers): add outlook to new triggers system

* cleanup

* add types to tag dropdown, type all outputs for tools and use that over block outputs

* update doc generator to truly reflect outputs

* fix docs

* add trigger handler

* fix active webhook tag

* tag dropdown fix for triggers

* remove trigger mode schema change

* feat(execution-filesystem): system to pass files between blocks  (#866)

* feat(files): pass files between blocks

* presigned URL for downloads

* Remove latest migration before merge

* starter block file upload wasn't getting logged

* checkpoint in human readable form

* checkpoint files / file type outputs

* file downloads working for block outputs

* checkpoint file download

* fix type issues

* replace FileReference interface with simpler user file interface

* show files in the tag dropdown for start block

* more migration to simple url object, reduce presigned time to 5 min

* Remove migration 0065_parallel_nightmare and related files

- Deleted apps/sim/db/migrations/0065_parallel_nightmare.sql
- Deleted apps/sim/db/migrations/meta/0065_snapshot.json
- Removed 0065 entry from apps/sim/db/migrations/meta/_journal.json

Preparing for merge with origin/staging and migration regeneration

* add migration files

* fix tests

* Update apps/sim/lib/uploads/setup.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update apps/sim/lib/workflows/execution-file-storage.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update apps/sim/lib/workflows/execution-file-storage.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* cleanup types

* fix lint

* fix logs typing for file refs

* open download in new tab

* fixed

* Update apps/sim/tools/index.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix file block

* cleanup unused code

* fix bugs

* remove hacky file id logic

* fix drag and drop

* fix tests

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* feat(trigger-mode): added trigger-mode to workflow_blocks table (#902)

* fix(schedules-perms): use regular perm system to view/edit schedule info (#901)

* fix(schedules-perms): use regular perm system to view schedule info

* fix perms

* improve logging

* cleanup

* prevent tooltip showing up on modal open

* updated trigger config

* fix type issues

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>

* fix(helm): fix helm charts migrations using wrong image (#907)

* fix(helm): fix helm charts migrations using wrong image

* fixed migrations

* feat(whitelist): add email & domain-based whitelisting for signups (#908)

* improvement(helm): fix duplicate SOCKET_SERVER_URL and add additional envvars to template (#909)

* improvement(helm): fix duplicate SOCKET_SERVER_URL and add additional envvars to template

* rm serper & freestyle

* improvement(tag-dropdown): typed tag dropdown values (#910)

* fix(min-chunk): remove minsize for chunk (#911)

* fix(min-chunk): remove minsize for chunk

* fix tests

* improvement(chunk-config): migrate unused default for consistency (#913)

* fix(mailer): update mailer to use the EMAIL_DOMAIN (#914)

* fix(mailer): update mailer to use the EMAIL_DOMAIN

* add more

* Improvement(cc): added cc to gmail and outlook (#900)

* changed just gmail

* bun run lint

* fixed bcc

* updated docs

---------

Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
Co-authored-by: waleedlatif1 <walif6@gmail.com>

* fix(email-validation): add email validation to prevent bouncing, fixed OTP validation (#916)

* feat(email-validation): add email validation to prevent bouncing

* removed suspicious patterns

* fix(verification): fixed OTP verification

* fix failing tests, cleanup

* fix(otp): fix email not sending (#917)

* fix(email): manual OTP instead of better-auth (#921)

* fix(email): manual OTP instead of better-auth

* lint

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
2025-08-08 19:08:30 -07:00
Waleed Latif
503268ebcd fix(email): manual OTP instead of better-auth (#921)
* fix(email): manual OTP instead of better-auth

* lint
2025-08-08 18:49:54 -07:00
Waleed Latif
9a4de1f0c6 fix(otp): fix email not sending (#917) 2025-08-08 17:25:44 -07:00
Waleed Latif
43a3416347 fix(email-validation): add email validation to prevent bouncing, fixed OTP validation (#916)
* feat(email-validation): add email validation to prevent bouncing

* removed suspicious patterns

* fix(verification): fixed OTP verification

* fix failing tests, cleanup
2025-08-08 17:01:41 -07:00
Adam Gough
7f39cd0f23 Improvement(cc): added cc to gmail and outlook (#900)
* changed just gmail

* bun run lint

* fixed bcc

* updated docs

---------

Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
Co-authored-by: waleedlatif1 <walif6@gmail.com>
2025-08-08 14:20:16 -07:00
Waleed Latif
658942deb3 fix(mailer): update mailer to use the EMAIL_DOMAIN (#914)
* fix(mailer): update mailer to use the EMAIL_DOMAIN

* add more
2025-08-08 13:13:52 -07:00
Vikhyath Mondreti
061bd6d5a8 improvement(chunk-config): migrate unused default for consistency (#913) 2025-08-08 12:46:30 -07:00
Vikhyath Mondreti
0ec91f9010 fix(min-chunk): remove minsize for chunk (#911)
* fix(min-chunk): remove minsize for chunk

* fix tests
2025-08-08 12:37:41 -07:00
Waleed Latif
db581dc727 improvement(tag-dropdown): typed tag dropdown values (#910) 2025-08-08 11:34:40 -07:00
Waleed Latif
87e0586d0a improvement(helm): fix duplicate SOCKET_SERVER_URL and add additional envvars to template (#909)
* improvement(helm): fix duplicate SOCKET_SERVER_URL and add additional envvars to template

* rm serper & freestyle
2025-08-08 10:59:34 -07:00
Waleed Latif
9a7c58c8a2 feat(whitelist): add email & domain-based whitelisting for signups (#908) 2025-08-07 23:38:04 -07:00
Waleed Latif
004cd3339d fix(helm): fix helm charts migrations using wrong image (#907)
* fix(helm): fix helm charts migrations using wrong image

* fixed migrations
2025-08-07 23:11:17 -07:00
Waleed Latif
9bd3491eac feat(webhooks): deprecate singular webhook block + add trigger mode to blocks (#903)
* feat(triggers): added new trigger mode for blocks, added socket event, ran migrations

* Rename old trigger/ directory to background/

* cleaned up, ensured that we display active webhook at the block-level

* fix submenu in tag dropdown

* keyboard nav on tag dropdown submenu

* feat(triggers): add outlook to new triggers system

* cleanup

* add types to tag dropdown, type all outputs for tools and use that over block outputs

* update doc generator to truly reflect outputs

* fix docs

* add trigger handler

* fix active webhook tag

* tag dropdown fix for triggers

* remove trigger mode schema change

* feat(execution-filesystem): system to pass files between blocks  (#866)

* feat(files): pass files between blocks

* presigned URL for downloads

* Remove latest migration before merge

* starter block file upload wasn't getting logged

* checkpoint in human readable form

* checkpoint files / file type outputs

* file downloads working for block outputs

* checkpoint file download

* fix type issues

* replace FileReference interface with simpler user file interface

* show files in the tag dropdown for start block

* more migration to simple url object, reduce presigned time to 5 min

* Remove migration 0065_parallel_nightmare and related files

- Deleted apps/sim/db/migrations/0065_parallel_nightmare.sql
- Deleted apps/sim/db/migrations/meta/0065_snapshot.json
- Removed 0065 entry from apps/sim/db/migrations/meta/_journal.json

Preparing for merge with origin/staging and migration regeneration

* add migration files

* fix tests

* Update apps/sim/lib/uploads/setup.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update apps/sim/lib/workflows/execution-file-storage.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update apps/sim/lib/workflows/execution-file-storage.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* cleanup types

* fix lint

* fix logs typing for file refs

* open download in new tab

* fixed

* Update apps/sim/tools/index.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix file block

* cleanup unused code

* fix bugs

* remove hacky file id logic

* fix drag and drop

* fix tests

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* feat(trigger-mode): added trigger-mode to workflow_blocks table (#902)

* fix(schedules-perms): use regular perm system to view/edit schedule info (#901)

* fix(schedules-perms): use regular perm system to view schedule info

* fix perms

* improve logging

* cleanup

* prevent tooltip showing up on modal open

* updated trigger config

* fix type issues

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
2025-08-07 20:27:54 -07:00
Vikhyath Mondreti
fd3ca87c38 fix(schedules-perms): use regular perm system to view/edit schedule info (#901)
* fix(schedules-perms): use regular perm system to view schedule info

* fix perms

* improve logging
2025-08-07 15:38:09 -07:00
Waleed Latif
d264a6ade8 feat(trigger-mode): added trigger-mode to workflow_blocks table (#902) 2025-08-07 14:59:25 -07:00
Vikhyath Mondreti
de93e167af feat(execution-filesystem): system to pass files between blocks (#866)
* feat(files): pass files between blocks

* presigned URL for downloads

* Remove latest migration before merge

* starter block file upload wasn't getting logged

* checkpoint in human readable form

* checkpoint files / file type outputs

* file downloads working for block outputs

* checkpoint file download

* fix type issues

* replace FileReference interface with simpler user file interface

* show files in the tag dropdown for start block

* more migration to simple url object, reduce presigned time to 5 min

* Remove migration 0065_parallel_nightmare and related files

- Deleted apps/sim/db/migrations/0065_parallel_nightmare.sql
- Deleted apps/sim/db/migrations/meta/0065_snapshot.json
- Removed 0065 entry from apps/sim/db/migrations/meta/_journal.json

Preparing for merge with origin/staging and migration regeneration

* add migration files

* fix tests

* Update apps/sim/lib/uploads/setup.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update apps/sim/lib/workflows/execution-file-storage.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update apps/sim/lib/workflows/execution-file-storage.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* cleanup types

* fix lint

* fix logs typing for file refs

* open download in new tab

* fixed

* Update apps/sim/tools/index.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix file block

* cleanup unused code

* fix bugs

* remove hacky file id logic

* fix drag and drop

* fix tests

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
2025-08-07 12:51:30 -07:00
861 changed files with 92532 additions and 23079 deletions

View File

@@ -416,8 +416,8 @@ In addition, you will need to update the registries:
Your tool should export a constant with a naming convention of `{toolName}Tool`. The tool ID should follow the format `{provider}_{tool_name}`. For example:
```typescript:/apps/sim/tools/pinecone/fetch.ts
-import { ToolConfig, ToolResponse } from '../types'
-import { PineconeParams, PineconeResponse } from './types'
+import { ToolConfig, ToolResponse } from '@/tools/types'
+import { PineconeParams, PineconeResponse } from '@/tools/pinecone/types'
export const fetchTool: ToolConfig<PineconeParams, PineconeResponse> = {
id: 'pinecone_fetch', // Follow the {provider}_{tool_name} format
@@ -448,9 +448,6 @@ In addition, you will need to update the registries:
transformResponse: async (response: Response) => {
// Transform response
},
transformError: (error) => {
// Handle errors
},
}
```
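For illustration, a stripped-down tool following these conventions might look like the sketch below. The interface is reduced to the two members visible in the snippet above (`id` and `transformResponse`); the real `ToolConfig` in `@/tools/types` has additional fields, and the parameter and response shapes here are assumptions.
```typescript
// Illustrative stand-in for the real ToolConfig interface, reduced to the two
// members shown above; the actual interface in @/tools/types is larger.
interface MinimalToolConfig<TParams, TResponse> {
  id: string
  transformResponse: (response: Response) => Promise<TResponse>
}

// Hypothetical parameter and response shapes, for this example only.
interface PineconeFetchParams {
  indexHost: string
  ids: string[]
}

interface PineconeFetchResponse {
  success: boolean
  output: unknown
}

export const fetchTool: MinimalToolConfig<PineconeFetchParams, PineconeFetchResponse> = {
  // Tool IDs follow the {provider}_{tool_name} format.
  id: 'pinecone_fetch',

  // Map the raw HTTP response into the tool's typed output.
  transformResponse: async (response) => ({
    success: response.ok,
    output: await response.json(),
  }),
}
```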
@@ -458,7 +455,7 @@ In addition, you will need to update the registries:
Update the tools registry in `/apps/sim/tools/index.ts` to include your new tool:
```typescript:/apps/sim/tools/index.ts
-import { fetchTool, generateEmbeddingsTool, searchTextTool } from './pinecone'
+import { fetchTool, generateEmbeddingsTool, searchTextTool } from '/@tools/pinecone'
// ... other imports
export const tools: Record<string, ToolConfig> = {

View File

@@ -85,8 +85,8 @@ jobs:
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
-cache-from: type=gha,scope=build-v2
-cache-to: type=gha,mode=max,scope=build-v2
+cache-from: type=gha,scope=build-v3
+cache-to: type=gha,mode=max,scope=build-v3
provenance: false
sbom: false

.github/workflows/trigger-deploy.yml (new file, 44 lines)
View File

@@ -0,0 +1,44 @@
name: Trigger.dev Deploy

on:
  push:
    branches:
      - main
      - staging

jobs:
  deploy:
    name: Trigger.dev Deploy
    runs-on: ubuntu-latest
    concurrency:
      group: trigger-deploy-${{ github.ref }}
      cancel-in-progress: false
    env:
      TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 'lts/*'
      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
        with:
          bun-version: latest
      - name: Install dependencies
        run: bun install
      - name: Deploy to Staging
        if: github.ref == 'refs/heads/staging'
        working-directory: ./apps/sim
        run: npx --yes trigger.dev@4.0.0 deploy -e staging
      - name: Deploy to Production
        if: github.ref == 'refs/heads/main'
        working-directory: ./apps/sim
        run: npx --yes trigger.dev@4.0.0 deploy

View File

@@ -1,50 +1,46 @@
<p align="center">
<img src="apps/sim/public/static/sim.png" alt="Sim Logo" width="500"/>
<a href="https://sim.ai" target="_blank" rel="noopener noreferrer">
<img src="apps/sim/public/logo/reverse/text/large.png" alt="Sim Logo" width="500"/>
</a>
</p>
<p align="center">
<a href="https://www.apache.org/licenses/LICENSE-2.0"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License: Apache-2.0"></a>
<a href="https://discord.gg/Hr4UWYEcTT"><img src="https://img.shields.io/badge/Discord-Join%20Server-7289DA?logo=discord&logoColor=white" alt="Discord"></a>
<a href="https://x.com/simdotai"><img src="https://img.shields.io/twitter/follow/simstudioai?style=social" alt="Twitter"></a>
<a href="https://github.com/simstudioai/sim/pulls"><img src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg" alt="PRs welcome"></a>
<a href="https://docs.sim.ai"><img src="https://img.shields.io/badge/Docs-visit%20documentation-blue.svg" alt="Documentation"></a>
</p>
<p align="center">Build and deploy AI agent workflows in minutes.</p>
<p align="center">
<strong>Sim</strong> is a lightweight, user-friendly platform for building AI agent workflows.
<a href="https://sim.ai" target="_blank" rel="noopener noreferrer"><img src="https://img.shields.io/badge/sim.ai-6F3DFA" alt="Sim.ai"></a>
<a href="https://discord.gg/Hr4UWYEcTT" target="_blank" rel="noopener noreferrer"><img src="https://img.shields.io/badge/Discord-Join%20Server-5865F2?logo=discord&logoColor=white" alt="Discord"></a>
<a href="https://x.com/simdotai" target="_blank" rel="noopener noreferrer"><img src="https://img.shields.io/twitter/follow/simstudioai?style=social" alt="Twitter"></a>
<a href="https://docs.sim.ai" target="_blank" rel="noopener noreferrer"><img src="https://img.shields.io/badge/Docs-6F3DFA.svg" alt="Documentation"></a>
</p>
<p align="center">
<img src="apps/sim/public/static/demo.gif" alt="Sim Demo" width="800"/>
</p>
## Getting Started
## Quickstart
1. Use our [cloud-hosted version](https://sim.ai)
2. Self-host using one of the methods below
### Cloud-hosted: [sim.ai](https://sim.ai)
## Self-Hosting Options
<a href="https://sim.ai" target="_blank" rel="noopener noreferrer"><img src="https://img.shields.io/badge/sim.ai-6F3DFA?logo=data:image/svg%2bxml;base64,PHN2ZyB3aWR0aD0iNjE2IiBoZWlnaHQ9IjYxNiIgdmlld0JveD0iMCAwIDYxNiA2MTYiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxnIGNsaXAtcGF0aD0idXJsKCNjbGlwMF8xMTU5XzMxMykiPgo8cGF0aCBkPSJNNjE2IDBIMFY2MTZINjE2VjBaIiBmaWxsPSIjNkYzREZBIi8+CjxwYXRoIGQ9Ik04MyAzNjUuNTY3SDExM0MxMTMgMzczLjgwNSAxMTYgMzgwLjM3MyAxMjIgMzg1LjI3MkMxMjggMzg5Ljk0OCAxMzYuMTExIDM5Mi4yODUgMTQ2LjMzMyAzOTIuMjg1QzE1Ny40NDQgMzkyLjI4NSAxNjYgMzkwLjE3MSAxNzIgMzg1LjkzOUMxNzcuOTk5IDM4MS40ODcgMTgxIDM3NS41ODYgMTgxIDM2OC4yMzlDMTgxIDM2Mi44OTUgMTc5LjMzMyAzNTguNDQyIDE3NiAzNTQuODhDMTcyLjg4OSAzNTEuMzE4IDE2Ny4xMTEgMzQ4LjQyMiAxNTguNjY3IDM0Ni4xOTZMMTMwIDMzOS41MTdDMTE1LjU1NSAzMzUuOTU1IDEwNC43NzggMzMwLjQ5OSA5Ny42NjY1IDMyMy4xNTFDOTAuNzc3NSAzMTUuODA0IDg3LjMzMzQgMzA2LjExOSA4Ny4zMzM0IDI5NC4wOTZDODcuMzMzNCAyODQuMDc2IDg5Ljg4OSAyNzUuMzkyIDk0Ljk5OTYgMjY4LjA0NUMxMDAuMzMzIDI2MC42OTcgMTA3LjU1NSAyNTUuMDIgMTE2LjY2NiAyNTEuMDEyQzEyNiAyNDcuMDA0IDEzNi42NjcgMjQ1IDE0OC42NjYgMjQ1QzE2MC42NjcgMjQ1IDE3MSAyNDcuMTE2IDE3OS42NjcgMjUxLjM0NkMxODguNTU1IDI1NS41NzYgMTk1LjQ0NCAyNjEuNDc3IDIwMC4zMzMgMjY5LjA0N0MyMDUuNDQ0IDI3Ni42MTcgMjA4LjExMSAyODUuNjM0IDIwOC4zMzMgMjk2LjA5OUgxNzguMzMzQzE3OC4xMTEgMjg3LjYzOCAxNzUuMzMzIDI4MS4wNyAxNjkuOTk5IDI3Ni4zOTRDMTY0LjY2NiAyNzEuNzE5IDE1Ny4yMjIgMjY5LjM4MSAxNDcuNjY3IDI2OS4zODFDMTM3Ljg4OSAyNjkuMzgxIDEzMC4zMzMgMjcxLjQ5NiAxMjUgMjc1LjcyNkMxMTkuNjY2IDI3OS45NTcgMTE3IDI4NS43NDYgMTE3IDI5My4wOTNDMTE3IDMwNC4wMDMgMTI1IDMxMS40NjIgMTQxIDMxNS40N0wxNjkuNjY3IDMyMi40ODNDMTgzLjQ0NSAzMjUuNiAxOTMuNzc4IDMzMC43MjIgMjAwLjY2NyAzMzcuODQ3QzIwNy41NTUgMzQ0Ljc0OSAyMTEgMzU0LjIxMiAyMTEgMzY2LjIzNUMyMTEgMzc2LjQ3NyAyMDguMjIyIDM4NS40OTQgMjAyLjY2NiAzOTMuMjg3QzE5Ny4xMTEgNDAwLjg1NyAxODkuNDQ0IDQwNi43NTggMTc5LjY2NyA0MTAuOTg5QzE3MC4xMTEgNDE0Ljk5NiAxNTguNzc4IDQxNyAxNDUuNjY3IDQxN0MxMjYuNTU1IDQxNyAxMTEuMzMzIDQxMi4zMjUgOTkuOTk5NyA0MDIuOTczQzg4LjY2NjggMzkzLjYyMSA4MyAzODEuMTUzIDgzIDM2NS41NjdaIiBmaWxsPSJ3aGl0ZSIvPgo8cGF0aCBkPSJNMjMyLjI5MSA0MTNWMjUwLjA4MkMyNDQuNjg0IDI1NC42MTQgMjUwLjE0OCAyNTQuNjE0IDI2My4zNzEgMjUwLjA4MlY0MTNIMjMyLjI5MVpNMjQ3LjUgMjM5LjMxM0MyNDEuOTkgMjM5LjMxMyAyMzcuMTQgMjM3LjMxMyAyMzIuOTUyIDIzMy4zMTZDMjI4Ljk4NCAyMjkuMDk1IDIyNyAyMjQuMjA5IDIyNyAyMTguNjU2QzIyNyAyMTIuODgyIDIyOC45ODQgMjA3Ljk5NSAyMzIuOTUyIDIwMy45OTdDMjM3LjE0IDE5OS45OTkgMjQxLjk5IDE5OCAyNDcuNSAxOThDMjUzLjIzMSAxOTggMjU4LjA4IDE5OS45OTkgMjYyLjA0OSAyMDMuOTk3QzI2Ni4wMTYgMjA3Ljk5NSAyNjggMjEyLjg4MiAyNjggMjE4LjY1NkMyNjggMjI0LjIwOSAyNjYuMDE2IDIyOS4wOTUgMjYyLjA0OSAyMzMuMzE2QzI1OC4wOCAyMzcuMzEzIDI1My4yMzEgMjM5LjMxMyAyNDcuNSAyMzkuMzEzWiIgZmlsbD0id2hpdGUiLz4KPHBhdGggZD0iTTMxOS4zMzMgNDEzSDI4OFYyNDkuNjc2SDMxNlYyNzcuMjMzQzMxOS4zMzMgMjY4LjEwNCAzMjUuNzc4IDI2MC4zNjQgMzM0LjY2NyAyNTQuMzUyQzM0My43NzggMjQ4LjExNyAzNTQuNzc4IDI0NSAzNjcuNjY3IDI0NUMzODIuMTExIDI0NSAzOTQuMTEyIDI0OC44OTcgNDAzLjY2NyAyNTYuNjlDNDEzLjIyMiAyNjQuNDg0IDQxOS40NDQgMjc0LjgzNyA0MjIuMzM0IDI4Ny43NTJINDE2LjY2N0M0MTguODg5IDI3NC44MzcgNDI1IDI2NC40ODQgNDM1IDI1Ni42OUM0NDUgMjQ4Ljg5NyA0NTcuMzM0IDI0NSA0NzIgMjQ1QzQ5MC42NjYgMjQ1IDUwNS4zMzQgMjUwLjQ1NSA1MTYgMjYxLjM2NkM1MjYuNjY3IDI3Mi4yNzYgNTMyIDI4Ny4xOTUgNTMyIDMwNi4xMjFWNDEzSDUwMS4zMzNWMzEzLjgwNEM1MDEuMzMzIDMwMC44ODkgNDk4IDI5MC45ODEgNDkxLjMzMyAyODQuMDc4QzQ4NC44ODkgMjc2Ljk1MiA0NzYuMTExIDI3My4zOSA0NjUgMjczLjM5QzQ1Ny4yMjIgMjczLjM5IDQ1MC4zMzMgMjc1LjE3MSA0NDQuMzM0IDI3OC43MzRDNDM4LjU1NiAyODIuMDc0IDQzNCAyODYuOTcyIDQzMC42NjcgMjkzLjQzQzQyNy4zMzMgMjk5Ljg4NyA0MjUuNjY3IDMwNy40NTcgNDI1LjY2NyAzMTYuMTQxVjQxM0gzOTQuNjY3VjMxMy40NjlDMzk0LjY2NyAzMDAuNTU1IDM5MS40NDUgMjkwLjc1OCAzODUgMjg0LjA3OEMzNzguNTU2IDI3Ny4xNz
UgMzY5Ljc3OCAyNzMuNzI0IDM1OC42NjcgMjczLjcyNEMzNTAuODg5IDI3My43MjQgMzQ0IDI3NS41MDUgMzM4IDI3OS4wNjhDMzMyLjIyMiAyODIuNDA4IDMyNy42NjcgMjg3LjMwNyAzMjQuMzMzIDI5My43NjNDMzIxIDI5OS45OTggMzE5LjMzMyAzMDcuNDU3IDMxOS4zMzMgMzE2LjE0MVY0MTNaIiBmaWxsPSJ3aGl0ZSIvPgo8L2c+CjxkZWZzPgo8Y2xpcFBhdGggaWQ9ImNsaXAwXzExNTlfMzEzIj4KPHJlY3Qgd2lkdGg9IjYxNiIgaGVpZ2h0PSI2MTYiIGZpbGw9IndoaXRlIi8+CjwvY2xpcFBhdGg+CjwvZGVmcz4KPC9zdmc+Cg==&logoColor=white" alt="Sim.ai"></a>
### Option 1: NPM Package (Simplest)
The easiest way to run Sim locally is using our [NPM package](https://www.npmjs.com/package/simstudio?activeTab=readme):
### Self-hosted: NPM Package
```bash
npx simstudio
```
→ http://localhost:3000
After running these commands, open [http://localhost:3000/](http://localhost:3000/) in your browser.
#### Note
Docker must be installed and running on your machine.
#### Options
- `-p, --port <port>`: Specify the port to run Sim on (default: 3000)
- `--no-pull`: Skip pulling the latest Docker images
| Flag | Description |
|------|-------------|
| `-p, --port <port>` | Port to run Sim on (default `3000`) |
| `--no-pull` | Skip pulling latest Docker images |
#### Requirements
- Docker must be installed and running on your machine
### Option 2: Docker Compose
### Self-hosted: Docker Compose
```bash
# Clone the repository
@@ -76,14 +72,14 @@ Wait for the model to download, then visit [http://localhost:3000](http://localh
docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.1:8b
```
### Option 3: Dev Containers
### Self-hosted: Dev Containers
1. Open VS Code with the [Remote - Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
2. Open the project and click "Reopen in Container" when prompted
3. Run `bun run dev:full` in the terminal or use the `sim-start` alias
- This starts both the main application and the realtime socket server
### Option 4: Manual Setup
### Self-hosted: Manual Setup
**Requirements:**
- [Bun](https://bun.sh/) runtime
@@ -158,6 +154,14 @@ cd apps/sim
bun run dev:sockets
```
## Copilot API Keys
Copilot is a Sim-managed service. To use Copilot on a self-hosted instance:
- Go to https://sim.ai → Settings → Copilot and generate a Copilot API key
- Set `COPILOT_API_KEY` in your self-hosted environment to that value
- Host Sim on a publicly available DNS and set NEXT_PUBLIC_APP_URL and BETTER_AUTH_URL to that value ([ngrok](https://ngrok.com/))
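As a quick sanity check for the three steps above, a self-hosted deployment can verify these variables at startup. A minimal sketch (the variable names come from this section; the check itself is illustrative, not part of Sim):
```typescript
// Illustrative startup check; only the env var names come from the steps above.
const required = ['COPILOT_API_KEY', 'NEXT_PUBLIC_APP_URL', 'BETTER_AUTH_URL'] as const

const missing = required.filter((name) => !process.env[name])
if (missing.length > 0) {
  throw new Error(`Copilot setup incomplete, missing: ${missing.join(', ')}`)
}

// The app URL should be publicly reachable (e.g. an ngrok tunnel), not localhost,
// since Copilot is a Sim-managed (hosted) service.
if (process.env.NEXT_PUBLIC_APP_URL?.includes('localhost')) {
  console.warn('NEXT_PUBLIC_APP_URL points at localhost; use a public URL for Copilot')
}
```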
## Tech Stack
- **Framework**: [Next.js](https://nextjs.org/) (App Router)
@@ -180,4 +184,4 @@ We welcome contributions! Please see our [Contributing Guide](.github/CONTRIBUTI
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
<p align="center">Made with ❤️ by the Sim Team</p>
<p align="center">Made with ❤️ by the Sim Team</p>

View File

@@ -0,0 +1,97 @@
---
title: Copilot
description: Build and edit workflows with Sim Copilot
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { MessageCircle, Package, Zap, Infinity as InfinityIcon, Brain, BrainCircuit } from 'lucide-react'
## What is Copilot
Copilot is your in-editor assistant that helps you build, understand, and improve workflows. It can:
- **Explain**: Answer questions about Sim and your current workflow
- **Guide**: Suggest edits and best practices
- **Edit**: Make changes to blocks, connections, and settings when you approve
<Callout type="info">
Copilot is a Sim-managed service. For self-hosted deployments, generate a Copilot API key in the hosted app (sim.ai → Settings → Copilot)
1. Go to [sim.ai](https://sim.ai) → Settings → Copilot and generate a Copilot API key
2. Set `COPILOT_API_KEY` in your self-hosted environment to that value
3. Host Sim on a publicly available DNS and set `NEXT_PUBLIC_APP_URL` and `BETTER_AUTH_URL` to that value (e.g., using ngrok)
</Callout>
## Modes
<Cards>
<Card title="Ask">
<div className="flex items-start gap-3">
<span className="mt-0.5 inline-flex h-8 w-8 items-center justify-center rounded-md border border-border/50 bg-muted/60">
<MessageCircle className="h-4 w-4 text-muted-foreground" />
</span>
<div>
<p className="m-0 text-sm">
Q&A mode for explanations, guidance, and suggestions without making changes to your workflow.
</p>
</div>
</div>
</Card>
<Card title="Agent">
<div className="flex items-start gap-3">
<span className="mt-0.5 inline-flex h-8 w-8 items-center justify-center rounded-md border border-border/50 bg-muted/60">
<Package className="h-4 w-4 text-muted-foreground" />
</span>
<div>
<p className="m-0 text-sm">
Build-and-edit mode. Copilot proposes specific edits (add blocks, wire variables, tweak settings) and applies them when you approve.
</p>
</div>
</div>
</Card>
</Cards>
## Depth Levels
<Cards>
<Card title="Fast">
<div className="flex items-start gap-3">
<span className="mt-0.5 inline-flex h-8 w-8 items-center justify-center rounded-md border border-border/50 bg-muted/60">
<Zap className="h-4 w-4 text-muted-foreground" />
</span>
<div>
<p className="m-0 text-sm">Quickest and cheapest. Best for small edits, simple workflows, and minor tweaks.</p>
</div>
</div>
</Card>
<Card title="Auto">
<div className="flex items-start gap-3">
<span className="mt-0.5 inline-flex h-8 w-8 items-center justify-center rounded-md border border-border/50 bg-muted/60">
<InfinityIcon className="h-4 w-4 text-muted-foreground" />
</span>
<div>
<p className="m-0 text-sm">Balanced speed and reasoning. Recommended default for most tasks.</p>
</div>
</div>
</Card>
<Card title="Pro">
<div className="flex items-start gap-3">
<span className="mt-0.5 inline-flex h-8 w-8 items-center justify-center rounded-md border border-border/50 bg-muted/60">
<Brain className="h-4 w-4 text-muted-foreground" />
</span>
<div>
<p className="m-0 text-sm">More reasoning for larger workflows and complex edits while staying performant.</p>
</div>
</div>
</Card>
<Card title="Max">
<div className="flex items-start gap-3">
<span className="mt-0.5 inline-flex h-8 w-8 items-center justify-center rounded-md border border-border/50 bg-muted/60">
<BrainCircuit className="h-4 w-4 text-muted-foreground" />
</span>
<div>
<p className="m-0 text-sm">Maximum reasoning for deep planning, debugging, and complex architectural changes.</p>
</div>
</div>
</Card>
</Cards>

View File

@@ -0,0 +1,4 @@
{
"title": "Copilot",
"pages": ["index"]
}

View File

@@ -12,6 +12,8 @@
"connections",
"---Execution---",
"execution",
"---Copilot---",
"copilot",
"---Advanced---",
"./variables/index",
"yaml",

View File

@@ -71,19 +71,16 @@ Read records from an Airtable table
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token |
| `baseId` | string | Yes | ID of the Airtable base |
| `tableId` | string | Yes | ID of the table |
| `maxRecords` | number | No | Maximum number of records to return |
| `filterFormula` | string | No | Formula to filter records \(e.g., |
| `filterFormula` | string | No | Formula to filter records \(e.g., "\(\{Field Name\} = \'Value\'\)"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `records` | json | Retrieved record data |
| `record` | json | Single record data |
| `metadata` | json | Operation metadata |
| `records` | json | Array of retrieved Airtable records |
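To make the shapes above concrete, a hedged sketch in TypeScript (placeholder IDs and token; parameter names come from the tables, and the per-record `id`/`createdTime`/`fields` shape follows the record description used elsewhere on this page):

```ts
// Illustrative input for `airtable_list_records` (placeholder values).
const input = {
  accessToken: '<oauth-access-token>',
  baseId: 'appXXXXXXXXXXXXXX',
  tableId: 'tblXXXXXXXXXXXXXX',
  maxRecords: 50,
  filterFormula: "({Status} = 'Active')",
}

// Expected output shape: a single `records` array of Airtable records.
const output = {
  records: [
    { id: 'recXXXXXXXXXXXXXX', createdTime: '2025-08-21T00:00:00.000Z', fields: { Status: 'Active' } },
  ],
}
```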
### `airtable_get_record`
@@ -93,7 +90,6 @@ Retrieve a single record from an Airtable table by its ID
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token |
| `baseId` | string | Yes | ID of the Airtable base |
| `tableId` | string | Yes | ID or name of the table |
| `recordId` | string | Yes | ID of the record to retrieve |
@@ -102,9 +98,8 @@ Retrieve a single record from an Airtable table by its ID
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `records` | json | Retrieved record data |
| `record` | json | Single record data |
| `metadata` | json | Operation metadata |
| `record` | json | Retrieved Airtable record with id, createdTime, and fields |
| `metadata` | json | Operation metadata including record count |
### `airtable_create_records`
@@ -114,17 +109,16 @@ Write new records to an Airtable table
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token |
| `baseId` | string | Yes | ID of the Airtable base |
| `tableId` | string | Yes | ID or name of the table |
| `records` | json | Yes | Array of records to create, each with a `fields` object |
| `fields` | string | No | No description |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `records` | json | Retrieved record data |
| `record` | json | Single record data |
| `metadata` | json | Operation metadata |
| `records` | json | Array of created Airtable records |
### `airtable_update_record`
@@ -134,7 +128,6 @@ Update an existing record in an Airtable table by ID
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token |
| `baseId` | string | Yes | ID of the Airtable base |
| `tableId` | string | Yes | ID or name of the table |
| `recordId` | string | Yes | ID of the record to update |
@@ -144,9 +137,8 @@ Update an existing record in an Airtable table by ID
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `records` | json | Retrieved record data |
| `record` | json | Single record data |
| `metadata` | json | Operation metadata |
| `record` | json | Updated Airtable record with id, createdTime, and fields |
| `metadata` | json | Operation metadata including record count and updated field names |
### `airtable_update_multiple_records`
@@ -156,17 +148,15 @@ Update multiple existing records in an Airtable table
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token |
| `baseId` | string | Yes | ID of the Airtable base |
| `tableId` | string | Yes | ID or name of the table |
| `records` | json | Yes | Array of records to update, each with an `id` and a `fields` object |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `records` | json | Retrieved record data |
| `record` | json | Single record data |
| `metadata` | json | Operation metadata |
| `records` | json | Array of updated Airtable records |

View File

@@ -71,10 +71,7 @@ Search for academic papers on ArXiv by keywords, authors, titles, or other field
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `papers` | json | Found papers data |
| `totalResults` | number | Total results count |
| `paper` | json | Paper details |
| `authorPapers` | json | Author papers list |
| `papers` | json | Array of papers matching the search query |
### `arxiv_get_paper`
@@ -84,16 +81,13 @@ Get detailed information about a specific ArXiv paper by its ID.
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `paperId` | string | Yes | ArXiv paper ID \(e.g., |
| `paperId` | string | Yes | ArXiv paper ID \(e.g., "1706.03762"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `papers` | json | Found papers data |
| `totalResults` | number | Total results count |
| `paper` | json | Paper details |
| `authorPapers` | json | Author papers list |
| `paper` | json | Detailed information about the requested ArXiv paper |
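For illustration, a minimal sketch of the call shape (only `paperId` and `paper` come from the tables above; the fields inside `paper` are placeholders):

```ts
// Illustrative input/output for `arxiv_get_paper`.
const input = { paperId: '1706.03762' }

const output = {
  paper: {
    id: '1706.03762',
    title: 'Attention Is All You Need', // illustrative values only
  },
}
```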
### `arxiv_get_author_papers`
@@ -110,10 +104,7 @@ Search for papers by a specific author on ArXiv.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `papers` | json | Found papers data |
| `totalResults` | number | Total results count |
| `paper` | json | Paper details |
| `authorPapers` | json | Author papers list |
| `authorPapers` | json | Array of papers authored by the specified author |

View File

@@ -73,6 +73,7 @@ Runs a browser automation task using BrowserUse
| --------- | ---- | -------- | ----------- |
| `task` | string | Yes | What should the browser agent do |
| `variables` | json | No | Optional variables to use as secrets \(format: \{key: value\}\) |
| `format` | string | No | No description |
| `save_browser_data` | boolean | No | Whether to save browser data |
| `model` | string | No | LLM model to use \(default: gpt-4o\) |
| `apiKey` | string | Yes | API key for BrowserUse API |
@@ -83,7 +84,7 @@ Runs a browser automation task using BrowserUse
| --------- | ---- | ----------- |
| `id` | string | Task execution identifier |
| `success` | boolean | Task completion status |
| `output` | any | Task output data |
| `output` | json | Task output data |
| `steps` | json | Execution steps taken |

View File

@@ -220,7 +220,8 @@ Populate Clay with data from a JSON file. Enables direct communication and notif
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | any | Response data |
| `success` | boolean | Operation success status |
| `output` | json | Clay populate operation results including response data from Clay webhook |

View File

@@ -57,7 +57,6 @@ Retrieve content from Confluence pages using the Confluence API.
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token for Confluence |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | Confluence page ID to retrieve |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
@@ -66,11 +65,10 @@ Retrieve content from Confluence pages using the Confluence API.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp |
| `pageId` | string | Page identifier |
| `content` | string | Page content |
| `ts` | string | Timestamp of retrieval |
| `pageId` | string | Confluence page ID |
| `content` | string | Page content with HTML tags stripped |
| `title` | string | Page title |
| `success` | boolean | Operation success status |
### `confluence_update`
@@ -80,7 +78,6 @@ Update a Confluence page using the Confluence API.
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token for Confluence |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | Confluence page ID to update |
| `title` | string | No | New title for the page |
@@ -92,11 +89,10 @@ Update a Confluence page using the Confluence API.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp |
| `pageId` | string | Page identifier |
| `content` | string | Page content |
| `title` | string | Page title |
| `success` | boolean | Operation success status |
| `ts` | string | Timestamp of update |
| `pageId` | string | Confluence page ID |
| `title` | string | Updated page title |
| `success` | boolean | Update operation success status |

View File

@@ -80,8 +80,8 @@ Send a message to a Discord channel
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Message content |
| `data` | any | Response data |
| `message` | string | Success or error message |
| `data` | object | Discord message data |
### `discord_get_messages`
@@ -99,8 +99,8 @@ Retrieve messages from a Discord channel
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Message content |
| `data` | any | Response data |
| `message` | string | Success or error message |
| `messages` | array | Array of Discord messages with full metadata |
### `discord_get_server`
@@ -117,8 +117,8 @@ Retrieve information about a Discord server (guild)
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Message content |
| `data` | any | Response data |
| `message` | string | Success or error message |
| `data` | object | Discord server \(guild\) information |
### `discord_get_user`
@@ -135,8 +135,8 @@ Retrieve information about a Discord user
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Message content |
| `data` | any | Response data |
| `message` | string | Success or error message |
| `data` | object | Discord user information |

View File

@@ -62,7 +62,7 @@ Convert TTS using ElevenLabs voices
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `audioUrl` | string | Generated audio URL |
| `audioUrl` | string | The URL of the generated audio |

View File

@@ -68,11 +68,7 @@ Search the web using Exa AI. Returns relevant search results with titles, URLs,
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `similarLinks` | json | Similar links found |
| `answer` | string | Generated answer |
| `citations` | json | Answer citations |
| `research` | json | Research findings |
| `results` | array | Search results with titles, URLs, and text snippets |
### `exa_get_contents`
@@ -91,11 +87,7 @@ Retrieve the contents of webpages using Exa AI. Returns the title, text content,
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `similarLinks` | json | Similar links found |
| `answer` | string | Generated answer |
| `citations` | json | Answer citations |
| `research` | json | Research findings |
| `results` | array | Retrieved content from URLs with title, text, and summaries |
### `exa_find_similar_links`
@@ -114,11 +106,7 @@ Find webpages similar to a given URL using Exa AI. Returns a list of similar lin
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `similarLinks` | json | Similar links found |
| `answer` | string | Generated answer |
| `citations` | json | Answer citations |
| `research` | json | Research findings |
| `similarLinks` | array | Similar links found with titles, URLs, and text snippets |
### `exa_answer`
@@ -136,11 +124,8 @@ Get an AI-generated answer to a question with citations from the web using Exa A
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `similarLinks` | json | Similar links found |
| `answer` | string | Generated answer |
| `citations` | json | Answer citations |
| `research` | json | Research findings |
| `answer` | string | AI-generated answer to the question |
| `citations` | array | Sources and citations for the answer |
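A hedged sketch of the output shape (the `answer` and `citations` names come from the table above; the fields on each citation are assumptions):

```ts
// Illustrative output for `exa_answer` (placeholder values).
const output = {
  answer: 'A short AI-generated answer to the question.',
  citations: [
    { url: 'https://example.com/source', title: 'Example source' }, // assumed citation shape
  ],
}
```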
### `exa_research`
@@ -158,11 +143,7 @@ Perform comprehensive research using AI to generate detailed reports with citati
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `similarLinks` | json | Similar links found |
| `answer` | string | Generated answer |
| `citations` | json | Answer citations |
| `research` | json | Research findings |
| `research` | array | Comprehensive research findings with citations and summaries |

View File

@@ -71,8 +71,8 @@ Parse one or more uploaded files or files from URLs (text, PDF, CSV, images, etc
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `files` | json | Parsed file data |
| `combinedContent` | string | Combined file content |
| `files` | array | Array of parsed files |
| `combinedContent` | string | Combined content of all parsed files |

View File

@@ -81,14 +81,9 @@ Extract structured content from web pages with comprehensive metadata support. C
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `markdown` | string | Page content markdown |
| `html` | any | Raw HTML content |
| `metadata` | json | Page metadata |
| `data` | json | Search results data |
| `warning` | any | Warning messages |
| `pages` | json | Crawled pages data |
| `total` | number | Total pages found |
| `creditsUsed` | number | Credits consumed |
| `markdown` | string | Page content in markdown format |
| `html` | string | Raw HTML content of the page |
| `metadata` | object | Page metadata including SEO and Open Graph information |
### `firecrawl_search`
@@ -105,14 +100,7 @@ Search for information on the web using Firecrawl
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `markdown` | string | Page content markdown |
| `html` | any | Raw HTML content |
| `metadata` | json | Page metadata |
| `data` | json | Search results data |
| `warning` | any | Warning messages |
| `pages` | json | Crawled pages data |
| `total` | number | Total pages found |
| `creditsUsed` | number | Credits consumed |
| `data` | array | Search results data |
### `firecrawl_crawl`
@@ -131,14 +119,7 @@ Crawl entire websites and extract structured content from all accessible pages
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `markdown` | string | Page content markdown |
| `html` | any | Raw HTML content |
| `metadata` | json | Page metadata |
| `data` | json | Search results data |
| `warning` | any | Warning messages |
| `pages` | json | Crawled pages data |
| `total` | number | Total pages found |
| `creditsUsed` | number | Credits consumed |
| `pages` | array | Array of crawled pages with their content and metadata |

View File

@@ -0,0 +1,32 @@
---
title: Webhook
description: Receive webhooks from any service
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="generic_webhook"
color="#10B981"
icon={true}
iconSvg={`<svg className="block-icon"
fill='currentColor'
viewBox='0 0 24 24'
xmlns='http://www.w3.org/2000/svg'
>
<path d='M17.974 7A4.967 4.967 0 0 0 18 6.5a5.5 5.5 0 1 0-8.672 4.491L7.18 15.114A2.428 2.428 0 0 0 6.496 15 2.5 2.5 0 1 0 9 17.496a2.36 2.36 0 0 0-.93-1.925l2.576-4.943-.41-.241A4.5 4.5 0 1 1 17 6.5a4.8 4.8 0 0 1-.022.452zM6.503 18.999a1.5 1.5 0 1 1 1.496-1.503A1.518 1.518 0 0 1 6.503 19zM18.5 12a5.735 5.735 0 0 0-1.453.157l-2.744-3.941A2.414 2.414 0 0 0 15 6.5a2.544 2.544 0 1 0-1.518 2.284l3.17 4.557.36-.13A4.267 4.267 0 0 1 18.5 13a4.5 4.5 0 1 1-.008 9h-.006a4.684 4.684 0 0 1-3.12-1.355l-.703.71A5.653 5.653 0 0 0 18.49 23h.011a5.5 5.5 0 0 0 0-11zM11 6.5A1.5 1.5 0 1 1 12.5 8 1.509 1.509 0 0 1 11 6.5zM18.5 20a2.5 2.5 0 1 0-2.447-3h-5.05l-.003.497A4.546 4.546 0 0 1 6.5 22 4.526 4.526 0 0 1 2 17.5a4.596 4.596 0 0 1 3.148-4.37l-.296-.954A5.606 5.606 0 0 0 1 17.5 5.532 5.532 0 0 0 6.5 23a5.573 5.573 0 0 0 5.478-5h4.08a2.487 2.487 0 0 0 2.442 2zm0-4a1.5 1.5 0 1 1-1.5 1.5 1.509 1.509 0 0 1 1.5-1.5z' />
<path fill='none' d='M0 0h24v24H0z' />
</svg>`}
/>
## Notes
- Category: `triggers`
- Type: `generic_webhook`

View File

@@ -1,6 +1,6 @@
---
title: GitHub
description: Interact with GitHub
description: Interact with GitHub or trigger workflows from GitHub events
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
@@ -35,7 +35,7 @@ In Sim, the GitHub integration enables your agents to interact directly with Git
## Usage Instructions
Access GitHub repositories, pull requests, and comments through the GitHub API. Automate code reviews, PR management, and repository interactions within your workflow.
Access GitHub repositories, pull requests, and comments through the GitHub API. Automate code reviews, PR management, and repository interactions within your workflow. Trigger workflows from GitHub events like push, pull requests, and issues.
@@ -58,8 +58,8 @@ Fetch PR details including diff and files changed
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Response content |
| `metadata` | json | Response metadata |
| `content` | string | Human-readable PR summary |
| `metadata` | object | Detailed PR metadata including file changes |
### `github_comment`
@@ -85,8 +85,8 @@ Create comments on GitHub PRs
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Response content |
| `metadata` | json | Response metadata |
| `content` | string | Human-readable comment confirmation |
| `metadata` | object | Comment metadata |
### `github_repo_info`
@@ -104,8 +104,8 @@ Retrieve comprehensive GitHub repository metadata including stars, forks, issues
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Response content |
| `metadata` | json | Response metadata |
| `content` | string | Human-readable repository summary |
| `metadata` | object | Repository metadata |
### `github_latest_commit`
@@ -117,15 +117,15 @@ Retrieve the latest commit from a GitHub repository
| --------- | ---- | -------- | ----------- |
| `owner` | string | Yes | Repository owner \(user or organization\) |
| `repo` | string | Yes | Repository name |
| `branch` | string | No | Branch name \(defaults to the repository |
| `branch` | string | No | Branch name \(defaults to the repository's default branch\) |
| `apiKey` | string | Yes | GitHub API token |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Response content |
| `metadata` | json | Response metadata |
| `content` | string | Human-readable commit summary |
| `metadata` | object | Commit metadata |

View File

@@ -1,6 +1,6 @@
---
title: Gmail
description: Send Gmail
description: Send Gmail or trigger workflows from Gmail events
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
@@ -51,7 +51,7 @@ In Sim, the Gmail integration enables your agents to send, read, and search emai
## Usage Instructions
Integrate Gmail functionality to send email messages within your workflow. Automate email communications and process email content using OAuth authentication.
Comprehensive Gmail integration with OAuth authentication. Send email messages, read email content, and trigger workflows from Gmail events like new emails and label changes.
@@ -65,17 +65,18 @@ Send emails using Gmail
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Gmail API |
| `to` | string | Yes | Recipient email address |
| `subject` | string | Yes | Email subject |
| `body` | string | Yes | Email body content |
| `cc` | string | No | CC recipients \(comma-separated\) |
| `bcc` | string | No | BCC recipients \(comma-separated\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Response content |
| `metadata` | json | Email metadata |
| `content` | string | Success message |
| `metadata` | object | Email metadata |
### `gmail_draft`
@@ -85,17 +86,18 @@ Draft emails using Gmail
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Gmail API |
| `to` | string | Yes | Recipient email address |
| `subject` | string | Yes | Email subject |
| `body` | string | Yes | Email body content |
| `cc` | string | No | CC recipients \(comma-separated\) |
| `bcc` | string | No | BCC recipients \(comma-separated\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Response content |
| `metadata` | json | Email metadata |
| `content` | string | Success message |
| `metadata` | object | Draft metadata |
### `gmail_read`
@@ -105,18 +107,19 @@ Read emails from Gmail
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Gmail API |
| `messageId` | string | No | ID of the message to read |
| `folder` | string | No | Folder/label to read emails from |
| `unreadOnly` | boolean | No | Only retrieve unread messages |
| `maxResults` | number | No | Maximum number of messages to retrieve \(default: 1, max: 10\) |
| `includeAttachments` | boolean | No | Download and include email attachments |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Response content |
| `metadata` | json | Email metadata |
| `content` | string | Text content of the email |
| `metadata` | json | Metadata of the email |
| `attachments` | file[] | Attachments of the email |
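As an illustration of the tables above, a hedged TypeScript sketch (placeholder values; the top-level names come from the tables, while the fields inside `metadata` and the attachment element shape are not specified here):

```ts
// Illustrative input/output for `gmail_read`.
const input = {
  accessToken: '<oauth-access-token>',
  folder: 'INBOX',
  unreadOnly: true,
  maxResults: 5,
  includeAttachments: false,
}

const output = {
  content: 'Text content of the email',
  metadata: { id: '<message-id>', subject: 'Hello' }, // assumed metadata fields
  attachments: [] as unknown[], // file[] per the table; element shape not documented here
}
```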
### `gmail_search`
@@ -126,7 +129,6 @@ Search emails in Gmail
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Gmail API |
| `query` | string | Yes | Search query for emails |
| `maxResults` | number | No | Maximum number of results to return |
@@ -134,8 +136,8 @@ Search emails in Gmail
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Response content |
| `metadata` | json | Email metadata |
| `content` | string | Search results summary |
| `metadata` | object | Search metadata |

View File

@@ -104,7 +104,6 @@ Create a new event in Google Calendar
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Google Calendar API |
| `calendarId` | string | No | Calendar ID \(defaults to primary\) |
| `summary` | string | Yes | Event title/summary |
| `description` | string | No | Event description |
@@ -119,8 +118,8 @@ Create a new event in Google Calendar
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Operation response content |
| `metadata` | json | Event metadata |
| `content` | string | Event creation confirmation message |
| `metadata` | json | Created event metadata including ID, status, and details |
### `google_calendar_list`
@@ -130,7 +129,6 @@ List events from Google Calendar
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Google Calendar API |
| `calendarId` | string | No | Calendar ID \(defaults to primary\) |
| `timeMin` | string | No | Lower bound for events \(RFC3339 timestamp, e.g., 2025-06-03T00:00:00Z\) |
| `timeMax` | string | No | Upper bound for events \(RFC3339 timestamp, e.g., 2025-06-04T00:00:00Z\) |
@@ -141,8 +139,8 @@ List events from Google Calendar
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Operation response content |
| `metadata` | json | Event metadata |
| `content` | string | Summary of found events count |
| `metadata` | json | List of events with pagination tokens and event details |
### `google_calendar_get`
@@ -152,7 +150,6 @@ Get a specific event from Google Calendar
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Google Calendar API |
| `calendarId` | string | No | Calendar ID \(defaults to primary\) |
| `eventId` | string | Yes | Event ID to retrieve |
@@ -160,8 +157,8 @@ Get a specific event from Google Calendar
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Operation response content |
| `metadata` | json | Event metadata |
| `content` | string | Event retrieval confirmation message |
| `metadata` | json | Event details including ID, status, times, and attendees |
### `google_calendar_quick_add`
@@ -171,9 +168,8 @@ Create events from natural language text
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Google Calendar API |
| `calendarId` | string | No | Calendar ID \(defaults to primary\) |
| `text` | string | Yes | Natural language text describing the event \(e.g., |
| `text` | string | Yes | Natural language text describing the event \(e.g., "Meeting with John tomorrow at 3pm"\) |
| `attendees` | array | No | Array of attendee email addresses \(comma-separated string also accepted\) |
| `sendUpdates` | string | No | How to send updates to attendees: all, externalOnly, or none |
@@ -181,8 +177,8 @@ Create events from natural language text
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Operation response content |
| `metadata` | json | Event metadata |
| `content` | string | Event creation confirmation message from natural language |
| `metadata` | json | Created event metadata including parsed details |
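For example, a hedged sketch of the shapes above (placeholder values; parameter and output names follow the tables, and the fields inside `metadata` are assumptions based on its description):

```ts
// Illustrative input/output for `google_calendar_quick_add`.
const input = {
  accessToken: '<oauth-access-token>',
  text: 'Meeting with John tomorrow at 3pm',
  attendees: ['john@example.com'],
  sendUpdates: 'all',
}

const output = {
  content: 'Event created from natural language text.',
  metadata: { id: '<event-id>', status: 'confirmed' }, // assumed metadata fields
}
```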
### `google_calendar_invite`
@@ -192,7 +188,6 @@ Invite attendees to an existing Google Calendar event
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Google Calendar API |
| `calendarId` | string | No | Calendar ID \(defaults to primary\) |
| `eventId` | string | Yes | Event ID to invite attendees to |
| `attendees` | array | Yes | Array of attendee email addresses to invite |
@@ -203,8 +198,8 @@ Invite attendees to an existing Google Calendar event
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Operation response content |
| `metadata` | json | Event metadata |
| `content` | string | Attendee invitation confirmation message with email delivery status |
| `metadata` | json | Updated event metadata including attendee list and details |

View File

@@ -95,16 +95,14 @@ Read content from a Google Docs document
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Google Docs API |
| `documentId` | string | Yes | The ID of the document to read |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Document content |
| `metadata` | json | Document metadata |
| `updatedContent` | boolean | Content update status |
| `content` | string | Extracted document text content |
| `metadata` | json | Document metadata including ID, title, and URL |
### `google_docs_write`
@@ -114,7 +112,6 @@ Write or update content in a Google Docs document
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Google Docs API |
| `documentId` | string | Yes | The ID of the document to write to |
| `content` | string | Yes | The content to write to the document |
@@ -122,9 +119,8 @@ Write or update content in a Google Docs document
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Document content |
| `metadata` | json | Document metadata |
| `updatedContent` | boolean | Content update status |
| `updatedContent` | boolean | Indicates if document content was updated successfully |
| `metadata` | json | Updated document metadata including ID, title, and URL |
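A hedged sketch of the shapes above (placeholder values; the field names inside `metadata` are assumptions based on its description):

```ts
// Illustrative input/output for `google_docs_write`.
const input = {
  accessToken: '<oauth-access-token>',
  documentId: '<document-id>',
  content: 'Hello from Sim',
}

const output = {
  updatedContent: true,
  metadata: {
    documentId: '<document-id>', // assumed metadata field names
    title: 'My Doc',
    url: 'https://docs.google.com/document/d/<document-id>',
  },
}
```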
### `google_docs_create`
@@ -134,7 +130,6 @@ Create a new Google Docs document
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Google Docs API |
| `title` | string | Yes | The title of the document to create |
| `content` | string | No | The content of the document to create |
| `folderSelector` | string | No | Select the folder to create the document in |
@@ -144,9 +139,7 @@ Create a new Google Docs document
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Document content |
| `metadata` | json | Document metadata |
| `updatedContent` | boolean | Content update status |
| `metadata` | json | Created document metadata including ID, title, and URL |

View File

@@ -87,7 +87,6 @@ Upload a file to Google Drive
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Google Drive API |
| `fileName` | string | Yes | The name of the file to upload |
| `content` | string | Yes | The content of the file to upload |
| `mimeType` | string | No | The MIME type of the file to upload |
@@ -98,8 +97,7 @@ Upload a file to Google Drive
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | json | File data |
| `files` | json | Files list |
| `file` | json | Uploaded file metadata including ID, name, and links |
### `google_drive_create_folder`
@@ -109,7 +107,6 @@ Create a new folder in Google Drive
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Google Drive API |
| `fileName` | string | Yes | Name of the folder to create |
| `folderSelector` | string | No | Select the parent folder to create the folder in |
| `folderId` | string | No | ID of the parent folder \(internal use\) |
@@ -118,8 +115,7 @@ Create a new folder in Google Drive
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | json | File data |
| `files` | json | Files list |
| `file` | json | Created folder metadata including ID, name, and parent information |
### `google_drive_list`
@@ -129,7 +125,6 @@ List files and folders in Google Drive
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Google Drive API |
| `folderSelector` | string | No | Select the folder to list files from |
| `folderId` | string | No | The ID of the folder to list files from \(internal use\) |
| `query` | string | No | A query to filter the files |
@@ -140,8 +135,7 @@ List files and folders in Google Drive
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | json | File data |
| `files` | json | Files list |
| `files` | json | Array of file metadata objects from the specified folder |
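For illustration, a hedged sketch (placeholder values; the parameter names follow the table above, and the fields on each file object are assumptions):

```ts
// Illustrative input/output for `google_drive_list`.
const input = {
  accessToken: '<oauth-access-token>',
  folderId: '<folder-id>',
  query: "name contains 'report'",
}

const output = {
  files: [
    { id: '<file-id>', name: 'report-2025.pdf', mimeType: 'application/pdf' }, // assumed file fields
  ],
}
```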

View File

@@ -81,8 +81,7 @@ Search the web with the Custom Search API
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `items` | json | Search result items |
| `searchInformation` | json | Search metadata |
| `items` | array | Array of search results from Google |

View File

@@ -110,7 +110,6 @@ Read data from a Google Sheets spreadsheet
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Google Sheets API |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet to read from |
| `range` | string | No | The range of cells to read from |
@@ -118,13 +117,8 @@ Read data from a Google Sheets spreadsheet
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | json | Sheet data |
| `metadata` | json | Operation metadata |
| `updatedRange` | string | Updated range |
| `updatedRows` | number | Updated rows count |
| `updatedColumns` | number | Updated columns count |
| `updatedCells` | number | Updated cells count |
| `tableRange` | string | Table range |
| `data` | json | Sheet data including range and cell values |
| `metadata` | json | Spreadsheet metadata including ID and URL |
### `google_sheets_write`
@@ -134,7 +128,6 @@ Write data to a Google Sheets spreadsheet
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Google Sheets API |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet to write to |
| `range` | string | No | The range of cells to write to |
| `values` | array | Yes | The data to write to the spreadsheet |
@@ -145,13 +138,11 @@ Write data to a Google Sheets spreadsheet
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | json | Sheet data |
| `metadata` | json | Operation metadata |
| `updatedRange` | string | Updated range |
| `updatedRows` | number | Updated rows count |
| `updatedColumns` | number | Updated columns count |
| `updatedCells` | number | Updated cells count |
| `tableRange` | string | Table range |
| `updatedRange` | string | Range of cells that were updated |
| `updatedRows` | number | Number of rows updated |
| `updatedColumns` | number | Number of columns updated |
| `updatedCells` | number | Number of cells updated |
| `metadata` | json | Spreadsheet metadata including ID and URL |
### `google_sheets_update`
@@ -161,7 +152,6 @@ Update data in a Google Sheets spreadsheet
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Google Sheets API |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet to update |
| `range` | string | No | The range of cells to update |
| `values` | array | Yes | The data to update in the spreadsheet |
@@ -172,13 +162,11 @@ Update data in a Google Sheets spreadsheet
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | json | Sheet data |
| `metadata` | json | Operation metadata |
| `updatedRange` | string | Updated range |
| `updatedRows` | number | Updated rows count |
| `updatedColumns` | number | Updated columns count |
| `updatedCells` | number | Updated cells count |
| `tableRange` | string | Table range |
| `updatedRange` | string | Range of cells that were updated |
| `updatedRows` | number | Number of rows updated |
| `updatedColumns` | number | Number of columns updated |
| `updatedCells` | number | Number of cells updated |
| `metadata` | json | Spreadsheet metadata including ID and URL |
### `google_sheets_append`
@@ -188,7 +176,6 @@ Append data to the end of a Google Sheets spreadsheet
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Google Sheets API |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet to append to |
| `range` | string | No | The range of cells to append after |
| `values` | array | Yes | The data to append to the spreadsheet |
@@ -200,13 +187,12 @@ Append data to the end of a Google Sheets spreadsheet
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | json | Sheet data |
| `metadata` | json | Operation metadata |
| `updatedRange` | string | Updated range |
| `updatedRows` | number | Updated rows count |
| `updatedColumns` | number | Updated columns count |
| `updatedCells` | number | Updated cells count |
| `tableRange` | string | Table range |
| `tableRange` | string | Range of the table where data was appended |
| `updatedRange` | string | Range of cells that were updated |
| `updatedRows` | number | Number of rows updated |
| `updatedColumns` | number | Number of columns updated |
| `updatedCells` | number | Number of cells updated |
| `metadata` | json | Spreadsheet metadata including ID and URL |
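To make the shapes above concrete, a hedged TypeScript sketch (placeholder values; all names follow the tables, except the fields inside `metadata`, which are assumptions):

```ts
// Illustrative input/output for `google_sheets_append`.
const input = {
  accessToken: '<oauth-access-token>',
  spreadsheetId: '<spreadsheet-id>',
  range: 'Sheet1!A1',
  values: [['2025-08-21', 'signup', 42]],
}

const output = {
  tableRange: 'Sheet1!A1:C10',
  updatedRange: 'Sheet1!A11:C11',
  updatedRows: 1,
  updatedColumns: 3,
  updatedCells: 3,
  metadata: { spreadsheetId: '<spreadsheet-id>' }, // assumed metadata field
}
```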

View File

@@ -92,9 +92,8 @@ Generate completions using Hugging Face Inference API
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Generated response |
| `model` | string | Model used |
| `usage` | json | Token usage stats |
| `success` | boolean | Operation success status |
| `output` | object | Chat completion results |

View File

@@ -57,7 +57,7 @@ Returns companies matching a set of criteria using Hunter.io AI-powered search.
| --------- | ---- | -------- | ----------- |
| `query` | string | No | Natural language search query for companies |
| `domain` | string | No | Company domain names to filter by |
| `headcount` | string | No | Company size filter \(e.g., |
| `headcount` | string | No | Company size filter \(e.g., "1-10", "11-50"\) |
| `company_type` | string | No | Type of organization |
| `technology` | string | No | Technology used by companies |
| `apiKey` | string | Yes | Hunter.io API Key |
@@ -66,15 +66,7 @@ Returns companies matching a set of criteria using Hunter.io AI-powered search.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `emails` | json | Email addresses found |
| `email` | string | Found email address |
| `score` | number | Confidence score |
| `result` | string | Verification result |
| `status` | string | Status message |
| `total` | number | Total results count |
| `personal_emails` | number | Personal emails count |
| `generic_emails` | number | Generic emails count |
| `results` | array | Array of companies matching the search criteria, each containing domain, name, headcount, technologies, and email_count |
### `hunter_domain_search`
@@ -96,15 +88,26 @@ Returns all the email addresses found using one given domain name, with sources.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `emails` | json | Email addresses found |
| `email` | string | Found email address |
| `score` | number | Confidence score |
| `result` | string | Verification result |
| `status` | string | Status message |
| `total` | number | Total results count |
| `personal_emails` | number | Personal emails count |
| `generic_emails` | number | Generic emails count |
| `domain` | string | The searched domain name |
| `disposable` | boolean | Whether the domain accepts disposable email addresses |
| `webmail` | boolean | Whether the domain is a webmail provider |
| `accept_all` | boolean | Whether the domain accepts all email addresses |
| `pattern` | string | The email pattern used by the organization |
| `organization` | string | The organization name |
| `description` | string | Description of the organization |
| `industry` | string | Industry of the organization |
| `twitter` | string | Twitter profile of the organization |
| `facebook` | string | Facebook profile of the organization |
| `linkedin` | string | LinkedIn profile of the organization |
| `instagram` | string | Instagram profile of the organization |
| `youtube` | string | YouTube channel of the organization |
| `technologies` | array | Array of technologies used by the organization |
| `country` | string | Country where the organization is located |
| `state` | string | State where the organization is located |
| `city` | string | City where the organization is located |
| `postal_code` | string | Postal code of the organization |
| `street` | string | Street address of the organization |
| `emails` | array | Array of email addresses found for the domain, each containing value, type, confidence, sources, and person details |
### `hunter_email_finder`
@@ -115,8 +118,8 @@ Finds the most likely email address for a person given their name and company do
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Company domain name |
| `first_name` | string | Yes | Person |
| `last_name` | string | Yes | Person |
| `first_name` | string | Yes | Person's first name |
| `last_name` | string | Yes | Person's last name |
| `company` | string | No | Company name |
| `apiKey` | string | Yes | Hunter.io API Key |
@@ -124,15 +127,10 @@ Finds the most likely email address for a person given their name and company do
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `emails` | json | Email addresses found |
| `email` | string | Found email address |
| `score` | number | Confidence score |
| `result` | string | Verification result |
| `status` | string | Status message |
| `total` | number | Total results count |
| `personal_emails` | number | Personal emails count |
| `generic_emails` | number | Generic emails count |
| `email` | string | The found email address |
| `score` | number | Confidence score for the found email address |
| `sources` | array | Array of sources where the email was found, each containing domain, uri, extracted_on, last_seen_on, and still_on_page |
| `verification` | object | Verification information containing date and status |
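A hedged sketch of the shapes above (placeholder values; the field names, including those inside `sources` and `verification`, follow the descriptions in the tables):

```ts
// Illustrative input/output for `hunter_email_finder`.
const input = {
  domain: 'example.com',
  first_name: 'Jane',
  last_name: 'Doe',
  company: 'Example Inc',
  apiKey: '<hunter-api-key>',
}

const output = {
  email: 'jane.doe@example.com',
  score: 92,
  sources: [
    {
      domain: 'example.com',
      uri: 'https://example.com/team',
      extracted_on: '2025-01-15',
      last_seen_on: '2025-08-01',
      still_on_page: true,
    },
  ],
  verification: { date: '2025-08-20', status: 'valid' },
}
```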
### `hunter_email_verifier`
@@ -149,15 +147,20 @@ Verifies the deliverability of an email address and provides detailed verificati
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `emails` | json | Email addresses found |
| `email` | string | Found email address |
| `score` | number | Confidence score |
| `result` | string | Verification result |
| `status` | string | Status message |
| `total` | number | Total results count |
| `personal_emails` | number | Personal emails count |
| `generic_emails` | number | Generic emails count |
| `result` | string | Deliverability result: deliverable, undeliverable, or risky |
| `score` | number | Confidence score for the verification result |
| `email` | string | The verified email address |
| `regexp` | boolean | Whether the email follows a valid regex pattern |
| `gibberish` | boolean | Whether the email appears to be gibberish |
| `disposable` | boolean | Whether the email is from a disposable email provider |
| `webmail` | boolean | Whether the email is from a webmail provider |
| `mx_records` | boolean | Whether MX records exist for the domain |
| `smtp_server` | boolean | Whether the SMTP server is reachable |
| `smtp_check` | boolean | Whether the SMTP check was successful |
| `accept_all` | boolean | Whether the domain accepts all email addresses |
| `block` | boolean | Whether the email is blocked |
| `status` | string | Verification status: valid, invalid, accept_all, webmail, disposable, or unknown |
| `sources` | array | Array of sources where the email was found |
### `hunter_companies_find`
@@ -174,15 +177,8 @@ Enriches company data using domain name.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `emails` | json | Email addresses found |
| `email` | string | Found email address |
| `score` | number | Confidence score |
| `result` | string | Verification result |
| `status` | string | Status message |
| `total` | number | Total results count |
| `personal_emails` | number | Personal emails count |
| `generic_emails` | number | Generic emails count |
| `person` | object | Person information \(undefined for companies_find tool\) |
| `company` | object | Company information including name, domain, industry, size, country, linkedin, and twitter |
### `hunter_email_count`
@@ -201,15 +197,11 @@ Returns the total number of email addresses found for a domain or company.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `emails` | json | Email addresses found |
| `email` | string | Found email address |
| `score` | number | Confidence score |
| `result` | string | Verification result |
| `status` | string | Status message |
| `total` | number | Total results count |
| `personal_emails` | number | Personal emails count |
| `generic_emails` | number | Generic emails count |
| `total` | number | Total number of email addresses found |
| `personal_emails` | number | Number of personal email addresses found |
| `generic_emails` | number | Number of generic email addresses found |
| `department` | object | Breakdown of email addresses by department \(executive, it, finance, management, sales, legal, support, hr, marketing, communication\) |
| `seniority` | object | Breakdown of email addresses by seniority level \(junior, senior, executive\) |

View File

@@ -73,9 +73,8 @@ Generate images using OpenAI
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Generation response |
| `image` | string | Generated image URL |
| `metadata` | json | Generation metadata |
| `success` | boolean | Operation success status |
| `output` | object | Generated image data |

View File

@@ -87,7 +87,7 @@ Extract and process web content into clean, LLM-friendly text using Jina AI Read
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Extracted content |
| `content` | string | The extracted content from the URL, processed into clean, LLM-friendly text |

View File

@@ -57,7 +57,6 @@ Retrieve detailed information about a specific Jira issue
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token for Jira |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `projectId` | string | No | Jira project ID to retrieve issues from. If not provided, all issues will be retrieved. |
| `issueKey` | string | Yes | Jira issue key to retrieve \(e.g., PROJ-123\) |
@@ -67,14 +66,8 @@ Retrieve detailed information about a specific Jira issue
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp |
| `issueKey` | string | Issue key |
| `summary` | string | Issue summary |
| `description` | string | Issue description |
| `created` | string | Creation date |
| `updated` | string | Update date |
| `success` | boolean | Operation success |
| `url` | string | Issue URL |
| `success` | boolean | Operation success status |
| `output` | object | Jira issue details with issue key, summary, description, created and updated timestamps |
### `jira_update`
@@ -84,7 +77,6 @@ Update a Jira issue
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token for Jira |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `projectId` | string | No | Jira project ID to update issues in. If not provided, all issues will be retrieved. |
| `issueKey` | string | Yes | Jira issue key to update |
@@ -99,14 +91,8 @@ Update a Jira issue
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp |
| `issueKey` | string | Issue key |
| `summary` | string | Issue summary |
| `description` | string | Issue description |
| `created` | string | Creation date |
| `updated` | string | Update date |
| `success` | boolean | Operation success |
| `url` | string | Issue URL |
| `success` | boolean | Operation success status |
| `output` | object | Updated Jira issue details with timestamp, issue key, summary, and success status |
### `jira_write`
@@ -116,7 +102,6 @@ Write a Jira issue
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token for Jira |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `projectId` | string | Yes | Project ID for the issue |
| `summary` | string | Yes | Summary for the issue |
@@ -130,14 +115,8 @@ Write a Jira issue
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp |
| `issueKey` | string | Issue key |
| `summary` | string | Issue summary |
| `description` | string | Issue description |
| `created` | string | Creation date |
| `updated` | string | Update date |
| `success` | boolean | Operation success |
| `url` | string | Issue URL |
| `success` | boolean | Operation success status |
| `output` | object | Created Jira issue details with timestamp, issue key, summary, success status, and URL |
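For illustration, a hedged sketch (placeholder values; only the parameters shown above are included, and the field names inside `output` are paraphrased from its description rather than documented here):

```ts
// Illustrative input/output for `jira_write`.
const input = {
  accessToken: '<oauth-access-token>',
  projectId: '<project-id>',
  summary: 'Fix login redirect loop',
}

const output = {
  success: true,
  output: {
    ts: '2025-08-21T00:00:00.000Z', // assumed field names
    issueKey: 'PROJ-123',
    summary: 'Fix login redirect loop',
    success: true,
    url: 'https://yourcompany.atlassian.net/browse/PROJ-123',
  },
}
```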
### `jira_bulk_read`
@@ -147,7 +126,6 @@ Retrieve multiple Jira issues in bulk
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token for Jira |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `projectId` | string | Yes | Jira project ID |
| `cloudId` | string | No | Jira cloud ID |
@@ -156,14 +134,8 @@ Retrieve multiple Jira issues in bulk
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp |
| `issueKey` | string | Issue key |
| `summary` | string | Issue summary |
| `description` | string | Issue description |
| `created` | string | Creation date |
| `updated` | string | Update date |
| `success` | boolean | Operation success |
| `url` | string | Issue URL |
| `success` | boolean | Operation success status |
| `output` | array | Array of Jira issues with summary, description, created and updated timestamps |

View File

@@ -72,9 +72,7 @@ Search for similar content in a knowledge base using vector similarity
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `query` | string | Query used |
| `totalResults` | number | Total results count |
| `results` | array | Array of search results from the knowledge base |
### `knowledge_upload_chunk`
@@ -92,9 +90,7 @@ Upload a new chunk to a document in a knowledge base
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `query` | string | Query used |
| `totalResults` | number | Total results count |
| `data` | object | Information about the uploaded chunk |
### `knowledge_create_document`
@@ -120,9 +116,7 @@ Create a new document in a knowledge base
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results |
| `query` | string | Query used |
| `totalResults` | number | Total results count |
| `data` | object | Information about the created document |

View File

@@ -63,8 +63,7 @@ Fetch and filter issues from Linear
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `issues` | json | Issues list |
| `issue` | json | Single issue data |
| `issues` | array | Array of issues from the specified Linear team and project, each containing id, title, description, state, teamId, and projectId |
### `linear_create_issue`
@@ -83,8 +82,7 @@ Create a new issue in Linear
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `issues` | json | Issues list |
| `issue` | json | Single issue data |
| `issue` | object | The created issue containing id, title, description, state, teamId, and projectId |
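A hedged sketch of the output shape (placeholder values; the listed fields come from the description above):

```ts
// Illustrative output for `linear_create_issue`.
const output = {
  issue: {
    id: '<issue-id>',
    title: 'Fix onboarding copy',
    description: 'Update the welcome email wording.',
    state: 'Todo',
    teamId: '<team-id>',
    projectId: '<project-id>',
  },
}
```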

View File

@@ -58,16 +58,16 @@ Search the web for information using Linkup
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `q` | string | Yes | The search query |
| `depth` | string | Yes | Search depth \(has to either be |
| `outputType` | string | Yes | Type of output to return \(has to either be |
| `depth` | string | Yes | Search depth \(has to either be "standard" or "deep"\) |
| `outputType` | string | Yes | Type of output to return \(has to either be "sourcedAnswer" or "searchResults"\) |
| `apiKey` | string | Yes | Enter your Linkup API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `answer` | string | Generated answer |
| `sources` | json | Source references |
| `answer` | string | The sourced answer to the search query |
| `sources` | array | Array of sources used to compile the answer, each containing name, url, and snippet |

View File

@@ -66,9 +66,8 @@ Add memories to Mem0 for persistent storage and retrieval
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ids` | any | Memory identifiers |
| `memories` | any | Memory data |
| `searchResults` | any | Search results |
| `ids` | array | Array of memory IDs that were created |
| `memories` | array | Array of memory objects that were created |
### `mem0_search_memories`
@@ -87,9 +86,8 @@ Search for memories in Mem0 using semantic search
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ids` | any | Memory identifiers |
| `memories` | any | Memory data |
| `searchResults` | any | Search results |
| `searchResults` | array | Array of search results with memory data, each containing id, data, and score |
| `ids` | array | Array of memory IDs found in the search results |
### `mem0_get_memories`
@@ -110,9 +108,8 @@ Retrieve memories from Mem0 by ID or filter criteria
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ids` | any | Memory identifiers |
| `memories` | any | Memory data |
| `searchResults` | any | Search results |
| `memories` | array | Array of retrieved memory objects |
| `ids` | array | Array of memory IDs that were retrieved |
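For illustration, a hedged sketch of the output shape (placeholder values; the per-memory fields are assumptions based on the search-result description above):

```ts
// Illustrative output for `mem0_get_memories`.
const output = {
  memories: [
    { id: '<memory-id>', data: 'User prefers dark mode.' }, // assumed memory fields
  ],
  ids: ['<memory-id>'],
}
```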

View File

@@ -57,8 +57,9 @@ Add a new memory to the database or append to existing memory with the same ID.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `memories` | any | Memory data |
| `id` | string | Memory identifier |
| `success` | boolean | Whether the memory was added successfully |
| `memories` | array | Array of memory objects including the new or updated memory |
| `error` | string | Error message if operation failed |
### `memory_get`
@@ -74,8 +75,10 @@ Retrieve a specific memory by its ID
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `memories` | any | Memory data |
| `id` | string | Memory identifier |
| `success` | boolean | Whether the memory was retrieved successfully |
| `memories` | array | Array of memory data for the requested ID |
| `message` | string | Success or error message |
| `error` | string | Error message if operation failed |
### `memory_get_all`
@@ -90,8 +93,10 @@ Retrieve all memories from the database
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `memories` | any | Memory data |
| `id` | string | Memory identifier |
| `success` | boolean | Whether all memories were retrieved successfully |
| `memories` | array | Array of all memory objects with keys, types, and data |
| `message` | string | Success or error message |
| `error` | string | Error message if operation failed |
### `memory_delete`
@@ -107,8 +112,9 @@ Delete a specific memory by its ID
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `memories` | any | Memory data |
| `id` | string | Memory identifier |
| `success` | boolean | Whether the memory was deleted successfully |
| `message` | string | Success or error message |
| `error` | string | Error message if operation failed |
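
The memory block tools share a common result pattern; the sketch below combines the four tables, with optional fields marking values that only some operations return. The `MemoryRecord` fields echo the "keys, types, and data" wording and are otherwise assumptions.

```ts
// Combined result shape for memory_add / memory_get / memory_get_all / memory_delete (assumed union).
interface MemoryRecord {
  key?: string
  type?: string
  data?: unknown
}

interface MemoryToolResult {
  success: boolean          // whether the operation succeeded
  memories?: MemoryRecord[] // returned by add, get, and get_all
  message?: string          // success or error message (get, get_all, delete)
  error?: string            // error message if the operation failed
}
```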

View File

@@ -11,6 +11,7 @@
"exa",
"file",
"firecrawl",
"generic_webhook",
"github",
"gmail",
"google_calendar",

View File

@@ -108,7 +108,6 @@ Read data from a Microsoft Excel spreadsheet
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Microsoft Excel API |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet to read from |
| `range` | string | No | The range of cells to read from |
@@ -116,14 +115,7 @@ Read data from a Microsoft Excel spreadsheet
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | json | Sheet data |
| `metadata` | json | Operation metadata |
| `updatedRange` | string | Updated range |
| `updatedRows` | number | Updated rows count |
| `updatedColumns` | number | Updated columns count |
| `updatedCells` | number | Updated cells count |
| `index` | number | Row index |
| `values` | json | Table values |
| `data` | object | Range data from the spreadsheet |
### `microsoft_excel_write`
@@ -133,7 +125,6 @@ Write data to a Microsoft Excel spreadsheet
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Microsoft Excel API |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet to write to |
| `range` | string | No | The range of cells to write to |
| `values` | array | Yes | The data to write to the spreadsheet |
@@ -144,14 +135,11 @@ Write data to a Microsoft Excel spreadsheet
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | json | Sheet data |
| `metadata` | json | Operation metadata |
| `updatedRange` | string | Updated range |
| `updatedRows` | number | Updated rows count |
| `updatedColumns` | number | Updated columns count |
| `updatedCells` | number | Updated cells count |
| `index` | number | Row index |
| `values` | json | Table values |
| `updatedRange` | string | The range that was updated |
| `updatedRows` | number | Number of rows that were updated |
| `updatedColumns` | number | Number of columns that were updated |
| `updatedCells` | number | Number of cells that were updated |
| `metadata` | object | Spreadsheet metadata |
### `microsoft_excel_table_add`
@@ -161,7 +149,6 @@ Add new rows to a Microsoft Excel table
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Microsoft Excel API |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet containing the table |
| `tableName` | string | Yes | The name of the table to add rows to |
| `values` | array | Yes | The data to add to the table \(array of arrays or array of objects\) |
@@ -170,14 +157,9 @@ Add new rows to a Microsoft Excel table
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | json | Sheet data |
| `metadata` | json | Operation metadata |
| `updatedRange` | string | Updated range |
| `updatedRows` | number | Updated rows count |
| `updatedColumns` | number | Updated columns count |
| `updatedCells` | number | Updated cells count |
| `index` | number | Row index |
| `values` | json | Table values |
| `index` | number | Index of the first row that was added |
| `values` | array | Array of rows that were added to the table |
| `metadata` | object | Spreadsheet metadata |
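
For reference, a TypeScript sketch of the trimmed Excel outputs; the contents of `metadata` and of the range data are not specified by the tables, so they are left open.

```ts
// Hypothetical output shapes for the Microsoft Excel tools above.
interface ExcelReadOutput {
  data: Record<string, unknown> // range data from the spreadsheet
}

interface ExcelWriteOutput {
  updatedRange: string
  updatedRows: number
  updatedColumns: number
  updatedCells: number
  metadata: Record<string, unknown> // spreadsheet metadata
}

interface ExcelTableAddOutput {
  index: number       // index of the first row that was added
  values: unknown[][] // rows that were added to the table
  metadata: Record<string, unknown>
}
```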

View File

@@ -136,7 +136,6 @@ Read tasks from Microsoft Planner - get all user tasks or all tasks from a speci
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Microsoft Planner API |
| `planId` | string | No | The ID of the plan to get tasks from \(if not provided, gets all user tasks\) |
| `taskId` | string | No | The ID of the task to get |
@@ -144,8 +143,9 @@ Read tasks from Microsoft Planner - get all user tasks or all tasks from a speci
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `task` | json | The Microsoft Planner task object, including details such as id, title, description, status, due date, and assignees. |
| `metadata` | json | Additional metadata about the operation, such as timestamps, request status, or other relevant information. |
| `success` | boolean | Whether tasks were retrieved successfully |
| `tasks` | array | Array of task objects with filtered properties |
| `metadata` | object | Metadata including planId, userId, and planUrl |
### `microsoft_planner_create_task`
@@ -155,7 +155,6 @@ Create a new task in Microsoft Planner
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Microsoft Planner API |
| `planId` | string | Yes | The ID of the plan where the task will be created |
| `title` | string | Yes | The title of the task |
| `description` | string | No | The description of the task |
@@ -167,8 +166,9 @@ Create a new task in Microsoft Planner
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `task` | json | The Microsoft Planner task object, including details such as id, title, description, status, due date, and assignees. |
| `metadata` | json | Additional metadata about the operation, such as timestamps, request status, or other relevant information. |
| `success` | boolean | Whether the task was created successfully |
| `task` | object | The created task object with all properties |
| `metadata` | object | Metadata including planId, taskId, and taskUrl |
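
A small TypeScript sketch of the updated Planner results; only the metadata keys named in the tables are listed, and they are marked optional because the exact payload is not shown here.

```ts
// Hypothetical shapes for microsoft_planner_read_task and microsoft_planner_create_task results.
interface PlannerReadTasksOutput {
  success: boolean
  tasks: Array<Record<string, unknown>> // task objects with filtered properties
  metadata: { planId?: string; userId?: string; planUrl?: string }
}

interface PlannerCreateTaskOutput {
  success: boolean
  task: Record<string, unknown>         // the created task object
  metadata: { planId?: string; taskId?: string; taskUrl?: string }
}
```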

View File

@@ -112,16 +112,19 @@ Read content from a Microsoft Teams chat
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Microsoft Teams API |
| `chatId` | string | Yes | The ID of the chat to read from |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Message content |
| `metadata` | json | Message metadata |
| `updatedContent` | boolean | Content update status |
| `success` | boolean | Teams chat read operation success status |
| `messageCount` | number | Number of messages retrieved from chat |
| `chatId` | string | ID of the chat that was read from |
| `messages` | array | Array of chat message objects |
| `attachmentCount` | number | Total number of attachments found |
| `attachmentTypes` | array | Types of attachments found |
| `content` | string | Formatted content of chat messages |
### `microsoft_teams_write_chat`
@@ -131,7 +134,6 @@ Write or update content in a Microsoft Teams chat
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Microsoft Teams API |
| `chatId` | string | Yes | The ID of the chat to write to |
| `content` | string | Yes | The content to write to the message |
@@ -139,9 +141,12 @@ Write or update content in a Microsoft Teams chat
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Message content |
| `metadata` | json | Message metadata |
| `updatedContent` | boolean | Content update status |
| `success` | boolean | Teams chat message send success status |
| `messageId` | string | Unique identifier for the sent message |
| `chatId` | string | ID of the chat where message was sent |
| `createdTime` | string | Timestamp when message was created |
| `url` | string | Web URL to the message |
| `updatedContent` | boolean | Whether content was successfully updated |
### `microsoft_teams_read_channel`
@@ -151,7 +156,6 @@ Read content from a Microsoft Teams channel
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Microsoft Teams API |
| `teamId` | string | Yes | The ID of the team to read from |
| `channelId` | string | Yes | The ID of the channel to read from |
@@ -159,9 +163,14 @@ Read content from a Microsoft Teams channel
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Message content |
| `metadata` | json | Message metadata |
| `updatedContent` | boolean | Content update status |
| `success` | boolean | Teams channel read operation success status |
| `messageCount` | number | Number of messages retrieved from channel |
| `teamId` | string | ID of the team that was read from |
| `channelId` | string | ID of the channel that was read from |
| `messages` | array | Array of channel message objects |
| `attachmentCount` | number | Total number of attachments found |
| `attachmentTypes` | array | Types of attachments found |
| `content` | string | Formatted content of channel messages |
### `microsoft_teams_write_channel`
@@ -171,7 +180,6 @@ Write or send a message to a Microsoft Teams channel
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Microsoft Teams API |
| `teamId` | string | Yes | The ID of the team to write to |
| `channelId` | string | Yes | The ID of the channel to write to |
| `content` | string | Yes | The content to write to the channel |
@@ -180,9 +188,13 @@ Write or send a message to a Microsoft Teams channel
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Message content |
| `metadata` | json | Message metadata |
| `updatedContent` | boolean | Content update status |
| `success` | boolean | Teams channel message send success status |
| `messageId` | string | Unique identifier for the sent message |
| `teamId` | string | ID of the team where message was sent |
| `channelId` | string | ID of the channel where message was sent |
| `createdTime` | string | Timestamp when message was created |
| `url` | string | Web URL to the message |
| `updatedContent` | boolean | Whether content was successfully updated |
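
The Teams channel tools now expose separate fields rather than a single metadata blob; a TypeScript sketch follows, with message objects left untyped and attachment types assumed to be strings.

```ts
// Hypothetical shapes for the Microsoft Teams channel tools above.
interface TeamsReadChannelOutput {
  success: boolean
  messageCount: number
  teamId: string
  channelId: string
  messages: Array<Record<string, unknown>> // channel message objects
  attachmentCount: number
  attachmentTypes: string[]                // types of attachments found
  content: string                          // formatted content of channel messages
}

interface TeamsWriteChannelOutput {
  success: boolean
  messageId: string
  teamId: string
  channelId: string
  createdTime: string   // timestamp when the message was created
  url: string           // web URL to the message
  updatedContent: boolean
}
```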

View File

@@ -106,8 +106,9 @@ Parse PDF documents using Mistral OCR API
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Extracted content |
| `metadata` | json | Processing metadata |
| `success` | boolean | Whether the PDF was parsed successfully |
| `content` | string | Extracted content in the requested format \(markdown, text, or JSON\) |
| `metadata` | object | Processing metadata including jobId, fileType, pageCount, and usage info |
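
A brief sketch of the Mistral OCR parse result; the metadata keys come from the description above and are marked optional since the full payload is not documented here.

```ts
// Hypothetical shape for the mistral_parser output.
interface MistralParseOutput {
  success: boolean
  content: string // extracted content in markdown, text, or JSON form
  metadata: {
    jobId?: string
    fileType?: string
    pageCount?: number
    usage?: Record<string, unknown>
  }
}
```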

View File

@@ -59,15 +59,14 @@ Read content from a Notion page
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Notion OAuth access token |
| `pageId` | string | Yes | The ID of the Notion page to read |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Page content |
| `metadata` | any | Page metadata |
| `content` | string | Page content in markdown format with headers, paragraphs, lists, and todos |
| `metadata` | object | Page metadata including title, URL, and timestamps |
### `notion_read_database`
@@ -77,15 +76,14 @@ Read database information and structure from Notion
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Notion OAuth access token |
| `databaseId` | string | Yes | The ID of the Notion database to read |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Page content |
| `metadata` | any | Page metadata |
| `content` | string | Database information including title, properties schema, and metadata |
| `metadata` | object | Database metadata including title, ID, URL, timestamps, and properties schema |
### `notion_write`
@@ -95,7 +93,6 @@ Append content to a Notion page
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Notion OAuth access token |
| `pageId` | string | Yes | The ID of the Notion page to append content to |
| `content` | string | Yes | The content to append to the page |
@@ -103,8 +100,7 @@ Append content to a Notion page
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Page content |
| `metadata` | any | Page metadata |
| `content` | string | Success message confirming content was appended to page |
### `notion_create_page`
@@ -114,7 +110,6 @@ Create a new page in Notion
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Notion OAuth access token |
| `parentId` | string | Yes | ID of the parent page |
| `title` | string | No | Title of the new page |
| `content` | string | No | Optional content to add to the page upon creation |
@@ -123,8 +118,8 @@ Create a new page in Notion
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Page content |
| `metadata` | any | Page metadata |
| `content` | string | Success message confirming page creation |
| `metadata` | object | Page metadata including title, page ID, URL, and timestamps |
### `notion_query_database`
@@ -134,7 +129,6 @@ Query and filter Notion database entries with advanced filtering
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Notion OAuth access token |
| `databaseId` | string | Yes | The ID of the database to query |
| `filter` | string | No | Filter conditions as JSON \(optional\) |
| `sorts` | string | No | Sort criteria as JSON array \(optional\) |
@@ -144,8 +138,8 @@ Query and filter Notion database entries with advanced filtering
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Page content |
| `metadata` | any | Page metadata |
| `content` | string | Formatted list of database entries with their properties |
| `metadata` | object | Query metadata including total results count, pagination info, and raw results array |
### `notion_search`
@@ -155,7 +149,6 @@ Search across all pages and databases in Notion workspace
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Notion OAuth access token |
| `query` | string | No | Search terms \(leave empty to get all pages\) |
| `filterType` | string | No | Filter by object type: page, database, or leave empty for all |
| `pageSize` | number | No | Number of results to return \(default: 100, max: 100\) |
@@ -164,8 +157,8 @@ Search across all pages and databases in Notion workspace
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Page content |
| `metadata` | any | Page metadata |
| `content` | string | Formatted list of search results including pages and databases |
| `metadata` | object | Search metadata including total results count, pagination info, and raw results array |
### `notion_create_database`
@@ -175,17 +168,16 @@ Create a new database in Notion with custom properties
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Notion OAuth access token |
| `parentId` | string | Yes | ID of the parent page where the database will be created |
| `title` | string | Yes | Title for the new database |
| `properties` | string | No | Database properties as JSON object \(optional, will create a default |
| `properties` | string | No | Database properties as JSON object \(optional, will create a default "Name" property if empty\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Page content |
| `metadata` | any | Page metadata |
| `content` | string | Success message with database details and properties list |
| `metadata` | object | Database metadata including ID, title, URL, creation time, and properties schema |
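
All Notion tools above return a `content` string plus, for most operations, a `metadata` object; here is a minimal TypeScript sketch together with the query-database input as an example. Names follow the tables; everything else is illustrative.

```ts
// Hypothetical shapes for the Notion tools documented above.
interface NotionToolOutput {
  content: string                    // formatted content or success message
  metadata?: Record<string, unknown> // page/database/query metadata (not returned by notion_write)
}

interface NotionQueryDatabaseInput {
  accessToken: string
  databaseId: string
  filter?: string // filter conditions as a JSON string
  sorts?: string  // sort criteria as a JSON array string
}
```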

View File

@@ -65,7 +65,6 @@ Upload a file to OneDrive
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the OneDrive API |
| `fileName` | string | Yes | The name of the file to upload |
| `content` | string | Yes | The content of the file to upload |
| `folderSelector` | string | No | Select the folder to upload the file to |
@@ -75,8 +74,8 @@ Upload a file to OneDrive
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | json | The OneDrive file object, including details such as id, name, size, and more. |
| `files` | json | An array of OneDrive file objects, each containing details such as id, name, size, and more. |
| `success` | boolean | Whether the file was uploaded successfully |
| `file` | object | The uploaded file object with metadata including id, name, webViewLink, webContentLink, and timestamps |
### `onedrive_create_folder`
@@ -86,7 +85,6 @@ Create a new folder in OneDrive
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the OneDrive API |
| `folderName` | string | Yes | Name of the folder to create |
| `folderSelector` | string | No | Select the parent folder to create the folder in |
| `folderId` | string | No | ID of the parent folder \(internal use\) |
@@ -95,8 +93,8 @@ Create a new folder in OneDrive
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | json | The OneDrive file object, including details such as id, name, size, and more. |
| `files` | json | An array of OneDrive file objects, each containing details such as id, name, size, and more. |
| `success` | boolean | Whether the folder was created successfully |
| `file` | object | The created folder object with metadata including id, name, webViewLink, and timestamps |
### `onedrive_list`
@@ -106,7 +104,6 @@ List files and folders in OneDrive
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the OneDrive API |
| `folderSelector` | string | No | Select the folder to list files from |
| `folderId` | string | No | The ID of the folder to list files from \(internal use\) |
| `query` | string | No | A query to filter the files |
@@ -116,8 +113,9 @@ List files and folders in OneDrive
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | json | The OneDrive file object, including details such as id, name, size, and more. |
| `files` | json | An array of OneDrive file objects, each containing details such as id, name, size, and more. |
| `success` | boolean | Whether files were listed successfully |
| `files` | array | Array of file and folder objects with metadata |
| `nextPageToken` | string | Token for retrieving the next page of results \(optional\) |
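
A sketch of the OneDrive result shapes; the file fields beyond `id` and `name` follow the descriptions above, but their exact property names (e.g. `webViewLink`, the timestamp fields) are assumptions.

```ts
// Hypothetical OneDrive result shapes based on the tables above.
interface OneDriveFile {
  id: string
  name: string
  webViewLink?: string          // assumed name for the view link
  webContentLink?: string       // assumed name for the download link
  createdDateTime?: string      // assumed timestamp field
  lastModifiedDateTime?: string // assumed timestamp field
}

interface OneDriveUploadOutput {
  success: boolean
  file: OneDriveFile       // the uploaded file with its metadata
}

interface OneDriveListOutput {
  success: boolean
  files: OneDriveFile[]    // files and folders with metadata
  nextPageToken?: string   // present when more results are available
}
```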

View File

@@ -66,9 +66,8 @@ Generate embeddings from text using OpenAI
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `embeddings` | json | Generated embeddings |
| `model` | string | Model used |
| `usage` | json | Token usage |
| `success` | boolean | Operation success status |
| `output` | object | Embeddings generation results |

View File

@@ -154,7 +154,6 @@ Send emails using Outlook
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Outlook API |
| `to` | string | Yes | Recipient email address |
| `subject` | string | Yes | Email subject |
| `body` | string | Yes | Email body content |
@@ -167,8 +166,10 @@ Send emails using Outlook
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Response message |
| `results` | json | Email results |
| `success` | boolean | Email send success status |
| `status` | string | Delivery status of the email |
| `timestamp` | string | Timestamp when email was sent |
| `message` | string | Success or error message |
### `outlook_draft`
@@ -178,17 +179,22 @@ Draft emails using Outlook
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Outlook API |
| `to` | string | Yes | Recipient email address |
| `subject` | string | Yes | Email subject |
| `body` | string | Yes | Email body content |
| `cc` | string | No | CC recipients \(comma-separated\) |
| `bcc` | string | No | BCC recipients \(comma-separated\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Response message |
| `results` | json | Email results |
| `success` | boolean | Email draft creation success status |
| `messageId` | string | Unique identifier for the drafted email |
| `status` | string | Draft status of the email |
| `subject` | string | Subject of the drafted email |
| `timestamp` | string | Timestamp when draft was created |
| `message` | string | Success or error message |
### `outlook_read`
@@ -198,7 +204,6 @@ Read emails from Outlook
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | OAuth access token for Outlook |
| `folder` | string | No | Folder ID to read emails from \(default: Inbox\) |
| `maxResults` | number | No | Maximum number of emails to retrieve \(default: 1, max: 10\) |
@@ -206,8 +211,10 @@ Read emails from Outlook
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Response message |
| `results` | json | Email results |
| `success` | boolean | Email read operation success status |
| `messageCount` | number | Number of emails retrieved |
| `messages` | array | Array of email message objects |
| `message` | string | Success or status message |
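
The Outlook results follow a common pattern of a success flag plus operation-specific fields; a TypeScript sketch, with email message objects left untyped:

```ts
// Hypothetical shapes for the Outlook send, draft, and read results.
interface OutlookSendOutput {
  success: boolean
  status: string    // delivery status of the email
  timestamp: string // when the email was sent
  message: string   // success or error message
}

interface OutlookDraftOutput {
  success: boolean
  messageId: string // identifier of the drafted email
  status: string    // draft status
  subject: string
  timestamp: string
  message: string
}

interface OutlookReadOutput {
  success: boolean
  messageCount: number
  messages: Array<Record<string, unknown>> // email message objects
  message: string
}
```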

View File

@@ -62,9 +62,8 @@ Generate completions using Perplexity AI chat models
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Generated response |
| `model` | string | Model used |
| `usage` | json | Token usage |
| `success` | boolean | Operation success status |
| `output` | object | Chat completion results |

View File

@@ -67,12 +67,10 @@ Generate embeddings from text using Pinecone
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `matches` | any | Search matches |
| `upsertedCount` | any | Upserted count |
| `data` | any | Response data |
| `model` | any | Model information |
| `vector_type` | any | Vector type |
| `usage` | any | Usage statistics |
| `data` | array | Generated embeddings data with values and vector type |
| `model` | string | Model used for generating embeddings |
| `vector_type` | string | Type of vector generated \(dense/sparse\) |
| `usage` | object | Usage statistics for embeddings generation |
### `pinecone_upsert_text`
@@ -91,12 +89,8 @@ Insert or update text records in a Pinecone index
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `matches` | any | Search matches |
| `upsertedCount` | any | Upserted count |
| `data` | any | Response data |
| `model` | any | Model information |
| `vector_type` | any | Vector type |
| `usage` | any | Usage statistics |
| `statusText` | string | Status of the upsert operation |
| `upsertedCount` | number | Number of records successfully upserted |
### `pinecone_search_text`
@@ -119,12 +113,7 @@ Search for similar text in a Pinecone index
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `matches` | any | Search matches |
| `upsertedCount` | any | Upserted count |
| `data` | any | Response data |
| `model` | any | Model information |
| `vector_type` | any | Vector type |
| `usage` | any | Usage statistics |
| `matches` | array | Search results with ID, score, and metadata |
### `pinecone_search_vector`
@@ -147,12 +136,8 @@ Search for similar vectors in a Pinecone index
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `matches` | any | Search matches |
| `upsertedCount` | any | Upserted count |
| `data` | any | Response data |
| `model` | any | Model information |
| `vector_type` | any | Vector type |
| `usage` | any | Usage statistics |
| `matches` | array | Vector search results with ID, score, values, and metadata |
| `namespace` | string | Namespace where the search was performed |
### `pinecone_fetch`
@@ -171,12 +156,7 @@ Fetch vectors by ID from a Pinecone index
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `matches` | any | Search matches |
| `upsertedCount` | any | Upserted count |
| `data` | any | Response data |
| `model` | any | Model information |
| `vector_type` | any | Vector type |
| `usage` | any | Usage statistics |
| `matches` | array | Fetched vectors with ID, values, metadata, and score |
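
A compact TypeScript sketch of the Pinecone tool outputs after the cleanup above; a single `PineconeMatch` type covers text search, vector search, and fetch, with optional fields where the tables differ.

```ts
// Hypothetical shapes for the Pinecone tools documented above.
interface PineconeMatch {
  id: string
  score?: number                     // similarity score (search results)
  values?: number[]                  // vector values (vector search / fetch)
  metadata?: Record<string, unknown>
}

interface PineconeUpsertTextOutput {
  statusText: string    // status of the upsert operation
  upsertedCount: number // number of records upserted
}

interface PineconeSearchOutput {
  matches: PineconeMatch[]
  namespace?: string    // returned by vector search
}
```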

View File

@@ -126,10 +126,8 @@ Insert or update points in a Qdrant collection
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `matches` | any | Search matches |
| `upsertedCount` | any | Upserted count |
| `data` | any | Response data |
| `status` | any | Operation status |
| `status` | string | Status of the upsert operation |
| `data` | object | Result data from the upsert operation |
### `qdrant_search_vector`
@@ -152,10 +150,8 @@ Search for similar vectors in a Qdrant collection
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `matches` | any | Search matches |
| `upsertedCount` | any | Upserted count |
| `data` | any | Response data |
| `status` | any | Operation status |
| `data` | array | Vector search results with ID, score, payload, and optional vector data |
| `status` | string | Status of the search operation |
### `qdrant_fetch_points`
@@ -176,10 +172,8 @@ Fetch points by ID from a Qdrant collection
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `matches` | any | Search matches |
| `upsertedCount` | any | Upserted count |
| `data` | any | Response data |
| `status` | any | Operation status |
| `data` | array | Fetched points with ID, payload, and optional vector data |
| `status` | string | Status of the fetch operation |

View File

@@ -53,20 +53,17 @@ Fetch posts from a subreddit with different sorting options
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Reddit API |
| `subreddit` | string | Yes | The name of the subreddit to fetch posts from \(without the r/ prefix\) |
| `sort` | string | No | Sort method for posts: |
| `sort` | string | No | Sort method for posts: "hot", "new", "top", or "rising" \(default: "hot"\) |
| `limit` | number | No | Maximum number of posts to return \(default: 10, max: 100\) |
| `time` | string | No | Time filter for |
| `time` | string | No | Time filter for "top" sorted posts: "day", "week", "month", "year", or "all" \(default: "day"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `subreddit` | string | Subreddit name |
| `posts` | json | Posts data |
| `post` | json | Single post data |
| `comments` | json | Comments data |
| `subreddit` | string | Name of the subreddit where posts were fetched from |
| `posts` | array | Array of posts with title, author, URL, score, comments count, and metadata |
### `reddit_get_comments`
@@ -76,20 +73,16 @@ Fetch comments from a specific Reddit post
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | Access token for Reddit API |
| `postId` | string | Yes | The ID of the Reddit post to fetch comments from |
| `subreddit` | string | Yes | The subreddit where the post is located \(without the r/ prefix\) |
| `sort` | string | No | Sort method for comments: |
| `sort` | string | No | Sort method for comments: "confidence", "top", "new", "controversial", "old", "random", "qa" \(default: "confidence"\) |
| `limit` | number | No | Maximum number of comments to return \(default: 50, max: 100\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `subreddit` | string | Subreddit name |
| `posts` | json | Posts data |
| `post` | json | Single post data |
| `comments` | json | Comments data |
| `post` | object | Post information including ID, title, author, content, and metadata |
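
A small sketch of the Reddit results; the per-post fields follow the description ("title, author, URL, score, comments count"), but their exact property names are assumptions.

```ts
// Hypothetical shapes for reddit_get_posts and reddit_get_comments results.
interface RedditPost {
  title: string
  author: string
  url: string
  score: number
  commentCount?: number // assumed name for the comments count
}

interface RedditGetPostsOutput {
  subreddit: string     // subreddit the posts were fetched from
  posts: RedditPost[]
}

interface RedditGetCommentsOutput {
  post: Record<string, unknown> // post information including ID, title, author, content
}
```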

View File

@@ -84,8 +84,8 @@ Retrieve an object from an AWS S3 bucket
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `url` | string | Presigned URL |
| `metadata` | json | Object metadata |
| `url` | string | Pre-signed URL for downloading the S3 object |
| `metadata` | object | File metadata including type, size, name, and last modified date |

View File

@@ -103,7 +103,7 @@ A powerful web search tool that provides access to Google search results through
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `searchResults` | json | Search results data |
| `searchResults` | array | Search results with titles, links, snippets, and type-specific metadata \(date for news, rating for places, imageUrl for images\) |

View File

@@ -75,7 +75,6 @@ Create a new page in a SharePoint site
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the SharePoint API |
| `siteId` | string | No | The ID of the SharePoint site \(internal use\) |
| `siteSelector` | string | No | Select the SharePoint site |
| `pageName` | string | Yes | The name of the page to create |
@@ -86,7 +85,7 @@ Create a new page in a SharePoint site
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `sites` | json | An array of SharePoint site objects, each containing details such as id, name, and more. |
| `page` | object | Created SharePoint page information |
### `sharepoint_read_page`
@@ -96,7 +95,6 @@ Read a specific page from a SharePoint site
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the SharePoint API |
| `siteSelector` | string | No | Select the SharePoint site |
| `siteId` | string | No | The ID of the SharePoint site \(internal use\) |
| `pageId` | string | No | The ID of the page to read |
@@ -107,7 +105,7 @@ Read a specific page from a SharePoint site
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `sites` | json | An array of SharePoint site objects, each containing details such as id, name, and more. |
| `page` | object | Information about the SharePoint page |
### `sharepoint_list_sites`
@@ -117,7 +115,6 @@ List details of all SharePoint sites
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the SharePoint API |
| `siteSelector` | string | No | Select the SharePoint site |
| `groupId` | string | No | The group ID for accessing a group team site |
@@ -125,7 +122,7 @@ List details of all SharePoint sites
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `sites` | json | An array of SharePoint site objects, each containing details such as id, name, and more. |
| `site` | object | Information about the current SharePoint site |

View File

@@ -1,6 +1,6 @@
---
title: Slack
description: Send messages to Slack
description: Send messages to Slack or trigger workflows from Slack events
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
@@ -64,7 +64,7 @@ This allows for powerful automation scenarios such as sending notifications, ale
## Usage Instructions
Comprehensive Slack integration with OAuth authentication. Send formatted messages using Slack's mrkdwn syntax.
Comprehensive Slack integration with OAuth authentication. Send formatted messages using Slack's mrkdwn syntax or trigger workflows from Slack events like mentions and messages.
@@ -80,7 +80,6 @@ Send messages to Slack channels or users through the Slack API. Supports Slack m
| --------- | ---- | -------- | ----------- |
| `authMethod` | string | No | Authentication method: oauth or bot_token |
| `botToken` | string | No | Bot token for Custom Bot |
| `accessToken` | string | No | OAuth access token or bot token for Slack API |
| `channel` | string | Yes | Target Slack channel \(e.g., #general\) |
| `text` | string | Yes | Message text to send \(supports Slack mrkdwn formatting\) |
@@ -89,10 +88,7 @@ Send messages to Slack channels or users through the Slack API. Supports Slack m
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Message timestamp |
| `channel` | string | Channel identifier |
| `canvas_id` | string | Canvas identifier |
| `title` | string | Canvas title |
| `messages` | json | Message data |
| `channel` | string | Channel ID where message was sent |
### `slack_canvas`
@@ -104,7 +100,6 @@ Create and share Slack canvases in channels. Canvases are collaborative document
| --------- | ---- | -------- | ----------- |
| `authMethod` | string | No | Authentication method: oauth or bot_token |
| `botToken` | string | No | Bot token for Custom Bot |
| `accessToken` | string | No | OAuth access token or bot token for Slack API |
| `channel` | string | Yes | Target Slack channel \(e.g., #general\) |
| `title` | string | Yes | Title of the canvas |
| `content` | string | Yes | Canvas content in markdown format |
@@ -114,11 +109,9 @@ Create and share Slack canvases in channels. Canvases are collaborative document
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Message timestamp |
| `channel` | string | Channel identifier |
| `canvas_id` | string | Canvas identifier |
| `title` | string | Canvas title |
| `messages` | json | Message data |
| `canvas_id` | string | ID of the created canvas |
| `channel` | string | Channel where canvas was created |
| `title` | string | Title of the canvas |
### `slack_message_reader`
@@ -130,7 +123,6 @@ Read the latest messages from Slack channels. Retrieve conversation history with
| --------- | ---- | -------- | ----------- |
| `authMethod` | string | No | Authentication method: oauth or bot_token |
| `botToken` | string | No | Bot token for Custom Bot |
| `accessToken` | string | No | OAuth access token or bot token for Slack API |
| `channel` | string | Yes | Slack channel to read messages from \(e.g., #general\) |
| `limit` | number | No | Number of messages to retrieve \(default: 10, max: 100\) |
| `oldest` | string | No | Start of time range \(timestamp\) |
@@ -140,11 +132,7 @@ Read the latest messages from Slack channels. Retrieve conversation history with
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Message timestamp |
| `channel` | string | Channel identifier |
| `canvas_id` | string | Canvas identifier |
| `title` | string | Canvas title |
| `messages` | json | Message data |
| `messages` | array | Array of message objects from the channel |
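
With the unused fields removed, each Slack tool now returns only what it produces; a TypeScript sketch of the three result shapes:

```ts
// Hypothetical result shapes for the Slack tools documented above.
interface SlackSendOutput {
  ts: string        // message timestamp
  channel: string   // channel ID where the message was sent
}

interface SlackCanvasOutput {
  ts: string
  canvas_id: string // ID of the created canvas
  channel: string   // channel where the canvas was created
  title: string
}

interface SlackMessageReaderOutput {
  ts: string
  messages: Array<Record<string, unknown>> // message objects from the channel
}
```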

View File

@@ -214,7 +214,7 @@ Extract structured data from a webpage using Stagehand
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | json | Extracted data |
| `data` | object | Extracted structured data matching the provided schema |

View File

@@ -212,6 +212,7 @@ Run an autonomous web agent to complete tasks and extract structured data
| `startUrl` | string | Yes | URL of the webpage to start the agent on |
| `task` | string | Yes | The task to complete or goal to achieve on the website |
| `variables` | json | No | Optional variables to substitute in the task \(format: \{key: value\}\). Reference in task using %key% |
| `format` | string | No | No description |
| `apiKey` | string | Yes | OpenAI API key for agent execution \(required by Stagehand\) |
| `outputSchema` | json | No | Optional JSON schema defining the structure of data the agent should return |
@@ -219,8 +220,7 @@ Run an autonomous web agent to complete tasks and extract structured data
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `agentResult` | json | Agent execution result |
| `structuredOutput` | any | Structured output data |
| `agentResult` | object | Result from the Stagehand agent execution |

View File

@@ -92,7 +92,7 @@ Query data from a Supabase table
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Your Supabase project ID \(e.g., jdrkgepadsdopsntdlom\) |
| `table` | string | Yes | The name of the Supabase table to query |
| `filter` | string | No | PostgREST filter \(e.g., |
| `filter` | string | No | PostgREST filter \(e.g., "id=eq.123"\) |
| `orderBy` | string | No | Column to order by \(add DESC for descending\) |
| `limit` | number | No | Maximum number of rows to return |
| `apiKey` | string | Yes | Your Supabase service role secret key |
@@ -101,8 +101,8 @@ Query data from a Supabase table
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation message |
| `results` | json | Query results |
| `message` | string | Operation status message |
| `results` | array | Array of records returned from the query |
### `supabase_insert`
@@ -121,8 +121,8 @@ Insert data into a Supabase table
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation message |
| `results` | json | Query results |
| `message` | string | Operation status message |
| `results` | array | Array of inserted records |
### `supabase_get_row`
@@ -134,15 +134,15 @@ Get a single row from a Supabase table based on filter criteria
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Your Supabase project ID \(e.g., jdrkgepadsdopsntdlom\) |
| `table` | string | Yes | The name of the Supabase table to query |
| `filter` | string | Yes | PostgREST filter to find the specific row \(e.g., |
| `filter` | string | Yes | PostgREST filter to find the specific row \(e.g., "id=eq.123"\) |
| `apiKey` | string | Yes | Your Supabase service role secret key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation message |
| `results` | json | Query results |
| `message` | string | Operation status message |
| `results` | array | Array containing the row data if found, empty array if not found |
### `supabase_update`
@@ -154,7 +154,7 @@ Update rows in a Supabase table based on filter criteria
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Your Supabase project ID \(e.g., jdrkgepadsdopsntdlom\) |
| `table` | string | Yes | The name of the Supabase table to update |
| `filter` | string | Yes | PostgREST filter to identify rows to update \(e.g., |
| `filter` | string | Yes | PostgREST filter to identify rows to update \(e.g., "id=eq.123"\) |
| `data` | object | Yes | Data to update in the matching rows |
| `apiKey` | string | Yes | Your Supabase service role secret key |
@@ -162,8 +162,8 @@ Update rows in a Supabase table based on filter criteria
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation message |
| `results` | json | Query results |
| `message` | string | Operation status message |
| `results` | array | Array of updated records |
### `supabase_delete`
@@ -175,15 +175,35 @@ Delete rows from a Supabase table based on filter criteria
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Your Supabase project ID \(e.g., jdrkgepadsdopsntdlom\) |
| `table` | string | Yes | The name of the Supabase table to delete from |
| `filter` | string | Yes | PostgREST filter to identify rows to delete \(e.g., |
| `filter` | string | Yes | PostgREST filter to identify rows to delete \(e.g., "id=eq.123"\) |
| `apiKey` | string | Yes | Your Supabase service role secret key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation message |
| `results` | json | Query results |
| `message` | string | Operation status message |
| `results` | array | Array of deleted records |
### `supabase_upsert`
Insert or update data in a Supabase table (upsert operation)
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Your Supabase project ID \(e.g., jdrkgepadsdopsntdlom\) |
| `table` | string | Yes | The name of the Supabase table to upsert data into |
| `data` | any | Yes | The data to upsert \(insert or update\) |
| `apiKey` | string | Yes | Your Supabase service role secret key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
| `results` | array | Array of upserted records |
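
All Supabase tools above share the `message` + `results` output; the sketch below pairs that shape with the query input as an example. Names follow the tables; values in comments are illustrative.

```ts
// Hypothetical shapes for the Supabase tools documented above.
interface SupabaseQueryInput {
  projectId: string // e.g. "jdrkgepadsdopsntdlom"
  table: string
  filter?: string   // PostgREST filter, e.g. "id=eq.123"
  orderBy?: string  // column to order by (add DESC for descending)
  limit?: number
  apiKey: string    // service role secret key
}

interface SupabaseToolOutput<Row = Record<string, unknown>> {
  message: string   // operation status message
  results: Row[]    // queried / inserted / updated / deleted / upserted records
}
```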

View File

@@ -80,12 +80,8 @@ Perform AI-powered web searches using Tavily
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results data |
| `answer` | any | Search answer |
| `query` | string | Query used |
| `content` | string | Extracted content |
| `title` | string | Page title |
| `url` | string | Source URL |
| `query` | string | The search query that was executed |
| `results` | array | results output from the tool |
| `results` | array | Array of search results returned by Tavily |
### `tavily_extract`
@@ -103,12 +99,7 @@ Extract raw content from multiple web pages simultaneously using Tavily
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | json | Search results data |
| `answer` | any | Search answer |
| `query` | string | Query used |
| `content` | string | Extracted content |
| `title` | string | Page title |
| `url` | string | Source URL |
| `results` | array | Array of extracted content results, one per requested URL |

View File

@@ -1,6 +1,6 @@
---
title: Telegram
description: Send a message through Telegram
description: Send messages through Telegram or trigger workflows from Telegram events
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
@@ -67,7 +67,7 @@ In Sim, the Telegram integration enables your agents to leverage these powerful
## Usage Instructions
Send messages to any Telegram channel using your Bot API key. Integrate automated notifications and alerts into your workflow to keep your team informed.
Send messages to any Telegram channel using your Bot API key or trigger workflows from Telegram bot messages. Integrate automated notifications and alerts into your workflow to keep your team informed.
@@ -89,8 +89,12 @@ Send messages to Telegram channels or users through the Telegram Bot API. Enable
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ok` | boolean | Success status |
| `result` | json | Message result |
| `success` | boolean | Telegram message send success status |
| `messageId` | number | Unique Telegram message identifier |
| `chatId` | string | Target chat ID where message was sent |
| `text` | string | Text content of the sent message |
| `timestamp` | number | Unix timestamp when message was sent |
| `from` | object | Information about the bot that sent the message |
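
A TypeScript sketch of the expanded Telegram send result; types follow the table, and the `from` object is left untyped.

```ts
// Hypothetical shape for the Telegram message output.
interface TelegramSendOutput {
  success: boolean
  messageId: number             // unique Telegram message identifier
  chatId: string                // target chat ID where the message was sent
  text: string                  // text content of the sent message
  timestamp: number             // Unix timestamp when the message was sent
  from: Record<string, unknown> // information about the sending bot
}
```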

View File

@@ -69,7 +69,7 @@ Processes a provided thought/instruction, making it available for subsequent ste
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `acknowledgedThought` | string | Acknowledged thought process |
| `acknowledgedThought` | string | The thought that was processed and acknowledged |

View File

@@ -67,7 +67,7 @@ Convert text between languages while preserving meaning, nuance, and formatting.
| --------- | ---- | ----------- |
| `content` | string | Translated text |
| `model` | string | Model used |
| `tokens` | any | Token usage |
| `tokens` | json | Token usage |
### `anthropic_chat`
@@ -85,7 +85,7 @@ Convert text between languages while preserving meaning, nuance, and formatting.
| --------- | ---- | ----------- |
| `content` | string | Translated text |
| `model` | string | Model used |
| `tokens` | any | Token usage |
| `tokens` | json | Token usage |

View File

@@ -58,10 +58,11 @@ Send text messages to single or multiple recipients using the Twilio API.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Send success status |
| `messageId` | any | Message identifier |
| `status` | any | Delivery status |
| `error` | any | Error information |
| `success` | boolean | SMS send success status |
| `messageId` | string | Unique Twilio message identifier \(SID\) |
| `status` | string | Message delivery status from Twilio |
| `fromNumber` | string | Phone number message was sent from |
| `toNumber` | string | Phone number message was sent to |

View File

@@ -95,9 +95,9 @@ Download files uploaded in Typeform responses
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `total_items` | number | Total response count |
| `page_count` | number | Total page count |
| `items` | json | Response items |
| `fileUrl` | string | Direct download URL for the uploaded file |
| `contentType` | string | MIME type of the uploaded file |
| `filename` | string | Original filename of the uploaded file |
### `typeform_insights`
@@ -114,9 +114,7 @@ Retrieve insights and analytics for Typeform forms
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `total_items` | number | Total response count |
| `page_count` | number | Total page count |
| `items` | json | Response items |
| `fields` | array | Array of form field insights, including drop-off counts per field |

View File

@@ -70,9 +70,10 @@ Process and analyze images using advanced vision models. Capable of understandin
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Analysis result |
| `model` | any | Model used |
| `tokens` | any | Token usage |
| `content` | string | The analyzed content and description of the image |
| `model` | string | The vision model that was used for analysis |
| `tokens` | number | Total tokens used for the analysis |
| `usage` | object | Detailed token usage breakdown |

View File

@@ -56,21 +56,14 @@ Read content from a Wealthbox note
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Wealthbox API |
| `noteId` | string | No | The ID of the note to read |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `note` | any | Note data |
| `notes` | any | Notes list |
| `contact` | any | Contact data |
| `contacts` | any | Contacts list |
| `task` | any | Task data |
| `tasks` | any | Tasks list |
| `metadata` | json | Operation metadata |
| `success` | any | Success status |
| `success` | boolean | Operation success status |
| `output` | object | Note data and metadata |
### `wealthbox_write_note`
@@ -80,7 +73,6 @@ Create or update a Wealthbox note
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Wealthbox API |
| `content` | string | Yes | The main body of the note |
| `contactId` | string | No | ID of contact to link to this note |
@@ -88,14 +80,8 @@ Create or update a Wealthbox note
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `note` | any | Note data |
| `notes` | any | Notes list |
| `contact` | any | Contact data |
| `contacts` | any | Contacts list |
| `task` | any | Task data |
| `tasks` | any | Tasks list |
| `metadata` | json | Operation metadata |
| `success` | any | Success status |
| `success` | boolean | Operation success status |
| `output` | object | Created or updated note data and metadata |
### `wealthbox_read_contact`
@@ -105,21 +91,14 @@ Read content from a Wealthbox contact
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Wealthbox API |
| `contactId` | string | No | The ID of the contact to read |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `note` | any | Note data |
| `notes` | any | Notes list |
| `contact` | any | Contact data |
| `contacts` | any | Contacts list |
| `task` | any | Task data |
| `tasks` | any | Tasks list |
| `metadata` | json | Operation metadata |
| `success` | any | Success status |
| `success` | boolean | Operation success status |
| `output` | object | Contact data and metadata |
### `wealthbox_write_contact`
@@ -129,7 +108,6 @@ Create a new Wealthbox contact
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Wealthbox API |
| `firstName` | string | Yes | The first name of the contact |
| `lastName` | string | Yes | The last name of the contact |
| `emailAddress` | string | No | The email address of the contact |
@@ -139,14 +117,8 @@ Create a new Wealthbox contact
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `note` | any | Note data |
| `notes` | any | Notes list |
| `contact` | any | Contact data |
| `contacts` | any | Contacts list |
| `task` | any | Task data |
| `tasks` | any | Tasks list |
| `metadata` | json | Operation metadata |
| `success` | any | Success status |
| `success` | boolean | Operation success status |
| `output` | object | Created or updated contact data and metadata |
### `wealthbox_read_task`
@@ -156,21 +128,14 @@ Read content from a Wealthbox task
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Wealthbox API |
| `taskId` | string | No | The ID of the task to read |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `note` | any | Note data |
| `notes` | any | Notes list |
| `contact` | any | Contact data |
| `contacts` | any | Contacts list |
| `task` | any | Task data |
| `tasks` | any | Tasks list |
| `metadata` | json | Operation metadata |
| `success` | any | Success status |
| `success` | boolean | Operation success status |
| `output` | object | Task data and metadata |
### `wealthbox_write_task`
@@ -180,9 +145,8 @@ Create or update a Wealthbox task
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | The access token for the Wealthbox API |
| `title` | string | Yes | The name/title of the task |
| `dueDate` | string | Yes | The due date and time of the task \(format: |
| `dueDate` | string | Yes | The due date and time of the task \(format: "YYYY-MM-DD HH:MM AM/PM -HHMM", e.g., "2015-05-24 11:00 AM -0400"\) |
| `contactId` | string | No | ID of contact to link to this task |
| `description` | string | No | Description or notes about the task |
@@ -190,14 +154,8 @@ Create or update a Wealthbox task
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `note` | any | Note data |
| `notes` | any | Notes list |
| `contact` | any | Contact data |
| `contacts` | any | Contacts list |
| `task` | any | Task data |
| `tasks` | any | Tasks list |
| `metadata` | json | Operation metadata |
| `success` | any | Success status |
| `success` | boolean | Operation success status |
| `output` | object | Created or updated task data and metadata |

View File

@@ -54,15 +54,16 @@ Send WhatsApp messages
| `phoneNumber` | string | Yes | Recipient phone number with country code |
| `message` | string | Yes | Message content to send |
| `phoneNumberId` | string | Yes | WhatsApp Business Phone Number ID |
| `accessToken` | string | Yes | WhatsApp Business API Access Token |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Send success status |
| `messageId` | any | Message identifier |
| `error` | any | Error information |
| `success` | boolean | WhatsApp message send success status |
| `messageId` | string | Unique WhatsApp message identifier |
| `phoneNumber` | string | Recipient phone number |
| `status` | string | Message delivery status |
| `timestamp` | string | Message send timestamp |

View File

@@ -74,11 +74,7 @@ Get a summary and metadata for a specific Wikipedia page.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `summary` | json | Page summary data |
| `searchResults` | json | Search results data |
| `totalHits` | number | Total search hits |
| `content` | json | Page content data |
| `randomPage` | json | Random page data |
| `summary` | object | Wikipedia page summary and metadata |
### `wikipedia_search`
@@ -95,11 +91,7 @@ Search for Wikipedia pages by title or content.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `summary` | json | Page summary data |
| `searchResults` | json | Search results data |
| `totalHits` | number | Total search hits |
| `content` | json | Page content data |
| `randomPage` | json | Random page data |
| `searchResults` | array | Array of matching Wikipedia pages |
### `wikipedia_content`
@@ -115,11 +107,7 @@ Get the full HTML content of a Wikipedia page.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `summary` | json | Page summary data |
| `searchResults` | json | Search results data |
| `totalHits` | number | Total search hits |
| `content` | json | Page content data |
| `randomPage` | json | Random page data |
| `content` | object | Full HTML content and metadata of the Wikipedia page |
### `wikipedia_random`
@@ -134,11 +122,7 @@ Get a random Wikipedia page.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `summary` | json | Page summary data |
| `searchResults` | json | Search results data |
| `totalHits` | number | Total search hits |
| `content` | json | Page content data |
| `randomPage` | json | Random page data |
| `randomPage` | object | Random Wikipedia page data |

View File

@@ -50,7 +50,6 @@ Post new tweets, reply to tweets, or create polls on X (Twitter)
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | X OAuth access token |
| `text` | string | Yes | The text content of your tweet |
| `replyTo` | string | No | ID of the tweet to reply to |
| `mediaIds` | array | No | Array of media IDs to attach to the tweet |
@@ -60,14 +59,7 @@ Post new tweets, reply to tweets, or create polls on X (Twitter)
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweet` | json | Tweet data |
| `replies` | any | Tweet replies |
| `context` | any | Tweet context |
| `tweets` | json | Tweets data |
| `includes` | any | Additional data |
| `meta` | json | Response metadata |
| `user` | json | User profile data |
| `recentTweets` | any | Recent tweets data |
| `tweet` | object | The newly created tweet data |
### `x_read`
@@ -77,7 +69,6 @@ Read tweet details, including replies and conversation context
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | X OAuth access token |
| `tweetId` | string | Yes | ID of the tweet to read |
| `includeReplies` | boolean | No | Whether to include replies to the tweet |
@@ -85,14 +76,7 @@ Read tweet details, including replies and conversation context
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweet` | json | Tweet data |
| `replies` | any | Tweet replies |
| `context` | any | Tweet context |
| `tweets` | json | Tweets data |
| `includes` | any | Additional data |
| `meta` | json | Response metadata |
| `user` | json | User profile data |
| `recentTweets` | any | Recent tweets data |
| `tweet` | object | The main tweet data |
### `x_search`
@@ -102,7 +86,6 @@ Search for tweets using keywords, hashtags, or advanced queries
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | X OAuth access token |
| `query` | string | Yes | Search query \(supports X search operators\) |
| `maxResults` | number | No | Maximum number of results to return \(default: 10, max: 100\) |
| `startTime` | string | No | Start time for search \(ISO 8601 format\) |
@@ -113,14 +96,7 @@ Search for tweets using keywords, hashtags, or advanced queries
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweet` | json | Tweet data |
| `replies` | any | Tweet replies |
| `context` | any | Tweet context |
| `tweets` | json | Tweets data |
| `includes` | any | Additional data |
| `meta` | json | Response metadata |
| `user` | json | User profile data |
| `recentTweets` | any | Recent tweets data |
| `tweets` | array | Array of tweets matching the search query |
### `x_user`
@@ -130,21 +106,13 @@ Get user profile information
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessToken` | string | Yes | X OAuth access token |
| `username` | string | Yes | Username to look up \(without @ symbol\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweet` | json | Tweet data |
| `replies` | any | Tweet replies |
| `context` | any | Tweet context |
| `tweets` | json | Tweets data |
| `includes` | any | Additional data |
| `meta` | json | Response metadata |
| `user` | json | User profile data |
| `recentTweets` | any | Recent tweets data |
| `user` | object | X user profile information |

View File

@@ -62,8 +62,7 @@ Search for videos on YouTube using the YouTube Data API.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `items` | json | The items returned by the YouTube search |
| `totalResults` | number | The total number of results returned by the YouTube search |
| `items` | array | Array of YouTube videos matching the search query |

View File

@@ -50,7 +50,7 @@ Choose your input method from the dropdown:
<video autoPlay loop muted playsInline className="w-full -mb-2 rounded-lg" src="/chat-input.mp4"></video>
</div>
<p className="text-sm text-gray-600">Chat with your workflow and access both input text and conversation ID for context-aware responses.</p>
<p className="text-sm text-gray-600">Chat with your workflow and access input text, conversation ID, and uploaded files for context-aware responses.</p>
</div>
</Tab>
</Tabs>
@@ -60,13 +60,15 @@ Choose your input method from the dropdown:
In Chat mode, access user input and conversation context through special variables:
```yaml
# Reference the chat input and conversation ID in your workflow
# Reference the chat input, conversation ID, and files in your workflow
user_message: "<start.input>"
conversation_id: "<start.conversationId>"
uploaded_files: "<start.files>"
```
- **`<start.input>`** - Contains the user's message text
- **`<start.conversationId>`** - Unique identifier for the conversation thread
- **`<start.files>`** - Array of files uploaded by the user (if any)
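
For illustration only, a small TypeScript sketch of consuming these values in a downstream step; the file shape below is an assumption, not the documented schema of `<start.files>`:

```typescript
// Hypothetical shape for an uploaded file; adjust to whatever <start.files> actually resolves to.
interface UploadedFile {
  name: string
  url: string
  type?: string
}

// Build a single log-friendly line from the chat input, the conversation ID,
// and the (possibly absent) list of uploaded files.
function describeChatTurn(input: string, conversationId: string, files?: UploadedFile[]): string {
  const attachments = (files ?? []).map((file) => file.name).join(', ')
  return attachments
    ? `[${conversationId}] ${input} (attachments: ${attachments})`
    : `[${conversationId}] ${input}`
}
```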
## API Execution

View File

@@ -9,7 +9,7 @@ export default function AuthLayout({ children }: { children: React.ReactNode })
const brand = useBrandConfig()
return (
<main className='relative flex min-h-screen flex-col bg-[#0C0C0C] font-geist-sans text-white'>
<main className='relative flex min-h-screen flex-col bg-[var(--brand-background-hex)] font-geist-sans text-white'>
{/* Background pattern */}
<GridPattern
x={-5}

View File

@@ -94,10 +94,10 @@ describe('LoginPage', () => {
const emailInput = screen.getByPlaceholderText(/enter your email/i)
const passwordInput = screen.getByPlaceholderText(/enter your password/i)
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'password123' } })
expect(emailInput).toHaveValue('test@example.com')
expect(emailInput).toHaveValue('user@company.com')
expect(passwordInput).toHaveValue('password123')
})
@@ -117,7 +117,7 @@ describe('LoginPage', () => {
const submitButton = screen.getByRole('button', { name: /sign in/i })
await act(async () => {
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'password123' } })
fireEvent.click(submitButton)
})
@@ -140,14 +140,14 @@ describe('LoginPage', () => {
const passwordInput = screen.getByPlaceholderText(/enter your password/i)
const submitButton = screen.getByRole('button', { name: /sign in/i })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'password123' } })
fireEvent.click(submitButton)
await waitFor(() => {
expect(mockSignIn).toHaveBeenCalledWith(
{
email: 'test@example.com',
email: 'user@company.com',
password: 'password123',
callbackURL: '/workspace',
},
@@ -181,7 +181,7 @@ describe('LoginPage', () => {
const passwordInput = screen.getByPlaceholderText(/enter your password/i)
const submitButton = screen.getByRole('button', { name: /sign in/i })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'wrongpassword' } })
fireEvent.click(submitButton)
@@ -242,13 +242,13 @@ describe('LoginPage', () => {
const passwordInput = screen.getByPlaceholderText(/enter your password/i)
const submitButton = screen.getByRole('button', { name: /sign in/i })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'password123' } })
fireEvent.click(submitButton)
await waitFor(() => {
expect(mockSendOtp).toHaveBeenCalledWith({
email: 'test@example.com',
email: 'user@company.com',
type: 'email-verification',
})
expect(mockRouter.push).toHaveBeenCalledWith('/verify')

View File

@@ -15,25 +15,27 @@ import {
import { Input } from '@/components/ui/input'
import { Label } from '@/components/ui/label'
import { client } from '@/lib/auth-client'
import { quickValidateEmail } from '@/lib/email/validation'
import { createLogger } from '@/lib/logs/console/logger'
import { cn } from '@/lib/utils'
import { SocialLoginButtons } from '@/app/(auth)/components/social-login-buttons'
const logger = createLogger('LoginForm')
const EMAIL_VALIDATIONS = {
required: {
test: (value: string) => Boolean(value && typeof value === 'string'),
message: 'Email is required.',
},
notEmpty: {
test: (value: string) => value.trim().length > 0,
message: 'Email cannot be empty.',
},
basicFormat: {
regex: /^[^\s@]+@[^\s@]+\.[^\s@]+$/,
message: 'Please enter a valid email address.',
},
const validateEmailField = (emailValue: string): string[] => {
const errors: string[] = []
if (!emailValue || !emailValue.trim()) {
errors.push('Email is required.')
return errors
}
const validation = quickValidateEmail(emailValue.trim().toLowerCase())
if (!validation.isValid) {
errors.push(validation.reason || 'Please enter a valid email address.')
}
return errors
}
const PASSWORD_VALIDATIONS = {
@@ -68,27 +70,6 @@ const validateCallbackUrl = (url: string): boolean => {
}
}
// Validate email and return array of error messages
const validateEmail = (emailValue: string): string[] => {
const errors: string[] = []
if (!EMAIL_VALIDATIONS.required.test(emailValue)) {
errors.push(EMAIL_VALIDATIONS.required.message)
return errors // Return early for required field
}
if (!EMAIL_VALIDATIONS.notEmpty.test(emailValue)) {
errors.push(EMAIL_VALIDATIONS.notEmpty.message)
return errors // Return early for empty field
}
if (!EMAIL_VALIDATIONS.basicFormat.regex.test(emailValue)) {
errors.push(EMAIL_VALIDATIONS.basicFormat.message)
}
return errors
}
// Validate password and return array of error messages
const validatePassword = (passwordValue: string): string[] => {
const errors: string[] = []
@@ -182,7 +163,7 @@ export default function LoginPage({
setEmail(newEmail)
// Silently validate but don't show errors until submit
const errors = validateEmail(newEmail)
const errors = validateEmailField(newEmail)
setEmailErrors(errors)
setShowEmailValidationError(false)
}
@@ -205,7 +186,7 @@ export default function LoginPage({
const email = formData.get('email') as string
// Validate email on submit
const emailValidationErrors = validateEmail(email)
const emailValidationErrors = validateEmailField(email)
setEmailErrors(emailValidationErrors)
setShowEmailValidationError(emailValidationErrors.length > 0)
@@ -475,7 +456,7 @@ export default function LoginPage({
<Button
type='submit'
className='flex h-11 w-full items-center justify-center gap-2 bg-[#701ffc] font-medium text-base text-white shadow-[#701ffc]/20 shadow-lg transition-colors duration-200 hover:bg-[#802FFF]'
className='flex h-11 w-full items-center justify-center gap-2 bg-brand-primary font-medium text-base text-white shadow-[var(--brand-primary-hex)]/20 shadow-lg transition-colors duration-200 hover:bg-brand-primary-hover'
disabled={isLoading}
>
{isLoading ? 'Signing in...' : 'Sign In'}
@@ -487,7 +468,7 @@ export default function LoginPage({
<span className='text-neutral-400'>Don't have an account? </span>
<Link
href={isInviteFlow ? `/signup?invite_flow=true&callbackUrl=${callbackUrl}` : '/signup'}
className='font-medium text-[#9D54FF] underline-offset-4 transition hover:text-[#a66fff] hover:underline'
className='font-medium text-[var(--brand-accent-hex)] underline-offset-4 transition hover:text-[var(--brand-accent-hover-hex)] hover:underline'
>
Sign up
</Link>
@@ -516,7 +497,7 @@ export default function LoginPage({
placeholder='Enter your email'
required
type='email'
className='border-neutral-700/80 bg-neutral-900 text-white placeholder:text-white/60 focus:border-[#802FFF]/70 focus:ring-[#802FFF]/20'
className='border-neutral-700/80 bg-neutral-900 text-white placeholder:text-white/60 focus:border-[var(--brand-primary-hover-hex)]/70 focus:ring-[var(--brand-primary-hover-hex)]/20'
/>
</div>
{resetStatus.type && (
@@ -531,7 +512,7 @@ export default function LoginPage({
<Button
type='button'
onClick={handleForgotPassword}
className='h-11 w-full bg-[#701ffc] font-medium text-base text-white shadow-[#701ffc]/20 shadow-lg transition-colors duration-200 hover:bg-[#802FFF]'
className='h-11 w-full bg-[var(--brand-primary-hex)] font-medium text-base text-white shadow-[var(--brand-primary-hex)]/20 shadow-lg transition-colors duration-200 hover:bg-[var(--brand-primary-hover-hex)]'
disabled={isSubmittingReset}
>
{isSubmittingReset ? 'Sending...' : 'Send Reset Link'}
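
The hunks in this and the surrounding files replace hard-coded hex colors with brand CSS variables (a few call sites use the `bg-brand-primary` utility classes instead of the raw `var(...)` form). A condensed view of the substitutions, inferred purely from the replacements shown in these diffs; the actual variable definitions live in the brand config, which is not part of this changeset:

```typescript
// Inferred mapping of legacy hex values to the new CSS custom properties (illustrative only).
const legacyHexToBrandVariable: Record<string, string> = {
  '#701ffc': 'var(--brand-primary-hex)',
  '#802FFF': 'var(--brand-primary-hover-hex)',
  '#9D54FF': 'var(--brand-accent-hex)',
  '#a66fff': 'var(--brand-accent-hover-hex)',
  '#0C0C0C': 'var(--brand-background-hex)',
}
```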

View File

@@ -96,11 +96,11 @@ describe('SignupPage', () => {
const passwordInput = screen.getByPlaceholderText(/enter your password/i)
fireEvent.change(nameInput, { target: { value: 'John Doe' } })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'Password123!' } })
expect(nameInput).toHaveValue('John Doe')
expect(emailInput).toHaveValue('test@example.com')
expect(emailInput).toHaveValue('user@company.com')
expect(passwordInput).toHaveValue('Password123!')
})
@@ -118,7 +118,7 @@ describe('SignupPage', () => {
const submitButton = screen.getByRole('button', { name: /create account/i })
fireEvent.change(nameInput, { target: { value: 'John Doe' } })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'Password123!' } })
fireEvent.click(submitButton)
@@ -144,14 +144,14 @@ describe('SignupPage', () => {
// Use valid input that passes all validation rules
fireEvent.change(nameInput, { target: { value: 'John Doe' } })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'Password123!' } })
fireEvent.click(submitButton)
await waitFor(() => {
expect(mockSignUp).toHaveBeenCalledWith(
{
email: 'test@example.com',
email: 'user@company.com',
password: 'Password123!',
name: 'John Doe',
},
@@ -174,7 +174,7 @@ describe('SignupPage', () => {
// Use name with leading/trailing spaces which should fail validation
fireEvent.change(nameInput, { target: { value: ' John Doe ' } })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'Password123!' } })
fireEvent.click(submitButton)
@@ -206,15 +206,13 @@ describe('SignupPage', () => {
const submitButton = screen.getByRole('button', { name: /create account/i })
fireEvent.change(nameInput, { target: { value: 'John Doe' } })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'Password123!' } })
fireEvent.click(submitButton)
await waitFor(() => {
expect(mockSendOtp).toHaveBeenCalledWith({
email: 'test@example.com',
type: 'email-verification',
})
// With sendVerificationOnSignUp: true, OTP is sent automatically by Better Auth
// No manual OTP sending in the component anymore
expect(mockRouter.push).toHaveBeenCalledWith('/verify?fromSignup=true')
})
})
@@ -267,7 +265,7 @@ describe('SignupPage', () => {
const submitButton = screen.getByRole('button', { name: /create account/i })
fireEvent.change(nameInput, { target: { value: longName } })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'ValidPass123!' } })
fireEvent.click(submitButton)
@@ -295,7 +293,7 @@ describe('SignupPage', () => {
const submitButton = screen.getByRole('button', { name: /create account/i })
fireEvent.change(nameInput, { target: { value: exactLengthName } })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'ValidPass123!' } })
fireEvent.click(submitButton)
@@ -308,7 +306,7 @@ describe('SignupPage', () => {
await waitFor(() => {
expect(mockSignUp).toHaveBeenCalledWith(
{
email: 'test@example.com',
email: 'user@company.com',
password: 'ValidPass123!',
name: exactLengthName,
},
@@ -343,7 +341,7 @@ describe('SignupPage', () => {
await act(async () => {
fireEvent.change(nameInput, { target: { value: 'John Doe' } })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'Password123!' } })
fireEvent.click(submitButton)
})
@@ -385,12 +383,12 @@ describe('SignupPage', () => {
const submitButton = screen.getByRole('button', { name: /create account/i })
fireEvent.change(nameInput, { target: { value: 'John Doe' } })
fireEvent.change(emailInput, { target: { value: 'test@example.com' } })
fireEvent.change(emailInput, { target: { value: 'user@company.com' } })
fireEvent.change(passwordInput, { target: { value: 'Password123!' } })
fireEvent.click(submitButton)
await waitFor(() => {
expect(mockRouter.push).toHaveBeenCalledWith('/invite/123')
expect(mockRouter.push).toHaveBeenCalledWith('/verify?fromSignup=true')
})
})

View File

@@ -8,9 +8,13 @@ import { Button } from '@/components/ui/button'
import { Input } from '@/components/ui/input'
import { Label } from '@/components/ui/label'
import { client } from '@/lib/auth-client'
import { quickValidateEmail } from '@/lib/email/validation'
import { createLogger } from '@/lib/logs/console/logger'
import { cn } from '@/lib/utils'
import { SocialLoginButtons } from '@/app/(auth)/components/social-login-buttons'
const logger = createLogger('SignupForm')
const PASSWORD_VALIDATIONS = {
minLength: { regex: /.{8,}/, message: 'Password must be at least 8 characters long.' },
uppercase: {
@@ -51,31 +55,20 @@ const NAME_VALIDATIONS = {
},
}
const EMAIL_VALIDATIONS = {
required: {
test: (value: string) => Boolean(value && typeof value === 'string'),
message: 'Email is required.',
},
notEmpty: {
test: (value: string) => value.trim().length > 0,
message: 'Email cannot be empty.',
},
maxLength: {
test: (value: string) => value.length <= 254,
message: 'Email must be less than 254 characters.',
},
basicFormat: {
regex: /^[^\s@]+@[^\s@]+\.[^\s@]+$/,
message: 'Please enter a valid email address.',
},
noSpaces: {
regex: /^[^\s]*$/,
message: 'Email cannot contain spaces.',
},
validStart: {
regex: /^[a-zA-Z0-9]/,
message: 'Email must start with a letter or number.',
},
const validateEmailField = (emailValue: string): string[] => {
const errors: string[] = []
if (!emailValue || !emailValue.trim()) {
errors.push('Email is required.')
return errors
}
const validation = quickValidateEmail(emailValue.trim().toLowerCase())
if (!validation.isValid) {
errors.push(validation.reason || 'Please enter a valid email address.')
}
return errors
}
function SignupFormContent({
@@ -188,39 +181,6 @@ function SignupFormContent({
return errors
}
// Validate email and return array of error messages
const validateEmail = (emailValue: string): string[] => {
const errors: string[] = []
if (!EMAIL_VALIDATIONS.required.test(emailValue)) {
errors.push(EMAIL_VALIDATIONS.required.message)
return errors // Return early for required field
}
if (!EMAIL_VALIDATIONS.notEmpty.test(emailValue)) {
errors.push(EMAIL_VALIDATIONS.notEmpty.message)
return errors // Return early for empty field
}
if (!EMAIL_VALIDATIONS.maxLength.test(emailValue)) {
errors.push(EMAIL_VALIDATIONS.maxLength.message)
}
if (!EMAIL_VALIDATIONS.noSpaces.regex.test(emailValue)) {
errors.push(EMAIL_VALIDATIONS.noSpaces.message)
}
if (!EMAIL_VALIDATIONS.validStart.regex.test(emailValue)) {
errors.push(EMAIL_VALIDATIONS.validStart.message)
}
if (!EMAIL_VALIDATIONS.basicFormat.regex.test(emailValue)) {
errors.push(EMAIL_VALIDATIONS.basicFormat.message)
}
return errors
}
const handlePasswordChange = (e: React.ChangeEvent<HTMLInputElement>) => {
const newPassword = e.target.value
setPassword(newPassword)
@@ -246,7 +206,7 @@ function SignupFormContent({
setEmail(newEmail)
// Silently validate but don't show errors until submit
const errors = validateEmail(newEmail)
const errors = validateEmailField(newEmail)
setEmailErrors(errors)
setShowEmailValidationError(false)
@@ -271,7 +231,7 @@ function SignupFormContent({
setShowNameValidationError(nameValidationErrors.length > 0)
// Validate email on submit
const emailValidationErrors = validateEmail(emailValue)
const emailValidationErrors = validateEmailField(emailValue)
setEmailErrors(emailValidationErrors)
setShowEmailValidationError(emailValidationErrors.length > 0)
@@ -324,7 +284,7 @@ function SignupFormContent({
},
{
onError: (ctx) => {
console.error('Signup error:', ctx.error)
logger.error('Signup error:', ctx.error)
const errorMessage: string[] = ['Failed to create account']
if (ctx.error.code?.includes('USER_ALREADY_EXISTS')) {
@@ -370,30 +330,37 @@ function SignupFormContent({
return
}
// Handle invitation flow redirect
if (isInviteFlow && redirectUrl) {
router.push(redirectUrl)
return
// For new signups, always require verification
if (typeof window !== 'undefined') {
sessionStorage.setItem('verificationEmail', emailValue)
localStorage.setItem('has_logged_in_before', 'true')
// Set cookie flag for middleware check
document.cookie = 'requiresEmailVerification=true; path=/; max-age=900; SameSite=Lax' // 15 min expiry
document.cookie = 'has_logged_in_before=true; path=/; max-age=31536000; SameSite=Lax'
// Store invitation flow state if applicable
if (isInviteFlow && redirectUrl) {
sessionStorage.setItem('inviteRedirectUrl', redirectUrl)
sessionStorage.setItem('isInviteFlow', 'true')
}
}
// Send verification OTP manually
try {
await client.emailOtp.sendVerificationOtp({
email: emailValue,
type: 'email-verification',
})
} catch (err) {
console.error('Failed to send verification OTP:', err)
}
if (typeof window !== 'undefined') {
sessionStorage.setItem('verificationEmail', emailValue)
localStorage.setItem('has_logged_in_before', 'true')
document.cookie = 'has_logged_in_before=true; path=/; max-age=31536000; SameSite=Lax' // 1 year expiry
} catch (otpError) {
logger.error('Failed to send OTP:', otpError)
// Continue anyway - user can use resend button
}
// Always redirect to verification for new signups
router.push('/verify?fromSignup=true')
} catch (error) {
console.error('Signup error:', error)
logger.error('Signup error:', error)
setIsLoading(false)
}
}
@@ -521,7 +488,7 @@ function SignupFormContent({
<Button
type='submit'
className='flex h-11 w-full items-center justify-center gap-2 bg-[#701ffc] font-medium text-base text-white shadow-[#701ffc]/20 shadow-lg transition-colors duration-200 hover:bg-[#802FFF]'
className='flex h-11 w-full items-center justify-center gap-2 bg-[var(--brand-primary-hex)] font-medium text-base text-white shadow-[var(--brand-primary-hex)]/20 shadow-lg transition-colors duration-200 hover:bg-[var(--brand-primary-hover-hex)]'
disabled={isLoading}
>
{isLoading ? 'Creating account...' : 'Create Account'}
@@ -533,7 +500,7 @@ function SignupFormContent({
<span className='text-neutral-400'>Already have an account? </span>
<Link
href={isInviteFlow ? `/login?invite_flow=true&callbackUrl=${redirectUrl}` : '/login'}
className='font-medium text-[#9D54FF] underline-offset-4 transition hover:text-[#a66fff] hover:underline'
className='font-medium text-[var(--brand-accent-hex)] underline-offset-4 transition hover:text-[var(--brand-accent-hover-hex)] hover:underline'
>
Sign in
</Link>

View File

@@ -121,10 +121,14 @@ export function useVerification({
if (response && !response.error) {
setIsVerified(true)
// Clear email from sessionStorage after successful verification
// Clear verification requirements and session storage
if (typeof window !== 'undefined') {
sessionStorage.removeItem('verificationEmail')
// Clear the verification requirement flag
document.cookie =
'requiresEmailVerification=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT'
// Also clear invite-related items
if (isInviteFlow) {
sessionStorage.removeItem('inviteRedirectUrl')
@@ -223,6 +227,11 @@ export function useVerification({
// Auto-verify and redirect in development/docker environments
if (isDevOrDocker || !hasResendKey) {
setIsVerified(true)
// Clear verification requirement cookie (same as manual verification)
document.cookie =
'requiresEmailVerification=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT'
const timeoutId = setTimeout(() => {
router.push('/workspace')
}, 1000)

View File

@@ -124,7 +124,7 @@ function VerificationForm({
<Button
onClick={verifyCode}
className='h-11 w-full bg-[#701ffc] font-medium text-base text-white shadow-[#701ffc]/20 shadow-lg transition-colors duration-200 hover:bg-[#802FFF]'
className='h-11 w-full bg-[var(--brand-primary-hex)] font-medium text-base text-white shadow-[var(--brand-primary-hex)]/20 shadow-lg transition-colors duration-200 hover:bg-[var(--brand-primary-hover-hex)]'
disabled={!isOtpComplete || isLoading}
>
{isLoading ? 'Verifying...' : 'Verify Email'}
@@ -140,7 +140,7 @@ function VerificationForm({
</span>
) : (
<button
className='font-medium text-[#9D54FF] underline-offset-4 transition hover:text-[#a66fff] hover:underline'
className='font-medium text-[var(--brand-accent-hex)] underline-offset-4 transition hover:text-[var(--brand-accent-hover-hex)] hover:underline'
onClick={handleResend}
disabled={isLoading || isResendDisabled}
>

View File

@@ -35,7 +35,7 @@ export const BlogCard = ({
}: BlogCardProps) => {
return (
<Link href={href}>
<div className='flex flex-col rounded-3xl border border-[#606060]/40 bg-[#101010] p-8 transition-all duration-500 hover:bg-[#202020]'>
<div className='flex flex-col rounded-3xl border border-[#606060]/40 bg-[#101010] p-8 transition-all duration-500 hover:bg-[var(--surface-elevated)]'>
{image ? (
<Image
src={image}

View File

@@ -245,7 +245,7 @@ export default function NavClient({
target='_blank'
rel='noopener noreferrer'
>
<Button className='h-[43px] bg-[#701ffc] px-6 py-2 font-geist-sans font-medium text-base text-neutral-100 transition-colors duration-200 hover:bg-[#802FFF]'>
<Button className='h-[43px] bg-[var(--brand-primary-hex)] px-6 py-2 font-geist-sans font-medium text-base text-neutral-100 transition-colors duration-200 hover:bg-[var(--brand-primary-hover-hex)]'>
Contact
</Button>
</Link>
@@ -277,7 +277,7 @@ export default function NavClient({
>
<SheetContent
side='right'
className='flex h-full w-[280px] flex-col border-[#181818] border-l bg-[#0C0C0C] p-6 pt-6 text-white shadow-xl sm:w-[320px] [&>button]:hidden'
className='flex h-full w-[280px] flex-col border-[#181818] border-l bg-[var(--brand-background-hex)] p-6 pt-6 text-white shadow-xl sm:w-[320px] [&>button]:hidden'
onOpenAutoFocus={(e) => e.preventDefault()}
onCloseAutoFocus={(e) => e.preventDefault()}
>
@@ -311,7 +311,7 @@ export default function NavClient({
target='_blank'
rel='noopener noreferrer'
>
<Button className='w-full bg-[#701ffc] py-6 font-medium text-base text-white shadow-[#701ffc]/20 shadow-lg transition-colors duration-200 hover:bg-[#802FFF]'>
<Button className='w-full bg-[var(--brand-primary-hex)] py-6 font-medium text-base text-white shadow-[var(--brand-primary-hex)]/20 shadow-lg transition-colors duration-200 hover:bg-[var(--brand-primary-hover-hex)]'>
Contact
</Button>
</Link>

View File

@@ -62,7 +62,7 @@ function Hero() {
<Button
variant={'secondary'}
onClick={handleNavigate}
className='animate-fade-in items-center bg-[#701ffc] px-7 py-6 font-[420] font-geist-sans text-lg text-neutral-100 tracking-normal shadow-[#701ffc]/30 shadow-lg hover:bg-[#802FFF]'
className='animate-fade-in items-center bg-[var(--brand-primary-hex)] px-7 py-6 font-[420] font-geist-sans text-lg text-neutral-100 tracking-normal shadow-[var(--brand-primary-hex)]/30 shadow-lg hover:bg-[var(--brand-primary-hover-hex)]'
aria-label='Start using the platform'
>
<div className='text-[1.15rem]'>Start now</div>
@@ -104,7 +104,7 @@ function Hero() {
className='aspect-[5/3] h-auto md:aspect-auto'
>
<g filter='url(#filter0_b_0_1)'>
<ellipse cx='300' cy='240' rx='290' ry='220' fill='#0C0C0C' />
<ellipse cx='300' cy='240' rx='290' ry='220' fill='var(--brand-background-hex)' />
</g>
<defs>
<filter

View File

@@ -151,7 +151,7 @@ export default function ContributorsPage() {
)
return (
<main className='relative min-h-screen bg-[#0C0C0C] font-geist-sans text-white'>
<main className='relative min-h-screen bg-[var(--brand-background-hex)] font-geist-sans text-white'>
{/* Grid pattern background */}
<div className='absolute inset-0 bottom-[400px] z-0'>
<GridPattern
@@ -239,7 +239,7 @@ export default function ContributorsPage() {
<div className='mb-6 grid grid-cols-1 gap-3 sm:mb-8 sm:grid-cols-2 sm:gap-4 lg:grid-cols-5'>
<div className='rounded-lg border border-[#606060]/20 bg-neutral-800/30 p-3 text-center sm:rounded-xl sm:p-4'>
<div className='mb-1 flex items-center justify-center sm:mb-2'>
<Star className='h-4 w-4 text-[#701ffc] sm:h-5 sm:w-5' />
<Star className='h-4 w-4 text-[var(--brand-primary-hex)] sm:h-5 sm:w-5' />
</div>
<div className='font-bold text-lg text-white sm:text-xl'>{repoStats.stars}</div>
<div className='text-neutral-400 text-xs'>Stars</div>
@@ -247,7 +247,7 @@ export default function ContributorsPage() {
<div className='rounded-lg border border-[#606060]/20 bg-neutral-800/30 p-3 text-center sm:rounded-xl sm:p-4'>
<div className='mb-1 flex items-center justify-center sm:mb-2'>
<GitFork className='h-4 w-4 text-[#701ffc] sm:h-5 sm:w-5' />
<GitFork className='h-4 w-4 text-[var(--brand-primary-hex)] sm:h-5 sm:w-5' />
</div>
<div className='font-bold text-lg text-white sm:text-xl'>{repoStats.forks}</div>
<div className='text-neutral-400 text-xs'>Forks</div>
@@ -255,7 +255,7 @@ export default function ContributorsPage() {
<div className='rounded-lg border border-[#606060]/20 bg-neutral-800/30 p-3 text-center sm:rounded-xl sm:p-4'>
<div className='mb-1 flex items-center justify-center sm:mb-2'>
<GitGraph className='h-4 w-4 text-[#701ffc] sm:h-5 sm:w-5' />
<GitGraph className='h-4 w-4 text-[var(--brand-primary-hex)] sm:h-5 sm:w-5' />
</div>
<div className='font-bold text-lg text-white sm:text-xl'>
{filteredContributors?.length || 0}
@@ -265,7 +265,7 @@ export default function ContributorsPage() {
<div className='rounded-lg border border-[#606060]/20 bg-neutral-800/30 p-3 text-center sm:rounded-xl sm:p-4'>
<div className='mb-1 flex items-center justify-center sm:mb-2'>
<MessageCircle className='h-4 w-4 text-[#701ffc] sm:h-5 sm:w-5' />
<MessageCircle className='h-4 w-4 text-[var(--brand-primary-hex)] sm:h-5 sm:w-5' />
</div>
<div className='font-bold text-lg text-white sm:text-xl'>
{repoStats.openIssues}
@@ -275,7 +275,7 @@ export default function ContributorsPage() {
<div className='rounded-lg border border-[#606060]/20 bg-neutral-800/30 p-3 text-center sm:rounded-xl sm:p-4'>
<div className='mb-1 flex items-center justify-center sm:mb-2'>
<GitPullRequest className='h-4 w-4 text-[#701ffc] sm:h-5 sm:w-5' />
<GitPullRequest className='h-4 w-4 text-[var(--brand-primary-hex)] sm:h-5 sm:w-5' />
</div>
<div className='font-bold text-lg text-white sm:text-xl'>{repoStats.openPRs}</div>
<div className='text-neutral-400 text-xs'>Pull Requests</div>
@@ -291,8 +291,8 @@ export default function ContributorsPage() {
<AreaChart data={timelineData} className='-mx-2 sm:-mx-5 mt-1 sm:mt-2'>
<defs>
<linearGradient id='commits' x1='0' y1='0' x2='0' y2='1'>
<stop offset='5%' stopColor='#701ffc' stopOpacity={0.3} />
<stop offset='95%' stopColor='#701ffc' stopOpacity={0} />
<stop offset='5%' stopColor='var(--brand-primary-hex)' stopOpacity={0.3} />
<stop offset='95%' stopColor='var(--brand-primary-hex)' stopOpacity={0} />
</linearGradient>
</defs>
<XAxis
@@ -320,7 +320,7 @@ export default function ContributorsPage() {
<div className='rounded-lg border border-[#606060]/30 bg-[#0f0f0f] p-2 shadow-lg backdrop-blur-sm sm:p-3'>
<div className='grid gap-1 sm:gap-2'>
<div className='flex items-center gap-1 sm:gap-2'>
<GitGraph className='h-3 w-3 text-[#701ffc] sm:h-4 sm:w-4' />
<GitGraph className='h-3 w-3 text-[var(--brand-primary-hex)] sm:h-4 sm:w-4' />
<span className='text-neutral-400 text-xs sm:text-sm'>
Commits:
</span>
@@ -338,7 +338,7 @@ export default function ContributorsPage() {
<Area
type='monotone'
dataKey='commits'
stroke='#701ffc'
stroke='var(--brand-primary-hex)'
strokeWidth={2}
fill='url(#commits)'
/>
@@ -393,7 +393,7 @@ export default function ContributorsPage() {
animate={{ opacity: 1, y: 0 }}
style={{ animationDelay: `${index * 50}ms` }}
>
<Avatar className='h-12 w-12 ring-2 ring-[#606060]/30 transition-transform group-hover:scale-105 group-hover:ring-[#701ffc]/60 sm:h-16 sm:w-16'>
<Avatar className='h-12 w-12 ring-2 ring-[#606060]/30 transition-transform group-hover:scale-105 group-hover:ring-[var(--brand-primary-hex)]/60 sm:h-16 sm:w-16'>
<AvatarImage
src={contributor.avatar_url}
alt={contributor.login}
@@ -405,13 +405,13 @@ export default function ContributorsPage() {
</Avatar>
<div className='mt-2 text-center sm:mt-3'>
<span className='block font-medium text-white text-xs transition-colors group-hover:text-[#701ffc] sm:text-sm'>
<span className='block font-medium text-white text-xs transition-colors group-hover:text-[var(--brand-primary-hex)] sm:text-sm'>
{contributor.login.length > 12
? `${contributor.login.slice(0, 12)}...`
: contributor.login}
</span>
<div className='mt-1 flex items-center justify-center gap-1 sm:mt-2'>
<GitGraph className='h-2 w-2 text-neutral-400 transition-colors group-hover:text-[#701ffc] sm:h-3 sm:w-3' />
<GitGraph className='h-2 w-2 text-neutral-400 transition-colors group-hover:text-[var(--brand-primary-hex)] sm:h-3 sm:w-3' />
<span className='font-medium text-neutral-300 text-xs transition-colors group-hover:text-white sm:text-sm'>
{contributor.contributions}
</span>
@@ -508,7 +508,7 @@ export default function ContributorsPage() {
/>
<Bar
dataKey='contributions'
className='fill-[#701ffc]'
className='fill-[var(--brand-primary-hex)]'
radius={[4, 4, 0, 0]}
/>
</BarChart>
@@ -532,7 +532,7 @@ export default function ContributorsPage() {
>
<div className='relative p-6 sm:p-8 md:p-12 lg:p-16'>
<div className='text-center'>
<div className='mb-4 inline-flex items-center rounded-full border border-[#701ffc]/20 bg-[#701ffc]/10 px-3 py-1 font-medium text-[#701ffc] text-xs sm:mb-6 sm:px-4 sm:py-2 sm:text-sm'>
<div className='mb-4 inline-flex items-center rounded-full border border-[var(--brand-primary-hex)]/20 bg-[var(--brand-primary-hex)]/10 px-3 py-1 font-medium text-[var(--brand-primary-hex)] text-xs sm:mb-6 sm:px-4 sm:py-2 sm:text-sm'>
<Github className='mr-1 h-3 w-3 sm:mr-2 sm:h-4 sm:w-4' />
Apache-2.0 Licensed
</div>
@@ -550,7 +550,7 @@ export default function ContributorsPage() {
<Button
asChild
size='lg'
className='bg-[#701ffc] text-white transition-colors duration-500 hover:bg-[#802FFF]'
className='bg-[var(--brand-primary-hex)] text-white transition-colors duration-500 hover:bg-[var(--brand-primary-hover-hex)]'
>
<a
href='https://github.com/simstudioai/sim/blob/main/.github/CONTRIBUTING.md'

View File

@@ -12,7 +12,7 @@ export default function Landing() {
}
return (
<main className='relative min-h-screen bg-[#0C0C0C] font-geist-sans'>
<main className='relative min-h-screen bg-[var(--brand-background-hex)] font-geist-sans'>
<NavWrapper onOpenTypeformLink={handleOpenTypeformLink} />
<Hero />

View File

@@ -11,7 +11,7 @@ export default function PrivacyPolicy() {
}
return (
<main className='relative min-h-screen overflow-hidden bg-[#0C0C0C] text-white'>
<main className='relative min-h-screen overflow-hidden bg-[var(--brand-background-hex)] text-white'>
{/* Grid pattern background - only covers content area */}
<div className='absolute inset-0 bottom-[400px] z-0 overflow-hidden'>
<GridPattern
@@ -42,7 +42,7 @@ export default function PrivacyPolicy() {
className='h-full w-full'
>
<g filter='url(#filter0_b_privacy)'>
<rect width='600' height='1600' rx='0' fill='#0C0C0C' />
<rect width='600' height='1600' rx='0' fill='var(--brand-background-hex)' />
</g>
<defs>
<filter
@@ -391,7 +391,7 @@ export default function PrivacyPolicy() {
Privacy & Terms web page:{' '}
<Link
href='https://policies.google.com/privacy?hl=en'
className='text-[#B5A1D4] hover:text-[#701ffc]'
className='text-[#B5A1D4] hover:text-[var(--brand-primary-hex)]'
target='_blank'
rel='noopener noreferrer'
>
@@ -569,7 +569,7 @@ export default function PrivacyPolicy() {
Please note that we may ask you to verify your identity before responding to such
requests.
</p>
<p className='mb-4 border-[#701ffc] border-l-4 bg-[#701ffc]/10 p-3'>
<p className='mb-4 border-[var(--brand-primary-hex)] border-l-4 bg-[var(--brand-primary-hex)]/10 p-3'>
You have the right to complain to a Data Protection Authority about our collection
and use of your Personal Information. For more information, please contact your
local data protection authority in the European Economic Area (EEA).
@@ -661,7 +661,7 @@ export default function PrivacyPolicy() {
policy (if any). Before beginning your inquiry, email us at{' '}
<Link
href='mailto:security@sim.ai'
className='text-[#B5A1D4] hover:text-[#701ffc]'
className='text-[#B5A1D4] hover:text-[var(--brand-primary-hex)]'
>
security@sim.ai
</Link>{' '}
@@ -686,7 +686,7 @@ export default function PrivacyPolicy() {
To report any security flaws, send an email to{' '}
<Link
href='mailto:security@sim.ai'
className='text-[#B5A1D4] hover:text-[#701ffc]'
className='text-[#B5A1D4] hover:text-[var(--brand-primary-hex)]'
>
security@sim.ai
</Link>
@@ -726,7 +726,7 @@ export default function PrivacyPolicy() {
If you have any questions about this Privacy Policy, please contact us at:{' '}
<Link
href='mailto:privacy@sim.ai'
className='text-[#B5A1D4] hover:text-[#701ffc]'
className='text-[#B5A1D4] hover:text-[var(--brand-primary-hex)]'
>
privacy@sim.ai
</Link>

View File

@@ -11,7 +11,7 @@ export default function TermsOfService() {
}
return (
<main className='relative min-h-screen overflow-hidden bg-[#0C0C0C] text-white'>
<main className='relative min-h-screen overflow-hidden bg-[var(--brand-background-hex)] text-white'>
{/* Grid pattern background */}
<div className='absolute inset-0 bottom-[400px] z-0 overflow-hidden'>
<GridPattern
@@ -42,7 +42,7 @@ export default function TermsOfService() {
className='h-full w-full'
>
<g filter='url(#filter0_b_terms)'>
<rect width='600' height='1600' rx='0' fill='#0C0C0C' />
<rect width='600' height='1600' rx='0' fill='var(--brand-background-hex)' />
</g>
<defs>
<filter
@@ -268,7 +268,7 @@ export default function TermsOfService() {
Arbitration Agreement. The arbitration will be conducted by JAMS, an established
alternative dispute resolution provider.
</p>
<p className='mb-4 border-[#701ffc] border-l-4 bg-[#701ffc]/10 p-3'>
<p className='mb-4 border-[var(--brand-primary-hex)] border-l-4 bg-[var(--brand-primary-hex)]/10 p-3'>
YOU AND COMPANY AGREE THAT EACH OF US MAY BRING CLAIMS AGAINST THE OTHER ONLY ON
AN INDIVIDUAL BASIS AND NOT ON A CLASS, REPRESENTATIVE, OR COLLECTIVE BASIS. ONLY
INDIVIDUAL RELIEF IS AVAILABLE, AND DISPUTES OF MORE THAN ONE CUSTOMER OR USER
@@ -277,7 +277,10 @@ export default function TermsOfService() {
<p className='mb-4'>
You have the right to opt out of the provisions of this Arbitration Agreement by
sending a timely written notice of your decision to opt out to:{' '}
<Link href='mailto:legal@sim.ai' className='text-[#B5A1D4] hover:text-[#701ffc]'>
<Link
href='mailto:legal@sim.ai'
className='text-[#B5A1D4] hover:text-[var(--brand-primary-hex)]'
>
legal@sim.ai{' '}
</Link>
within 30 days after first becoming subject to this Arbitration Agreement.
@@ -330,7 +333,7 @@ export default function TermsOfService() {
Our Copyright Agent can be reached at:{' '}
<Link
href='mailto:copyright@sim.ai'
className='text-[#B5A1D4] hover:text-[#701ffc]'
className='text-[#B5A1D4] hover:text-[var(--brand-primary-hex)]'
>
copyright@sim.ai
</Link>
@@ -341,7 +344,10 @@ export default function TermsOfService() {
<h2 className='mb-4 font-semibold text-2xl text-white'>12. Contact Us</h2>
<p>
If you have any questions about these Terms, please contact us at:{' '}
<Link href='mailto:legal@sim.ai' className='text-[#B5A1D4] hover:text-[#701ffc]'>
<Link
href='mailto:legal@sim.ai'
className='text-[#B5A1D4] hover:text-[var(--brand-primary-hex)]'
>
legal@sim.ai
</Link>
</p>

View File

@@ -99,6 +99,7 @@ export const sampleWorkflowState = {
horizontalHandles: true,
isWide: false,
advancedMode: false,
triggerMode: false,
height: 95,
},
'agent-id': {
@@ -127,6 +128,7 @@ export const sampleWorkflowState = {
horizontalHandles: true,
isWide: false,
advancedMode: false,
triggerMode: false,
height: 680,
},
},
@@ -712,6 +714,7 @@ export function mockFileSystem(
}
return Promise.reject(new Error('File not found'))
}),
mkdir: vi.fn().mockResolvedValue(undefined),
}))
}
@@ -761,14 +764,15 @@ export function createStorageProviderMocks(options: StorageProviderMockOptions =
getStorageProvider: vi.fn().mockReturnValue(provider),
isUsingCloudStorage: vi.fn().mockReturnValue(isCloudEnabled),
uploadFile: vi.fn().mockResolvedValue({
path: '/api/files/serve/test-key',
key: 'test-key',
path: '/api/files/serve/test-key.txt',
key: 'test-key.txt',
name: 'test.txt',
size: 100,
type: 'text/plain',
}),
downloadFile: vi.fn().mockResolvedValue(Buffer.from('test content')),
deleteFile: vi.fn().mockResolvedValue(undefined),
getPresignedUrl: vi.fn().mockResolvedValue(presignedUrl),
}))
if (provider === 's3') {
@@ -1235,14 +1239,15 @@ export function setupFileApiMocks(
getStorageProvider: vi.fn().mockReturnValue('local'),
isUsingCloudStorage: vi.fn().mockReturnValue(cloudEnabled),
uploadFile: vi.fn().mockResolvedValue({
path: '/api/files/serve/test-key',
key: 'test-key',
path: '/api/files/serve/test-key.txt',
key: 'test-key.txt',
name: 'test.txt',
size: 100,
type: 'text/plain',
}),
downloadFile: vi.fn().mockResolvedValue(Buffer.from('test content')),
deleteFile: vi.fn().mockResolvedValue(undefined),
getPresignedUrl: vi.fn().mockResolvedValue('https://example.com/presigned-url'),
}))
}
@@ -1347,8 +1352,8 @@ export function mockUploadUtils(
const {
isCloudStorage = false,
uploadResult = {
path: '/api/files/serve/test-key',
key: 'test-key',
path: '/api/files/serve/test-key.txt',
key: 'test-key.txt',
name: 'test.txt',
size: 100,
type: 'text/plain',

View File

@@ -125,7 +125,7 @@ describe('OAuth Credentials API Route', () => {
})
expect(data.credentials[1]).toMatchObject({
id: 'credential-2',
provider: 'google-email',
provider: 'google-default',
isDefault: true,
})
})
@@ -158,7 +158,7 @@ describe('OAuth Credentials API Route', () => {
const data = await response.json()
expect(response.status).toBe(400)
expect(data.error).toBe('Provider is required')
expect(data.error).toBe('Provider or credentialId is required')
expect(mockLogger.warn).toHaveBeenCalled()
})

View File

@@ -1,12 +1,13 @@
import { and, eq } from 'drizzle-orm'
import { jwtDecode } from 'jwt-decode'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { createLogger } from '@/lib/logs/console/logger'
import type { OAuthService } from '@/lib/oauth/oauth'
import { parseProvider } from '@/lib/oauth/oauth'
import { getUserEntityPermissions } from '@/lib/permissions/utils'
import { db } from '@/db'
import { account, user } from '@/db/schema'
import { account, user, workflow } from '@/db/schema'
export const dynamic = 'force-dynamic'
@@ -25,36 +26,96 @@ export async function GET(request: NextRequest) {
const requestId = crypto.randomUUID().slice(0, 8)
try {
// Get the session
const session = await getSession()
// Get query params
const { searchParams } = new URL(request.url)
const providerParam = searchParams.get('provider') as OAuthService | null
const workflowId = searchParams.get('workflowId')
const credentialId = searchParams.get('credentialId')
// Check if the user is authenticated
if (!session?.user?.id) {
// Authenticate requester (supports session, API key, internal JWT)
const authResult = await checkHybridAuth(request)
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthenticated credentials request rejected`)
return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
}
const requesterUserId = authResult.userId
// Get the provider from the query params
const { searchParams } = new URL(request.url)
const provider = searchParams.get('provider') as OAuthService | null
// Resolve effective user id: workflow owner if workflowId provided (with access check); else requester
let effectiveUserId: string
if (workflowId) {
// Load workflow owner and workspace for access control
const rows = await db
.select({ userId: workflow.userId, workspaceId: workflow.workspaceId })
.from(workflow)
.where(eq(workflow.id, workflowId))
.limit(1)
if (!provider) {
logger.warn(`[${requestId}] Missing provider parameter`)
return NextResponse.json({ error: 'Provider is required' }, { status: 400 })
if (!rows.length) {
logger.warn(`[${requestId}] Workflow not found for credentials request`, { workflowId })
return NextResponse.json({ error: 'Workflow not found' }, { status: 404 })
}
const wf = rows[0]
if (requesterUserId !== wf.userId) {
if (!wf.workspaceId) {
logger.warn(
`[${requestId}] Forbidden - workflow has no workspace and requester is not owner`,
{
requesterUserId,
}
)
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
const perm = await getUserEntityPermissions(requesterUserId, 'workspace', wf.workspaceId)
if (perm === null) {
logger.warn(`[${requestId}] Forbidden credentials request - no workspace access`, {
requesterUserId,
workspaceId: wf.workspaceId,
})
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
}
effectiveUserId = wf.userId
} else {
effectiveUserId = requesterUserId
}
// Parse the provider to get base provider and feature type
const { baseProvider } = parseProvider(provider)
if (!providerParam && !credentialId) {
logger.warn(`[${requestId}] Missing provider parameter`)
return NextResponse.json({ error: 'Provider or credentialId is required' }, { status: 400 })
}
// Get all accounts for this user and provider
const accounts = await db
.select()
.from(account)
.where(and(eq(account.userId, session.user.id), eq(account.providerId, provider)))
// Parse the provider to get base provider and feature type (if provider is present)
const { baseProvider } = parseProvider(providerParam || 'google-default')
let accountsData
if (credentialId) {
// Foreign-aware lookup for a specific credential by id
// If workflowId is provided and requester has access (checked above), allow fetching by id only
if (workflowId) {
accountsData = await db.select().from(account).where(eq(account.id, credentialId))
} else {
// Fallback: constrain to requester's own credentials when not in a workflow context
accountsData = await db
.select()
.from(account)
.where(and(eq(account.userId, effectiveUserId), eq(account.id, credentialId)))
}
} else {
// Fetch all credentials for provider and effective user
accountsData = await db
.select()
.from(account)
.where(and(eq(account.userId, effectiveUserId), eq(account.providerId, providerParam!)))
}
// Transform accounts into credentials
const credentials = await Promise.all(
accounts.map(async (acc) => {
accountsData.map(async (acc) => {
// Extract the feature type from providerId (e.g., 'google-default' -> 'default')
const [_, featureType = 'default'] = acc.providerId.split('-')
@@ -109,7 +170,7 @@ export async function GET(request: NextRequest) {
return {
id: acc.id,
name: displayName,
provider,
provider: acc.providerId,
lastUsed: acc.updatedAt.toISOString(),
isDefault: featureType === 'default',
}
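
Restated for readability (illustrative only; the route above is the source of truth): a sketch of how the handler now decides whose credentials to read and when to reject the request:

```typescript
// Condensed decision logic for the credentials GET route, with DB access and
// logging stripped out. Inputs mirror what the route derives above.
function resolveEffectiveUserId(opts: {
  requesterUserId: string // from checkHybridAuth (session, API key, or internal JWT)
  workflowOwnerUserId?: string // looked up only when a workflowId query param is present
  requesterIsWorkflowOwner?: boolean
  hasWorkspacePermission?: boolean // result of getUserEntityPermissions, when applicable
}): string {
  if (!opts.workflowOwnerUserId) {
    // No workflow context: credentials are always scoped to the requester.
    return opts.requesterUserId
  }
  if (opts.requesterIsWorkflowOwner || opts.hasWorkspacePermission) {
    // Workflow context with verified access: read the workflow owner's credentials.
    return opts.workflowOwnerUserId
  }
  // Mirrors the 403 Forbidden responses in the route.
  throw new Error('Forbidden')
}
```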

View File

@@ -10,6 +10,8 @@ describe('OAuth Token API Routes', () => {
const mockGetUserId = vi.fn()
const mockGetCredential = vi.fn()
const mockRefreshTokenIfNeeded = vi.fn()
const mockAuthorizeCredentialUse = vi.fn()
const mockCheckHybridAuth = vi.fn()
const mockLogger = {
info: vi.fn(),
@@ -37,6 +39,14 @@ describe('OAuth Token API Routes', () => {
vi.doMock('@/lib/logs/console/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.doMock('@/lib/auth/credential-access', () => ({
authorizeCredentialUse: mockAuthorizeCredentialUse,
}))
vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: mockCheckHybridAuth,
}))
})
afterEach(() => {
@@ -48,7 +58,12 @@ describe('OAuth Token API Routes', () => {
*/
describe('POST handler', () => {
it('should return access token successfully', async () => {
mockGetUserId.mockResolvedValueOnce('test-user-id')
mockAuthorizeCredentialUse.mockResolvedValueOnce({
ok: true,
authType: 'session',
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'owner-user-id',
})
mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: 'test-token',
@@ -78,13 +93,18 @@ describe('OAuth Token API Routes', () => {
expect(data).toHaveProperty('accessToken', 'fresh-token')
// Verify mocks were called correctly
expect(mockGetUserId).toHaveBeenCalledWith(mockRequestId, undefined)
expect(mockGetCredential).toHaveBeenCalledWith(mockRequestId, 'credential-id', 'test-user-id')
expect(mockAuthorizeCredentialUse).toHaveBeenCalled()
expect(mockGetCredential).toHaveBeenCalled()
expect(mockRefreshTokenIfNeeded).toHaveBeenCalled()
})
it('should handle workflowId for server-side authentication', async () => {
mockGetUserId.mockResolvedValueOnce('workflow-owner-id')
mockAuthorizeCredentialUse.mockResolvedValueOnce({
ok: true,
authType: 'internal_jwt',
requesterUserId: 'workflow-owner-id',
credentialOwnerUserId: 'workflow-owner-id',
})
mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: 'test-token',
@@ -110,12 +130,8 @@ describe('OAuth Token API Routes', () => {
expect(response.status).toBe(200)
expect(data).toHaveProperty('accessToken', 'fresh-token')
expect(mockGetUserId).toHaveBeenCalledWith(mockRequestId, 'workflow-id')
expect(mockGetCredential).toHaveBeenCalledWith(
mockRequestId,
'credential-id',
'workflow-owner-id'
)
expect(mockAuthorizeCredentialUse).toHaveBeenCalled()
expect(mockGetCredential).toHaveBeenCalled()
})
it('should handle missing credentialId', async () => {
@@ -132,7 +148,10 @@ describe('OAuth Token API Routes', () => {
})
it('should handle authentication failure', async () => {
mockGetUserId.mockResolvedValueOnce(undefined)
mockAuthorizeCredentialUse.mockResolvedValueOnce({
ok: false,
error: 'Authentication required',
})
const req = createMockRequest('POST', {
credentialId: 'credential-id',
@@ -143,12 +162,12 @@ describe('OAuth Token API Routes', () => {
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(401)
expect(data).toHaveProperty('error', 'User not authenticated')
expect(response.status).toBe(403)
expect(data).toHaveProperty('error')
})
it('should handle workflow not found', async () => {
mockGetUserId.mockResolvedValueOnce(undefined)
mockAuthorizeCredentialUse.mockResolvedValueOnce({ ok: false, error: 'Workflow not found' })
const req = createMockRequest('POST', {
credentialId: 'credential-id',
@@ -160,12 +179,16 @@ describe('OAuth Token API Routes', () => {
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(404)
expect(data).toHaveProperty('error', 'Workflow not found')
expect(response.status).toBe(403)
})
it('should handle credential not found', async () => {
mockGetUserId.mockResolvedValueOnce('test-user-id')
mockAuthorizeCredentialUse.mockResolvedValueOnce({
ok: true,
authType: 'session',
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'owner-user-id',
})
mockGetCredential.mockResolvedValueOnce(undefined)
const req = createMockRequest('POST', {
@@ -177,12 +200,17 @@ describe('OAuth Token API Routes', () => {
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(404)
expect(data).toHaveProperty('error', 'Credential not found')
expect(response.status).toBe(401)
expect(data).toHaveProperty('error')
})
it('should handle token refresh failure', async () => {
mockGetUserId.mockResolvedValueOnce('test-user-id')
mockAuthorizeCredentialUse.mockResolvedValueOnce({
ok: true,
authType: 'session',
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'owner-user-id',
})
mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: 'test-token',
@@ -211,7 +239,11 @@ describe('OAuth Token API Routes', () => {
*/
describe('GET handler', () => {
it('should return access token successfully', async () => {
mockGetUserId.mockResolvedValueOnce('test-user-id')
mockCheckHybridAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: 'test-token',
@@ -236,7 +268,7 @@ describe('OAuth Token API Routes', () => {
expect(response.status).toBe(200)
expect(data).toHaveProperty('accessToken', 'fresh-token')
expect(mockGetUserId).toHaveBeenCalledWith(mockRequestId)
expect(mockCheckHybridAuth).toHaveBeenCalled()
expect(mockGetCredential).toHaveBeenCalledWith(mockRequestId, 'credential-id', 'test-user-id')
expect(mockRefreshTokenIfNeeded).toHaveBeenCalled()
})
@@ -255,7 +287,10 @@ describe('OAuth Token API Routes', () => {
})
it('should handle authentication failure', async () => {
mockGetUserId.mockResolvedValueOnce(undefined)
mockCheckHybridAuth.mockResolvedValueOnce({
success: false,
error: 'Authentication required',
})
const req = new Request(
'http://localhost:3000/api/auth/oauth/token?credentialId=credential-id'
@@ -267,11 +302,15 @@ describe('OAuth Token API Routes', () => {
const data = await response.json()
expect(response.status).toBe(401)
expect(data).toHaveProperty('error', 'User not authenticated')
expect(data).toHaveProperty('error')
})
it('should handle credential not found', async () => {
mockGetUserId.mockResolvedValueOnce('test-user-id')
mockCheckHybridAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce(undefined)
const req = new Request(
@@ -284,11 +323,15 @@ describe('OAuth Token API Routes', () => {
const data = await response.json()
expect(response.status).toBe(404)
expect(data).toHaveProperty('error', 'Credential not found')
expect(data).toHaveProperty('error')
})
it('should handle missing access token', async () => {
mockGetUserId.mockResolvedValueOnce('test-user-id')
mockCheckHybridAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: null,
@@ -306,12 +349,15 @@ describe('OAuth Token API Routes', () => {
const data = await response.json()
expect(response.status).toBe(400)
expect(data).toHaveProperty('error', 'No access token available')
expect(mockLogger.warn).toHaveBeenCalled()
expect(data).toHaveProperty('error')
})
it('should handle token refresh failure', async () => {
mockGetUserId.mockResolvedValueOnce('test-user-id')
mockCheckHybridAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: 'test-token',
@@ -331,7 +377,7 @@ describe('OAuth Token API Routes', () => {
const data = await response.json()
expect(response.status).toBe(401)
expect(data).toHaveProperty('error', 'Failed to refresh access token')
expect(data).toHaveProperty('error')
})
})
})

View File

@@ -1,6 +1,8 @@
import { type NextRequest, NextResponse } from 'next/server'
import { authorizeCredentialUse } from '@/lib/auth/credential-access'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { createLogger } from '@/lib/logs/console/logger'
import { getCredential, getUserId, refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { getCredential, refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
export const dynamic = 'force-dynamic'
@@ -26,23 +28,18 @@ export async function POST(request: NextRequest) {
return NextResponse.json({ error: 'Credential ID is required' }, { status: 400 })
}
// Determine the user ID based on the context
const userId = await getUserId(requestId, workflowId)
if (!userId) {
return NextResponse.json(
{ error: workflowId ? 'Workflow not found' : 'User not authenticated' },
{ status: workflowId ? 404 : 401 }
)
// We already have workflowId from the parsed body; avoid forcing hybrid auth to re-read it
const authz = await authorizeCredentialUse(request, {
credentialId,
workflowId,
requireWorkflowIdForInternal: false,
})
if (!authz.ok || !authz.credentialOwnerUserId) {
return NextResponse.json({ error: authz.error || 'Unauthorized' }, { status: 403 })
}
// Get the credential from the database
const credential = await getCredential(requestId, credentialId, userId)
if (!credential) {
logger.error(`[${requestId}] Credential not found: ${credentialId}`)
return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
}
// Fetch the credential as the owner to enforce ownership scoping
const credential = await getCredential(requestId, credentialId, authz.credentialOwnerUserId)
try {
// Refresh the token if needed
@@ -75,27 +72,24 @@ export async function GET(request: NextRequest) {
}
// For GET requests, we only support session-based authentication
const userId = await getUserId(requestId)
if (!userId) {
const auth = await checkHybridAuth(request, { requireWorkflowId: false })
if (!auth.success || auth.authType !== 'session' || !auth.userId) {
return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
}
// Get the credential from the database
const credential = await getCredential(requestId, credentialId, userId)
const credential = await getCredential(requestId, credentialId, auth.userId)
if (!credential) {
return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
}
// Check if the access token is valid
if (!credential.accessToken) {
logger.warn(`[${requestId}] No access token available for credential`)
return NextResponse.json({ error: 'No access token available' }, { status: 400 })
}
try {
// Refresh the token if needed
const { accessToken } = await refreshTokenIfNeeded(requestId, credential, credentialId)
return NextResponse.json({ accessToken }, { status: 200 })
} catch (_error) {

View File

@@ -1,4 +1,4 @@
import { and, eq } from 'drizzle-orm'
import { and, desc, eq } from 'drizzle-orm'
import { getSession } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console/logger'
import { refreshOAuthToken } from '@/lib/oauth/oauth'
@@ -70,7 +70,8 @@ export async function getOAuthToken(userId: string, providerId: string): Promise
})
.from(account)
.where(and(eq(account.userId, userId), eq(account.providerId, providerId)))
.orderBy(account.createdAt)
// Always use the most recently updated credential for this provider
.orderBy(desc(account.updatedAt))
.limit(1)
if (connections.length === 0) {
@@ -80,19 +81,13 @@ export async function getOAuthToken(userId: string, providerId: string): Promise
const credential = connections[0]
// Check if we have a valid access token
if (!credential.accessToken) {
logger.warn(`Access token is null for user ${userId}, provider ${providerId}`)
return null
}
// Check if the token is expired and needs refreshing
// Determine whether we should refresh: missing token OR expired token
const now = new Date()
const tokenExpiry = credential.accessTokenExpiresAt
// Only refresh if we have an expiration time AND it's expired AND we have a refresh token
const needsRefresh = tokenExpiry && tokenExpiry < now && !!credential.refreshToken
const shouldAttemptRefresh =
!!credential.refreshToken && (!credential.accessToken || (tokenExpiry && tokenExpiry < now))
if (needsRefresh) {
if (shouldAttemptRefresh) {
logger.info(
`Access token expired for user ${userId}, provider ${providerId}. Attempting to refresh.`
)
@@ -141,6 +136,13 @@ export async function getOAuthToken(userId: string, providerId: string): Promise
}
}
if (!credential.accessToken) {
logger.warn(
`Access token is null and no refresh attempted or available for user ${userId}, provider ${providerId}`
)
return null
}
logger.info(`Found valid OAuth token for user ${userId}, provider ${providerId}`)
return credential.accessToken
}
@@ -164,19 +166,21 @@ export async function refreshAccessTokenIfNeeded(
return null
}
// Check if we need to refresh the token
// Decide if we should refresh: token missing OR expired
const expiresAt = credential.accessTokenExpiresAt
const now = new Date()
// Only refresh if we have an expiration time AND it's expired
// If no expiration time is set (newly created credentials), assume token is valid
const needsRefresh = expiresAt && expiresAt <= now
const shouldRefresh =
!!credential.refreshToken && (!credential.accessToken || (expiresAt && expiresAt <= now))
const accessToken = credential.accessToken
if (needsRefresh && credential.refreshToken) {
if (shouldRefresh) {
logger.info(`[${requestId}] Token expired, attempting to refresh for credential`)
try {
const refreshedToken = await refreshOAuthToken(credential.providerId, credential.refreshToken)
const refreshedToken = await refreshOAuthToken(
credential.providerId,
credential.refreshToken!
)
if (!refreshedToken) {
logger.error(`[${requestId}] Failed to refresh token for credential: ${credentialId}`, {
@@ -217,6 +221,7 @@ export async function refreshAccessTokenIfNeeded(
return null
}
} else if (!accessToken) {
// We have no access token and either no refresh token or not eligible to refresh
logger.error(`[${requestId}] Missing access token for credential`)
return null
}
@@ -233,21 +238,20 @@ export async function refreshTokenIfNeeded(
credential: any,
credentialId: string
): Promise<{ accessToken: string; refreshed: boolean }> {
// Check if we need to refresh the token
// Decide if we should refresh: token missing OR expired
const expiresAt = credential.accessTokenExpiresAt
const now = new Date()
// Only refresh if we have an expiration time AND it's expired
// If no expiration time is set (newly created credentials), assume token is valid
const needsRefresh = expiresAt && expiresAt <= now
const shouldRefresh =
!!credential.refreshToken && (!credential.accessToken || (expiresAt && expiresAt <= now))
// If token is still valid, return it directly
if (!needsRefresh || !credential.refreshToken) {
// If token appears valid and present, return it directly
if (!shouldRefresh) {
logger.info(`[${requestId}] Access token is valid`)
return { accessToken: credential.accessToken, refreshed: false }
}
try {
const refreshResult = await refreshOAuthToken(credential.providerId, credential.refreshToken)
const refreshResult = await refreshOAuthToken(credential.providerId, credential.refreshToken!)
if (!refreshResult) {
logger.error(`[${requestId}] Failed to refresh token for credential`)

View File

@@ -2,8 +2,8 @@ import crypto from 'crypto'
import { eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { env } from '@/lib/env'
import { isProd } from '@/lib/environment'
import { checkInternalApiKey } from '@/lib/copilot/utils'
import { isBillingEnabled } from '@/lib/environment'
import { createLogger } from '@/lib/logs/console/logger'
import { db } from '@/db'
import { userStats } from '@/db/schema'
@@ -11,34 +11,14 @@ import { calculateCost } from '@/providers/utils'
const logger = createLogger('billing-update-cost')
// Schema for the request body
const UpdateCostSchema = z.object({
userId: z.string().min(1, 'User ID is required'),
input: z.number().min(0, 'Input tokens must be a non-negative number'),
output: z.number().min(0, 'Output tokens must be a non-negative number'),
model: z.string().min(1, 'Model is required'),
multiplier: z.number().min(0),
})
// Authentication function (reused from copilot/methods route)
function checkInternalApiKey(req: NextRequest) {
const apiKey = req.headers.get('x-api-key')
const expectedApiKey = env.INTERNAL_API_SECRET
if (!expectedApiKey) {
return { success: false, error: 'Internal API key not configured' }
}
if (!apiKey) {
return { success: false, error: 'API key required' }
}
if (apiKey !== expectedApiKey) {
return { success: false, error: 'Invalid API key' }
}
return { success: true }
}
/**
* POST /api/billing/update-cost
* Update user cost based on token usage with internal API key auth
@@ -50,6 +30,19 @@ export async function POST(req: NextRequest) {
try {
logger.info(`[${requestId}] Update cost request started`)
if (!isBillingEnabled) {
logger.debug(`[${requestId}] Billing is disabled, skipping cost update`)
return NextResponse.json({
success: true,
message: 'Billing disabled, cost update skipped',
data: {
billingEnabled: false,
processedAt: new Date().toISOString(),
requestId,
},
})
}
// Check authentication (internal API key)
const authResult = checkInternalApiKey(req)
if (!authResult.success) {
@@ -82,27 +75,27 @@ export async function POST(req: NextRequest) {
)
}
const { userId, input, output, model } = validation.data
const { userId, input, output, model, multiplier } = validation.data
logger.info(`[${requestId}] Processing cost update`, {
userId,
input,
output,
model,
multiplier,
})
const finalPromptTokens = input
const finalCompletionTokens = output
const totalTokens = input + output
// Calculate cost using COPILOT_COST_MULTIPLIER (only in production, like normal executions)
const copilotMultiplier = isProd ? env.COPILOT_COST_MULTIPLIER || 1 : 1
// Calculate cost using provided multiplier (required)
const costResult = calculateCost(
model,
finalPromptTokens,
finalCompletionTokens,
false,
copilotMultiplier
multiplier
)
logger.info(`[${requestId}] Cost calculation result`, {
@@ -111,7 +104,7 @@ export async function POST(req: NextRequest) {
promptTokens: finalPromptTokens,
completionTokens: finalCompletionTokens,
totalTokens: totalTokens,
copilotMultiplier,
multiplier,
costResult,
})
@@ -134,6 +127,10 @@ export async function POST(req: NextRequest) {
totalTokensUsed: totalTokens,
totalCost: costToStore.toString(),
currentPeriodCost: costToStore.toString(),
// Copilot usage tracking
totalCopilotCost: costToStore.toString(),
totalCopilotTokens: totalTokens,
totalCopilotCalls: 1,
lastActive: new Date(),
})
@@ -148,6 +145,10 @@ export async function POST(req: NextRequest) {
totalTokensUsed: sql`total_tokens_used + ${totalTokens}`,
totalCost: sql`total_cost + ${costToStore}`,
currentPeriodCost: sql`current_period_cost + ${costToStore}`,
// Copilot usage tracking increments
totalCopilotCost: sql`total_copilot_cost + ${costToStore}`,
totalCopilotTokens: sql`total_copilot_tokens + ${totalTokens}`,
totalCopilotCalls: sql`total_copilot_calls + 1`,
totalApiCalls: sql`total_api_calls`,
lastActive: new Date(),
}
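A hedged example of calling the updated endpoint now that `multiplier` is a required field of `UpdateCostSchema` and replaces the env-derived copilot multiplier. The `x-api-key` header name follows the inline check this diff removes; the shared `checkInternalApiKey` helper may read a different header, and the host, user id, token counts, and model are placeholders.

async function reportCopilotUsage() {
  const res = await fetch('http://localhost:3000/api/billing/update-cost', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': process.env.INTERNAL_API_SECRET ?? '',
    },
    body: JSON.stringify({
      userId: 'user_123', // placeholder
      input: 1200,        // prompt tokens
      output: 350,        // completion tokens
      model: 'gpt-4o',    // any model calculateCost knows about
      multiplier: 1.5,    // now required by the schema
    }),
  })
  // When billing is disabled the route answers 200 with data.billingEnabled === false
  // and skips the userStats upsert entirely.
  return res.json()
}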

View File

@@ -245,6 +245,11 @@ describe('Chat API Route', () => {
NODE_ENV: 'development',
NEXT_PUBLIC_APP_URL: 'http://localhost:3000',
},
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string'
? value.toLowerCase() === 'true' || value === '1'
: Boolean(value),
getEnv: (variable: string) => process.env[variable],
}))
const validData = {
@@ -287,6 +292,9 @@ describe('Chat API Route', () => {
NODE_ENV: 'development',
NEXT_PUBLIC_APP_URL: 'http://localhost:3000',
},
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value === 'true' || value === '1' : Boolean(value),
getEnv: (variable: string) => process.env[variable],
}))
const validData = {

View File

@@ -14,8 +14,6 @@ import { chat } from '@/db/schema'
const logger = createLogger('ChatAPI')
export const dynamic = 'force-dynamic'
const chatSchema = z.object({
workflowId: z.string().min(1, 'Workflow ID is required'),
subdomain: z
@@ -150,7 +148,7 @@ export async function POST(request: NextRequest) {
// Merge customizations with the additional fields
const mergedCustomizations = {
...(customizations || {}),
primaryColor: customizations?.primaryColor || '#802FFF',
primaryColor: customizations?.primaryColor || 'var(--brand-primary-hover-hex)',
welcomeMessage: customizations?.welcomeMessage || 'Hi there! How can I help you today?',
}

View File

@@ -2,9 +2,6 @@ import { eq } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console/logger'
export const dynamic = 'force-dynamic'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
import { db } from '@/db'
import { chat } from '@/db/schema'

View File

@@ -0,0 +1,70 @@
import { createCipheriv, createHash, createHmac, randomBytes } from 'crypto'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { env } from '@/lib/env'
import { createLogger } from '@/lib/logs/console/logger'
import { generateApiKey } from '@/lib/utils'
import { db } from '@/db'
import { copilotApiKeys } from '@/db/schema'
const logger = createLogger('CopilotApiKeysGenerate')
function deriveKey(keyString: string): Buffer {
return createHash('sha256').update(keyString, 'utf8').digest()
}
function encryptRandomIv(plaintext: string, keyString: string): string {
const key = deriveKey(keyString)
const iv = randomBytes(16)
const cipher = createCipheriv('aes-256-gcm', key, iv)
let encrypted = cipher.update(plaintext, 'utf8', 'hex')
encrypted += cipher.final('hex')
const authTag = cipher.getAuthTag().toString('hex')
return `${iv.toString('hex')}:${encrypted}:${authTag}`
}
function computeLookup(plaintext: string, keyString: string): string {
// Deterministic, constant-time comparable MAC: HMAC-SHA256(DB_KEY, plaintext)
return createHmac('sha256', Buffer.from(keyString, 'utf8'))
.update(plaintext, 'utf8')
.digest('hex')
}
export async function POST(req: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
if (!env.AGENT_API_DB_ENCRYPTION_KEY) {
logger.error('AGENT_API_DB_ENCRYPTION_KEY is not set')
return NextResponse.json({ error: 'Server not configured' }, { status: 500 })
}
const userId = session.user.id
// Generate and prefix the key (strip the generic sim_ prefix from the random part)
const rawKey = generateApiKey().replace(/^sim_/, '')
const plaintextKey = `sk-sim-copilot-${rawKey}`
// Encrypt with random IV for confidentiality
const dbEncrypted = encryptRandomIv(plaintextKey, env.AGENT_API_DB_ENCRYPTION_KEY)
// Compute deterministic lookup value for O(1) search
const lookup = computeLookup(plaintextKey, env.AGENT_API_DB_ENCRYPTION_KEY)
const [inserted] = await db
.insert(copilotApiKeys)
.values({ userId, apiKeyEncrypted: dbEncrypted, apiKeyLookup: lookup })
.returning({ id: copilotApiKeys.id })
return NextResponse.json(
{ success: true, key: { id: inserted.id, apiKey: plaintextKey } },
{ status: 201 }
)
} catch (error) {
logger.error('Failed to generate copilot API key', { error })
return NextResponse.json({ error: 'Failed to generate copilot API key' }, { status: 500 })
}
}
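The route stores two representations of every key on purpose: the AES-256-GCM ciphertext is randomized per call, so it protects the plaintext but cannot be searched, while the HMAC lookup is deterministic, so equality on `apiKeyLookup` gives O(1) retrieval without storing the key itself. A runnable sketch of that property, reusing the same helper shapes as above; the secret and key values are placeholders.

import { createCipheriv, createHash, createHmac, randomBytes } from 'crypto'

const DB_KEY = 'placeholder-db-encryption-secret' // stands in for AGENT_API_DB_ENCRYPTION_KEY

function deriveKey(keyString: string): Buffer {
  return createHash('sha256').update(keyString, 'utf8').digest()
}

// AES-256-GCM with a random IV, serialized as iv:ciphertext:authTag (hex), as in the route.
function encryptRandomIv(plaintext: string, keyString: string): string {
  const iv = randomBytes(16)
  const cipher = createCipheriv('aes-256-gcm', deriveKey(keyString), iv)
  let encrypted = cipher.update(plaintext, 'utf8', 'hex')
  encrypted += cipher.final('hex')
  return `${iv.toString('hex')}:${encrypted}:${cipher.getAuthTag().toString('hex')}`
}

// Deterministic MAC: HMAC-SHA256(DB_KEY, plaintext).
function computeLookup(plaintext: string, keyString: string): string {
  return createHmac('sha256', Buffer.from(keyString, 'utf8')).update(plaintext, 'utf8').digest('hex')
}

const apiKey = 'sk-sim-copilot-example' // placeholder, not a generated key
console.log(encryptRandomIv(apiKey, DB_KEY) === encryptRandomIv(apiKey, DB_KEY)) // false: random IV
console.log(computeLookup(apiKey, DB_KEY) === computeLookup(apiKey, DB_KEY))     // true: stable lookup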

View File

@@ -0,0 +1,85 @@
import { createDecipheriv, createHash } from 'crypto'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { env } from '@/lib/env'
import { createLogger } from '@/lib/logs/console/logger'
import { db } from '@/db'
import { copilotApiKeys } from '@/db/schema'
const logger = createLogger('CopilotApiKeys')
function deriveKey(keyString: string): Buffer {
return createHash('sha256').update(keyString, 'utf8').digest()
}
function decryptWithKey(encryptedValue: string, keyString: string): string {
const parts = encryptedValue.split(':')
if (parts.length !== 3) {
throw new Error('Invalid encrypted value format')
}
const [ivHex, encryptedHex, authTagHex] = parts
const key = deriveKey(keyString)
const iv = Buffer.from(ivHex, 'hex')
const decipher = createDecipheriv('aes-256-gcm', key, iv)
decipher.setAuthTag(Buffer.from(authTagHex, 'hex'))
let decrypted = decipher.update(encryptedHex, 'hex', 'utf8')
decrypted += decipher.final('utf8')
return decrypted
}
export async function GET(request: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
if (!env.AGENT_API_DB_ENCRYPTION_KEY) {
logger.error('AGENT_API_DB_ENCRYPTION_KEY is not set')
return NextResponse.json({ error: 'Server not configured' }, { status: 500 })
}
const userId = session.user.id
const rows = await db
.select({ id: copilotApiKeys.id, apiKeyEncrypted: copilotApiKeys.apiKeyEncrypted })
.from(copilotApiKeys)
.where(eq(copilotApiKeys.userId, userId))
const keys = rows.map((row) => ({
id: row.id,
apiKey: decryptWithKey(row.apiKeyEncrypted, env.AGENT_API_DB_ENCRYPTION_KEY as string),
}))
return NextResponse.json({ keys }, { status: 200 })
} catch (error) {
logger.error('Failed to get copilot API keys', { error })
return NextResponse.json({ error: 'Failed to get keys' }, { status: 500 })
}
}
export async function DELETE(request: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const userId = session.user.id
const url = new URL(request.url)
const id = url.searchParams.get('id')
if (!id) {
return NextResponse.json({ error: 'id is required' }, { status: 400 })
}
await db
.delete(copilotApiKeys)
.where(and(eq(copilotApiKeys.userId, userId), eq(copilotApiKeys.id, id)))
return NextResponse.json({ success: true }, { status: 200 })
} catch (error) {
logger.error('Failed to delete copilot API key', { error })
return NextResponse.json({ error: 'Failed to delete key' }, { status: 500 })
}
}
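A short client-side sketch of how these two handlers might be consumed; the `/api/copilot/api-keys` path is an assumption based on where the handlers live and may not match the actual route mount.

async function listAndDeleteFirstCopilotKey() {
  // GET returns { keys: [{ id, apiKey }] } with apiKey already decrypted server-side.
  const listRes = await fetch('/api/copilot/api-keys') // assumed path
  const { keys } = (await listRes.json()) as { keys: { id: string; apiKey: string }[] }
  if (keys.length === 0) return
  // DELETE takes the key id as an `id` query parameter and only removes rows owned by the session user.
  await fetch(`/api/copilot/api-keys?id=${encodeURIComponent(keys[0].id)}`, { method: 'DELETE' })
}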

View File

@@ -0,0 +1,79 @@
import { createHmac } from 'crypto'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { env } from '@/lib/env'
import { createLogger } from '@/lib/logs/console/logger'
import { db } from '@/db'
import { copilotApiKeys, userStats } from '@/db/schema'
const logger = createLogger('CopilotApiKeysValidate')
function computeLookup(plaintext: string, keyString: string): string {
// Deterministic MAC: HMAC-SHA256(DB_KEY, plaintext)
return createHmac('sha256', Buffer.from(keyString, 'utf8'))
.update(plaintext, 'utf8')
.digest('hex')
}
export async function POST(req: NextRequest) {
try {
if (!env.AGENT_API_DB_ENCRYPTION_KEY) {
logger.error('AGENT_API_DB_ENCRYPTION_KEY is not set')
return NextResponse.json({ error: 'Server not configured' }, { status: 500 })
}
const body = await req.json().catch(() => null)
const apiKey = typeof body?.apiKey === 'string' ? body.apiKey : undefined
if (!apiKey) {
return new NextResponse(null, { status: 401 })
}
const lookup = computeLookup(apiKey, env.AGENT_API_DB_ENCRYPTION_KEY)
// Find matching API key and its user
const rows = await db
.select({ id: copilotApiKeys.id, userId: copilotApiKeys.userId })
.from(copilotApiKeys)
.where(eq(copilotApiKeys.apiKeyLookup, lookup))
.limit(1)
if (rows.length === 0) {
return new NextResponse(null, { status: 401 })
}
const { userId } = rows[0]
// Check usage for the associated user
const usage = await db
.select({
currentPeriodCost: userStats.currentPeriodCost,
totalCost: userStats.totalCost,
currentUsageLimit: userStats.currentUsageLimit,
})
.from(userStats)
.where(eq(userStats.userId, userId))
.limit(1)
if (usage.length > 0) {
const currentUsage = Number.parseFloat(
(usage[0].currentPeriodCost?.toString() as string) ||
(usage[0].totalCost as unknown as string) ||
'0'
)
const limit = Number.parseFloat((usage[0].currentUsageLimit as unknown as string) || '0')
if (!Number.isNaN(limit) && limit > 0 && currentUsage >= limit) {
// Usage exceeded
logger.info('[API VALIDATION] Usage exceeded', { userId, currentUsage, limit })
return new NextResponse(null, { status: 402 })
}
}
// Valid and within usage limits
return new NextResponse(null, { status: 200 })
} catch (error) {
logger.error('Error validating copilot API key', { error })
return NextResponse.json({ error: 'Failed to validate key' }, { status: 500 })
}
}
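A sketch of a downstream caller interpreting the validation route; the status codes (200 valid, 401 unknown key, 402 usage exceeded, anything else treated as an error) come from the handler above, while the endpoint path is an assumption.

async function validateCopilotKey(apiKey: string): Promise<'ok' | 'invalid' | 'over-limit' | 'error'> {
  const res = await fetch('/api/copilot/api-keys/validate', { // assumed path
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ apiKey }),
  })
  if (res.status === 200) return 'ok'
  if (res.status === 401) return 'invalid'
  if (res.status === 402) return 'over-limit'
  return 'error'
}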

View File

@@ -104,7 +104,8 @@ describe('Copilot Chat API Route', () => {
vi.doMock('@/lib/env', () => ({
env: {
SIM_AGENT_API_URL: 'http://localhost:8000',
SIM_AGENT_API_KEY: 'test-sim-agent-key',
COPILOT_API_KEY: 'test-sim-agent-key',
BETTER_AUTH_URL: 'http://localhost:3000',
},
}))
@@ -223,6 +224,9 @@ describe('Copilot Chat API Route', () => {
stream: true,
streamToolCalls: true,
mode: 'agent',
provider: 'openai',
depth: 0,
origin: 'http://localhost:3000',
}),
})
)
@@ -284,6 +288,9 @@ describe('Copilot Chat API Route', () => {
stream: true,
streamToolCalls: true,
mode: 'agent',
provider: 'openai',
depth: 0,
origin: 'http://localhost:3000',
}),
})
)
@@ -337,6 +344,9 @@ describe('Copilot Chat API Route', () => {
stream: true,
streamToolCalls: true,
mode: 'agent',
provider: 'openai',
depth: 0,
origin: 'http://localhost:3000',
}),
})
)
@@ -430,6 +440,9 @@ describe('Copilot Chat API Route', () => {
stream: true,
streamToolCalls: true,
mode: 'ask',
provider: 'openai',
depth: 0,
origin: 'http://localhost:3000',
}),
})
)

View File

@@ -1,3 +1,4 @@
import { createCipheriv, createDecipheriv, createHash, randomBytes } from 'crypto'
import { and, desc, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
@@ -13,6 +14,7 @@ import { getCopilotModel } from '@/lib/copilot/config'
import { TITLE_GENERATION_SYSTEM_PROMPT, TITLE_GENERATION_USER_PROMPT } from '@/lib/copilot/prompts'
import { env } from '@/lib/env'
import { createLogger } from '@/lib/logs/console/logger'
import { SIM_AGENT_API_URL_DEFAULT } from '@/lib/sim-agent'
import { downloadFile } from '@/lib/uploads'
import { downloadFromS3WithConfig } from '@/lib/uploads/s3/s3-client'
import { S3_COPILOT_CONFIG, USE_S3_STORAGE } from '@/lib/uploads/setup'
@@ -23,6 +25,46 @@ import { createAnthropicFileContent, isSupportedFileType } from './file-utils'
const logger = createLogger('CopilotChatAPI')
// Sim Agent API configuration
const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || SIM_AGENT_API_URL_DEFAULT
function getRequestOrigin(_req: NextRequest): string {
try {
// Strictly use configured Better Auth URL
return env.BETTER_AUTH_URL || ''
} catch (_) {
return ''
}
}
function deriveKey(keyString: string): Buffer {
return createHash('sha256').update(keyString, 'utf8').digest()
}
function decryptWithKey(encryptedValue: string, keyString: string): string {
const [ivHex, encryptedHex, authTagHex] = encryptedValue.split(':')
if (!ivHex || !encryptedHex || !authTagHex) {
throw new Error('Invalid encrypted format')
}
const key = deriveKey(keyString)
const iv = Buffer.from(ivHex, 'hex')
const decipher = createDecipheriv('aes-256-gcm', key, iv)
decipher.setAuthTag(Buffer.from(authTagHex, 'hex'))
let decrypted = decipher.update(encryptedHex, 'hex', 'utf8')
decrypted += decipher.final('utf8')
return decrypted
}
function encryptWithKey(plaintext: string, keyString: string): string {
const key = deriveKey(keyString)
const iv = randomBytes(16)
const cipher = createCipheriv('aes-256-gcm', key, iv)
let encrypted = cipher.update(plaintext, 'utf8', 'hex')
encrypted += cipher.final('hex')
const authTag = cipher.getAuthTag().toString('hex')
return `${iv.toString('hex')}:${encrypted}:${authTag}`
}
// Schema for file attachments
const FileAttachmentSchema = z.object({
id: z.string(),
@@ -39,16 +81,16 @@ const ChatMessageSchema = z.object({
chatId: z.string().optional(),
workflowId: z.string().min(1, 'Workflow ID is required'),
mode: z.enum(['ask', 'agent']).optional().default('agent'),
depth: z.number().int().min(-2).max(3).optional().default(0),
prefetch: z.boolean().optional(),
createNewChat: z.boolean().optional().default(false),
stream: z.boolean().optional().default(true),
implicitFeedback: z.string().optional(),
fileAttachments: z.array(FileAttachmentSchema).optional(),
provider: z.string().optional().default('openai'),
conversationId: z.string().optional(),
})
// Sim Agent API configuration
const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || 'http://localhost:8000'
const SIM_AGENT_API_KEY = env.SIM_AGENT_API_KEY
/**
* Generate a chat title using LLM
*/
@@ -156,12 +198,37 @@ export async function POST(req: NextRequest) {
chatId,
workflowId,
mode,
depth,
prefetch,
createNewChat,
stream,
implicitFeedback,
fileAttachments,
provider,
conversationId,
} = ChatMessageSchema.parse(body)
// Derive request origin for downstream service
const requestOrigin = getRequestOrigin(req)
if (!requestOrigin) {
logger.error(`[${tracker.requestId}] Missing required configuration: BETTER_AUTH_URL`)
return createInternalServerErrorResponse('Missing required configuration: BETTER_AUTH_URL')
}
// Consolidation mapping: map negative depths to base depth with prefetch=true
let effectiveDepth: number | undefined = typeof depth === 'number' ? depth : undefined
let effectivePrefetch: boolean | undefined = prefetch
if (typeof effectiveDepth === 'number') {
if (effectiveDepth === -2) {
effectiveDepth = 1
effectivePrefetch = true
} else if (effectiveDepth === -1) {
effectiveDepth = 0
effectivePrefetch = true
}
}
logger.info(`[${tracker.requestId}] Processing copilot chat request`, {
userId: authenticatedUserId,
workflowId,
@@ -171,6 +238,11 @@ export async function POST(req: NextRequest) {
createNewChat,
messageLength: message.length,
hasImplicitFeedback: !!implicitFeedback,
provider: provider || 'openai',
hasConversationId: !!conversationId,
depth,
prefetch,
origin: requestOrigin,
})
// Handle chat context
@@ -252,7 +324,7 @@ export async function POST(req: NextRequest) {
}
// Build messages array for sim agent with conversation history
const messages = []
const messages: any[] = []
// Add conversation history (need to rebuild these with file support if they had attachments)
for (const msg of conversationHistory) {
@@ -327,40 +399,74 @@ export async function POST(req: NextRequest) {
})
}
// Start title generation in parallel if this is a new chat with first message
if (actualChatId && !currentChat?.title && conversationHistory.length === 0) {
logger.info(`[${tracker.requestId}] Will start parallel title generation inside stream`)
// Determine provider and conversationId to use for this request
const providerToUse = provider || 'openai'
const effectiveConversationId =
(currentChat?.conversationId as string | undefined) || conversationId
// If we have a conversationId, only send the most recent user message; else send full history
const latestUserMessage =
[...messages].reverse().find((m) => m?.role === 'user') || messages[messages.length - 1]
const messagesForAgent = effectiveConversationId ? [latestUserMessage] : messages
const requestPayload = {
messages: messagesForAgent,
workflowId,
userId: authenticatedUserId,
stream: stream,
streamToolCalls: true,
mode: mode,
provider: providerToUse,
...(effectiveConversationId ? { conversationId: effectiveConversationId } : {}),
...(typeof effectiveDepth === 'number' ? { depth: effectiveDepth } : {}),
...(typeof effectivePrefetch === 'boolean' ? { prefetch: effectivePrefetch } : {}),
...(session?.user?.name && { userName: session.user.name }),
...(requestOrigin ? { origin: requestOrigin } : {}),
}
// Forward to sim agent API
logger.info(`[${tracker.requestId}] Sending request to sim agent API`, {
messageCount: messages.length,
endpoint: `${SIM_AGENT_API_URL}/api/chat-completion-streaming`,
})
// Log the payload being sent to the streaming endpoint
try {
logger.info(`[${tracker.requestId}] Sending payload to sim agent streaming endpoint`, {
url: `${SIM_AGENT_API_URL}/api/chat-completion-streaming`,
provider: providerToUse,
mode,
stream,
workflowId,
hasConversationId: !!effectiveConversationId,
depth: typeof effectiveDepth === 'number' ? effectiveDepth : undefined,
prefetch: typeof effectivePrefetch === 'boolean' ? effectivePrefetch : undefined,
messagesCount: requestPayload.messages.length,
...(requestOrigin ? { origin: requestOrigin } : {}),
})
// Full payload as JSON string
logger.info(
`[${tracker.requestId}] Full streaming payload: ${JSON.stringify(requestPayload)}`
)
} catch (e) {
logger.warn(`[${tracker.requestId}] Failed to log payload preview for streaming endpoint`, e)
}
const simAgentResponse = await fetch(`${SIM_AGENT_API_URL}/api/chat-completion-streaming`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
...(SIM_AGENT_API_KEY && { 'x-api-key': SIM_AGENT_API_KEY }),
...(env.COPILOT_API_KEY ? { 'x-api-key': env.COPILOT_API_KEY } : {}),
},
body: JSON.stringify({
messages,
workflowId,
userId: authenticatedUserId,
stream: stream,
streamToolCalls: true,
mode: mode,
...(session?.user?.name && { userName: session.user.name }),
}),
body: JSON.stringify(requestPayload),
})
if (!simAgentResponse.ok) {
const errorText = await simAgentResponse.text()
if (simAgentResponse.status === 401 || simAgentResponse.status === 402) {
// Pass the status through as-is; the client will render the appropriate assistant message
return new NextResponse(null, { status: simAgentResponse.status })
}
const errorText = await simAgentResponse.text().catch(() => '')
logger.error(`[${tracker.requestId}] Sim agent API error:`, {
status: simAgentResponse.status,
error: errorText,
})
return NextResponse.json(
{ error: `Sim agent API error: ${simAgentResponse.statusText}` },
{ status: simAgentResponse.status }
@@ -388,6 +494,14 @@ export async function POST(req: NextRequest) {
const toolCalls: any[] = []
let buffer = ''
let isFirstDone = true
let responseIdFromStart: string | undefined
let responseIdFromDone: string | undefined
// Track tool call progress to identify a safe done event
const announcedToolCallIds = new Set<string>()
const startedToolExecutionIds = new Set<string>()
const completedToolExecutionIds = new Set<string>()
let lastDoneResponseId: string | undefined
let lastSafeDoneResponseId: string | undefined
// Send chatId as first event
if (actualChatId) {
@@ -486,6 +600,13 @@ export async function POST(req: NextRequest) {
}
break
case 'reasoning':
// Treat like thinking: do not add to assistantContent to avoid leaking
logger.debug(
`[${tracker.requestId}] Reasoning chunk received (${(event.data || event.content || '').length} chars)`
)
break
case 'tool_call':
logger.info(
`[${tracker.requestId}] Tool call ${event.data?.partial ? '(partial)' : '(complete)'}:`,
@@ -498,6 +619,9 @@ export async function POST(req: NextRequest) {
)
if (!event.data?.partial) {
toolCalls.push(event.data)
if (event.data?.id) {
announcedToolCallIds.add(event.data.id)
}
}
break
@@ -507,6 +631,14 @@ export async function POST(req: NextRequest) {
toolName: event.toolName,
status: event.status,
})
if (event.toolCallId) {
if (event.status === 'completed') {
startedToolExecutionIds.add(event.toolCallId)
completedToolExecutionIds.add(event.toolCallId)
} else {
startedToolExecutionIds.add(event.toolCallId)
}
}
break
case 'tool_result':
@@ -517,6 +649,9 @@ export async function POST(req: NextRequest) {
result: `${JSON.stringify(event.result).substring(0, 200)}...`,
resultSize: JSON.stringify(event.result).length,
})
if (event.toolCallId) {
completedToolExecutionIds.add(event.toolCallId)
}
break
case 'tool_error':
@@ -526,9 +661,43 @@ export async function POST(req: NextRequest) {
error: event.error,
success: event.success,
})
if (event.toolCallId) {
completedToolExecutionIds.add(event.toolCallId)
}
break
case 'start':
if (event.data?.responseId) {
responseIdFromStart = event.data.responseId
logger.info(
`[${tracker.requestId}] Received start event with responseId: ${responseIdFromStart}`
)
}
break
case 'done':
if (event.data?.responseId) {
responseIdFromDone = event.data.responseId
lastDoneResponseId = responseIdFromDone
logger.info(
`[${tracker.requestId}] Received done event with responseId: ${responseIdFromDone}`
)
// Mark this done as safe only if no tool call is currently in progress or pending
const announced = announcedToolCallIds.size
const completed = completedToolExecutionIds.size
const started = startedToolExecutionIds.size
const hasToolInProgress = announced > completed || started > completed
if (!hasToolInProgress) {
lastSafeDoneResponseId = responseIdFromDone
logger.info(
`[${tracker.requestId}] Marked done as SAFE (no tools in progress)`
)
} else {
logger.info(
`[${tracker.requestId}] Done received but tools are in progress (announced=${announced}, started=${started}, completed=${completed})`
)
}
}
if (isFirstDone) {
logger.info(
`[${tracker.requestId}] Initial AI response complete, tool count: ${toolCalls.length}`
@@ -622,12 +791,17 @@ export async function POST(req: NextRequest) {
)
}
// Persist only a safe conversationId to avoid continuing from a state that expects tool outputs
const previousConversationId = currentChat?.conversationId as string | undefined
const responseId = lastSafeDoneResponseId || previousConversationId || undefined
// Update chat in database immediately (without title)
await db
.update(copilotChats)
.set({
messages: updatedMessages,
updatedAt: new Date(),
...(responseId ? { conversationId: responseId } : {}),
})
.where(eq(copilotChats.id, actualChatId!))
@@ -635,6 +809,7 @@ export async function POST(req: NextRequest) {
messageCount: updatedMessages.length,
savedUserMessage: true,
savedAssistantMessage: assistantContent.trim().length > 0,
updatedConversationId: responseId || null,
})
}
} catch (error) {
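Two small rules carry most of the new behavior in this route. A sketch of both as pure functions for readability; the route keeps the equivalent logic inline, so the function names here are illustrative.

// Negative depths are shortcuts that collapse onto a base depth with prefetch forced on.
function consolidateDepth(depth?: number, prefetch?: boolean): { depth?: number; prefetch?: boolean } {
  if (depth === -2) return { depth: 1, prefetch: true }
  if (depth === -1) return { depth: 0, prefetch: true }
  return { depth, prefetch }
}

consolidateDepth(-2)        // { depth: 1, prefetch: true }
consolidateDepth(-1, false) // { depth: 0, prefetch: true }
consolidateDepth(2)         // { depth: 2, prefetch: undefined }

// A done event is only safe to persist as conversationId when no announced or started
// tool call is still waiting on a completion, so the next turn never resumes a response
// that is expecting tool outputs.
function hasToolInProgress(announced: number, started: number, completed: number): boolean {
  return announced > completed || started > completed
}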

View File

@@ -51,12 +51,6 @@ export async function POST(req: NextRequest) {
const body = await req.json()
const { chatId, messages } = UpdateMessagesSchema.parse(body)
logger.info(`[${tracker.requestId}] Updating chat messages`, {
userId,
chatId,
messageCount: messages.length,
})
// Verify that the chat belongs to the user
const [chat] = await db
.select()

View File

@@ -38,7 +38,7 @@ async function updateToolCallStatus(
try {
const key = `tool_call:${toolCallId}`
const timeout = 60000 // 1 minute timeout
const timeout = 600000 // 10 minutes timeout for user confirmation
const pollInterval = 100 // Poll every 100ms
const startTime = Date.now()
@@ -48,11 +48,6 @@ async function updateToolCallStatus(
while (Date.now() - startTime < timeout) {
const exists = await redis.exists(key)
if (exists) {
logger.info('Tool call found in Redis, updating status', {
toolCallId,
key,
pollDuration: Date.now() - startTime,
})
break
}
@@ -79,27 +74,8 @@ async function updateToolCallStatus(
timestamp: new Date().toISOString(),
}
// Log what we're about to update in Redis
logger.info('About to update Redis with tool call data', {
toolCallId,
key,
toolCallData,
serializedData: JSON.stringify(toolCallData),
providedStatus: status,
providedMessage: message,
messageIsUndefined: message === undefined,
messageIsNull: message === null,
})
await redis.set(key, JSON.stringify(toolCallData), 'EX', 86400) // Keep 24 hour expiry
logger.info('Tool call status updated in Redis', {
toolCallId,
key,
status,
message,
pollDuration: Date.now() - startTime,
})
return true
} catch (error) {
logger.error('Failed to update tool call status in Redis', {
@@ -131,13 +107,6 @@ export async function POST(req: NextRequest) {
const body = await req.json()
const { toolCallId, status, message } = ConfirmationSchema.parse(body)
logger.info(`[${tracker.requestId}] Tool call confirmation request`, {
userId: authenticatedUserId,
toolCallId,
status,
message,
})
// Update the tool call status in Redis
const updated = await updateToolCallStatus(toolCallId, status, message)
@@ -153,13 +122,6 @@ export async function POST(req: NextRequest) {
}
const duration = tracker.getDuration()
logger.info(`[${tracker.requestId}] Tool call confirmation completed`, {
userId: authenticatedUserId,
toolCallId,
status,
internalStatus: status,
duration,
})
return NextResponse.json({
success: true,

View File

@@ -60,6 +60,7 @@ describe('Copilot Methods API Route', () => {
vi.doMock('@/lib/env', () => ({
env: {
INTERNAL_API_SECRET: 'test-secret-key',
COPILOT_API_KEY: 'test-copilot-key',
},
}))
@@ -123,10 +124,8 @@ describe('Copilot Methods API Route', () => {
expect(response.status).toBe(401)
const responseData = await response.json()
expect(responseData).toEqual({
success: false,
error: 'Invalid API key',
})
expect(responseData.success).toBe(false)
expect(typeof responseData.error).toBe('string')
})
it('should return 401 when internal API key is not configured', async () => {
@@ -134,6 +133,7 @@ describe('Copilot Methods API Route', () => {
vi.doMock('@/lib/env', () => ({
env: {
INTERNAL_API_SECRET: undefined,
COPILOT_API_KEY: 'test-copilot-key',
},
}))
@@ -154,10 +154,9 @@ describe('Copilot Methods API Route', () => {
expect(response.status).toBe(401)
const responseData = await response.json()
expect(responseData).toEqual({
success: false,
error: 'Internal API key not configured',
})
expect(responseData.status).toBeUndefined()
expect(responseData.success).toBe(false)
expect(typeof responseData.error).toBe('string')
})
it('should return 400 for invalid request body - missing methodId', async () => {

Some files were not shown because too many files have changed in this diff