Default Branch

46846bcace · fix: improve error handling for HumanFeedbackPending in flow execution (#4203) · Updated 2026-01-08 01:40:02 -05:00

Branches

Each branch is listed as: head commit · commit subject · last update · divergence from the default branch (commits behind, commits ahead).

9d33706fd5 · refactor: update allowed_paths default behavior in environment tools · Updated 2026-01-08 19:33:13 -05:00 · 0 behind, 4 ahead

95304fbd5d · dropping not useful info · Updated 2026-01-08 17:54:22 -05:00 · 0 behind, 7 ahead

d509dc74b0 · fix: improve error handling for HumanFeedbackPending in flow execution · Updated 2026-01-08 01:23:14 -05:00 · 2 behind, 2 ahead

d255f1908a · fix: loosen dependency constraints to fix OPIK integration conflict · Updated 2026-01-07 23:43:45 -05:00 · 1 behind, 1 ahead

2de21a075b · feat: generate agent card from server config or agent · Updated 2026-01-07 17:10:47 -05:00 · 3 behind, 35 ahead

fa09175b17 · feat: Add MCPTool.from_server() API for simplified MCP tool integration · Updated 2026-01-07 15:38:22 -05:00 · 3 behind, 1 ahead

180e77a8d0 · Merge branch 'main' into tm-account-for-thought-tokens-on-gemini · Updated 2026-01-07 11:44:06 -05:00 · 6 behind, 2 ahead

37b75aeb6a · fix: handle 'Action: None' in parser to prevent OutputParserError · Updated 2026-01-06 14:44:09 -05:00 · 9 behind, 1 ahead

2df8973658 · fix: use the internal token usage tracker when using litellm · Updated 2026-01-06 10:06:25 -05:00 · 10 behind, 1 ahead

83b07b9d23 · fix: prevent LLM observation hallucination by properly attributing tool results · Updated 2026-01-06 01:57:16 -05:00 · 10 behind, 1 ahead

0976c42c6b · fix: track token usage in litellm non-streaming and async calls · Updated 2026-01-03 14:28:08 -05:00 · 12 behind, 1 ahead

e022caae6c · Fix W293 lint errors in usage_metrics.py docstrings · Updated 2026-01-03 12:37:00 -05:00 · 12 behind, 11 ahead

4f13153483 · fix: handle None value for a2a_task_ids_by_endpoint in config · Updated 2026-01-01 03:17:38 -05:00 · 12 behind, 3 ahead

4f0b6f6427 · feat(a2a): add async execution support for A2A delegation · Updated 2025-12-30 02:54:43 -05:00 · 14 behind, 1 ahead

1c59ff8b96 · Fix output file writes outdated task result after guardrail execution · Updated 2025-12-29 12:37:07 -05:00 · 14 behind, 1 ahead

9c5b68f1c5 · feat: Add OpenAI Responses API integration · Updated 2025-12-27 01:19:28 -05:00 · 15 behind, 1 ahead

06c4974cb3 · fix: exclude empty stop parameter from LLM completion params · Updated 2025-12-24 02:12:18 -05:00 · 16 behind, 1 ahead

9460e5e182 · fix: use huggingface_hub InferenceClient for HuggingFace embeddings · Updated 2025-12-22 17:44:13 -05:00 · 17 behind, 1 ahead

b8eb7dd294 · fix import · Updated 2025-12-22 01:38:24 -05:00 · 17 behind, 3 ahead

029eedfddb · fix: preserve task outputs when mixing sync and async tasks (#4137) · Updated 2025-12-20 10:09:38 -05:00 · 17 behind, 1 ahead