fix(backend): remove dry-run markers from simulated block output text

The [DRY RUN] prefix and the "simulated successfully — no real execution
occurred" message were being fed back to the LLM, causing the copilot to
become aware it was in dry-run mode and change its behavior. The output
text now looks identical to real execution output. The UI still shows
the "Simulated" badge via the is_dry_run=True flag on the response.
Zamil Majdy
2026-03-27 12:54:56 +07:00
parent f42f0013df
commit 9f51796dbe
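The behavioural split described above (clean output text for the model, dry-run status carried out-of-band for the UI) can be sketched as follows. `SimulatedOutput` and `build_simulated_output` are hypothetical names for illustration, not the project's actual API:

```python
from dataclasses import dataclass


@dataclass
class SimulatedOutput:
    text: str         # fed back to the LLM; must be indistinguishable from real output
    is_dry_run: bool  # consumed only by the UI to render the "Simulated" badge


def build_simulated_output(simulated_text: str) -> SimulatedOutput:
    # Previously the text was wrapped with a "[DRY RUN]" prefix and a
    # "no real execution occurred" suffix, which leaked dry-run awareness
    # into the model's context. Now the text passes through unchanged and
    # the dry-run status travels only on the response flag.
    return SimulatedOutput(text=simulated_text, is_dry_run=True)


print(build_simulated_output("tool returned 3 rows").text)
```

The design point is that the same string reaches the model in both real and simulated runs; only structured metadata differs.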


@@ -298,12 +298,8 @@ def prepare_dry_run(block: Any, input_data: dict[str, Any]) -> dict[str, Any] |
 async def simulate_mcp_block(
     _block: Any,
     input_data: dict[str, Any],
-<<<<<<< HEAD
     simulation_context: dict[str, Any] | None = None,
-) -> AsyncGenerator[tuple[str, Any]]:
-=======
 ) -> AsyncIterator[tuple[str, Any]]:
->>>>>>> 61c311aad9 (fix(backend): simplify dry-run special block handling per review feedback)
     """Simulate MCP tool execution using an LLM.
     Unlike the generic ``simulate_block``, this builds a prompt grounded in
@@ -327,12 +323,8 @@ async def simulate_mcp_block(
 async def simulate_block(
     block: Any,
     input_data: dict[str, Any],
-<<<<<<< HEAD
     simulation_context: dict[str, Any] | None = None,
-) -> AsyncGenerator[tuple[str, Any], None]:
-=======
 ) -> AsyncIterator[tuple[str, Any]]:
->>>>>>> 61c311aad9 (fix(backend): simplify dry-run special block handling per review feedback)
     """Simulate block execution using an LLM.
     For MCPToolBlock, uses a specialised prompt grounded in the tool's schema.
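For reference, the `AsyncIterator[tuple[str, Any]]` return shape used by both simulate functions can be exercised with a minimal stub. The function name and canned result below are illustrative only; a real implementation would prompt an LLM with the block's schema:

```python
import asyncio
from typing import Any, AsyncIterator


async def simulate_block_stub(
    block: Any, input_data: dict[str, Any]
) -> AsyncIterator[tuple[str, Any]]:
    # Yields (output_name, value) pairs, mirroring the generator shape
    # of simulate_block; the LLM call is replaced by a canned result.
    yield ("result", {"echo": input_data})


async def collect() -> list[tuple[str, Any]]:
    return [pair async for pair in simulate_block_stub(None, {"x": 1})]


print(asyncio.run(collect()))  # prints [('result', {'echo': {'x': 1}})]
```

Annotating an async generator function as `AsyncIterator[...]` (rather than `AsyncGenerator[..., None]`) is the accepted typing convention when callers only iterate and never send values in, which matches the simplification in this diff.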