Description
The current ReAct agent sample (samples/python/text_generation/react_sample.py) demonstrates functional agentic capabilities with tool calling (weather API, image generation). However, it lacks comprehensive error handling around tool execution. If a tool call fails—due to network timeouts, malformed API responses, invalid arguments, or any exception—the agent crashes immediately without feedback to the user or LLM.
This makes the sample unsuitable for real-world applications and misses a critical educational opportunity. This issue proposes enhancing the sample with robust error handling patterns, turning it into a reliable reference for production agentic workflows.
Current Gaps
- No try/except blocks around tool execution – any exception terminates the agent abruptly.
- No error recovery mechanism – the agent cannot inform the LLM about failures or adjust its reasoning.
- Unprotected JSON parsing – json5.loads() on LLM-generated arguments can raise JSONDecodeError or KeyError without handling.
- Network failures unhandled – requests.get() can time out or fail with connection errors that crash the agent.
- No observation from errors – failed tools don't return error messages, preventing the LLM from understanding what went wrong.
- Fragile output parsing – relies on string markers (\nAction:, \nAction Input:) instead of OpenVINO's built-in StructuredOutputConfig, leading to brittle parsing.
- No input validation – tool arguments are not validated before use.
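The failure mode is easy to reproduce. A minimal sketch of the current unprotected pattern (using stdlib json in place of json5, which raises in the same way on malformed input):

```python
import json

# LLM output is not guaranteed to be valid JSON.
llm_output = '{"city": "Paris", "date": }'  # malformed, as a model might emit

try:
    json.loads(llm_output)  # the sample calls json5.loads() with no handler
    crashed = False
except json.JSONDecodeError:
    crashed = True  # this is where the unguarded sample would terminate

print(crashed)  # True: a single bad generation ends the whole agent run
```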
Proposed Solution
1. Unified Error Handling Wrapper
Create an execute_tool(tool_name, tool_args) method that wraps every tool call in try/except blocks. Catch exceptions when parsing JSON arguments (json5.loads()), making network requests (requests.get()), and executing tool logic, and return error messages as strings instead of letting exceptions propagate.
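A minimal sketch of what this wrapper could look like. The TOOLS registry and its entries are hypothetical stand-ins for the sample's weather and image tools, and stdlib json is used here in place of json5:

```python
import json

# Hypothetical tool registry standing in for the sample's real tools.
TOOLS = {
    "get_weather": lambda args: f"Sunny in {args['city']}",
}

def execute_tool(tool_name: str, tool_args: str) -> str:
    """Run a tool and return either its result or an error message string."""
    if tool_name not in TOOLS:
        return f"Error: unknown tool '{tool_name}'. Available: {list(TOOLS)}"
    try:
        args = json.loads(tool_args)   # the sample uses json5.loads() here
    except json.JSONDecodeError as exc:
        return f"Error: tool arguments are not valid JSON: {exc}"
    try:
        return TOOLS[tool_name](args)
    except KeyError as exc:
        return f"Error: missing required argument {exc}"
    except Exception as exc:           # network timeouts, API failures, etc.
        return f"Error: tool '{tool_name}' failed: {exc}"

print(execute_tool("get_weather", '{"city": "Paris"}'))  # Sunny in Paris
print(execute_tool("get_weather", '{"town": "Paris"}'))  # Error: missing ...
print(execute_tool("get_weather", "not json"))           # Error: ... not valid JSON
```

Because every branch returns a string, the caller never has to distinguish success from failure structurally; both flow into the conversation as observations.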
2. Modify Agent Loop to Treat Errors as Observations
Update llm_with_tool() to call execute_tool() instead of invoking tool functions directly. Append the returned observation (success result or error message) to the conversation context and feed it back to the LLM for the next reasoning step.
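One iteration of that loop could be sketched as follows. The generate callable and execute_tool_stub are placeholders, not the real OpenVINO GenAI pipeline or the sample's tool code:

```python
def execute_tool_stub(name, args):
    # Simulated tool failure standing in for a real network error.
    return "Error: network timeout contacting weather API"

def react_step(generate, messages, tool_name, tool_args, execute_tool):
    """One iteration: run the tool, feed the observation back to the LLM."""
    observation = execute_tool(tool_name, tool_args)
    # Success and error observations take the same path, so a failure
    # becomes context the model can reason about instead of a crash.
    messages.append(f"Observation: {observation}")
    return generate("\n".join(messages))

history = ["Thought: I need the weather.",
           'Action: get_weather {"city": "Paris"}']
reply = react_step(lambda prompt: f"<llm sees {len(prompt)} chars>", history,
                   "get_weather", '{"city": "Paris"}', execute_tool_stub)
print(history[-1])  # Observation: Error: network timeout contacting weather API
```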
3. Input Validation
Add defensive checks in execute_tool() to validate that parsed JSON arguments contain required keys before accessing them. Return descriptive error messages if validation fails.
4. (Optional) Leverage StructuredOutputConfig
Configure StructuredOutputConfig in generation_config to enforce JSON schema validation at LLM output level. This eliminates reliance on brittle string parsing of \nAction: and \nAction Input: markers.
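As an illustration, below is the kind of JSON schema such a config could enforce, plus what parsing the constrained output would look like. The exact StructuredOutputConfig constructor signature is an assumption; consult the OpenVINO GenAI documentation for the current API:

```python
import json

# A schema a StructuredOutputConfig could enforce for each agent step.
ACTION_SCHEMA = {
    "type": "object",
    "properties": {
        "action": {"type": "string", "enum": ["get_weather", "generate_image"]},
        "action_input": {"type": "object"},
    },
    "required": ["action", "action_input"],
}

# With schema-constrained decoding, the model's raw output is already
# parseable JSON, so there is no \nAction: / \nAction Input: string splitting.
constrained_output = '{"action": "get_weather", "action_input": {"city": "Paris"}}'
step = json.loads(constrained_output)
print(step["action"], step["action_input"]["city"])  # get_weather Paris
```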
Why This Matters
- Production-ready patterns – developers need examples of how to handle tool failures gracefully.
- Educational value – teaches best practices in error handling, input validation, and recovery strategies.
- Leverages OpenVINO capabilities – demonstrates structured output features that are currently underutilized.
- Aligns with agentic toolkit goals – builds confidence in using OpenVINO GenAI for real-world agent applications.
Files to Update
samples/python/text_generation/react_sample.py – main implementation
tests/python_tests/samples/test_react_sample.py – add error scenario tests