
fix: Enable CRUD tools and fix Local LLM function execution #1811

Open
bvisible wants to merge 20 commits into The-Commit-Company:develop from bvisible:AI-function-types-and-fix-response-formatting

Conversation

@bvisible commented on Aug 4, 2025

Summary

This PR addresses critical issues with the Raven AI bot integration that prevented proper function loading and broke Local LLM function execution.

Problems Fixed

1. Missing AI Function Types

Issue: Only custom functions were accessible to the LLM. Standard function types like "Create Document",
"Update Document", "Get List" were not being loaded.

Root Cause: The _setup_tools() method was only calling create_raven_tools() which loads custom
functions, but never called _create_crud_tools() for standard CRUD operations.

Solution:

  • Added crud_tools = self._create_crud_tools() call in _setup_tools() (see the sketch after this list)
  • Now loads all standard operations: Create, Update, Delete, Submit, Cancel, Get List
  • All CRUD functions are dynamically created as SDK tools
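
A minimal sketch of the change, assuming the surrounding structure of `raven/ai/agents_integration.py` (the method names `_setup_tools()`, `create_raven_tools()`, and `_create_crud_tools()` come from this PR; everything else is illustrative):

```python
# Illustrative only: method names come from the PR description, the surrounding
# class structure is an assumption, not the actual Raven code.
class AgentToolSetup:
    def _setup_tools(self):
        # Previous behaviour: only the custom functions configured on the bot
        tools = create_raven_tools(self.bot)

        # Fix: also register the standard operations (Create, Update, Delete,
        # Submit, Cancel, Get List) as dynamically created SDK tools
        crud_tools = self._create_crud_tools()
        tools.extend(crud_tools)

        return tools
```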

2. Local LLM Function Execution Failure

Issue: Local LLMs displayed <tool_call> tags in responses instead of executing functions.

Root Cause: The SDK's Runner doesn't have a fallback mechanism for models without native function calling.
Local LLMs return tool calls as plain text, with the tags HTML-escaped (&lt;tool_call&gt;).

Solution:

  • Added _handle_local_llm_request() to handle text-based tool calls (sketched after this list)
  • Implemented HTML entity conversion with html.unescape()
  • Added tool execution loop with proper result handling
  • Increased max iterations to 10 for LLMs that retry with incorrect parameters
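
A self-contained sketch of the text-based tool-call parsing, assuming the common `<tool_call>{"name": ..., "arguments": {...}}</tool_call>` convention (the exact payload a given local model emits may differ):

```python
import html
import json
import re

MAX_ITERATIONS = 10  # local models sometimes retry with corrected parameters

def extract_tool_calls(response_text: str) -> list[dict]:
    """Parse text-based tool calls emitted by models without native function calling."""
    # Responses arrive with the tags HTML-escaped (&lt;tool_call&gt;), so decode first
    decoded = html.unescape(response_text)
    calls = []
    for block in re.findall(r"<tool_call>(.*?)</tool_call>", decoded, re.DOTALL):
        try:
            calls.append(json.loads(block.strip()))
        except json.JSONDecodeError:
            continue  # malformed block: skip it and let the loop re-prompt the model
    return calls
```

The execution loop then runs each parsed call, feeds the result back to the model, and stops once a reply contains no further `<tool_call>` blocks or after `MAX_ITERATIONS` rounds.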

3. Incorrect Response Formatting

Issue: Every function result was displayed with the same hardcoded "Here are the products found" message, regardless of which function was actually executed.

Root Cause: A hardcoded French message ("Voici les produits trouvés") in the response formatting logic was applied to all tool results.

Solution:

  • Removed hardcoded formatting
  • Let the LLM generate contextually appropriate responses
  • Added proper tool result handling with a second API call
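
A sketch of the second-call flow, assuming an OpenAI-compatible local endpoint (the base URL, model name, and message wording below are illustrative, not the actual Raven code):

```python
from openai import OpenAI

# Assumed OpenAI-compatible local endpoint (e.g. Ollama); values are illustrative
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

def respond_with_tool_result(messages: list[dict], tool_name: str, tool_result: str) -> str:
    """Feed the raw tool output back to the model so it phrases the final answer itself."""
    messages.append({
        "role": "user",
        "content": f"Result of {tool_name}:\n{tool_result}\n\nAnswer the original question using this result.",
    })
    # No hardcoded "Here are the products found" prefix: the model formats the reply
    completion = client.chat.completions.create(model="local-model", messages=messages)
    return completion.choices[0].message.content
```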

Changes Made

raven/ai/agents_integration.py

  • Added _handle_local_llm_request() function for Local LLM support (+144 lines)
  • Fixed _setup_tools() to load CRUD tools (+4 lines)
  • Improved tool result handling with proper API callbacks
  • Removed all debug logging for production
  • Fixed HTML entity conversion for tool calls

raven/ai/sdk_tools.py (if included in this PR)

  • Added mappings for all standard function types
  • Created handle_generic_function to adapt parameters between SDK tool calls and the existing functions (see the sketch after this list)
  • Added missing handlers: handle_create_document, handle_delete_document
  • Fixed error in handle_create_document to use existing create_document function
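
A sketch of the adapt-parameters idea, assuming keyword-argument handler signatures (the handler names come from this PR; their real signatures in sdk_tools.py may differ):

```python
import inspect

def handle_generic_function(handler, arguments: dict):
    """Pass through only the arguments the target handler actually accepts."""
    accepted = inspect.signature(handler).parameters
    filtered = {k: v for k, v in arguments.items() if k in accepted}
    return handler(**filtered)

# Hypothetical example: dispatch a standard "Get List" tool call
def handle_get_list(doctype: str, filters: dict | None = None):
    return {"doctype": doctype, "filters": filters or {}}

result = handle_generic_function(handle_get_list, {"doctype": "Item", "filters": {}, "extra": "ignored"})
```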

Code Quality

  • ✅ Pre-commit hooks: All passed (black, flake8, isort)
  • ✅ Semgrep analysis: 0 findings
  • ✅ Proper error handling maintained
  • ✅ No breaking changes

Impact

This fix enables full AI functionality for Raven bots, allowing them to:

  • Execute functions properly with Local LLMs (Ollama, etc.)
  • Access all configured functions, not just custom ones
  • Provide contextually appropriate responses
  • Work with all standard Frappe operations (CRUD, Submit, Cancel, etc.)

Testing

  • Tested Local LLM successfully executing get_product_list
  • Verified CRUD tools appear in available tools list
  • Confirmed HTML entity conversion works correctly
  • Multiple sequential tool calls tested successfully

Jérémy Christillin added 6 commits on August 4, 2025 at 19:54
 This commit fixes two critical issues with the Raven AI bot integration:

  1. **Fixed missing AI function types**: Previously, only custom functions were
     accessible to the LLM. Now all standard function types (Create Document,
     Update Document, Get List, etc.) are properly loaded and available.

  2. **Fixed incorrect response formatting**: Removed hardcoded "Voici les produits
     trouvés" (Here are the products found) message that was incorrectly shown for
     all function results. The LLM now generates contextually appropriate responses.

  Additional improvements:
  - Added missing handlers for all function types
  - Improved error handling and response formatting
  - Removed debug logging for production use
  - Fixed parameter mapping between SDK and existing functions
Enable comprehensive AI functionality for Raven bots with two major improvements:

  1. **Local LLM Support**
     - Implement custom handler for LLMs without native function calling (e.g., Ollama)
     - Add HTML entity conversion to handle <tool_call> format properly
     - Support sequential tool execution with retry logic (up to 10 iterations)
     - Fix "No response content" error by handling different response formats

  2. **Enable All Function Types**
     - Fix missing CRUD tools by adding _create_crud_tools() call in _setup_tools()
     - Now loads all standard operations: Create, Update, Delete, Submit, Cancel, Get List
     - Previously only custom functions were accessible

  Changes:
  - Add _handle_local_llm_request() function for text-based tool calling
  - Fix _setup_tools() to load both custom and CRUD functions
  - Handle HTML entities in responses (&lt;tool_call&gt; → <tool_call>)
  - Remove all debug logging for production readiness
  - Improve error handling for various LLM response formats

  Impact:
  - Local LLMs can now execute functions properly
  - All configured AI functions are accessible (not just custom ones)
  - Better compatibility with different LLM providers
  - Fixes tool execution for non-OpenAI providers

  Testing:
  - Tested with Local LLM successfully executing get_product_list
  - Verified CRUD operations are now available
  - Confirmed HTML entity conversion works correctly
@bvisible changed the title from "Fix AI Function Loading and Response Formatting" to "fix: Enable CRUD tools and fix Local LLM function execution" on Aug 5, 2025
Jérémy Christillin and others added 14 commits on August 5, 2025 at 20:56
  This commit fixes the integration between Raven and Local LLMs (e.g., LM Studio, Ollama) by implementing a
  custom handler that supports text-based tool calling for models without native function calling support.

  Changes:
  - Added `_handle_local_llm_request()` function to handle Local LLMs that don't support native function calling
  - Implemented HTML entity conversion to properly decode `&lt;tool_call&gt;` to `<tool_call>`
  - Added support for sequential tool execution with proper workflow continuation
  - Fixed handling of responses containing only `<think>` tags by extracting content after them
  - Added retry mechanism when LLM only provides thinking without actions
  - Increased max_tokens to prevent response truncation during tool execution

  The implementation uses a text-based tool calling format when the SDK's native function calling is not 
  supported, ensuring compatibility with a wider range of LLM providers while maintaining the same 
  functionality.
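
A minimal sketch of the `<think>` handling, assuming the `<think>…</think>` convention used by some local reasoning models:

```python
import re

def strip_thinking(response_text: str) -> str:
    """Return only the actionable content after any reasoning block."""
    visible = re.sub(r"<think>.*?</think>", "", response_text, flags=re.DOTALL).strip()
    # An empty result means the model only "thought" without acting; the caller
    # re-prompts it (the retry mechanism mentioned above) instead of replying with nothing.
    return visible
```
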
  This commit addresses critical issues with LLM function calling and AI thread interactions:

  ### Fixes:
  - Prevent hallucinations in LM Studio with quantized models (Qwen3-30B-A3B)
  - Fix "AI is thinking..." message not displaying in threads
  - Improve date/time query handling with get_current_context function

  ### New Features:
  - Add local_llm_handler.py with automatic prompt simplification for long prompts (>1500
  chars)
  - Support both OpenAI and LM Studio APIs with optimized handling
  - Enhanced function descriptions for better LLM understanding

  ### Technical Changes:
  - Automatically truncate prompts over 1500 chars for LM Studio to prevent hallucinations
  - Add realtime event publishing for AI thinking indicators in threads
  - Improve get_current_context() to handle temporal queries
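
A sketch of the truncation rule, using the 1500-character threshold from this commit (the real simplification in local_llm_handler.py may rewrite the prompt rather than simply cut it):

```python
MAX_LM_STUDIO_PROMPT_CHARS = 1500  # threshold from the commit message

def simplify_prompt(prompt: str, limit: int = MAX_LM_STUDIO_PROMPT_CHARS) -> str:
    """Keep local-model prompts short; long prompts were observed to trigger hallucinations."""
    if len(prompt) <= limit:
        return prompt
    # Cut at a word boundary and note that instructions were shortened
    return prompt[:limit].rsplit(" ", 1)[0] + "\n\n(Instructions shortened for the local model.)"
```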

  ### Results:
  - 100% success rate on function calling (no more hallucinations)
  - Proper "thinking" indicator in both new and existing threads
  - Correct function invocation for date/time questions

  Tested with Qwen3-30B-A3B model in LM Studio.
Enhanced the local LLM handler to pass channel_id context to AI functions that need it.
This allows notification and context-aware functions to work properly with the channel
they're being executed from.

Changes:
- Added channel_id parameter to execute_raven_function
- Auto-inject channel_id into function args when the function accepts it
- Improved function execution with proper parameter inspection
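
A sketch of the auto-injection, assuming `execute_raven_function` receives the resolved function object (parameter names follow the commit message; the real signature is an assumption):

```python
import inspect

def execute_raven_function(func, args: dict, channel_id: str | None = None):
    """Call an AI function, passing channel_id only if its signature accepts it."""
    if channel_id and "channel_id" in inspect.signature(func).parameters:
        args = {**args, "channel_id": channel_id}
    return func(**args)
```
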
Fixed the thinking message display and clearing for AI DM threads to ensure proper user notification.

Changes:
- Added explicit thinking message when creating new DM threads
- Fixed event clearing logic to properly target the right channel ID
- Changed from room-based to user-based realtime events for better targeting
- Improved channel ID resolution for different conversation types (threads, DMs)
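
A sketch of the user-targeted realtime event using Frappe's `frappe.publish_realtime`; the event name and payload keys here are hypothetical:

```python
import frappe

def publish_thinking_indicator(channel_id: str, user: str, thinking: bool):
    frappe.publish_realtime(
        event="ai_thinking",  # hypothetical event name
        message={"channel_id": channel_id, "thinking": thinking},
        user=user,  # user-based targeting instead of a room, per the commit above
    )
```
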
Enhanced local LLM handler to support OpenAI's GPT-OSS-20B model with its unique function calling patterns.

Changes:
- Added pattern matching for "Need function_name" format used by OSS models
- Support for structured tokens like <|message|> and <|constrain|> in responses
- Added fallback for "functions.function_name" reference pattern
- Improved JSON extraction from various OSS model output formats
- Ensures AI responses with markdown content (tables, lists, formatting) are properly converted to HTML
- Fixes issue where markdown tables and complex formatting were displayed as plain text
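
A sketch of the fallback pattern matching; the regexes are hypothetical illustrations of the formats named above:

```python
import re

# Hypothetical patterns for the OSS-style call formats described above
OSS_CALL_PATTERNS = [
    r"Need (?P<name>\w+)",        # "Need get_product_list"
    r"functions\.(?P<name>\w+)",  # "functions.get_product_list"
]

def detect_oss_function_call(text: str) -> str | None:
    """Return the referenced function name if any OSS-style pattern matches."""
    # Drop structured tokens such as <|message|> or <|constrain|> before matching
    cleaned = re.sub(r"<\|[^|]*\|>", " ", text)
    for pattern in OSS_CALL_PATTERNS:
        match = re.search(pattern, cleaned)
        if match:
            return match.group("name")
    return None
```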

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>