Multi-provider AI connectors for the Bonita platform. They let processes interact with OpenAI, Anthropic (Claude), Google Gemini, Mistral AI, Azure AI Foundry, and Ollama (local LLMs) chat models by sending prompts and documents and returning structured output.
The Bonita AI Connectors are available for Bonita 10.2 Community (2024.3) and above.
The project follows a modular architecture with a shared core and provider-specific modules:

```
bonita-connector-ai (parent)
|
+-- bonita-connector-ai-core        Core abstractions, shared logic
|
+-- bonita-connector-ai-openai      OpenAI provider (GPT-4o, GPT-4o-mini)
+-- bonita-connector-ai-anthropic   Anthropic provider (Claude Sonnet, Opus, Haiku)
+-- bonita-connector-ai-gemini      Google Gemini provider (Gemini 2.0 Flash, 1.5 Pro)
+-- bonita-connector-ai-mistral     Mistral AI provider (Pixtral, Mistral Large)
+-- bonita-connector-ai-azure       Azure AI Foundry provider (Azure-hosted OpenAI models)
+-- bonita-connector-ai-ollama      Ollama provider (local LLMs: Llama, Mistral, etc.)
```

Each provider module contains three connector implementations:

- **Ask** — Send a user prompt and get a response (with optional documents and JSON schema)
- **Extract** — Extract structured data from documents
- **Classify** — Classify documents into predefined categories
The core module defines the template method pattern with abstract base classes (AskAiConnector, ExtractAiConnector, ClassifyAiConnector) and the AiChat interface that each provider implements using LangChain4j.
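The template-method split can be sketched as follows. `AiChat` and `AskAiConnector` are the real names from the core module, but the method names and signatures below (`chat`, `execute`, `createChat`) are illustrative assumptions, not the actual core API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the template-method pattern used by the core module.
// Real type names: AiChat, AskAiConnector; method shapes here are assumptions.
interface AiChat {
    // Provider-specific call to the underlying chat model (LangChain4j in the real modules)
    String chat(String systemPrompt, String userPrompt);
}

abstract class AskAiConnector {
    // Template method: shared orchestration lives in the core module...
    public Map<String, Object> execute(String systemPrompt, String userPrompt) {
        String response = createChat().chat(systemPrompt, userPrompt);
        Map<String, Object> result = new HashMap<>();
        result.put("output", response);
        return result;
    }

    // ...while each provider module supplies only the AiChat implementation
    protected abstract AiChat createChat();
}

// Hypothetical provider wiring, for illustration only
class EchoConnector extends AskAiConnector {
    @Override
    protected AiChat createChat() {
        return (system, user) -> "[" + system + "] " + user;
    }
}
```

This is why adding a provider only requires implementing `AiChat` and subclassing the three abstract connectors; the prompt handling, document loading, and output mapping stay in core.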
| Module | Provider | Default Model | API Docs |
|---|---|---|---|
| `bonita-connector-ai-core` | (shared abstractions) | N/A | N/A |
| `bonita-connector-ai-openai` | OpenAI | | |
| `bonita-connector-ai-anthropic` | Anthropic | | |
| `bonita-connector-ai-gemini` | Google Gemini | | |
| `bonita-connector-ai-mistral` | Mistral AI | | |
| `bonita-connector-ai-azure` | Azure AI Foundry | (depends on deployment) | |
| `bonita-connector-ai-ollama` | Ollama | | |
To use a connector, add it as a dependency to your Bonita process. Choose the module for your AI provider.
```xml
<dependency>
  <groupId>org.bonitasoft.connectors</groupId>
  <artifactId>bonita-connector-ai-openai</artifactId>
  <version>x.y.z</version>
</dependency>
```

API key: OpenAI API Keys
```xml
<dependency>
  <groupId>org.bonitasoft.connectors</groupId>
  <artifactId>bonita-connector-ai-anthropic</artifactId>
  <version>x.y.z</version>
</dependency>
```

API key: Anthropic Console
```xml
<dependency>
  <groupId>org.bonitasoft.connectors</groupId>
  <artifactId>bonita-connector-ai-gemini</artifactId>
  <version>x.y.z</version>
</dependency>
```

API key: Google AI Studio
```xml
<dependency>
  <groupId>org.bonitasoft.connectors</groupId>
  <artifactId>bonita-connector-ai-mistral</artifactId>
  <version>x.y.z</version>
</dependency>
```

API key: Mistral Console

> **Warning:** Image documents are not supported yet for the Mistral connector due to a limitation of the underlying library.
```xml
<dependency>
  <groupId>org.bonitasoft.connectors</groupId>
  <artifactId>bonita-connector-ai-azure</artifactId>
  <version>x.y.z</version>
</dependency>
```

API key: Azure Portal > Azure AI Foundry > Keys and Endpoint

> **Note:** Azure AI Foundry requires setting the `url` parameter to your Azure endpoint and the `chatModelName` to your deployment name.
```xml
<dependency>
  <groupId>org.bonitasoft.connectors</groupId>
  <artifactId>bonita-connector-ai-ollama</artifactId>
  <version>x.y.z</version>
</dependency>
```

> **Note:** Ollama allows you to run large language models locally on your infrastructure. No API key is required. Ideal for on-premises deployments, data privacy requirements, or cost optimization.
| Parameter name | Required | Description | Default value |
|---|---|---|---|
| `apiKey` | false | The AI provider API key. The connector will use the system environment variable named `…` | `changeMe` |
| `url` | false | The AI provider endpoint URL. This parameter allows using an alternate endpoint for tests or custom deployments. | |
| `requestTimeout` | false | The request timeout in milliseconds for AI provider calls. | null |
| `chatModelName` | false | The model to use for chat. See the Modules table above for the default value per provider. | |
| `modelTemperature` | false | The temperature to use for the model. Higher values result in more creative responses. Must be between 0 and 1. Leave blank if the selected model does not support this parameter. | null |
OpenAI:

```json
{
  "apiKey": "${AI_API_KEY}",
  "chatModelName": "gpt-4o",
  "systemPrompt": "You are a customer service analyst.",
  "userPrompt": "Summarize this complaint: ${complaintText}"
}
```

Anthropic:

```json
{
  "apiKey": "${AI_API_KEY}",
  "chatModelName": "claude-sonnet-4-6",
  "systemPrompt": "You are a legal compliance analyst.",
  "userPrompt": "Analyze this contract for GDPR compliance issues."
}
```

Gemini:

```json
{
  "apiKey": "${AI_API_KEY}",
  "chatModelName": "gemini-2.0-flash",
  "categories": "INVOICE,CONTRACT,ID_CARD,PROOF_OF_ADDRESS,OTHER"
}
```

Mistral:

```json
{
  "apiKey": "${AI_API_KEY}",
  "chatModelName": "mistral-large-latest",
  "categories": "BILLING,TECHNICAL_SUPPORT,ACCOUNT_MANAGEMENT,OTHER"
}
```

Azure AI Foundry:

```json
{
  "apiKey": "${AZURE_OPENAI_API_KEY}",
  "url": "https://my-company.openai.azure.com",
  "chatModelName": "gpt-4o",
  "systemPrompt": "You are a helpful assistant for HR processes.",
  "userPrompt": "Evaluate this CV: ${cvText}"
}
```

Ollama:

```json
{
  "apiKey": "not-needed",
  "url": "http://localhost:11434",
  "chatModelName": "llama3.1",
  "systemPrompt": "You are a document analysis assistant.",
  "userPrompt": "Extract the key dates and amounts from this invoice."
}
```

AI connectors can return structured data in JSON format. You can pass a JSON schema to tell the LLM how to format the response data.

When using a JSON schema, you must list all the fields you want in the JSON response in the `required` property.
```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "ProofOfAddress",
  "type": "object",
  "required": [
    "firstName",
    "lastName",
    "fullName",
    "fullAddress",
    "emissionDate",
    "issuerName",
    "identificationNumber"
  ],
  "properties": {
    "firstName": { "type": "string" },
    "lastName": { "type": "string" },
    "fullName": { "type": "string" },
    "fullAddress": { "type": "string" },
    "emissionDate": { "type": "string" },
    "issuerName": { "type": "string" },
    "identificationNumber": { "type": "string" }
  }
}
```

Takes a user prompt, sends it to the AI provider, and returns the response. The prompt text can ask questions about a provided process document.
| Parameter name | Required | Description | Default value |
|---|---|---|---|
| `systemPrompt` | false | The system prompt to influence the behavior of the assistant and specify a default context. | "You are a polite Assistant" |
| `userPrompt` | true | The user prompt content to send to the AI provider. | |
| `sourceDocumentRef` | false | The reference to the process document to load and add to the user prompt. Supported formats: "doc", "docx", "pdf", … (see Apache Tika formats) | null |
| `outputJsonSchema` | false | The JSON schema that represents how to structure the JSON connector output. | null |
The result is placed as a map entry of type `java.lang.String` under the key named `output`.
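Downstream code in the process (a Groovy script or Java operation) can then read the response from that map entry. This small sketch simulates the contract; the result map here is stubbed, since in a real process it comes from the connector execution:

```java
import java.util.Map;

class AskOutputDemo {
    // The Ask connector result map exposes the response text under the key "output".
    static String readOutput(Map<String, Object> connectorResult) {
        return (String) connectorResult.get("output");
    }

    public static void main(String[] args) {
        // Simulated connector result; a real one is produced by the connector execution.
        Map<String, Object> result = Map.of("output", "The complaint concerns a late delivery.");
        System.out.println(readOutput(result));
    }
}
```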
This connector allows extracting information from a Bonita document.
| Parameter name | Required | Description | Default value |
|---|---|---|---|
| `sourceDocumentRef` | true | The reference to the process document to load. Supported formats: "doc", "docx", "pdf", … (see Apache Tika formats) | null |
| `fieldsToExtract` | false | The list of fields to extract from the given document (…) | null |
| `outputJsonSchema` | false | The JSON schema that represents how to structure the JSON connector output. If specified, the … | null |
> **Important:** You must provide at least one of the `fieldsToExtract` or `outputJsonSchema` parameters.
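For example, a minimal Extract configuration using `fieldsToExtract` might look like the fragment below. The document reference and field names are placeholders, and the comma-separated format for `fieldsToExtract` is an assumption (the source text truncates the parameter's exact format):

```json
{
  "apiKey": "${AI_API_KEY}",
  "chatModelName": "gpt-4o",
  "sourceDocumentRef": "invoiceDocument",
  "fieldsToExtract": "invoiceNumber,totalAmount,dueDate"
}
```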
This connector allows classifying a Bonita process document according to a list of categories provided by the user.
| Parameter name | Required | Description | Default value |
|---|---|---|---|
| `sourceDocumentRef` | true | The reference to the process document to load. Supported formats: "doc", "docx", "pdf", … (see Apache Tika formats) | null |
| `categories` | true | The list of categories used to classify the given document (…) | null |
```json
{
  "category": "xxx",
  "confidence": 0.9
}
```

The confidence score is defined as:
- [0.0..0.3]: Very uncertain or guessing
- [0.3..0.6]: Some uncertainty, potential ambiguity exists
- [0.6..0.8]: Reasonably certain, minor doubt
- [0.8..1.0]: Very certain, no doubt
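These bands can drive routing logic in a process, for example auto-accepting confident classifications and escalating uncertain ones to a human task. The thresholds and method names below are illustrative, not part of the connector API:

```java
class ClassificationRouting {
    // Illustrative: route a Classify result based on its confidence score.
    // The 0.8 and 0.6 cut-offs mirror the bands above; tune them per process.
    static String route(String category, double confidence) {
        if (confidence >= 0.8) {
            return "AUTO_ACCEPT:" + category;   // very certain, no doubt
        } else if (confidence >= 0.6) {
            return "REVIEW:" + category;        // reasonably certain, minor doubt
        }
        return "MANUAL:" + category;            // too uncertain, a human decides
    }

    public static void main(String[] args) {
        System.out.println(route("INVOICE", 0.9)); // prints AUTO_ACCEPT:INVOICE
    }
}
```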
Prerequisites:

- Java (JDK 17 or higher)
- Maven (optional if you choose to use the Maven wrapper script as an archetype option)
- A Git client (optional but highly recommended)
- Docker and Docker Compose for integration tests
This repository follows the GitFlow branching strategy.
The project is a standard Maven project. For more details about Apache Maven, please refer to its documentation.
```shell
git clone https://github.com/bonitasoft/bonita-connector-ai.git
cd bonita-connector-ai/
./mvnw package
```

```shell
# Build only the OpenAI module (and core dependency)
./mvnw clean package -pl bonita-connector-ai-openai -am

# Build only the Anthropic module
./mvnw clean package -pl bonita-connector-ai-anthropic -am

# Build only the Gemini module
./mvnw clean package -pl bonita-connector-ai-gemini -am

# Build only the Mistral module
./mvnw clean package -pl bonita-connector-ai-mistral -am

# Build only the Azure module
./mvnw clean package -pl bonita-connector-ai-azure -am

# Build only the Ollama module
./mvnw clean package -pl bonita-connector-ai-ollama -am
```

The build should produce connector packages as jar and zip archives under each module's `target/` folder.
Integration tests require actual AI provider endpoints. Here are the options for each provider:

OpenAI:

```shell
export OPENAI_API_KEY=your-api-key-here
./mvnw verify -PITs -pl bonita-connector-ai-openai
```

Anthropic:

```shell
export ANTHROPIC_API_KEY=your-api-key-here
./mvnw verify -PITs -pl bonita-connector-ai-anthropic
```

Gemini:

```shell
export GEMINI_API_KEY=your-api-key-here
./mvnw verify -PITs -pl bonita-connector-ai-gemini
```

Mistral:

```shell
export MISTRAL_API_KEY=your-api-key-here
./mvnw verify -PITs -pl bonita-connector-ai-mistral
```

Azure AI Foundry:

```shell
export AZURE_OPENAI_API_KEY=your-api-key-here
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
export AZURE_OPENAI_MODEL=gpt-4o
./mvnw verify -PITs -pl bonita-connector-ai-azure
```

Ollama:

Step 1: Start Ollama with Docker Compose

```shell
docker compose -f docker-compose-ollama.yml up -d
```

Step 2: Pull a model (first time only)

```shell
# For the default model (llama3.1 - ~4.7GB)
docker exec -it ollama-test ollama pull llama3.1

# Or for faster testing with a smaller model (llama3.2:1b - ~1.3GB)
docker exec -it ollama-test ollama pull llama3.2:1b
```

Step 3: Run integration tests

```shell
# Using the default model (llama3.1)
./mvnw verify -PITs -pl bonita-connector-ai-ollama

# Using the smaller model (llama3.2:1b)
export OLLAMA_MODEL_NAME="llama3.2:1b"
./mvnw verify -PITs -pl bonita-connector-ai-ollama
```

Step 4: Stop Ollama when done

```shell
docker compose -f docker-compose-ollama.yml down
```

All providers:

```shell
# Make sure Ollama is running and API keys are set
export OPENAI_API_KEY=your-openai-key
export ANTHROPIC_API_KEY=your-anthropic-key
export GEMINI_API_KEY=your-gemini-key
export MISTRAL_API_KEY=your-mistral-key
export AZURE_OPENAI_API_KEY=your-azure-key
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
./mvnw verify -PITs
```

To add support for a new AI provider:
1. Create a new module `bonita-connector-ai-{provider}` following the existing structure
2. Add the provider-specific LangChain4j dependency to the module `pom.xml`
3. Create `{Provider}Chat` implementing the `AiChat` interface from the core module
4. Create three connector classes extending the abstract connectors from core:
   - `{Provider}AskConnector` extends `AskAiConnector`
   - `{Provider}ExtractDataConnector` extends `ExtractAiConnector`
   - `{Provider}ClassifyConnector` extends `ClassifyAiConnector`
5. Create connector definition files in `src/main/resources-filtered/`:
   - `{provider}-ask.def` / `{provider}-ask.impl` / `{provider}-ask.properties`
   - `{provider}-extract.def` / `{provider}-extract.impl` / `{provider}-extract.properties`
   - `{provider}-classify.def` / `{provider}-classify.impl` / `{provider}-classify.properties`
6. Add the new module to the parent `pom.xml` `<modules>` section
7. Configure Maven properties for connector IDs and versions
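The code side of those steps boils down to one `AiChat` implementation plus three thin connector classes. The skeleton below sketches this for a hypothetical "Acme" provider; the `AiChat` and `AskAiConnector` shapes are assumptions redeclared here for a self-contained example (the real ones live in the core module and may differ):

```java
// Sketch of steps 3 and 4 for a hypothetical "Acme" provider.
// Assumed shapes of the core types, redeclared for a self-contained example:
interface AiChat {
    String chat(String systemPrompt, String userPrompt);
}

abstract class AskAiConnector {
    protected abstract AiChat createChat();
}

// Step 3: the provider's AiChat implementation. A real module would back this
// with a provider-specific LangChain4j chat model instead of a stub response.
class AcmeChat implements AiChat {
    @Override
    public String chat(String systemPrompt, String userPrompt) {
        return "acme-response";
    }
}

// Step 4: one of the three connector classes wiring the chat into core.
class AcmeAskConnector extends AskAiConnector {
    @Override
    protected AiChat createChat() {
        return new AcmeChat();
    }
}
```

The `.def`/`.impl` definition files from step 5 then point Bonita Studio at these classes.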
To release a new version, maintainers may use the Release and Publication GitHub Actions workflows.

- Running the Release workflow invokes the `gitflow-maven-plugin` to perform all required merges, version updates, and tag creation.
- Running the Publication workflow builds and deploys a given tag to Maven Central.
- A GitHub release should be created and associated with the tag. Manage this manually.
Once this is done, update the Bonita marketplace repository with the new version of the connector.



