This policy allows you to track the number of tokens sent to and received from an AI API.
Here are some examples of how to use the AI - Prompt Token Tracking policy.
The plugin has built-in support for the following AI providers:
- OpenAI (ChatGPT)
- Google (Gemini)
- Anthropic (Claude)
- Mistral
Select the appropriate type in the configuration, and the plugin handles the token tracking automatically.
When the API provider is not one of the built-in providers, use the CUSTOM type. When you choose CUSTOM, you must provide a custom response body parsing configuration that matches the structure of your provider's API responses.
For example, the following configuration can be used to extract the token usage and model from a custom AI API response:
```json
{
  "id": "a6775254-dc2f-4411-9b1c-415f3ba8ee8d",
  "my_model": "LLAAMA",
  "result": "a result",
  "my_usage": {
    "promptUsage": 100,
    "responseUsage": 8
  }
}
```
- Sent tokens count pointer: `my_usage.promptUsage`
- Received tokens count pointer: `my_usage.responseUsage`
- Model pointer: `my_model`
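Conceptually, the CUSTOM extraction resolves each configured pointer against the parsed response body. The sketch below illustrates this idea in Python; the `resolve` helper and its slash-separated pointer syntax are assumptions for illustration, not the plugin's actual implementation:

```python
import json

# Hypothetical helper: walk a parsed JSON body along a slash-separated pointer.
def resolve(body, pointer):
    node = body
    for key in pointer.strip("/").split("/"):
        node = node[key]
    return node

# The custom response body from the example above.
response = json.loads("""
{
  "id": "a6775254-dc2f-4411-9b1c-415f3ba8ee8d",
  "my_model": "LLAAMA",
  "result": "a result",
  "my_usage": {"promptUsage": 100, "responseUsage": 8}
}
""")

sent = resolve(response, "/my_usage/promptUsage")        # 100
received = resolve(response, "/my_usage/responseUsage")  # 8
model = resolve(response, "/my_model")                   # "LLAAMA"
```

With pointers configured to match the response structure, the policy can report the sent and received token counts and the model name for each call.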
The ai-prompt-token-tracking policy can be applied to the following API types and flow phases.
PROXY
- Response
Strikethrough text indicates that a version is deprecated.
| Plugin version | APIM | Java version |
|---|---|---|
| 1.x | 4.8.x and 4.9.x | 21 |
| 2.x and after | 4.10.x and after | 21 |
| Name (json name) | Type (constraint) | Mandatory | Description |
|---|---|---|---|
| Response body parsing (`extraction`) | object | | See "Response body parsing" section. |
| Cost (`pricing`) | object | | See "Cost" section. |
| Name (json name) | Type (constraint) | Mandatory | Description |
|---|---|---|---|
| Type (`type`) | object | ✅ | Type of Response body parsing. Values: `GPT`, `GEMINI`, `CLAUDE`, `MISTRAL`, `CUSTOM` |
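For a built-in provider, the extraction configuration needs only the type. A minimal fragment (field names taken from the full examples further down):

```json
{
  "extraction": {
    "type": "CLAUDE"
  }
}
```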
The built-in types (`GPT`, `GEMINI`, `CLAUDE`, `MISTRAL`) have no additional properties.
The following properties apply when the type is `CUSTOM`:

| Name (json name) | Type (constraint) | Mandatory | Default | Description |
|---|---|---|---|---|
| Sent token count EL (`inputTokenPointer`) | string | ✅ | | A Gravitee Expression Language expression that represents the number of tokens sent to the LLM |
| Model pointer (`modelPointer`) | string | | | A Gravitee Expression Language expression that represents the model of the LLM |
| Receive token count EL (`outputTokenPointer`) | string | ✅ | | A Gravitee Expression Language expression that represents the number of tokens received from the LLM |
| Name (json name) | Type (constraint) | Mandatory | Description |
|---|---|---|---|
| Type (`type`) | object | ✅ | Type of Cost. Values: `none`, `pricing` |
The `none` type has no additional properties.
The following properties apply when the type is `pricing`:

| Name (json name) | Type (constraint) | Mandatory | Default | Description |
|---|---|---|---|---|
| Input Token Price Unit (`inputPriceUnit`) | number (0, +Inf] | ✅ | | Input Token Price Unit |
| Input Token Price Value (`inputPriceValue`) | number (0, +Inf] | ✅ | | Input Token Price Value |
| Output Token Price Unit (`outputPriceUnit`) | number (0, +Inf] | ✅ | | Output Token Price Unit |
| Output Token Price Value (`outputPriceValue`) | number (0, +Inf] | ✅ | | Output Token Price Value |
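Reading these fields as "price value charged per price unit of tokens" (my interpretation of the names above, not stated explicitly by the source), the cost of a call can be sketched as:

```python
# Hypothetical cost formula: value charged per unit of tokens, so
# inputPriceValue=0.4 with inputPriceUnit=1000000 reads as 0.4 per 1M input tokens.
def usage_cost(input_tokens, output_tokens,
               input_price_value, input_price_unit,
               output_price_value, output_price_unit):
    return (input_tokens * input_price_value / input_price_unit
            + output_tokens * output_price_value / output_price_unit)

# Example: 100 prompt tokens and 8 completion tokens with the pricing
# values used in the OpenAI example in this document.
cost = usage_cost(100, 8, 0.4, 1_000_000, 0.8, 1_000_000)
```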
Calculate usage cost for OpenAI ChatGPT API
```json
{
  "api": {
    "definitionVersion": "V4",
    "type": "PROXY",
    "name": "AI - Prompt Token Tracking example API",
    "flows": [
      {
        "name": "Common Flow",
        "enabled": true,
        "selectors": [
          {
            "type": "HTTP",
            "path": "/",
            "pathOperator": "STARTS_WITH"
          }
        ],
        "response": [
          {
            "name": "AI - Prompt Token Tracking",
            "enabled": true,
            "policy": "ai-prompt-token-tracking",
            "configuration": {
              "extraction": {
                "type": "GPT"
              },
              "pricing": {
                "inputPriceValue": 0.4,
                "inputPriceUnit": 1000000,
                "outputPriceValue": 0.8,
                "outputPriceUnit": 1000000
              }
            }
          }
        ]
      }
    ]
  }
}
```
Track token usage only on a custom API response
```json
{
  "api": {
    "definitionVersion": "V4",
    "type": "PROXY",
    "name": "AI - Prompt Token Tracking example API",
    "flows": [
      {
        "name": "Common Flow",
        "enabled": true,
        "selectors": [
          {
            "type": "HTTP",
            "path": "/",
            "pathOperator": "STARTS_WITH"
          }
        ],
        "response": [
          {
            "name": "AI - Prompt Token Tracking",
            "enabled": true,
            "policy": "ai-prompt-token-tracking",
            "configuration": {
              "extraction": {
                "type": "CUSTOM",
                "inputTokenPointer": "/usage/custom_prompt_tokens",
                "outputTokenPointer": "/usage/custom_completion_tokens",
                "modelPointer": "/custom_model"
              },
              "pricing": {
                "type": "none"
              }
            }
          }
        ]
      }
    ]
  }
}
```
2.0.0 (2025-12-15)
- make the policy compatible with 4.10 (5cfa0cf)
- requires 4.10
1.2.0 (2025-11-13)
- relax the Content-Type check to handle more cases (c8c57f0)
- align metrics naming on llm proxy (a53be09)
1.1.0 (2025-08-27)
- update form to provide el metadata (6c197d7)
1.0.1 (2025-06-19)
- ignore errors when token usage is not found in the response (2718c28)
- extract the tokens sent, tokens received, and model of LLM queries (c95d63e)