This document defines the attributes used to describe telemetry in the context of Generative Artificial Intelligence (GenAI) Models requests and responses.

-| Attribute | Type | Description | Examples | Stability |
-| --- | --- | --- | --- | --- |
-| `gen_ai.completion` | string | The full response received from the GenAI model. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` |  |
-| `gen_ai.operation.name` | string | The name of the operation being performed. [2] | `chat`; `text_completion` |  |
-| `gen_ai.prompt` | string | The full prompt sent to the GenAI model. [3] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` |  |
-| `gen_ai.request.frequency_penalty` | double | The frequency penalty setting for the GenAI request. | `0.1` |  |
-| `gen_ai.request.max_tokens` | int | The maximum number of tokens the model generates for a request. | `100` |  |
-| `gen_ai.request.model` | string | The name of the GenAI model a request is being made to. | `gpt-4` |  |
-| `gen_ai.request.presence_penalty` | double | The presence penalty setting for the GenAI request. | `0.1` |  |
-| `gen_ai.request.stop_sequences` | string[] | List of sequences that the model will use to stop generating further tokens. | `["forest", "lived"]` |  |
-| `gen_ai.request.temperature` | double | The temperature setting for the GenAI request. | `0.0` |  |
-| `gen_ai.request.top_k` | double | The top_k sampling setting for the GenAI request. | `1.0` |  |
-| `gen_ai.request.top_p` | double | The top_p sampling setting for the GenAI request. | `1.0` |  |
-| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]`; `["stop", "length"]` |  |
-| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` |  |
-| `gen_ai.response.model` | string | The name of the model that generated the response. | `gpt-4-0613` |  |
-| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [4] | `openai` |  |
-| `gen_ai.token.type` | string | The type of token being counted. | `input`; `output` |  |
-| `gen_ai.usage.input_tokens` | int | The number of tokens used in the GenAI input (prompt). | `100` |  |
-| `gen_ai.usage.output_tokens` | int | The number of tokens used in the GenAI response (completion). | `180` |  |
-
-**[1]:** It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).
-
-**[2]:** If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for that GenAI system and to use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.
-
-**[3]:** It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).
-
-**[4]:** The `gen_ai.system` attribute describes a family of GenAI models, with the specific model identified
+| Attribute | Type | Description | Examples | Stability |
+| --- | --- | --- | --- | --- |
+| `gen_ai.operation.name` | string | The name of the operation being performed. [1] | `chat`; `text_completion` |  |
+| `gen_ai.request.frequency_penalty` | double | The frequency penalty setting for the GenAI request. | `0.1` |  |
+| `gen_ai.request.max_tokens` | int | The maximum number of tokens the model generates for a request. | `100` |  |
+| `gen_ai.request.model` | string | The name of the GenAI model a request is being made to. | `gpt-4` |  |
+| `gen_ai.request.presence_penalty` | double | The presence penalty setting for the GenAI request. | `0.1` |  |
+| `gen_ai.request.stop_sequences` | string[] | List of sequences that the model will use to stop generating further tokens. | `["forest", "lived"]` |  |
+| `gen_ai.request.temperature` | double | The temperature setting for the GenAI request. | `0.0` |  |
+| `gen_ai.request.top_k` | double | The top_k sampling setting for the GenAI request. | `1.0` |  |
+| `gen_ai.request.top_p` | double | The top_p sampling setting for the GenAI request. | `1.0` |  |
+| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]`; `["stop", "length"]` |  |
+| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` |  |
+| `gen_ai.response.model` | string | The name of the model that generated the response. | `gpt-4-0613` |  |
+| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [2] | `openai` |  |
+| `gen_ai.token.type` | string | The type of token being counted. | `input`; `output` |  |
+| `gen_ai.usage.input_tokens` | int | The number of tokens used in the GenAI input (prompt). | `100` |  |
+| `gen_ai.usage.output_tokens` | int | The number of tokens used in the GenAI response (completion). | `180` |  |
+
+**[1]:** If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for that GenAI system and to use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.
+
+**[2]:** The `gen_ai.system` attribute describes a family of GenAI models, with the specific model identified
by `gen_ai.request.model` and `gen_ai.response.model` attributes.

|
The actual GenAI product may differ from the one identified by the client.
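To make the attribute table concrete, here is a minimal, dependency-free sketch of how an instrumentation library might assemble these span attributes for a chat call. The helper name and the request/response dict shapes are illustrative assumptions; only the `gen_ai.*` attribute keys come from the conventions above.

```python
# Sketch: assemble gen_ai.* span attributes for a chat request/response.
# The function name and input shapes are hypothetical; the attribute keys
# match the semantic-convention table above.
def genai_chat_attributes(request, response):
    attrs = {
        "gen_ai.system": "openai",  # assumed provider for this example
        "gen_ai.operation.name": "chat",
        "gen_ai.request.model": request["model"],
    }
    # Optional request parameters are recorded only when the caller set them.
    for key in ("temperature", "top_p", "top_k", "max_tokens",
                "frequency_penalty", "presence_penalty", "stop_sequences"):
        if key in request:
            attrs[f"gen_ai.request.{key}"] = request[key]
    attrs.update({
        "gen_ai.response.id": response["id"],
        "gen_ai.response.model": response["model"],
        "gen_ai.response.finish_reasons": response["finish_reasons"],
        "gen_ai.usage.input_tokens": response["usage"]["input_tokens"],
        "gen_ai.usage.output_tokens": response["usage"]["output_tokens"],
    })
    return attrs
```

In a real instrumentation these key/value pairs would be set on a span (for example via `span.set_attribute`) rather than returned as a dict; the dict form keeps the sketch self-contained.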
@@ -104,7 +98,9 @@ This group defines attributes for OpenAI.

Describes deprecated `gen_ai` attributes.

-| Attribute | Type | Description | Examples | Stability |
-| --- | --- | --- | --- | --- |
-| `gen_ai.usage.completion_tokens` | int | Deprecated, use `gen_ai.usage.output_tokens` instead. | `42` | Replaced by the `gen_ai.usage.output_tokens` attribute. |
-| `gen_ai.usage.prompt_tokens` | int | Deprecated, use `gen_ai.usage.input_tokens` instead. | `42` | Replaced by the `gen_ai.usage.input_tokens` attribute. |
+| Attribute | Type | Description | Examples | Stability |
+| --- | --- | --- | --- | --- |
+| `gen_ai.completion` | string | Deprecated, use the Event API to report completion contents. | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | Removed, no replacement at this time. |
+| `gen_ai.prompt` | string | Deprecated, use the Event API to report prompt contents. | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | Removed, no replacement at this time. |
+| `gen_ai.usage.completion_tokens` | int | Deprecated, use `gen_ai.usage.output_tokens` instead. | `42` | Replaced by the `gen_ai.usage.output_tokens` attribute. |
+| `gen_ai.usage.prompt_tokens` | int | Deprecated, use `gen_ai.usage.input_tokens` instead. | `42` | Replaced by the `gen_ai.usage.input_tokens` attribute. |
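The deprecations above amount to two mechanical renames plus two removals. A minimal migration sketch for attribute maps produced by older instrumentations (the helper name is an assumption, not part of the conventions):

```python
# Map deprecated gen_ai.* attribute names to their replacements. The two
# content-capturing attributes were removed with no replacement, so they
# are dropped rather than renamed.
_RENAMED = {
    "gen_ai.usage.prompt_tokens": "gen_ai.usage.input_tokens",
    "gen_ai.usage.completion_tokens": "gen_ai.usage.output_tokens",
}
_REMOVED = {"gen_ai.prompt", "gen_ai.completion"}

def migrate_genai_attributes(attrs):
    """Return a copy of `attrs` with deprecated keys renamed or dropped."""
    return {_RENAMED.get(k, k): v
            for k, v in attrs.items() if k not in _REMOVED}
```

This kind of shim is useful in a telemetry pipeline (for example, a collector processor) while producers are still emitting the old names.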