Commit 6ac869d

Merge branch 'main' into sqlalchemy-semconv-opt-in

2 parents: 0f78666 + 8f31a01
9 files changed: 563 additions & 56 deletions

File tree

opamp/opentelemetry-opamp-client/CHANGELOG.md (2 additions, 0 deletions)

@@ -7,6 +7,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## Unreleased
 
+## Version 0.2b0 (2026-04-01)
+
 - Breaking change: callback class `Callbacks` renamed to `OpAMPCallbacks`
   ([#4355](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/4355))
 

opamp/opentelemetry-opamp-client/src/opentelemetry/_opamp/version.py (1 addition, 1 deletion)

@@ -12,4 +12,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-__version__ = "0.2b0.dev"
+__version__ = "0.3b0.dev"

util/opentelemetry-util-genai/CHANGELOG.md (3 additions, 0 deletions)

@@ -7,6 +7,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## Unreleased
 
+
+- Add support for workflow in genAI utils handler.
+  ([#4366](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/4366))
 - Enrich ToolCall type, breaking change: usage of ToolCall class renamed to ToolCallRequest
   ([#4218](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/4218))
 - Add EmbeddingInvocation span lifecycle support

util/opentelemetry-util-genai/README.rst (123 additions, 48 deletions)

@@ -1,58 +1,129 @@
 OpenTelemetry Util for GenAI
 ============================
 
+The GenAI Utils package provides boilerplate and helpers to standardize instrumentation for Generative AI.
+It offers APIs to minimize the work needed to instrument GenAI libraries,
+while providing standardization for generating spans, metrics, and events.
 
-The GenAI Utils package will include boilerplate and helpers to standardize instrumentation for Generative AI.
-This package will provide APIs and decorators to minimize the work needed to instrument genai libraries,
-while providing standardization for generating both types of otel, "spans and metrics" and "spans, metrics and events"
 
-This package relies on environment variables to configure capturing of message content.
+Key Components
+--------------
+
+- ``TelemetryHandler`` -- manages LLM invocation lifecycles (spans, metrics, events)
+- ``LLMInvocation`` and message types (``Text``, ``Reasoning``, ``Blob``, etc.) -- structured data model for GenAI interactions
+- ``CompletionHook`` -- protocol for uploading content to external storage (built-in ``fsspec`` support)
+- Metrics -- ``gen_ai.client.operation.duration`` and ``gen_ai.client.token.usage`` histograms
+
+
+Usage
+-----
+
+See the module docstring in ``opentelemetry.util.genai.handler`` for usage examples,
+including context manager and manual lifecycle patterns.
+
+
+Environment Variables
+---------------------
+
+This package relies on environment variables to configure capturing of message content.
 By default, message content will not be captured.
-Set the environment variable `OTEL_SEMCONV_STABILITY_OPT_IN` to `gen_ai_latest_experimental` to enable experimental features.
-Set the environment variable `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` to one of:
-- `NO_CONTENT`: Do not capture message content (default).
-- `SPAN_ONLY`: Capture message content in spans only.
-- `EVENT_ONLY`: Capture message content in events only.
-- `SPAN_AND_EVENT`: Capture message content in both spans and events.
-
-To control event emission, you can optionally set `OTEL_INSTRUMENTATION_GENAI_EMIT_EVENT` to `true` or `false` (case-insensitive).
-This variable controls whether to emit `gen_ai.client.inference.operation.details` events.
-If not explicitly set, the default value is automatically determined by `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT`:
-- When `NO_CONTENT` or `SPAN_ONLY` is set: defaults to `false`
-- When `EVENT_ONLY` or `SPAN_AND_EVENT` is set: defaults to `true`
+Set the environment variable ``OTEL_SEMCONV_STABILITY_OPT_IN`` to ``gen_ai_latest_experimental`` to enable experimental features.
+Set the environment variable ``OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT`` to one of:
+
+- ``NO_CONTENT``: Do not capture message content (default).
+- ``SPAN_ONLY``: Capture message content in spans only.
+- ``EVENT_ONLY``: Capture message content in events only.
+- ``SPAN_AND_EVENT``: Capture message content in both spans and events.
+
+To control event emission, you can optionally set ``OTEL_INSTRUMENTATION_GENAI_EMIT_EVENT`` to ``true`` or ``false`` (case-insensitive).
+This variable controls whether to emit ``gen_ai.client.inference.operation.details`` events.
+If not explicitly set, the default value is automatically determined by ``OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT``:
+
+- When ``NO_CONTENT`` or ``SPAN_ONLY`` is set: defaults to ``false``
+- When ``EVENT_ONLY`` or ``SPAN_AND_EVENT`` is set: defaults to ``true``
+
 If explicitly set, the user's value takes precedence over the default.
 
-This package provides these span attributes:
-
-- `gen_ai.provider.name`: Str(openai)
-- `gen_ai.operation.name`: Str(chat)
-- `gen_ai.request.model`: Str(gpt-3.5-turbo)
-- `gen_ai.response.finish_reasons`: Slice(["stop"])
-- `gen_ai.response.model`: Str(gpt-3.5-turbo-0125)
-- `gen_ai.response.id`: Str(chatcmpl-Bz8yrvPnydD9pObv625n2CGBPHS13)
-- `gen_ai.usage.input_tokens`: Int(24)
-- `gen_ai.usage.output_tokens`: Int(7)
-- `gen_ai.input.messages`: Str('[{"role": "Human", "parts": [{"content": "hello world", "type": "text"}]}]')
-- `gen_ai.output.messages`: Str('[{"role": "AI", "parts": [{"content": "hello back", "type": "text"}], "finish_reason": "stop"}]')
-- `gen_ai.system_instructions`: Str('[{"content": "You are a helpful assistant.", "type": "text"}]') (when system instruction is provided)
-
-This package also supports embedding invocation spans via the `embedding` context manager.
-For embedding invocations, common attributes include:
-
-- `gen_ai.provider.name`: Str(openai)
-- `gen_ai.operation.name`: Str(embeddings)
-- `gen_ai.request.model`: Str(text-embedding-3-small)
-- `gen_ai.embeddings.dimension.count`: Int(1536)
-- `gen_ai.request.encoding_formats`: Slice(["float"])
-- `gen_ai.usage.input_tokens`: Int(24)
-- `server.address`: Str(api.openai.com)
-- `server.port`: Int(443)
-
-When `EVENT_ONLY` or `SPAN_AND_EVENT` mode is enabled and a LoggerProvider is configured,
-the package also emits `gen_ai.client.inference.operation.details` events with structured
-message content (as dictionaries instead of JSON strings). Note that when using `EVENT_ONLY`
-or `SPAN_AND_EVENT`, the `OTEL_INSTRUMENTATION_GENAI_EMIT_EVENT` environment variable defaults
-to `true`, so events will be emitted automatically unless explicitly set to `false`.
+When ``EVENT_ONLY`` or ``SPAN_AND_EVENT`` mode is enabled and a LoggerProvider is configured,
+the package also emits ``gen_ai.client.inference.operation.details`` events with structured
+message content (as dictionaries instead of JSON strings). Note that when using ``EVENT_ONLY``
+or ``SPAN_AND_EVENT``, the ``OTEL_INSTRUMENTATION_GENAI_EMIT_EVENT`` environment variable defaults
+to ``true``, so events will be emitted automatically unless explicitly set to ``false``.
+
+Completion Hook / Upload
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+- ``OTEL_INSTRUMENTATION_GENAI_COMPLETION_HOOK``: Name of the completion hook entry point to load (e.g. ``upload``).
+- ``OTEL_INSTRUMENTATION_GENAI_UPLOAD_BASE_PATH``: An ``fsspec``-compatible URI/path for uploading prompts and responses
+  (e.g. ``/path/to/prompts`` or ``gs://my_bucket``). Required when using the ``upload`` hook.
+- ``OTEL_INSTRUMENTATION_GENAI_UPLOAD_FORMAT``: Format for uploaded data -- ``json`` (default) or ``jsonl``.
+- ``OTEL_INSTRUMENTATION_GENAI_UPLOAD_MAX_QUEUE_SIZE``: Maximum number of concurrent uploads to queue (default: ``20``).
+
+
+Span Attributes
+---------------
+
+This package sets the following span attributes on LLM invocations:
+
+**Common attributes:**
+
+- ``gen_ai.operation.name``: Str(chat)
+- ``gen_ai.provider.name``: Str(openai)
+- ``gen_ai.request.model``: Str(gpt-4o)
+- ``server.address``: Str(api.openai.com)
+- ``server.port``: Int(443)
+
+**Response attributes:**
+
+- ``gen_ai.response.finish_reasons``: Slice(["stop"])
+- ``gen_ai.response.model``: Str(gpt-4o-2024-05-13)
+- ``gen_ai.response.id``: Str(chatcmpl-Bz8yrvPnydD9pObv625n2CGBPHS13)
+- ``gen_ai.usage.input_tokens``: Int(24)
+- ``gen_ai.usage.output_tokens``: Int(7)
+
+**Request parameter attributes (when provided):**
+
+- ``gen_ai.request.temperature``: Float(0.7)
+- ``gen_ai.request.top_p``: Float(1.0)
+- ``gen_ai.request.frequency_penalty``: Float(0.0)
+- ``gen_ai.request.presence_penalty``: Float(0.0)
+- ``gen_ai.request.max_tokens``: Int(1024)
+- ``gen_ai.request.stop_sequences``: Slice(["\\n"])
+- ``gen_ai.request.seed``: Int(42)
+
+**Content attributes (sensitive, requires content capturing enabled):**
+
+- ``gen_ai.input.messages``: Str('[{"role": "user", "parts": [{"content": "hello world", "type": "text"}]}]')
+- ``gen_ai.output.messages``: Str('[{"role": "assistant", "parts": [{"content": "hello back", "type": "text"}], "finish_reason": "stop"}]')
+- ``gen_ai.system_instructions``: Str('[{"content": "You are a helpful assistant.", "type": "text"}]')
+
+**Error attributes:**
+
+- ``error.type``: Str(TimeoutError)
+
+Embedding Span Attributes
+-------------------------
+
+This package also supports embedding invocation spans via the ``embedding`` context manager.
+For embedding invocations, the following attributes are set:
+
+**Common attributes:**
+
+- ``gen_ai.operation.name``: Str(embeddings)
+- ``gen_ai.provider.name``: Str(openai)
+- ``server.address``: Str(api.openai.com)
+- ``server.port``: Int(443)
+
+**Request attributes:**
+
+- ``gen_ai.request.model``: Str(text-embedding-3-small)
+- ``gen_ai.embeddings.dimension.count``: Int(1536)
+- ``gen_ai.request.encoding_formats``: Slice(["float"])
+
+**Response attributes:**
+
+- ``gen_ai.response.model``: Str(text-embedding-3-small)
+- ``gen_ai.usage.input_tokens``: Int(24)
 
 
 Installation
@@ -62,11 +133,15 @@ Installation
 
     pip install opentelemetry-util-genai
 
+For upload support (requires ``fsspec``)::
+
+    pip install opentelemetry-util-genai[upload]
+
 
 Design Document
 ---------------
 
-The design document for the OpenTelemetry GenAI Utils can be found at: `Design Document <https://docs.google.com/document/d/1w9TbtKjuRX_wymS8DRSwPA03_VhrGlyx65hHAdNik1E/edit?tab=t.qneb4vabc1wc#heading=h.kh4j6stirken>`_
+The design document for the OpenTelemetry GenAI Utils can be found at: `Design Document <https://docs.google.com/document/d/1LzNGylxot5zaIV1goOJZ2mz3LI0weFu1SgaMnyDv7gg/edit?usp=sharing>`_
 
 References
 ----------
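The ``gen_ai.input.messages`` and ``gen_ai.output.messages`` attributes in the README diff are JSON-serialized role/parts structures. A minimal sketch of producing that shape from plain chat messages (illustrative only; the package's own serializer may differ, and ``serialize_messages`` is a hypothetical name):

```python
import json


def serialize_messages(messages):
    """Render chat messages into the role/parts JSON string used by
    span attributes such as gen_ai.input.messages."""
    out = []
    for msg in messages:
        entry = {
            "role": msg["role"],
            "parts": [{"content": msg["content"], "type": "text"}],
        }
        # finish_reason only appears on output messages.
        if "finish_reason" in msg:
            entry["finish_reason"] = msg["finish_reason"]
        out.append(entry)
    return json.dumps(out)
```

On spans the value is stored as a single JSON string; in ``gen_ai.client.inference.operation.details`` events the same structure is emitted as dictionaries instead.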

util/opentelemetry-util-genai/src/opentelemetry/util/genai/handler.py (75 additions, 1 deletion)

@@ -60,6 +60,7 @@
 
 from __future__ import annotations
 
+import logging
 import timeit
 from contextlib import contextmanager
 from typing import Iterator, TypeVar
@@ -83,18 +84,38 @@
     _apply_embedding_finish_attributes,
     _apply_error_attributes,
     _apply_llm_finish_attributes,
+    _apply_workflow_finish_attributes,
     _get_embedding_span_name,
     _get_llm_span_name,
+    _get_workflow_span_name,
     _maybe_emit_llm_event,
 )
 from opentelemetry.util.genai.types import (
     EmbeddingInvocation,
     Error,
     GenAIInvocation,
     LLMInvocation,
+    WorkflowInvocation,
 )
 from opentelemetry.util.genai.version import __version__
 
+_logger = logging.getLogger(__name__)
+
+
+def _safe_detach(invocation: GenAIInvocation) -> None:
+    """Detach the context token if still present, as a safety net."""
+    if invocation.context_token is not None:
+        try:
+            otel_context.detach(invocation.context_token)
+        except Exception:  # pylint: disable=broad-except
+            pass
+    if invocation.span is not None:
+        try:
+            invocation.span.end()
+        except Exception:  # pylint: disable=broad-except
+            pass
+
+
 _T = TypeVar("_T", bound=GenAIInvocation)
 
 
@@ -160,13 +181,19 @@ def _start(self, invocation: _T) -> _T:
         """Start a GenAI invocation and create a pending span entry."""
         if isinstance(invocation, LLMInvocation):
             span_name = _get_llm_span_name(invocation)
+            kind = SpanKind.CLIENT
         elif isinstance(invocation, EmbeddingInvocation):
             span_name = _get_embedding_span_name(invocation)
+            kind = SpanKind.CLIENT
+        elif isinstance(invocation, WorkflowInvocation):
+            span_name = _get_workflow_span_name(invocation)
+            kind = SpanKind.INTERNAL
         else:
             span_name = ""
+            kind = SpanKind.CLIENT
         span = self._tracer.start_span(
             name=span_name,
-            kind=SpanKind.CLIENT,
+            kind=kind,
         )
         # Record a monotonic start timestamp (seconds) for duration
         # calculation using timeit.default_timer.
@@ -192,6 +219,9 @@ def _stop(self, invocation: _T) -> _T:
             elif isinstance(invocation, EmbeddingInvocation):
                 _apply_embedding_finish_attributes(span, invocation)
                 self._record_embedding_metrics(invocation, span)
+            elif isinstance(invocation, WorkflowInvocation):
+                _apply_workflow_finish_attributes(span, invocation)
+                # TODO: Add workflow metrics when supported
         finally:
             # Detach context and end span even if finishing fails
             otel_context.detach(invocation.context_token)
@@ -222,6 +252,10 @@ def _fail(self, invocation: _T, error: Error) -> _T:
                 self._record_embedding_metrics(
                     invocation, span, error_type=error_type
                 )
+            elif isinstance(invocation, WorkflowInvocation):
+                _apply_workflow_finish_attributes(span, invocation)
+                _apply_error_attributes(span, error, error_type)
+                # TODO: Add workflow metrics when supported
         finally:
             # Detach context and end span even if finishing fails
             otel_context.detach(invocation.context_token)
@@ -304,6 +338,46 @@ def embedding(
                 raise
             self.stop(invocation)
 
+    @contextmanager
+    def workflow(
+        self, invocation: WorkflowInvocation | None = None
+    ) -> Iterator[WorkflowInvocation]:
+        """Context manager for Workflow invocations.
+
+        Only set data attributes on the invocation object, do not modify the span or context.
+
+        Starts the span on entry. On normal exit, finalizes the invocation and ends the span.
+        If an exception occurs inside the context, marks the span as error, ends it, and
+        re-raises the original exception.
+        """
+        if invocation is None:
+            invocation = WorkflowInvocation()
+
+        try:
+            self.start(invocation)
+        except Exception:  # pylint: disable=broad-except
+            _logger.warning(
+                "Failed to start workflow telemetry", exc_info=True
+            )
+
+        try:
+            yield invocation
+        except Exception as exc:
+            try:
+                self.fail(invocation, Error(message=str(exc), type=type(exc)))
+            except Exception:  # pylint: disable=broad-except
+                _logger.warning(
+                    "Failed to record workflow failure", exc_info=True
+                )
+                _safe_detach(invocation)
+            raise
+
+        try:
+            self.stop(invocation)
+        except Exception:  # pylint: disable=broad-except
+            _logger.warning("Failed to stop workflow telemetry", exc_info=True)
+            _safe_detach(invocation)
 
 
 def get_telemetry_handler(
     tracer_provider: TracerProvider | None = None,
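The new ``workflow()`` context manager isolates telemetry errors from the workload: telemetry failures are logged and swallowed, while workload exceptions are recorded and re-raised. The pattern can be sketched in a self-contained form with a stand-in handler (``StubHandler`` and this ``workflow`` are illustrative stand-ins, not the package's real ``TelemetryHandler`` API):

```python
import logging
from contextlib import contextmanager

_logger = logging.getLogger("genai_sketch")


class StubHandler:
    """Stand-in for TelemetryHandler: just records lifecycle calls."""

    def __init__(self):
        self.calls = []

    def start(self, invocation):
        self.calls.append("start")

    def stop(self, invocation):
        self.calls.append("stop")

    def fail(self, invocation, error_name):
        self.calls.append(f"fail:{error_name}")


@contextmanager
def workflow(handler, invocation=None):
    """Mirror of the diff's workflow() pattern: telemetry errors are
    logged but never raised; workload exceptions are recorded, then
    re-raised unchanged."""
    invocation = invocation if invocation is not None else {}
    try:
        handler.start(invocation)
    except Exception:  # telemetry must not break the workload
        _logger.warning("Failed to start workflow telemetry", exc_info=True)
    try:
        yield invocation
    except Exception as exc:
        try:
            handler.fail(invocation, type(exc).__name__)
        except Exception:
            _logger.warning("Failed to record workflow failure", exc_info=True)
        raise  # the caller still sees the original exception
    try:
        handler.stop(invocation)
    except Exception:
        _logger.warning("Failed to stop workflow telemetry", exc_info=True)
```

On the happy path this records ``start`` then ``stop``; if the body raises, it records ``start`` then ``fail:<ExceptionName>`` and propagates the exception, which is the behavior the diff's docstring describes.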
