
azure-monitor-opentelemetry-exporter breaks with newer opentelemetry-sdk due to internal API dependency (LogData) #46269

@nagkumar91

Bug Report

Description

azure-monitor-opentelemetry-exporter breaks when installed alongside a newer opentelemetry-sdk than the version it was built against, due to a dependency on the internal/private LogData class from opentelemetry.sdk._logs.

```
ImportError: cannot import name 'LogData' from 'opentelemetry.sdk._logs'
```

This failure is silent in production: the exporter fails to initialize inside a try/except block, so no spans are ever exported, and no error or warning is surfaced to the user.
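The silent-failure pattern, and the visible alternative suggested later in this report, can be sketched as follows. Function names here are illustrative stand-ins, not the exporter's actual code; the real setup path is inside AgentServerHost.TracingHelper:

```python
import logging

logger = logging.getLogger("tracing_setup")

def _setup_azure_monitor():
    # Stand-in for the real setup path, which imports
    # AzureMonitorTraceExporter and fails transitively on LogData.
    raise ImportError(
        "cannot import name 'LogData' from 'opentelemetry.sdk._logs'"
    )

def setup_tracing_silent():
    # The pattern described above: a broad except hides the failure,
    # so no exporter is attached and nothing is emitted to the user.
    try:
        _setup_azure_monitor()
    except Exception:
        pass

def setup_tracing_visible():
    # Same guard, but the failure is at least logged with a traceback.
    try:
        _setup_azure_monitor()
    except Exception:
        logger.warning("Azure Monitor exporter failed to initialize",
                       exc_info=True)
```

Calling `setup_tracing_silent()` produces no output at all, which is exactly the debugging experience described below; `setup_tracing_visible()` emits a WARNING that names the missing import.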

Reproduction

This occurs when azure-monitor-opentelemetry-exporter is installed in an environment where another package (e.g., azure-ai-agentserver-core[tracing]) pulls a newer opentelemetry-sdk:

```shell
# b45 allows ~=1.35, which resolves to 1.41.0 (breaking)
pip install "azure-monitor-opentelemetry-exporter==1.0.0b45" "opentelemetry-sdk==1.41.0"
python -c "from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter"
# ImportError: cannot import name 'LogData' from 'opentelemetry.sdk._logs'
```

Version Details

| Package | OTel Pin | Breaks with |
| --- | --- | --- |
| 1.0.0b45 (PyPI) | opentelemetry-sdk~=1.35 | >=1.41.0 |
| 1.0.0b48 (PyPI) | opentelemetry-sdk==1.39 | Any version != 1.39 |
| main branch | opentelemetry-sdk==1.40 | Any version != 1.40 |

Root Cause

The exporter imports LogData from opentelemetry.sdk._logs, which is an internal module (prefixed with _). The LogData class was moved or removed in opentelemetry-sdk>=1.41.0, breaking the import.
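One defensive way to depend on a class that moves between SDK versions is to probe a list of candidate module paths instead of hard-coding one. This is a generic sketch, not the exporter's actual code; of the candidate paths below, only opentelemetry.sdk._logs is confirmed by this report, and it is private API either way:

```python
import importlib

def find_class(class_name, module_names):
    """Return the first matching class found in module_names, else None."""
    for module_name in module_names:
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            continue  # module path doesn't exist in this SDK version
        candidate = getattr(module, class_name, None)
        if candidate is not None:
            return candidate
    return None

# Candidate locations for LogData across SDK versions (hypothetical
# beyond the first entry).
LogData = find_class("LogData", (
    "opentelemetry.sdk._logs",
    "opentelemetry.sdk._logs._internal",
))
# LogData is None when no compatible SDK is installed; the caller can
# then disable the exporter with a visible warning instead of crashing.
```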

The ~=1.35 compatible-release pin on 1.0.0b45 is too loose: pip's resolver can pick 1.41.0 when another package in the environment requires a newer OTel SDK. The exact pins (==1.39, ==1.40) on newer releases prevent that resolution but are fragile, since they conflict with any package that requires a different OTel SDK version.
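Why the loose pin admits the breaking release: under PEP 440, ~=1.35 is equivalent to >=1.35, <2.0, so 1.41.0 satisfies it. The toy model below illustrates this for final releases only (no pre/post/dev segments, unlike a real PEP 440 resolver):

```python
def parse(version):
    # Simplified: final-release versions only, e.g. "1.41.0" -> (1, 41, 0).
    return tuple(int(part) for part in version.split("."))

def satisfies_tilde(version, pin):
    # "~=X.Y" means ">= X.Y" and "< (X+1).0": only the major component
    # is held fixed, so any 1.x release at or above 1.35 qualifies.
    v, p = parse(version), parse(pin)
    return v >= p and v[0] < p[0] + 1

def satisfies_range(version, low, high):
    # The suggested ">=1.35,<1.41" bound excludes the breaking release.
    return parse(low) <= parse(version) < parse(high)

print(satisfies_tilde("1.41.0", "1.35"))          # True: resolver may pick it
print(satisfies_range("1.41.0", "1.35", "1.41"))  # False: excluded
```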

Impact

In Azure AI Foundry hosted agents, this manifests as:

  1. AgentServerHost.TracingHelper._setup_azure_monitor() silently fails (caught by broad except)
  2. No BatchSpanProcessor or AzureMonitorTraceExporter is added to the TracerProvider
  3. The _FoundryEnrichmentSpanProcessor still runs (enrichment works) but no spans are exported
  4. Zero telemetry reaches Application Insights; the failure is completely invisible

This took significant debugging to identify because the failure path produces no errors, warnings, or log messages.

Suggested Fixes

  1. Use public OTel APIs only: avoid importing from opentelemetry.sdk._logs, an internal module (underscore-prefixed) that can change between releases. Use opentelemetry.sdk._logs.export or an equivalent stable surface.

  2. Tighten the version pin on older releases: the ~=1.35 pin on 1.0.0b45 should be ==1.35 or >=1.35,<1.41 to prevent resolution to incompatible versions.

  3. Add a visible warning: when the exporter fails to initialize, log a WARNING-level message rather than silently swallowing the ImportError. This would have saved hours of debugging.

Environment

  • Python 3.12 (Azure AI Foundry hosted agent container)
  • azure-ai-agentserver-core==2.0.0b1 (from git commit 01e6eac) with [tracing] extra
  • azure-monitor-opentelemetry-exporter==1.0.0b45 (resolved via azure-monitor-opentelemetry==1.8.2)
  • opentelemetry-sdk==1.41.0 (pulled by agentserver-core dependencies)

Workaround

Pin compatible OTel versions before installing the exporter:

```shell
pip install "opentelemetry-api==1.39.0" "opentelemetry-sdk==1.39.0" \
    "azure-monitor-opentelemetry==1.8.4" "azure-monitor-opentelemetry-exporter==1.0.0b48"
```
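After applying the workaround, the installed versions can be sanity-checked at startup with a small helper (a sketch; the package names and versions come from the pins above):

```python
from importlib import metadata

def check_pins(expected):
    """Return {package: installed_version_or_None} for every mismatch."""
    mismatches = {}
    for package, wanted in expected.items():
        try:
            installed = metadata.version(package)
        except metadata.PackageNotFoundError:
            installed = None  # package not installed at all
        if installed != wanted:
            mismatches[package] = installed
    return mismatches

# An empty dict means the environment matches the workaround pins.
print(check_pins({
    "opentelemetry-api": "1.39.0",
    "opentelemetry-sdk": "1.39.0",
    "azure-monitor-opentelemetry-exporter": "1.0.0b48",
}))
```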
