Bug Report
Description
`azure-monitor-opentelemetry-exporter` breaks when installed alongside a newer `opentelemetry-sdk` than the version it was built against, due to a dependency on the internal/private `LogData` class from `opentelemetry.sdk._logs`:

```
ImportError: cannot import name 'LogData' from 'opentelemetry.sdk._logs'
```
This failure is silent in production — the exporter fails to initialize inside a try/except block, so no spans are ever exported. There are no error logs or warnings visible to the user.
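The failure mode can be sketched as follows — a minimal stand-in, not the actual exporter or host code; `_import_exporter` here merely simulates the failing import chain:

```python
def _import_exporter():
    # Stand-in for the real import, which raises on opentelemetry-sdk >= 1.41.0:
    raise ImportError(
        "cannot import name 'LogData' from 'opentelemetry.sdk._logs'"
    )

def setup_tracing():
    # The silent-failure pattern: a broad `except` swallows the ImportError,
    # so setup appears to succeed while no exporter is ever registered.
    try:
        exporter = _import_exporter()
        # ... would attach the exporter to the TracerProvider here ...
    except Exception:
        pass  # error discarded: no log, no warning, no exporter

setup_tracing()  # returns normally; the caller cannot tell anything went wrong
```

Because the exception is caught and discarded, every caller up the stack sees a normal return and tracing setup looks healthy.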
Reproduction
This occurs when azure-monitor-opentelemetry-exporter is installed in an environment where another package (e.g., azure-ai-agentserver-core[tracing]) pulls a newer opentelemetry-sdk:
```shell
# b45 allows ~=1.35, which resolves to 1.41.0 (breaking)
pip install "azure-monitor-opentelemetry-exporter==1.0.0b45" "opentelemetry-sdk==1.41.0"
python -c "from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter"
# ImportError: cannot import name 'LogData' from 'opentelemetry.sdk._logs'
```
Version Details
| Package | OTel Pin | Breaks with |
| --- | --- | --- |
| `1.0.0b45` (PyPI) | `opentelemetry-sdk~=1.35` | `>=1.41.0` |
| `1.0.0b48` (PyPI) | `opentelemetry-sdk==1.39` | Any version `!= 1.39` |
| `main` branch | `opentelemetry-sdk==1.40` | Any version `!= 1.40` |
Root Cause
The exporter imports `LogData` from `opentelemetry.sdk._logs`, which is an internal module (prefixed with `_`). The `LogData` class was moved or removed in `opentelemetry-sdk>=1.41.0`, breaking the import.

The `~=1.35` compatible-release pin on `1.0.0b45` is too loose — pip's resolver can pick 1.41.0 when another package in the environment requires a newer OTel SDK. The exact pins (`==1.39`, `==1.40`) on newer releases prevent this but are fragile.
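The looseness of the pin can be checked directly with the `packaging` library (the same specifier semantics pip's resolver uses; `packaging` ships alongside pip in most environments):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# ~=1.35 is shorthand for >=1.35, ==1.* -- every 1.x release satisfies it,
# including the 1.41.0 that breaks the internal LogData import.
loose = SpecifierSet("~=1.35")
print(Version("1.41.0") in loose)   # True

# An upper-bounded range blocks resolution to the breaking versions.
tight = SpecifierSet(">=1.35,<1.41")
print(Version("1.41.0") in tight)   # False
print(Version("1.39.0") in tight)   # True
```

This is why the environment described below ends up on 1.41.0: nothing in the `b45` metadata forbids it.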
Impact
In Azure AI Foundry hosted agents, this manifests as:
- `AgentServerHost.TracingHelper._setup_azure_monitor()` silently fails (caught by a broad `except`)
- No `BatchSpanProcessor` or `AzureMonitorTraceExporter` is added to the `TracerProvider`
- The `_FoundryEnrichmentSpanProcessor` still runs (enrichment works), but no spans are exported
- Zero telemetry reaches Application Insights — a completely invisible failure
This took significant debugging to identify because the failure path produces no errors, warnings, or log messages.
Suggested Fixes
- Use public OTel APIs only — avoid importing from `opentelemetry.sdk._logs` (an internal, underscore-prefixed module); rely on a documented public surface, or guard the internal import so SDK changes degrade gracefully instead of raising.
- Tighten the version pin on older releases — the `~=1.35` pin on `1.0.0b45` should be `==1.35` or `>=1.35,<1.41` to prevent resolution to incompatible versions.
- Add a visible warning — when the exporter fails to initialize, log a WARNING-level message rather than silently swallowing the `ImportError`. This would have saved hours of debugging.
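The third fix can be sketched as follows. Names are illustrative, not the actual `TracingHelper` internals, and the import is passed in as a callable only to keep the sketch self-contained:

```python
import logging

logger = logging.getLogger("azure_monitor_setup")

def setup_azure_monitor(import_exporter):
    # Suggested fix: replace the silent `except: pass` with a WARNING so
    # operators can see why no telemetry is arriving.
    try:
        exporter = import_exporter()
    except ImportError as exc:
        logger.warning(
            "Azure Monitor exporter failed to initialize; no spans will be "
            "exported (%s). Check the installed opentelemetry-sdk version.",
            exc,
        )
        return None
    return exporter
```

With a failing import, `setup_azure_monitor` returns `None` and emits a visible warning instead of disappearing into a bare `except`, so the missing-telemetry symptom is traceable from the logs.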
Environment
- Python 3.12 (Azure AI Foundry hosted agent container)
- `azure-ai-agentserver-core==2.0.0b1` (from git commit `01e6eac`) with the `[tracing]` extra
- `azure-monitor-opentelemetry-exporter==1.0.0b45` (resolved via `azure-monitor-opentelemetry==1.8.2`)
- `opentelemetry-sdk==1.41.0` (pulled by agentserver-core dependencies)
Workaround
Pin compatible OTel versions before installing the exporter:
```shell
pip install "opentelemetry-api==1.39.0" "opentelemetry-sdk==1.39.0" \
    "azure-monitor-opentelemetry==1.8.4" "azure-monitor-opentelemetry-exporter==1.0.0b48"
```