fix(ai-sdk): agent name in span names when available #838
Walkthrough
This PR extends the AI SDK tracing transformation layer to detect agent names from telemetry metadata and propagate them as span names and attributes. New test recordings capture interactions for agent-based scenarios (generateObject, streamText, tool calls), while updated span transformation logic conditionally renames top-level AI spans to agent names when metadata is present.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (1)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Files:
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Files:
🧠 Learnings (6)
📓 Common learnings
📚 Learning: 2025-08-24T22:08:07.023Z
Applied to files:
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
🔇 Additional comments (3)
… into gk/ai-sdk-fixes
- Keep original span names (ai.generateText, ai.streamText, etc.) when no agent metadata
- Only transform top-level spans to agent name when agent metadata is present
- Transform child spans (text.generate, object.generate) to OpenLLMetry format
- Updated unit test to pass agent name instead of deprecated "run.ai" name
- Added tests for generateObject and streamText with agent metadata
- Updated test recordings to match new behavior

Fixes customer complaint where generic "AI" or "run" names appeared instead of actual agent names

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
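The renaming rule in the bullets above can be sketched as follows. This is illustrative TypeScript, not the actual ai-sdk-transformations.ts source; the span-name strings and the ai.telemetry.metadata.agent key are taken from this PR's discussion, while the function shape is a simplified stand-in.

```typescript
// Minimal sketch of the conditional renaming rule; not the real SDK source.
const TOP_LEVEL_AI_SPANS = [
  "ai.generateText",
  "ai.streamText",
  "ai.generateObject",
  "ai.streamObject",
];

// Child spans are always normalized to OpenLLMetry-style names.
const CHILD_SPAN_NAMES: Record<string, string> = {
  "ai.generateText.doGenerate": "text.generate",
  "ai.generateObject.doGenerate": "object.generate",
  "ai.streamText.doStream": "text.stream",
  "ai.streamObject.doStream": "object.stream",
};

function transformSpanName(
  name: string,
  attributes: Record<string, unknown>,
): string {
  const agent = attributes["ai.telemetry.metadata.agent"];
  // Rename top-level spans only when agent metadata is present.
  if (typeof agent === "string" && TOP_LEVEL_AI_SPANS.includes(name)) {
    return agent;
  }
  // Otherwise normalize known child spans and preserve everything else.
  return CHILD_SPAN_NAMES[name] ?? name;
}
```

With no agent metadata, `transformSpanName("ai.generateText", {})` keeps the original name, which is exactly the fallback behavior the commit describes.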
Important
Looks good to me! 👍
Reviewed everything up to ec097de in 2 minutes and 25 seconds. Click for details.
- Reviewed 965 lines of code in 7 files
- Skipped 0 files when reviewing.
- Skipped posting 1 draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:515
- Draft comment:
Bug: In transformAiSdkSpanNames the agent name is retrieved using the key constructed from AI_TELEMETRY_METADATA_PREFIX ('ai.telemetry.metadata.agent'), but transformTelemetryMetadata deletes these keys and instead sets the agent name on the GEN_AI_AGENT_NAME attribute. Use span.attributes[SpanAttributes.GEN_AI_AGENT_NAME] instead. - Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%. The comment is making an assumption about execution order that I cannot verify from this file alone. The two functions are separate exports: transformAiSdkSpanNames and transformAiSdkSpanAttributes. Without seeing where these are called in the broader codebase, I cannot determine if transformTelemetryMetadata (which deletes the metadata prefix keys and sets GEN_AI_AGENT_NAME) runs before or after transformAiSdkSpanNames. The comment could be correct, but it requires cross-file context to verify. According to the rules, I should delete comments that require more context to understand. I'm assuming the comment is wrong because I can't verify the execution order, but the reviewer might have knowledge of the calling code that shows transformTelemetryMetadata does run first. The comment provides a specific, actionable fix which suggests the reviewer has investigated this. Even if the reviewer has investigated this, the rule states I need STRONG EVIDENCE from the diff itself. The execution order is not clear from this file alone, and understanding this requires seeing other files or parts of the codebase. This falls under "cross-file issues" which I should ignore. Delete this comment because verifying whether it's correct requires understanding the execution order of these functions across multiple files, which is cross-file context I don't have access to in this diff.
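The ordering ambiguity debated above could be sidestepped with a defensive lookup that accepts either attribute. This is a sketch only: "gen_ai.agent.name" is assumed to be the string behind SpanAttributes.GEN_AI_AGENT_NAME, and the raw metadata key comes from the draft comment.

```typescript
// Defensive agent-name lookup (sketch). Reads the normalized attribute if
// transformTelemetryMetadata already ran, else falls back to the raw
// telemetry-metadata key. "gen_ai.agent.name" standing in for
// SpanAttributes.GEN_AI_AGENT_NAME is an assumption.
function resolveAgentName(
  attributes: Record<string, unknown>,
): string | undefined {
  const normalized = attributes["gen_ai.agent.name"];
  if (typeof normalized === "string") return normalized;
  const raw = attributes["ai.telemetry.metadata.agent"];
  return typeof raw === "string" ? raw : undefined;
}
```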
Workflow ID: wflow_PbFsV69q7gHjvGZo
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Changed test name from "should use default 'run.ai' span name" to "should preserve original AI SDK span name" to accurately reflect that we now preserve the original AI SDK span names (ai.generateText, etc.) when no agent metadata is provided, rather than transforming to "run.ai"

Created new test recording to match updated test name

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Actionable comments posted: 0
🧹 Nitpick comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
430-443: Deduplicate the list of top-level AI span names

You now maintain two separate arrays of top-level AI span identifiers:
- An inline topLevelSpanNames inside transformTelemetryMetadata (Lines 430-438).
- The module-level TOP_LEVEL_AI_SPANS (Lines 501-507).

Keeping these in sync is error-prone when adding new top-level span types. Consider reusing TOP_LEVEL_AI_SPANS in transformTelemetryMetadata (or centralizing the list elsewhere) to avoid future drift.

Also applies to: 501-507
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (7)
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-propagate-agent-name-to-tool-call-spans_3577231859/recording.har (10 hunks)
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-agent-name-for-generateObject-with-agent-metadata_1744675110/recording.har (1 hunks)
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-agent-name-for-streamText-with-agent-metadata_4019571713/recording.har (1 hunks)
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-default-run-ai-span-name-when-no-agent-metadata-is-provided_1300307112/recording.har (1 hunks)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (3 hunks)
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (4 hunks)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
**/recordings/**
📄 CodeRabbit inference engine (CLAUDE.md)
Store HTTP interaction recordings for tests under recordings/ directories for Polly.js replay
Files:
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-propagate-agent-name-to-tool-call-spans_3577231859/recording.har
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-agent-name-for-generateObject-with-agent-metadata_1744675110/recording.har
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-default-run-ai-span-name-when-no-agent-metadata-is-provided_1300307112/recording.har
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-agent-name-for-streamText-with-agent-metadata_4019571713/recording.har
🧠 Learnings (7)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-default-run-ai-span-name-when-no-agent-metadata-is-provided_1300307112/recording.har
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
📚 Learning: 2025-08-12T13:58:25.583Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/client/dataset/column.ts:41-47
Timestamp: 2025-08-12T13:58:25.583Z
Learning: In packages/traceloop-sdk/src/lib/client/dataset/column.ts, when the user mentioned "its working there is transform func", they were confirming that the response transformer is functioning correctly and converting snake_case API responses to camelCase, which means the Column class should access camelCase properties (createdAt, updatedAt) from the transformed ColumnResponse data.
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
transformLLMSpans (459-473)
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (2)
packages/traceloop-sdk/src/lib/tracing/decorators.ts (1)
tool (289-295)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (23-75)
🔇 Additional comments (7)
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-default-run-ai-span-name-when-no-agent-metadata-is-provided_1300307112/recording.har (1)
1-172: Test fixture properly structured and placed.

The HAR recording is correctly formatted for Polly.js replay, captures a realistic OpenAI API interaction with function-calling tools, and aligns with the test scenario ("no agent metadata, default run.ai span name"). Placement in the recordings/ directory follows coding guidelines.
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-agent-name-for-streamText-with-agent-metadata_4019571713/recording.har (1)
1-144: HAR fixture structure looks correct

Recording follows the existing Polly HAR structure and matches the expected OpenAI streaming response shape; fine as a test fixture.
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
1777-1780: Root-span simulation now matches production behavior

Passing "research_assistant" as spanName into transformLLMSpans correctly mirrors the post-name-transformation root span and validates the new agent-root logic.

packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-propagate-agent-name-to-tool-call-spans_3577231859/recording.har (1)
23-167: Updated HAR recording is structurally consistent

Only identifiers, cookie values, and timing/header fields changed; JSON structure is intact and suitable for Polly replay.
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-agent-name-for-generateObject-with-agent-metadata_1744675110/recording.har (1)
1-172: New generateObject HAR fixture is well-formed

Recording cleanly captures the json_schema-based generateObject call and response; appropriate as a Polly fixture.
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
7-24: Agent-aware span naming for top-level vs child AI spans looks correct

The extended constants and updated transformAiSdkSpanNames cleanly separate concerns:
- Top-level AI spans (ai.generateText, ai.streamText, ai.generateObject, ai.streamObject) are renamed to the agent name when ai.telemetry.metadata.agent is present, leaving them as ai.* when no agent metadata is provided.
- Child spans (*.doGenerate / *.doStream) still map to internal names like text.generate, object.generate, text.stream, object.stream, so downstream processing remains stable.
- Combined with passing span.name into transformLLMSpans, this gives you root spans named after the agent while preserving consistent child span naming.

The behavior aligns with the new tests and the stated PR objective.

Also applies to: 501-527
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (1)
20-21: Tests correctly cover agent naming for generateText/generateObject/streamText

The three tests accurately exercise:
- Default root naming ("ai.generateText") when no agent metadata is provided
- Agent-named root spans (with AGENT span kind and entity name) for generateObject and streamText when agent metadata is present
- Proper export flushing via traceloop.forceFlush() before assertions

The ai package dependency (^5.0.22) is properly configured and supports generateObject, streamText, and the experimental_telemetry options used throughout.
Important
Looks good to me! 👍
Reviewed 3bc47c5 in 44 seconds. Click for details.
- Reviewed 119 lines of code in 2 files
- Skipped 0 files when reviewing.
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-preserve-original-AI-SDK-span-name-when-no-agent-metadata-is-provided_1735519430/recording.har:3
- Draft comment:
Updated the recording name and associated timestamps/cookie values to reflect the new behavior, i.e. preserving the original AI SDK span name when no agent metadata is provided. Ensure these fixture updates stay in sync with expected behavior. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
2. packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts:236
- Draft comment:
Renamed test description from 'use default "run.ai" span name' to 'preserve original AI SDK span name' to better reflect the intended behavior when agent metadata is missing. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
Workflow ID: wflow_rf6T22D6rzzwLjcW
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 0
🧹 Nitpick comments (2)
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-preserve-original-AI-SDK-span-name-when-no-agent-metadata-is-provided_1735519430/recording.har (1)
1-172: HAR fixture is well-formed and correctly placed; consider optional sanitization of opaque IDs

The HAR looks structurally valid for Polly.js (HAR 1.2, single POST entry, realistic OpenAI request/response) and the path under recordings/ matches the intended usage for HTTP interaction fixtures. It also correctly represents a “no agent metadata” scenario that your tests rely on.

If you want to further harden/simplify fixtures, you could optionally normalize or redact opaque values like cf-ray, x-request-id, openai-project, _cfuvid, and similar IDs/cookies to reduce churn on re-recording and avoid storing third-party identifiers verbatim in the repo. Not strictly necessary, but worth considering for future recordings.

As per coding guidelines, this aligns with the requirement that **/recordings/** store Polly.js HTTP interaction recordings.

packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (1)
239-318: Good fallback test for no-agent case, with minor tool duplication

This test nicely covers the case where AI SDK spans should retain the original ai.generateText name and lack agent-related attributes when no agent metadata is supplied. The only nit is that the calculator tool definition duplicates the one in the first test; consider extracting a shared helper to reduce duplication and keep future changes to the tool logic in one place.
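The shared helper suggested here might look like the following. It is hypothetical: the real calculator tool in the test file may use different parameter names, and the AI SDK `tool()` wrapper would call this helper from its execute callback.

```typescript
// Hypothetical shared calculator helper extracted from the duplicated tool
// definitions in the two tests.
type CalculatorInput = {
  a: number;
  b: number;
  operation: "add" | "subtract" | "multiply" | "divide";
};

function runCalculator({ a, b, operation }: CalculatorInput): number {
  switch (operation) {
    case "add":
      return a + b;
    case "subtract":
      return a - b;
    case "multiply":
      return a * b;
    case "divide":
      return a / b;
  }
}
```

Both tests could then pass `runCalculator` into their tool definitions, keeping the arithmetic in one place.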
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-preserve-original-AI-SDK-span-name-when-no-agent-metadata-is-provided_1735519430/recording.har (1 hunks)
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (4 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/recordings/**
📄 CodeRabbit inference engine (CLAUDE.md)
Store HTTP interaction recordings for tests under recordings/ directories for Polly.js replay
Files:
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-preserve-original-AI-SDK-span-name-when-no-agent-metadata-is-provided_1735519430/recording.har
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
🧠 Learnings (7)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to **/recordings/** : Store HTTP interaction recordings for tests under recordings/ directories for Polly.js replay
Applied to files:
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-preserve-original-AI-SDK-span-name-when-no-agent-metadata-is-provided_1735519430/recording.har
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-preserve-original-AI-SDK-span-name-when-no-agent-metadata-is-provided_1735519430/recording.har
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (2)
packages/traceloop-sdk/src/lib/tracing/decorators.ts (1)
tool (289-295)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (23-75)
SpanAttributes(23-75)
🔇 Additional comments (4)
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (4)
20-20: Expanded ai imports correctly match new usages

Importing generateObject, streamText, and tool alongside generateText lines up with how the tests use these helpers later in the file; no issues here.
144-147: Root span selection now validates agent-based naming

Finding the root span by the agent name and asserting its existence with a clear message directly exercises the new naming behavior for top-level AI spans when agent metadata is present; this is an appropriate and focused expectation.
Also applies to: 158-161
320-378: Solid coverage for generateObject agent naming and attributes

This test cleanly verifies that generateObject spans are renamed to the agent name and carry the expected GEN_AI_AGENT_NAME, TRACELOOP_SPAN_KIND = "agent", and TRACELOOP_ENTITY_NAME attributes when agent metadata is present, which aligns with the intended transformation behavior and uses SpanAttributes constants instead of raw strings, as per semantic-conventions guidance.
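The assertion pattern described here can be sketched as a predicate over an exported span. The attribute string values shown are assumptions for illustration; in the real tests they come from the SpanAttributes constants in @traceloop/ai-semantic-conventions.

```typescript
// Sketch of the agent-root-span assertions; the literal attribute strings are
// assumed stand-ins for the SpanAttributes constants.
const SpanAttributes = {
  GEN_AI_AGENT_NAME: "gen_ai.agent.name",
  TRACELOOP_SPAN_KIND: "traceloop.span.kind",
  TRACELOOP_ENTITY_NAME: "traceloop.entity.name",
} as const;

type ExportedSpan = { name: string; attributes: Record<string, unknown> };

function isAgentRootSpan(span: ExportedSpan, agent: string): boolean {
  return (
    span.name === agent &&
    span.attributes[SpanAttributes.GEN_AI_AGENT_NAME] === agent &&
    span.attributes[SpanAttributes.TRACELOOP_SPAN_KIND] === "agent" &&
    span.attributes[SpanAttributes.TRACELOOP_ENTITY_NAME] === agent
  );
}
```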
380-437: Streaming scenario is well-covered for agent-based span naming

The streamText test fully consumes the stream and then asserts that the root span is renamed to the agent and tagged with the correct agent-related attributes, giving good coverage of the streaming path for the new naming logic.
Important
Looks good to me! 👍
Reviewed 22b79f9 in 1 minute and 19 seconds. Click for details.
- Reviewed 1136 lines of code in 6 files
- Skipped 0 files when reviewing.
- Skipped posting 8 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:35
- Draft comment:
New constants for cache tokens are defined. Consider adding explicit type annotations for clarity. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:374
- Draft comment:
transformCacheCreationInputTokens correctly maps and removes the legacy attribute. Consider adding a comment on expected value type (e.g. number or string). - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:383
- Draft comment:
transformCacheReadInputTokens implementation is clear and follows the pattern. No issues detected. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
4. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:392
- Draft comment:
transformCachedInputTokens properly overwrites the cache read attribute if present; the ordering of these functions is critical. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
5. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:501
- Draft comment:
The order of cache token transformation calls (creation, read, then cached) ensures that 'cachedInputTokens' takes precedence. Consider adding an inline comment explaining this ordering for future maintainability. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
6. packages/traceloop-sdk/test/ai-sdk-integration.test.ts:273
- Draft comment:
New integration test for cache tokens transformation is comprehensive; it verifies removal of old keys and correct mapping of cache creation and read tokens. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
7. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:942
- Draft comment:
The new test suite for cache tokens in ai-sdk-transformations covers multiple edge cases, including overriding and string values. Excellent coverage. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
8. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1964
- Draft comment:
Overall, the new attribute transformation tests (including telemetry metadata and tool calls) integrate well with existing functionality. No critical issues found. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
Workflow ID: wflow_NAbAr5c82Wm42HHN
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Important
Looks good to me! 👍
Reviewed cc54052 in 1 minute and 37 seconds. Click for details.
- Reviewed 162 lines of code in 4 files
- Skipped 0 files when reviewing.
- Skipped posting 3 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:431
- Draft comment:
Good use of telemetry metadata to extract and attach agent info. Consider adding logging or error details in the catch blocks to help diagnose parsing issues in production. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:548
- Draft comment:
Span name transformation uses agent metadata for top-level spans. It might be helpful to clarify in comments that child spans retain their mapped names when agent metadata isn’t applied. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:397
- Draft comment:
When both 'ai.usage.cacheReadInputTokens' and 'ai.usage.cachedInputTokens' are present, the latter overwrites the former (last write wins). Confirm this behavior is intentional as per tests. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
Workflow ID: wflow_lEnRXLusaNUq0zNd
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 0
🧹 Nitpick comments (3)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
943-1170: Cache token tests look correct, but some names/comments are misleading

The new cache token tests exercise all relevant paths (creation, read, OpenAI cachedInputTokens, coexistence, zero/string values, and integration) and match the implementation of the cache transforms and calculateTotalTokens.

Two minor clarity issues you may want to tidy up:
- Test name vs behavior mismatch: it("should prefer cacheReadInputTokens over cachedInputTokens when both present", ...) actually verifies that cachedInputTokens wins (last write wins), and the inline comment also explains that. Renaming the test (e.g. to “should prefer cachedInputTokens when both present (last write wins)”) would avoid confusion.
- Comment vs behavior mismatch: in it("should not modify attributes when cache token attributes are not present", ...), the comment says “Should preserve input tokens and add total tokens”, but with only input tokens present calculateTotalTokens intentionally does not set a total. The assertions don’t depend on this, so either adjust or drop that part of the comment.

These are cosmetic only; the assertions correctly reflect the current transform behavior.
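The "last write wins" behavior the test actually exercises can be sketched like this. The legacy ai.usage.* keys are taken from the review discussion; the gen_ai.usage.cache_read_input_tokens target key is an assumed stand-in for the real SpanAttributes constant, and the function bodies are simplified.

```typescript
// Sketch of the cache-token transform ordering; not the real implementations.
type Attrs = Record<string, number | string | undefined>;

function transformCacheReadInputTokens(attrs: Attrs): void {
  const v = attrs["ai.usage.cacheReadInputTokens"];
  if (v !== undefined) {
    attrs["gen_ai.usage.cache_read_input_tokens"] = Number(v);
    delete attrs["ai.usage.cacheReadInputTokens"];
  }
}

function transformCachedInputTokens(attrs: Attrs): void {
  const v = attrs["ai.usage.cachedInputTokens"];
  if (v !== undefined) {
    // Called after transformCacheReadInputTokens, so this value wins when
    // both legacy keys are present (last write wins).
    attrs["gen_ai.usage.cache_read_input_tokens"] = Number(v);
    delete attrs["ai.usage.cachedInputTokens"];
  }
}
```

Running the two transforms in this order on attributes carrying both legacy keys leaves only the normalized key, holding the cachedInputTokens value, which is exactly what the renamed test would describe.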
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)
**273-446: Strong coverage for prompt-cache token transformation, with minor coupling to fixture token count**

This test does a good job validating both directions of the cache token mapping: it forces a cacheable prompt, checks `gen_ai.usage.*` input/output and cache-* attributes via `SpanAttributes`, and asserts that the legacy `ai.usage.*` cache keys are fully stripped from spans. Resetting the exporter between calls and guarding the cache-token assertions behind existence checks keeps it robust across providers. As per coding guidelines, using `SpanAttributes` for the `gen_ai.*` keys is exactly what we want here.

One nit: the strict expectation of `6900` cache-read tokens ties the test to a particular HAR/model/tokenizer. That's fine for a recorded fixture, but if recordings are regenerated or models change you'll need to update this constant. If you want to reduce that coupling, you could keep only the `> 0` check or assert a looser lower bound.

packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
**7-24: Agent-aware span naming for top-level AI SDK spans looks correct; consider de-duplicating the top-level list**

The new constants and `HANDLED_SPAN_NAMES` entries for `ai.streamText`, `ai.generateObject`, and `ai.streamObject`, together with `TOP_LEVEL_AI_SPANS` and the updated `transformAiSdkSpanNames`, give you the intended behavior:

- Top-level AI spans (`ai.generateText` / `ai.streamText` / `ai.generateObject` / `ai.streamObject`) are renamed to the agent name only when `ai.telemetry.metadata.agent` is present.
- Child spans (`*.doGenerate` / `*.doStream`) are always normalized to `text.generate`, `object.generate`, `text.stream`, `object.stream`, preserving the previous observability surface.

This aligns with the PR's goal and keeps non-agent spans from being renamed unexpectedly. As per coding guidelines, you're correctly keeping semantic attribute strings out of here and delegating them to `SpanAttributes` elsewhere.

One small maintainability improvement: `transformTelemetryMetadata` defines its own `topLevelSpanNames` array with the same four values as `TOP_LEVEL_AI_SPANS`. Reusing `TOP_LEVEL_AI_SPANS` there would avoid the risk of future drift when new top-level operations are added.

Also applies to: 540-565
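The renaming rule described above can be sketched as follows. This is a hedged illustration, not the PR's actual code: `FakeSpan` is a simplified stand-in for the OTel span type, and the `HANDLED_SPAN_NAMES` entries shown are assumed from the review context.

```typescript
// Hedged sketch of the agent-aware renaming rule described in the review.
const TOP_LEVEL_AI_SPANS = [
  "ai.generateText",
  "ai.streamText",
  "ai.generateObject",
  "ai.streamObject",
];

const HANDLED_SPAN_NAMES: Record<string, string> = {
  "ai.generateText.doGenerate": "text.generate",
  "ai.generateObject.doGenerate": "object.generate",
  "ai.streamText.doStream": "text.stream",
  "ai.streamObject.doStream": "object.stream",
};

interface FakeSpan {
  name: string;
  attributes: Record<string, unknown>;
}

function transformSpanName(span: FakeSpan): string {
  const agentName = span.attributes["ai.telemetry.metadata.agent"];
  if (typeof agentName === "string" && TOP_LEVEL_AI_SPANS.includes(span.name)) {
    // Top-level AI span with agent metadata is renamed to the agent name.
    return agentName;
  }
  if (span.name in HANDLED_SPAN_NAMES) {
    // Child spans are always normalized to the OpenLLMetry form.
    return HANDLED_SPAN_NAMES[span.name];
  }
  // Top-level spans without agent metadata keep their original names.
  return span.name;
}
```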
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
- packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-capture-and-transform-cache-tokens-from-OpenAI-with-prompt-caching_4027203422/recording.har (1 hunks)
- packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-capture-and-transform-cache-tokens-from-OpenAI-with-prompt-caching_4027203422/recording.har.backup (1 hunks)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (6 hunks)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2 hunks)
✅ Files skipped from review due to trivial changes (2)
- packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-capture-and-transform-cache-tokens-from-OpenAI-with-prompt-caching_4027203422/recording.har
- packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-capture-and-transform-cache-tokens-from-OpenAI-with-prompt-caching_4027203422/recording.har.backup
🧰 Additional context used
📓 Path-based instructions (3)
packages/ai-semantic-conventions/src/SemanticAttributes.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧠 Learnings (6)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
- packages/ai-semantic-conventions/src/SemanticAttributes.ts
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
- packages/ai-semantic-conventions/src/SemanticAttributes.ts
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
- packages/ai-semantic-conventions/src/SemanticAttributes.ts
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (3)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
- `SpanAttributes` (23-78)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
- `transformLLMSpans` (495-512)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
- `SpanAttributes` (23-78)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
- `SpanAttributes` (23-78)
🔇 Additional comments (4)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
**38-40: Cache token span attributes are consistent and correctly scoped**

The new cache creation/read attributes follow the existing `gen_ai.usage.*` convention and are correctly centralized in `SpanAttributes`, matching how downstream transforms consume them. This aligns with the guidance to define AI/LLM span attribute constants here.

packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
**2006-2008: Agent metadata test now accurately simulates production spanName flow**

Passing `"research_assistant"` as the `spanName` argument to `transformLLMSpans` matches the real pipeline, where `transformAiSdkSpanNames` renames the top-level span before attribute transformation. This keeps the agent root-span test aligned with runtime behavior.

packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (2)
**35-38: Cache token transformations correctly normalize AI SDK attributes to gen_ai cache usage attributes**

The new cache-related constants and transformer functions cleanly map AI SDK–specific usage fields into the semantic `gen_ai.usage.*` space:

- `ai.usage.cacheCreationInputTokens` → `SpanAttributes.LLM_USAGE_CACHE_CREATION_INPUT_TOKENS`
- `ai.usage.cacheReadInputTokens` → `SpanAttributes.LLM_USAGE_CACHE_READ_INPUT_TOKENS`
- `ai.usage.cachedInputTokens` (OpenAI-specific) → `SpanAttributes.LLM_USAGE_CACHE_READ_INPUT_TOKENS`

Each transformer deletes the legacy `ai.usage.*` key after mapping, and `transformLLMSpans` calls them before `calculateTotalTokens`, so cache tokens don't interfere with the standard input/output token sum. This matches the semantics exercised in the new integration test and keeps all externally consumed keys on the `SpanAttributes` surface. Based on learnings, this is the right place to centralize these attribute mappings.

Also applies to: 375-405, 506-508
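A minimal sketch of the mapping and ordering just described. The `gen_ai.usage.cache_read_input_tokens` key string is an illustrative stand-in; the real code imports the constant from `SpanAttributes` rather than hardcoding it.

```typescript
// Hedged sketch of the cache-token transforms; key strings are illustrative
// stand-ins for SpanAttributes.LLM_USAGE_CACHE_READ_INPUT_TOKENS.
const GEN_AI_CACHE_READ = "gen_ai.usage.cache_read_input_tokens";

function transformCacheReadInputTokens(attrs: Record<string, unknown>): void {
  // Anthropic-style key.
  if ("ai.usage.cacheReadInputTokens" in attrs) {
    attrs[GEN_AI_CACHE_READ] = attrs["ai.usage.cacheReadInputTokens"];
    delete attrs["ai.usage.cacheReadInputTokens"];
  }
}

function transformCachedInputTokens(attrs: Record<string, unknown>): void {
  // OpenAI-style key; because this transform runs second, it wins when both
  // source keys are present (last write wins).
  if ("ai.usage.cachedInputTokens" in attrs) {
    attrs[GEN_AI_CACHE_READ] = attrs["ai.usage.cachedInputTokens"];
    delete attrs["ai.usage.cachedInputTokens"];
  }
}
```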
**466-483: Agent span-kind logic now correctly scopes AGENT to top-level AI spans**

The updated `transformTelemetryMetadata` behavior (always setting `SpanAttributes.GEN_AI_AGENT_NAME` when an `agent` metadata key is present, but only marking `TRACELOOP_SPAN_KIND = AGENT` and `TRACELOOP_ENTITY_NAME` when the span is top-level, i.e. has the original AI span name or was renamed to the agent) cleanly distinguishes agent root spans from their children. This keeps downstream analysis from over-tagging nested LLM/tool spans while still propagating the agent name as context to all spans via `GEN_AI_AGENT_NAME`. The logic is sound and lines up with the agent-aware naming introduced in `transformAiSdkSpanNames`.
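As a rough sketch of that scoping (attribute key strings here are illustrative stand-ins for the `SpanAttributes` constants, and the top-level check is simplified to a name comparison):

```typescript
// Hedged sketch: the agent name propagates to every span, but only the
// renamed root span is marked with the AGENT span kind.
function applyAgentMetadata(
  spanName: string,
  attributes: Record<string, unknown>,
): void {
  const agentName = attributes["ai.telemetry.metadata.agent"];
  if (typeof agentName !== "string") return;

  // Every span in the trace carries the agent name as context.
  attributes["gen_ai.agent.name"] = agentName;

  // Only the top-level span (renamed to the agent) becomes an AGENT entity.
  if (spanName === agentName) {
    attributes["traceloop.span.kind"] = "agent";
    attributes["traceloop.entity.name"] = agentName;
  }
}
```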
```ts
  }
};

const transformCacheCreationInputTokens = (
```
@galkleinman maybe we can just transform any `ai.usage.` prefix to `gen_ai.usage.`?
I think I prefer the explicitness. It makes it easier for us to understand what should be published and what not.
```ts
  if (span.name in HANDLED_SPAN_NAMES) {
    span.updateName(HANDLED_SPAN_NAMES[span.name]);
    // Check if this is a top-level AI span with agent metadata
    const agentName = span.attributes[`${AI_TELEMETRY_METADATA_PREFIX}agent`];
```
I think there's already a method for checking / extracting the agent name
```ts
  }
  if (span.name in HANDLED_SPAN_NAMES) {
    span.updateName(HANDLED_SPAN_NAMES[span.name]);
    // Check if this is a top-level AI span with agent metadata
```
nit: redundant comment (this entire section is overly commented which reduces readability)
Important
Looks good to me! 👍
Reviewed 7e550f0 in 1 minute and 31 seconds. Click for details.
- Reviewed 152 lines of code in 3 files
- Skipped 0 files when reviewing.
- Skipped posting 1 draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1038
- Draft comment:
The test name 'should prefer cacheReadInputTokens over cachedInputTokens when both present' is confusing since the final value (256) comes from 'cachedInputTokens'. Consider renaming it to 'should prefer cachedInputTokens over cacheReadInputTokens when both present' for clarity.
- Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_fRmnmcRbvnhAvmwa
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 2
🧹 Nitpick comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
**462-480: Simplify agent span detection logic.**

The condition on lines 472-475 checks both `spanName === agentName` AND `topLevelSpanNames.includes(spanName)`. Given that span names are already transformed before this function runs (per the comment on lines 463-464), the `topLevelSpanNames.includes(spanName)` check would only be true in edge cases where renaming failed.

Consider simplifying to just check `spanName === agentName` since:

- If the span was successfully renamed, `spanName === agentName` will be true
- If renaming failed for some reason, setting agent attributes on a span that kept its original name might be incorrect anyway

```diff
-  const topLevelSpanNames = [
-    AI_GENERATE_TEXT,
-    AI_STREAM_TEXT,
-    AI_GENERATE_OBJECT,
-    AI_STREAM_OBJECT,
-  ];
-
   if (
     spanName &&
-    (spanName === agentName || topLevelSpanNames.includes(spanName))
+    spanName === agentName
   ) {
     attributes[SpanAttributes.TRACELOOP_SPAN_KIND] =
       TraceloopSpanKindValues.AGENT;
     attributes[SpanAttributes.TRACELOOP_ENTITY_NAME] = agentName;
   }
```

Alternatively, if the fallback check is intentional for edge cases, update the comment to explain why.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (6 hunks)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
- `SpanAttributes` (23-78)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
- `transformLLMSpans` (491-508)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
- `SpanAttributes` (23-78)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and test
🔇 Additional comments (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (2)
**8-24: LGTM!**

The new span constants and mappings follow the established patterns and align with the PR objectives to support `generateObject`, `streamText`, and `streamObject` operations.
**375-401: Verify cache token precedence behavior.**

Both `transformCacheReadInputTokens` and `transformCachedInputTokens` map to the same target attribute `LLM_USAGE_CACHE_READ_INPUT_TOKENS`. When both source attributes are present, `cachedInputTokens` takes precedence due to execution order (lines 503-504 in `transformLLMSpans`).

This behavior should be documented or verified:

- Anthropic SDK uses `cacheReadInputTokens`
- OpenAI SDK uses `cachedInputTokens`
- Is it expected that both would appear simultaneously?
- If so, which should take precedence?

Consider adding a comment explaining the precedence or verifying that both attributes won't appear together in practice:

```ts
const transformCachedInputTokens = (attributes: Record<string, any>): void => {
  // OpenAI format: cachedInputTokens maps to the same semantic meaning as
  // Anthropic's cacheReadInputTokens. Takes precedence when both are present
  // (though this should be rare in practice).
  if (AI_USAGE_CACHED_INPUT_TOKENS in attributes) {
    attributes[SpanAttributes.LLM_USAGE_CACHE_READ_INPUT_TOKENS] =
      attributes[AI_USAGE_CACHED_INPUT_TOKENS];
    delete attributes[AI_USAGE_CACHED_INPUT_TOKENS];
  }
};
```
```ts
  // Check if this is a top-level AI span with agent metadata
  const agentName = span.attributes[`${AI_TELEMETRY_METADATA_PREFIX}agent`];
  const isTopLevelSpan = TOP_LEVEL_AI_SPANS.includes(span.name);

  if (agentName && typeof agentName === "string" && isTopLevelSpan) {
    // Use agent name for top-level AI spans when agent metadata is provided
    span.updateName(agentName);
  } else if (!isTopLevelSpan) {
    // Only transform child spans (text.generate, object.generate, etc.)
    // Keep top-level spans with their original names when no agent metadata
    span.updateName(HANDLED_SPAN_NAMES[span.name]);
  }
  // else: keep the original span name for top-level spans without agent metadata
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Search for references to old standardized span names
rg -n 'run\.ai|stream\.ai|object\.ai' --type=ts --type=tsx --type=md -C 2
```

Repository: traceloop/openllmetry-js

Length of output: 95

🏁 Script executed:

```shell
# Check test files for span naming expectations
fd '.*\.test\.(ts|tsx)$|.*\.spec\.(ts|tsx)$' | xargs rg -l 'transformAiSdkSpanNames|span.*name' | head -20
```

Length of output: 841

🏁 Script executed:

```shell
# Search for HANDLED_SPAN_NAMES usage and definition to understand the mapping
rg -n 'HANDLED_SPAN_NAMES' --type=ts --type=tsx -B 5 -A 10
```

Length of output: 95

🏁 Script executed:

```shell
# Look for documentation about span naming or breaking changes
rg -n 'breaking|BREAKING|span.*name|ai\.generateText|run\.ai' packages/traceloop-sdk/README.md --type=md -C 3
```

Length of output: 140

🏁 Script executed:

```shell
# Check git history or comments about this change
rg -n 'agent.*metadata|top-level.*span|preserve.*name' packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts --type=ts -B 3 -A 3
```

Length of output: 1770

🏁 Script executed:

```shell
# Read the ai-sdk-transformations test file to see test coverage
cat -n packages/traceloop-sdk/test/ai-sdk-transformations.test.ts | head -100
```

Length of output: 3758

🏁 Script executed:

```shell
# Search for HANDLED_SPAN_NAMES and TOP_LEVEL_AI_SPANS definitions without type filter
rg -n 'HANDLED_SPAN_NAMES|TOP_LEVEL_AI_SPANS' packages/traceloop-sdk/src/
```

Length of output: 655

🏁 Script executed:

```shell
# Look for CHANGELOG or release notes
fd -e md -e txt | xargs rg -l 'CHANGELOG|BREAKING|Release' 2>/dev/null | head -10
```

Length of output: 92

🏁 Script executed:

```shell
# Check the full ai-sdk-transformations.ts file structure around these definitions
cat -n packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts | sed -n '1,60p'
```

Length of output: 2867

🏁 Script executed:

```shell
# Look for tests that verify the new span naming behavior with agent metadata
rg -n 'agentName|agent.*metadata|TOP_LEVEL_AI_SPANS' packages/traceloop-sdk/test/ai-sdk-transformations.test.ts -B 3 -A 3
```

Length of output: 700

🏁 Script executed:

```shell
# Get TOP_LEVEL_AI_SPANS definition
cat -n packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts | sed -n '537,545p'
```

Length of output: 344

🏁 Script executed:

```shell
# Look for test cases specifically about span name transformation with and without agent
rg -n 'updateName|span.*name|transformAiSdkSpanNames' packages/traceloop-sdk/test/ai-sdk-transformations.test.ts -B 5 -A 8 | head -80
```

Length of output: 2944

🏁 Script executed:

```shell
# Check if there are tests for the behavior of preserving original names for top-level spans without agent
rg -n 'ai\.generateText|run\.ai|stream\.ai' packages/traceloop-sdk/test/ai-sdk-transformations.test.ts -B 3 -A 5
```

Length of output: 50

🏁 Script executed:

```shell
# Look for any CHANGELOG or release notes that document this behavior change
find . -name 'CHANGELOG*' -o -name 'HISTORY*' -o -name 'RELEASES*' 2>/dev/null | head -5
```

Length of output: 277

🏁 Script executed:

```shell
# Check CHANGELOG for documentation of this change
cat CHANGELOG.md | head -150
```

Length of output: 8906

🏁 Script executed:

```shell
# Search for tests that specifically call transformAiSdkSpanNames
rg -n 'transformAiSdkSpanNames' packages/traceloop-sdk/test/ -A 20 -B 2
```

Length of output: 2079

🏁 Script executed:

```shell
# Look for test cases with span names like ai.generateText or ai.streamText
rg -n '"ai\.generateText"|"ai\.streamText"|"ai\.generateObject"' packages/traceloop-sdk/test/ai-sdk-transformations.test.ts -B 5 -A 10
```

Length of output: 50

🏁 Script executed:

```shell
# Search for describe blocks in the test file
rg -n 'describe\(' packages/traceloop-sdk/test/ai-sdk-transformations.test.ts | grep -i 'name\|transform'
```

Length of output: 1135

🏁 Script executed:

```shell
# Look at the end of the test file to see all test suites
tail -300 packages/traceloop-sdk/test/ai-sdk-transformations.test.ts | head -100
```

Length of output: 3064

🏁 Script executed:

```shell
# Search for all uses of transformAiSdkSpanNames in test file
grep -n "transformAiSdkSpanNames" packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
```

Length of output: 96

🏁 Script executed:

```shell
# Get line count of test file to understand its size
wc -l packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
```

Length of output: 131

🏁 Script executed:

```shell
# Search for test cases that might test span names with top-level spans
rg -n 'ai\.generateText|ai\.streamText|run\.ai|stream\.ai|TOP_LEVEL' packages/traceloop-sdk/test/ai-sdk-transformations.test.ts -B 2 -A 5
```

Length of output: 50

🏁 Script executed:

```shell
# Search for where transformAiSdkSpanNames is called in production code
rg -n 'transformAiSdkSpanNames' packages/traceloop-sdk/src/ -B 2 -A 2
```

Length of output: 1410

🏁 Script executed:

```shell
# Check if it's tested in integration tests
rg -n 'transformAiSdkSpanNames' packages/traceloop-sdk/test/ -B 3 -A 10
```

Length of output: 1261

🏁 Script executed:

```shell
# Look for span exporting or span processing pipelines that might use this
rg -n 'export.*Span|span.*process|transformAiSdk' packages/traceloop-sdk/src/lib/tracing/ -l
```

Length of output: 286

🏁 Script executed:

```shell
# Check the span exporter or batch processor
rg -n 'onBatchEnd|onEnd|processSpans' packages/traceloop-sdk/src/lib/tracing/ -A 5 -B 2
```

Length of output: 2202

🏁 Script executed:

```shell
# Look at the onSpanEnd implementation
cat -n packages/traceloop-sdk/src/lib/tracing/span-processor.ts | sed -n '180,210p'
```

Length of output: 1099

🏁 Script executed:

```shell
# Check the ai-sdk-integration test file
cat -n packages/traceloop-sdk/test/ai-sdk-integration.test.ts | head -150
```

Length of output: 5925

🏁 Script executed:

```shell
# Search for span name assertions in integration tests
rg -n 'span.*name|updateName|ai\.generateText|run\.ai' packages/traceloop-sdk/test/ai-sdk-integration.test.ts -B 3 -A 3
```

Length of output: 2098

🏁 Script executed:

```shell
# Look at the entire ai-sdk-agent-integration.test.ts file which may test agent-related span naming
head -200 packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
```

Length of output: 6127

🏁 Script executed:

```shell
# Search for test cases without agent metadata in agent integration test
rg -n 'generateText|streamText|generateObject|streamObject' packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts | grep -v metadata
```

Length of output: 543

🏁 Script executed:

```shell
# Check if ai-sdk-integration.test.ts (without agent) tests top-level spans
rg -n 'ai\.generateText|run\.ai|stream\.ai|generateText.*span' packages/traceloop-sdk/test/ai-sdk-integration.test.ts -B 5 -A 5
```

Length of output: 1572

🏁 Script executed:

```shell
# Look for any tests that expect the old span names (run.ai, stream.ai)
rg -n '"run\.ai"|"stream\.ai"|"object\.ai"' packages/traceloop-sdk/test/
```

Length of output: 50

🏁 Script executed:

```shell
# Check what spans are actually looked for in ai-sdk-integration.test (non-agent flows)
grep -n 'span.*find\|span\.name.*===' packages/traceloop-sdk/test/ai-sdk-integration.test.ts | head -20
```

Length of output: 501

🏁 Script executed:

```shell
# Get full context of the test at line 299 in ai-sdk-agent-integration.test.ts
cat -n packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts | sed -n '260,310p'
```

Length of output: 1909
Document this as a breaking change in release notes.
The code intentionally preserves original span names (e.g., ai.generateText) for top-level spans when no agent metadata is present, whereas previously they would be transformed to standardized names (e.g., run.ai). While this behavior is tested in ai-sdk-agent-integration.test.ts (lines 298-305), it represents a breaking change for users with existing monitoring dashboards or trace queries that depend on the old standardized span names.
Add a CHANGELOG entry explicitly noting this breaking change, and provide migration guidance for users who may need to update their dashboards to reference the new original span names instead of the transformed ones.
🤖 Prompt for AI Agents
In packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts around lines
549 to 562, document the behavior change that top-level AI spans without agent
metadata now preserve their original names instead of being normalized (e.g.,
keeping ai.generateText instead of becoming run.ai) by adding a CHANGELOG entry
under Unreleased/Breaking Changes naming the affected file and test
(ai-sdk-agent-integration.test.ts lines 298-305), describing the exact span-name
change and its impact on dashboards/queries, and include migration guidance
advising users to (a) update dashboards/trace queries to use the original span
names or (b) ensure agent metadata is present so names are normalized, plus a
short note on how to detect affected spans and a recommended rollout step
(search/replace or aliasing mapping in observability tools).
```ts
it("should prefer cacheReadInputTokens over cachedInputTokens when both present", () => {
  const attributes = {
    "ai.usage.cacheReadInputTokens": 512,
    "ai.usage.cachedInputTokens": 256,
  };

  transformLLMSpans(attributes);

  assert.strictEqual(
    attributes[SpanAttributes.LLM_USAGE_CACHE_READ_INPUT_TOKENS],
    256,
  );
  assert.strictEqual(
    attributes["ai.usage.cacheReadInputTokens"],
    undefined,
  );
  assert.strictEqual(attributes["ai.usage.cachedInputTokens"], undefined);
});
```
Test name contradicts implementation.
The test name says "prefer cacheReadInputTokens over cachedInputTokens" but the assertion expects the value from cachedInputTokens (256), not cacheReadInputTokens (512).
Looking at the implementation in ai-sdk-transformations.ts, transformCachedInputTokens() is called after transformCacheReadInputTokens() (lines 503-504), causing cachedInputTokens to overwrite the value set by cacheReadInputTokens.
Either:
- Update the test name to reflect actual behavior: "prefer cachedInputTokens over cacheReadInputTokens when both present"
- Or swap the transformation order in lines 503-504 of `ai-sdk-transformations.ts` if `cacheReadInputTokens` should actually take precedence
- it("should prefer cacheReadInputTokens over cachedInputTokens when both present", () => {
+ it("should prefer cachedInputTokens over cacheReadInputTokens when both present", () => {📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| it("should prefer cacheReadInputTokens over cachedInputTokens when both present", () => { | |
| const attributes = { | |
| "ai.usage.cacheReadInputTokens": 512, | |
| "ai.usage.cachedInputTokens": 256, | |
| }; | |
| transformLLMSpans(attributes); | |
| assert.strictEqual( | |
| attributes[SpanAttributes.LLM_USAGE_CACHE_READ_INPUT_TOKENS], | |
| 256, | |
| ); | |
| assert.strictEqual( | |
| attributes["ai.usage.cacheReadInputTokens"], | |
| undefined, | |
| ); | |
| assert.strictEqual(attributes["ai.usage.cachedInputTokens"], undefined); | |
| }); | |
| it("should prefer cachedInputTokens over cacheReadInputTokens when both present", () => { | |
| const attributes = { | |
| "ai.usage.cacheReadInputTokens": 512, | |
| "ai.usage.cachedInputTokens": 256, | |
| }; | |
| transformLLMSpans(attributes); | |
| assert.strictEqual( | |
| attributes[SpanAttributes.LLM_USAGE_CACHE_READ_INPUT_TOKENS], | |
| 256, | |
| ); | |
| assert.strictEqual( | |
| attributes["ai.usage.cacheReadInputTokens"], | |
| undefined, | |
| ); | |
| assert.strictEqual(attributes["ai.usage.cachedInputTokens"], undefined); | |
| }); |
🤖 Prompt for AI Agents
In packages/traceloop-sdk/src/ai-sdk-transformations.ts around lines 503-504,
the transform functions are called in the wrong order so cachedInputTokens
currently overwrites cacheReadInputTokens; swap the two calls so
transformCachedInputTokens is invoked first and transformCacheReadInputTokens is
invoked afterward (so cacheReadInputTokens takes precedence), then run tests and
update any affected comments or test expectations if needed.
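The ordering issue described here can be reproduced with a small standalone sketch (the function and attribute names are illustrative, not the actual implementation): both transforms write to the same target attribute, so whichever runs last wins.

```typescript
// Illustrative sketch of why call order decides precedence between the two
// cache-token transforms. The target name follows the gen_ai convention
// mentioned elsewhere in this PR; everything else is a simplified stand-in.

const TARGET_KEY = "gen_ai.usage.cache_read_input_tokens";

function moveAttribute(attrs: Record<string, unknown>, source: string): void {
  if (source in attrs) {
    attrs[TARGET_KEY] = attrs[source];
    delete attrs[source];
  }
}

function transformLLMSpansSketch(attrs: Record<string, unknown>): void {
  // Order mirrors lines 503-504 as discussed: cacheReadInputTokens first,
  // then cachedInputTokens -- so cachedInputTokens (256) overwrites 512.
  moveAttribute(attrs, "ai.usage.cacheReadInputTokens");
  moveAttribute(attrs, "ai.usage.cachedInputTokens");
}

const attrs: Record<string, unknown> = {
  "ai.usage.cacheReadInputTokens": 512,
  "ai.usage.cachedInputTokens": 256,
};
transformLLMSpansSketch(attrs);
console.log(attrs[TARGET_KEY]); // 256 -- matching the test's assertion
```

Swapping the two `moveAttribute` calls is what would make `cacheReadInputTokens` (512) take precedence instead.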
Important
Looks good to me! 👍
Reviewed 6af240f in 57 seconds. Click for details.
- Reviewed `334` lines of code in `1` file
- Skipped `0` files when reviewing
- Skipped posting `1` draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-capture-and-transform-cache-tokens-from-OpenAI-with-prompt-caching_4027203422/recording.har.backup:1
- Draft comment: Backup HAR file removed. Confirm this removal is intentional and that replaced/updated recordings are correctly managed (consider using .gitignore for auto-generated backups).
- Reason this comment was not posted: confidence changes required (`50%` <= threshold `50%`).
Workflow ID: wflow_S7cokl1Y76FTiUg7
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
Important
Looks good to me! 👍
Reviewed 80ea668 in 1 minute and 58 seconds. Click for details.
- Reviewed `61` lines of code in `1` file
- Skipped `0` files when reviewing
- Skipped posting `2` draft comments. View those below.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:457
- Draft comment: Removed the block that sets TRACELOOP_SPAN_KIND and TRACELOOP_ENTITY_NAME based on agentName in transformTelemetryMetadata. Confirm this behavior is now fully handled elsewhere (e.g. via transformAiSdkSpanNames) and that no downstream logic depends on these attributes.
- Reason this comment was not posted: comment did not seem useful (confidence useful = `0%` <= threshold `50%`). The comment is asking the PR author to confirm that the behavior is handled elsewhere and that no downstream logic depends on the removed attributes. This violates the rule against asking the author to confirm their intention or to ensure behavior is intended. The comment does not provide a specific code suggestion or ask for a specific test to be written.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:539
- Draft comment: In transformAiSdkSpanNames, the agent name is fetched from `${AI_TELEMETRY_METADATA_PREFIX}agent`, but transformTelemetryMetadata deletes telemetry attributes. Consider using the persisted attribute (GEN_AI_AGENT_NAME) to avoid missing agent data.
- Reason this comment was not posted:
Comment was on unchanged code.
Workflow ID: wflow_VIti5SXq6fMx1Vxq
Actionable comments posted: 2
♻️ Duplicate comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
542-546: Breaking change already flagged: top-level span naming behavior changed. As noted in a previous review, this represents a breaking change where top-level AI spans without agent metadata now preserve their original names (e.g., `ai.generateText`) instead of being normalized to standardized names (e.g., `run.ai`). The previous review provides detailed guidance on documenting this change in the CHANGELOG with migration guidance.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- `packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts` (6 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧠 Learnings (5)
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🔇 Additional comments (6)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (6)
7-24: LGTM! Consistent extension of AI operation support. The new constants for `streamText`, `generateObject`, and `streamObject` operations follow the existing naming patterns and are properly registered in `HANDLED_SPAN_NAMES`.

35-38: LGTM! Cache token constants properly defined. The new cache token usage constants align with OpenAI's prompt caching metrics and follow the existing attribute naming pattern.

493-495: LGTM! Cache token transformations properly integrated. The new cache token transformation functions are correctly invoked within the transformation pipeline.

527-532: LGTM! Constant definition is clear. This constant properly centralizes the list of top-level AI operations. As noted in the earlier comment on lines 460-474, this should be reused in `transformTelemetryMetadata` to eliminate duplication.

1-5: LGTM! Proper semantic attributes usage. The file correctly imports and uses semantic attribute constants from `@traceloop/ai-semantic-conventions` rather than hardcoding strings, as required by the coding guidelines. Based on coding guidelines, semantic attributes must be imported from `@traceloop/ai-semantic-conventions`.

539: No existing agent name extraction helper method was found in the codebase. The direct attribute access at line 539 is the appropriate pattern used throughout the file for reading metadata attributes. This is not an optimization opportunity. Likely an incorrect or invalid review comment.
Add getAgentNameFromAttributes helper to avoid code duplication and improve readability when extracting agent names from AI SDK telemetry metadata. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
Previously, both transformTelemetryMetadata and transformAiSdkSpanNames had their own logic to extract agent name from ai.telemetry.metadata.agent attributes. This consolidates both to use the shared getAgentNameFromAttributes helper, eliminating code duplication. Addresses PR feedback about existing agent extraction code.
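A sketch of what such a shared helper might look like, based only on details in this thread (the prefix constant, the `agent` key, and the strict-string check); the actual helper in `ai-sdk-transformations.ts` may differ:

```typescript
// Hypothetical sketch of a shared agent-name extraction helper.
const AI_TELEMETRY_METADATA_PREFIX = "ai.telemetry.metadata."; // assumed prefix

function getAgentNameFromAttributes(
  attributes: Record<string, unknown>,
): string | undefined {
  const agent = attributes[`${AI_TELEMETRY_METADATA_PREFIX}agent`];
  // Per the review discussion, only strictly-string values are returned;
  // non-string values are ignored rather than coerced.
  return typeof agent === "string" ? agent : undefined;
}

console.log(
  getAgentNameFromAttributes({
    "ai.telemetry.metadata.agent": "billing-agent",
  }),
); // "billing-agent"
```

Both `transformTelemetryMetadata` and `transformAiSdkSpanNames` can then call this one function instead of duplicating the lookup.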
Important
Looks good to me! 👍
Reviewed a8a1618 in 1 minute and 20 seconds. Click for details.
- Reviewed `26` lines of code in `1` file
- Skipped `0` files when reviewing
- Skipped posting `2` draft comments. View those below.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:437
- Draft comment: Using getAgentNameFromAttributes centralizes the agent name extraction. Note that it now returns the agent only if it's strictly a string, whereas the previous code coerced non-string values. Ensure this change is intended.
- Reason this comment was not posted: confidence changes required (`30%` <= threshold `50%`).
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:448
- Draft comment: Removed the inline check for 'agent' metadata within the loop. Verify that using the helper function covers all expected cases without the need to override in the loop.
- Reason this comment was not posted: confidence changes required (`20%` <= threshold `50%`).
Workflow ID: wflow_JqQKddLEdBdtQmwt
The cachedInputTokens field from Vercel AI SDK is ambiguous - it includes both tokens read from cache AND tokens written during cache creation. Mapping this to gen_ai.usage.cache_read_input_tokens is semantically incorrect, as cache_read_input_tokens should only represent tokens retrieved from an existing cache entry. We now only support the precise provider-specific fields: - cacheReadInputTokens (tokens read from cache) - cacheCreationInputTokens (tokens written when creating cache) This provides accurate, unambiguous cache token metrics. Addresses CodeRabbit PR feedback about semantic accuracy.
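Under these assumptions, the resulting mapping can be sketched as follows. The read-token target name (`gen_ai.usage.cache_read_input_tokens`) appears in the commit message above; the creation-token target name is an assumption modeled on the same convention, and the function itself is illustrative rather than the shipped code.

```typescript
// Illustrative mapping of only the unambiguous provider-specific cache fields,
// per the commit message above. The ambiguous cachedInputTokens field gets no
// mapping because it mixes cache reads with cache-creation writes.

const CACHE_FIELD_MAP: Record<string, string> = {
  "ai.usage.cacheReadInputTokens": "gen_ai.usage.cache_read_input_tokens",
  "ai.usage.cacheCreationInputTokens":
    "gen_ai.usage.cache_creation_input_tokens", // assumed target name
};

function transformCacheTokens(attrs: Record<string, unknown>): void {
  for (const [source, target] of Object.entries(CACHE_FIELD_MAP)) {
    if (typeof attrs[source] === "number") {
      attrs[target] = attrs[source];
      delete attrs[source];
    }
  }
  // "ai.usage.cachedInputTokens" is intentionally left untouched.
}

const attrs: Record<string, unknown> = {
  "ai.usage.cacheReadInputTokens": 128,
  "ai.usage.cacheCreationInputTokens": 64,
  "ai.usage.cachedInputTokens": 999,
};
transformCacheTokens(attrs);
console.log(attrs["gen_ai.usage.cache_read_input_tokens"]); // 128
```

This keeps read and creation counts as two distinct, precisely defined metrics instead of one ambiguous total.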
Important
Looks good to me! 👍
Reviewed 7cfbcbd in 1 minute and 5 seconds. Click for details.
- Reviewed `88` lines of code in `2` files
- Skipped `0` files when reviewing
- Skipped posting `4` draft comments. View those below.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:5
- Draft comment: Typo in the imported variable name: 'ASSOCATION_PROPERTIES_KEY' appears misspelled. Consider renaming to 'ASSOCIATION_PROPERTIES_KEY' for clarity.
- Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:40
- Draft comment: The test file is very comprehensive but quite long. Consider splitting tests into separate modules or grouping them with helper functions to improve readability and maintainability.
- Reason this comment was not posted: confidence changes required (`33%` <= threshold `50%`).
3. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:500
- Draft comment: There are repeated JSON.stringify assertions for expected tool parameters. Consider extracting common expected structures into helper constants/functions to reduce duplicate code.
- Reason this comment was not posted: confidence changes required (`33%` <= threshold `50%`).
4. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1957
- Draft comment: The tests for agent metadata transformations effectively cover both root and child span scenarios. Great job ensuring that the agent name, span kind, and entity name are set appropriately based on the span name.
- Reason this comment was not posted: confidence changes required (`0%` <= threshold `50%`).
Workflow ID: wflow_oqtj5gXRogdBItQO
Important
Looks good to me! 👍
Reviewed ee3b6bc in 2 minutes and 8 seconds. Click for details.
- Reviewed `769` lines of code in `5` files
- Skipped `0` files when reviewing
- Skipped posting `4` draft comments. View those below.
1. packages/ai-semantic-conventions/src/SemanticAttributes.ts:35
- Draft comment: The semantic attributes for cache tokens (LLM_USAGE_CACHE_CREATION_INPUT_TOKENS and LLM_USAGE_CACHE_READ_INPUT_TOKENS) have been removed. Confirm that this removal is intentional and that any downstream users or documentation are updated accordingly.
- Reason this comment was not posted: comment did not seem useful (confidence useful = `0%` <= threshold `50%`). The comment is asking the PR author to confirm the intention behind the removal of semantic attributes. It also suggests ensuring that downstream users or documentation are updated. This falls under the rule of not asking the author to confirm their intention or ensure updates, which is not allowed.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:32
- Draft comment: Constants for cache tokens ('AI_USAGE_CACHE_CREATION_INPUT_TOKENS' and 'AI_USAGE_CACHE_READ_INPUT_TOKENS') have been removed. Verify that removal of these constants and the associated functionality aligns with the telemetry requirements.
- Reason this comment was not posted: comment did not seem useful (confidence useful = `0%` <= threshold `50%`). The comment is asking the author to verify that the removal of constants aligns with telemetry requirements. This is a request for confirmation, which violates the rule against asking the PR author to confirm their intention or ensure behavior is intended.
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:378
- Draft comment: The transformation functions for cache token metrics (transformCacheCreationInputTokens and transformCacheReadInputTokens) and their calls in transformLLMSpans have been removed. Confirm that cache token telemetry is no longer needed and this removal won't affect legacy data.
- Reason this comment was not posted: comment did not seem useful (confidence useful = `30%` <= threshold `50%`). The comment is asking the PR author to confirm that the removal of certain functions and their calls is intentional and won't affect legacy data. This falls under asking the author to confirm their intention and ensure behavior, which is against the rules. However, it does point out a specific change that could have significant implications, which might be useful for the author to consider. Despite this, the comment still violates the rule of not asking for confirmation of intention.
4. packages/traceloop-sdk/test/ai-sdk-integration.test.ts:269
- Draft comment: The test case for capturing and transforming cache tokens from OpenAI with prompt caching has been removed. Ensure that this removal is intentional and that any related documentation or feature flags are updated.
- Reason this comment was not posted: decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = `0%` vs. threshold = `50%`).

  This comment is asking the PR author to "ensure" something and to confirm their intention. According to the rules, comments that ask the PR author to confirm their intention, to double-check things, or to ensure the behavior is intended should be removed. The rule explicitly states: "Do NOT ask the PR author to confirm their intention, to explain, to double-check things, to ensure the behavior is intended" and "If the comment starts with 'Verify that...' or 'Ensure that...', it is likely not useful." This comment starts with "Ensure that" and is asking for confirmation rather than pointing out a specific problem. The removal of a test is clearly intentional (tests don't get deleted by accident), and there's no evidence of a bug or issue; it's just a test being removed.

  Could there be a legitimate concern here about removing test coverage for an important feature? Perhaps the cache token functionality is critical and removing its test could indicate a problem with the implementation or a regression risk. While test coverage is important, the comment doesn't identify any specific problem; it just asks the author to confirm their intention. If the author removed the test, they clearly intended to do so. The comment provides no actionable feedback about what's wrong or what needs to be fixed. According to the rules, we should assume the author had a good reason for the change unless there's strong evidence otherwise.

  This comment should be deleted because it violates the rule against asking the PR author to confirm their intention or "ensure" something. It starts with "Ensure that" and doesn't provide actionable feedback about a specific problem.
Workflow ID: wflow_bKQQN59KknphlfMm
Important
Enhance AI SDK to use agent names in span names when agent metadata is available, with updated span naming logic and expanded tests.
- Span-name transformation logic updated in `ai-sdk-transformations.ts`.
- New `ai-sdk-agent-integration.test.ts` covering agent-based span naming, streaming/generation, and cache-token scenarios.
- New test recordings under `recordings/`.
- `getAgentNameFromAttributes()` added to extract agent names from attributes in `ai-sdk-transformations.ts`.
- `transformAiSdkSpanNames()` updated to use agent names for top-level spans in `ai-sdk-transformations.ts`.

This description was created by Ellipsis for ee3b6bc. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit
New Features
- `generateObject` and `streamText` with agent metadata, enabling consistent agent-based span naming.

Tests
✏️ Tip: You can customize this high-level summary in your review settings.