fix(vercel): remove duplicate token attributes (prompt/input and completion/output) #831
avivhalfon merged 11 commits into main from
Conversation
Walkthrough
Token normalization now prefers the canonical `gen_ai.usage.*` attributes.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Source as Trace Source
participant Transformer as ai-sdk-transformations
participant Span as Span Attributes
Note over Source,Transformer: Incoming span attrs may include legacy `ai.usage.*`, old `llm.usage.*`, and/or new `gen_ai.usage.*`
Source->>Transformer: deliver span attributes
alt gen_ai input/output present
Transformer->>Transformer: delete legacy `ai.*` and old `llm.*`
Transformer->>Span: ensure `gen_ai.usage.input_tokens` & `gen_ai.usage.output_tokens`
Transformer->>Transformer: compute total = input + output
else gen_ai missing but legacy present
Transformer->>Transformer: map `ai.usage.promptTokens` -> `gen_ai.usage.input_tokens` (if absent)
Transformer->>Transformer: map `ai.usage.completionTokens` -> `gen_ai.usage.output_tokens` (if absent)
Transformer->>Transformer: delete legacy `ai.*` and old `llm.*`
Transformer->>Transformer: compute total = mapped input + mapped output
end
alt both input & output numeric
Transformer->>Span: write `llm.usage.total_tokens`
end
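The flow in the diagram above can be sketched in TypeScript. This is an illustrative reconstruction, not the SDK's actual implementation: the function name `normalizeTokenUsage` is made up, and the attribute keys are taken from the diagram.

```typescript
// Sketch of the token normalization flow described in the sequence diagram.
type Attributes = Record<string, unknown>;

function normalizeTokenUsage(attributes: Attributes): void {
  const INPUT = "gen_ai.usage.input_tokens";
  const OUTPUT = "gen_ai.usage.output_tokens";

  // Prefer gen_ai.* when present; otherwise backfill from legacy ai.usage.*.
  if (!(INPUT in attributes) && "ai.usage.promptTokens" in attributes) {
    attributes[INPUT] = attributes["ai.usage.promptTokens"];
  }
  if (!(OUTPUT in attributes) && "ai.usage.completionTokens" in attributes) {
    attributes[OUTPUT] = attributes["ai.usage.completionTokens"];
  }

  // Delete legacy `ai.*` and old `llm.*`-style duplicates in either branch.
  delete attributes["ai.usage.promptTokens"];
  delete attributes["ai.usage.completionTokens"];
  delete attributes["gen_ai.usage.prompt_tokens"];
  delete attributes["gen_ai.usage.completion_tokens"];

  // Write the total only when both components are present and numeric.
  const input = Number(attributes[INPUT]);
  const output = Number(attributes[OUTPUT]);
  if (INPUT in attributes && OUTPUT in attributes && !isNaN(input) && !isNaN(output)) {
    attributes["llm.usage.total_tokens"] = input + output;
  }
}
```

Note the presence checks use the `in` operator rather than truthiness, so zero-token spans are still normalized.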
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (5 passed)
Important
Looks good to me! 👍
Reviewed everything up to 93b2388 in 1 minute and 47 seconds.
- Reviewed 56 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:323
- Draft comment: Removed token assignment in transformPromptTokens/transformCompletionTokens now just deletes duplicate keys. Ensure downstream logic doesn't rely on the removed duplicated attributes.
- Reason this comment was not posted: Confidence that it is useful = 0% <= threshold 50%. The comment asks to ensure that downstream logic doesn't rely on removed duplicated attributes. This is a request for confirmation or verification, which violates the rules. The comment does not provide a specific code suggestion or ask for a specific test to be written.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:348
- Draft comment: calculateTotalTokens uses a truthy check (`if (inputTokens && outputTokens)`) which may skip calculation when token values are 0. Consider checking for undefined/null explicitly.
- Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%. The comment is technically correct about the behavior of truthy checks with 0 values. However, looking at the diff, the truthy check `if (inputTokens && outputTokens)` was already present in the code before this PR (it was `if (promptTokens && completionTokens)` before). The PR only changed which attributes are being read, not the conditional logic itself. According to the rules, I should only keep comments that are about changes made in the diff. Since the truthy check logic was not changed (only the variable names/sources changed), this comment is about unchanged code logic and should be deleted. While the truthy check pattern existed before, one could argue that since the PR is touching this function and changing the attribute sources, it's a reasonable time to fix this potential bug. The comment is actionable and identifies a real issue where 0 tokens would be incorrectly handled. Even though the function was modified, the specific conditional logic that the comment addresses was not changed in this PR. The rules explicitly state to only comment on changes made by the diff. The truthy check existed before and continues to exist after - only the variable names changed. This is pre-existing code behavior, not something introduced or modified by this PR. This comment should be deleted because it addresses pre-existing code logic (the truthy check) that was not changed in this PR. The diff only changed which attributes are being read (from AI_USAGE_* to GEN_AI_USAGE_*), not the conditional logic itself.
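The zero-token hazard the draft comment describes can be illustrated with a standalone sketch (these helper functions are hypothetical, not the SDK's code):

```typescript
// A truthy check treats 0 as "missing", so a span with zero input tokens
// would silently skip the total calculation. An explicit null check does not.
function totalTruthy(input?: number, output?: number): number | undefined {
  if (input && output) return input + output; // skips when either value is 0
  return undefined;
}

function totalExplicit(input?: number, output?: number): number | undefined {
  if (input != null && output != null) return input + output; // 0 is a valid count
  return undefined;
}

console.log(totalTruthy(0, 25));   // undefined: 0 is falsy
console.log(totalExplicit(0, 25)); // 25
```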
Workflow ID: wflow_VOgT69d504NTCJ9P
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧠 Learnings (3)
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-63)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and test
Important
Looks good to me! 👍
Reviewed 705fd1e in 45 seconds.
- Reviewed 321 lines of code in 4 files
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/decorators.test.ts:515
- Draft comment:
Consider using assert.strictEqual instead of assert.equal to avoid unintended type coercion and maintain consistency. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/decorators.test.ts:40
- Draft comment:
The test setup and helper code is quite repetitive. Consider refactoring common initialization and helper routines to improve maintainability. - Reason this comment was not posted:
Confidence changes required: 33% <= threshold 50%. None.
Workflow ID: wflow_xEFPo6JJdVO7RZvD
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
LLM_RESPONSE_MODEL: "gen_ai.response.model",
LLM_USAGE_PROMPT_TOKENS: "gen_ai.usage.prompt_tokens",
LLM_USAGE_COMPLETION_TOKENS: "gen_ai.usage.completion_tokens",
LLM_USAGE_INPUT_TOKENS: "gen_ai.usage.input_tokens",
Take this from opentelemetry semantic conventions, no need to define them here
attributes[`${SpanAttributes.LLM_USAGE_COMPLETION_TOKENS}`];
const inputTokens = attributes[SpanAttributes.LLM_USAGE_INPUT_TOKENS];
const outputTokens = attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS];
const promptTokens = attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS];
Why? If you deleted them they will never be here
attributes[AI_USAGE_PROMPT_TOKENS];
delete attributes[AI_USAGE_PROMPT_TOKENS];
if (SpanAttributes.LLM_USAGE_INPUT_TOKENS in attributes) {
if (AI_USAGE_PROMPT_TOKENS in attributes) {
This is where the token data is - if you deleted it before setting the input tokens attribute you lose it forever
Here we already have input
attributes[`${SpanAttributes.LLM_USAGE_PROMPT_TOKENS}`] =
attributes[AI_USAGE_PROMPT_TOKENS];
delete attributes[AI_USAGE_PROMPT_TOKENS];
if (SpanAttributes.LLM_USAGE_INPUT_TOKENS in attributes) {
This is overly complex. Did you check whether the gen_ai input usage tokens attribute is already set by the Vercel AI SDK?
Yes, the Vercel SDK sets both input usage and prompt tokens (duplication) in text.generate,
but in run.ai it sets only prompt tokens and not input usage.
Meaning I need to check in the transform: when we already have input, keep only it; when only prompt is present, change it to input.
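The branch logic described in this reply can be sketched as follows. The function name and constant names are illustrative (modeled on the identifiers visible in the diff above), not the merged implementation:

```typescript
type Attrs = Record<string, unknown>;

const AI_USAGE_PROMPT_TOKENS = "ai.usage.promptTokens"; // legacy Vercel key
const GEN_AI_INPUT_TOKENS = "gen_ai.usage.input_tokens"; // canonical key

function transformPromptTokensSketch(attributes: Attrs): void {
  if (GEN_AI_INPUT_TOKENS in attributes) {
    // text.generate case: both keys present -> keep input_tokens, drop the duplicate.
    delete attributes[AI_USAGE_PROMPT_TOKENS];
  } else if (AI_USAGE_PROMPT_TOKENS in attributes) {
    // prompt-only case: map the legacy key to the canonical one, then drop it.
    attributes[GEN_AI_INPUT_TOKENS] = attributes[AI_USAGE_PROMPT_TOKENS];
    delete attributes[AI_USAGE_PROMPT_TOKENS];
  }
}
```

Deleting the legacy key only after the canonical one is confirmed or written avoids the data-loss issue raised earlier in the thread.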
705fd1e to e67a5fc (Compare)
Important
Looks good to me! 👍
Reviewed e67a5fc in 50 seconds.
- Reviewed 346 lines of code in 5 files
- Skipped 0 files when reviewing
- Skipped posting 5 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/decorators.test.ts:515
- Draft comment:
In the manual LLM instrumentation test, the assertion for LLM_USAGE_PROMPT_TOKENS compares to the string "15". Consider converting the token count to a number and using a numeric comparison to ensure consistency with token usage types. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/decorators.test.ts:622
- Draft comment:
Several tests compare JSON.stringify outputs (e.g. entity input/output) directly. Use deep equality on deserialized objects to avoid brittle tests if object property order changes. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/test/decorators.test.ts:410
- Draft comment:
In the test for 'should not log prompts if traceContent is disabled', additional assertions confirming that keys for entity input/output and prompt content are completely absent would improve clarity. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/test/decorators.test.ts:640
- Draft comment:
In the Vercel AI spans test, expected token values (input/output/total) are hardcoded. Verify these values either dynamically or document the rationale so future changes in underlying provider responses don't cause false failures. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/traceloop-sdk/test/decorators.test.ts:270
- Draft comment:
Test descriptions and inline comments are clear overall. Consider adding brief inline comments in decorator-based workflow tests to explain the purpose of chained entity name verification for maintainability. - Reason this comment was not posted:
Confidence changes required: 30% <= threshold 50%. None.
Workflow ID: wflow_L15zXbvWMxfaHJ0b
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 0
♻️ Duplicate comments (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)
212-216: Consider verifying absence of legacy token attributes. Same suggestion as the OpenAI test case above: consider adding assertions to verify that legacy duplicate attributes have been removed after transformation.
🧹 Nitpick comments (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)
144-148: Consider verifying absence of legacy token attributes. The assertions correctly verify that the new `gen_ai.usage.input_tokens` and `gen_ai.usage.output_tokens` attributes exist after transformation. However, to more thoroughly validate the deduplication fix, consider also asserting that the legacy duplicate attributes (`ai.usage.promptTokens` and `gen_ai.usage.prompt_tokens`) have been removed. Example:
  // Verify token usage - should be transformed to input/output tokens
  assert.ok(generateTextSpan.attributes["gen_ai.usage.input_tokens"]);
  assert.ok(generateTextSpan.attributes["gen_ai.usage.output_tokens"]);
  assert.ok(generateTextSpan.attributes["llm.usage.total_tokens"]);
+ // Verify legacy duplicates are removed
+ assert.strictEqual(generateTextSpan.attributes["ai.usage.promptTokens"], undefined);
+ assert.strictEqual(generateTextSpan.attributes["gen_ai.usage.prompt_tokens"], undefined);
+ assert.strictEqual(generateTextSpan.attributes["ai.usage.completionTokens"], undefined);
+ assert.strictEqual(generateTextSpan.attributes["gen_ai.usage.completion_tokens"], undefined);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (5)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1 hunks)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts (2 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (11 hunks)
- packages/traceloop-sdk/test/decorators.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts
- packages/traceloop-sdk/test/decorators.test.ts
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-65)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and test
🔇 Additional comments (3)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (3)
320-337: Transformation logic correctly handles dual-source token attributes. The function properly addresses the deduplication issue by handling two scenarios:
- When `gen_ai.usage.input_tokens` already exists (Vercel AI SDK), it removes the duplicate `prompt_tokens` attributes
- When only legacy `ai.usage.promptTokens` exists (run.ai), it maps to `gen_ai.usage.input_tokens`
This ensures a single source of truth for input tokens while maintaining backward compatibility with different instrumentations.
339-356: LGTM: Completion tokens transformation mirrors prompt tokens logic. The function correctly handles the same dual-source scenario for completion/output tokens, ensuring consistency with the prompt tokens transformation.
358-366: LGTM: Total tokens correctly calculated from normalized attributes. The function correctly:
- Reads from the normalized `input_tokens` and `output_tokens` attributes after transformation
- Converts values to numbers to handle potential string types
- Only calculates total when both components exist
The execution order in `transformLLMSpans` (lines 428-430) ensures token transformations complete before this calculation runs.
e67a5fc to 1a416a0 (Compare)
Important
Looks good to me! 👍
Reviewed 1a416a0 in 2 minutes and 13 seconds.
- Reviewed 355 lines of code in 5 files
- Skipped 0 files when reviewing
- Skipped posting 4 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:359
- Draft comment:
The total token calculation uses 'if (inputTokens && outputTokens)' which will fail (skip calculation) if one value is 0, since 0 is falsy. Consider using an explicit check (e.g. !== undefined) so that zero values are correctly handled. - Reason this comment was not posted:
Comment was on unchanged code.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:320
- Draft comment:
The transformPromptTokens (and similarly transformCompletionTokens) conditionally remove legacy token attributes if the new key is present. This dual behavior (keeping legacy fields for manual instrumentation while removing them for Vercel spans) may be confusing. Consider adding inline comments or documentation to clarify when each set is retained. - Reason this comment was not posted:
Comment looked like it was already resolved.
3. packages/traceloop-sdk/test/decorators.test.ts:515
- Draft comment:
In the manual LLM instrumentation test, the assertion expects the legacy token attribute (llm.usage.prompt_tokens) to remain. This differs from the Vercel integration behavior. Consider documenting this divergence to avoid confusion. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/test/decorators.test.ts:680
- Draft comment:
Consider adding tests for scenarios where token counts are 0. This will ensure the total token calculation handles falsy (zero) values correctly. - Reason this comment was not posted:
Comment was on unchanged code.
Workflow ID: wflow_Ojyj7GmLBt68VEuA
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 1
♻️ Duplicate comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
789-826: Prompt token test description doesn't match the fixture. The test name says "delete ai.usage.promptTokens and gen_ai.usage.prompt_tokens (keep input_tokens)", but the fixture only includes:
- "ai.usage.promptTokens"
- "gen_ai.usage.input_tokens"
and never sets `gen_ai.usage.prompt_tokens` (`SpanAttributes.LLM_USAGE_PROMPT_TOKENS`). So the test doesn't actually exercise deletion of the `prompt_tokens` variant.
Either:
- Add `attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS]` to the fixture and assert it becomes `undefined`, or
- Update the description to only mention `ai.usage.promptTokens`.
Same comment applies to the zero-tokens variant below.
828-868: Completion token test description has the same mismatch. Similarly, the completion test title mentions deleting `gen_ai.usage.completion_tokens`, but the fixture only includes:
- "ai.usage.completionTokens"
- "gen_ai.usage.output_tokens"
No attribute keyed by `SpanAttributes.LLM_USAGE_COMPLETION_TOKENS` is ever set, so that part of the behavior isn't validated.
As above, either add `SpanAttributes.LLM_USAGE_COMPLETION_TOKENS` to the fixture and assert it's removed, or trim the description.
🧹 Nitpick comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
870-929: Add total-token tests for zero-count cases. These tests cover:
- Numeric input/output tokens (50/25 → 75).
- String tokens ("50"/"25" → 75).
- Missing input, missing output, and both missing.
They don't cover cases where one or both token counts are 0, which is exactly where the current `if (inputTokens && outputTokens)` logic in `calculateTotalTokens` misbehaves.
After adjusting `calculateTotalTokens` to check for attribute presence instead of truthiness (see comment in `ai-sdk-transformations.ts`), consider adding tests like:

it("should calculate total when one token count is zero", () => {
  const attributes = {
    [SpanAttributes.LLM_USAGE_INPUT_TOKENS]: 0,
    [SpanAttributes.LLM_USAGE_OUTPUT_TOKENS]: 25,
  };
  transformLLMSpans(attributes);
  assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS], 25);
});

it("should calculate total when both token counts are zero", () => {
  const attributes = {
    [SpanAttributes.LLM_USAGE_INPUT_TOKENS]: 0,
    [SpanAttributes.LLM_USAGE_OUTPUT_TOKENS]: 0,
  };
  transformLLMSpans(attributes);
  assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS], 0);
});

This will lock in the intended semantics for zero-token spans.
1015-1149: Integration tests correctly assert canonical input/output tokens and legacy cleanup. The end-to-end tests for `transformAiSdkAttributes` now:
- Provide both `gen_ai.usage.input_tokens`/`output_tokens` and `ai.usage.*Tokens`.
- Assert that only `SpanAttributes.LLM_USAGE_INPUT_TOKENS`/`LLM_USAGE_OUTPUT_TOKENS` remain, with the correct totals.
- Verify that legacy `ai.usage.*Tokens` and `SpanAttributes.LLM_USAGE_PROMPT_TOKENS`/`LLM_USAGE_COMPLETION_TOKENS` are gone.
That matches the new normalization behavior and the PR objective of eliminating duplicated token attributes. As a small enhancement, you might add one integration test where only `ai.usage.promptTokens`/`ai.usage.completionTokens` are present (no `gen_ai.*`) to explicitly cover the "migration-only" path.
As per coding guidelines, using `SpanAttributes` here keeps the tests aligned with the shared semantic conventions.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (5)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1 hunks)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts (2 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (11 hunks)
- packages/traceloop-sdk/test/decorators.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/traceloop-sdk/test/decorators.test.ts
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧰 Additional context used
📓 Path-based instructions (3)
packages/ai-semantic-conventions/src/SemanticAttributes.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
`SpanAttributes` (17-65)
🔇 Additional comments (3)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
30-31: New gen_ai input/output usage attributes correctly added

The new `LLM_USAGE_INPUT_TOKENS`/`LLM_USAGE_OUTPUT_TOKENS` constants match the `gen_ai.usage.input_tokens`/`gen_ai.usage.output_tokens` keys and keep all token attributes centralized in `SemanticAttributes.ts`, which is in line with how the rest of the SDK is structured.

As per coding guidelines, defining AI/LLM span attributes here is the correct approach.
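For orientation, a minimal sketch of what such constants look like — the key strings come from this review, but the object shape and the rest of the file's contents are assumptions, not copied from the PR:

```typescript
// Sketch only: the key strings match the gen_ai convention named in the
// review; the real SemanticAttributes.ts contains many more entries.
const SpanAttributes = {
  LLM_USAGE_INPUT_TOKENS: "gen_ai.usage.input_tokens",
  LLM_USAGE_OUTPUT_TOKENS: "gen_ai.usage.output_tokens",
} as const;

// Consumers reference the constant rather than hardcoding the string.
console.log(SpanAttributes.LLM_USAGE_INPUT_TOKENS);
```

Centralizing the strings this way is what lets the transformation and test code below refer to `SpanAttributes.LLM_USAGE_INPUT_TOKENS` instead of raw literals.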
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (2)
320-336: Prompt token normalization logic looks good

When `gen_ai.usage.input_tokens` is already present, you drop both `ai.usage.promptTokens` and `gen_ai.usage.prompt_tokens`; otherwise you backfill `input_tokens` from `ai.usage.promptTokens`. Using `in` avoids problems with zero values being treated as absent.

Based on learnings, importing and using `SpanAttributes` instead of hardcoded strings is the right pattern here.
339-355: Completion token normalization logic looks good

This mirrors the prompt path: prefer `gen_ai.usage.output_tokens` and delete `ai.usage.completionTokens`/`gen_ai.usage.completion_tokens`, or map from `ai.usage.completionTokens` when `output_tokens` is missing. The behavior matches the PR goal of de-duplicating and normalizing token attributes.

Based on learnings, this keeps the AI SDK spans aligned with the shared semantic conventions.
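The precedence described in these two comments can be sketched as a small standalone function. The attribute keys are taken from this thread; the function name and shape are assumptions for illustration, not the SDK's actual `transformPromptTokens`:

```typescript
type Attrs = Record<string, unknown>;

// Prefer the new gen_ai.usage.input_tokens; backfill it from the legacy
// ai.usage.promptTokens only when the new key is absent, then drop both
// legacy spellings so no duplicate token attributes remain.
function normalizePromptTokens(attrs: Attrs): void {
  if (
    !("gen_ai.usage.input_tokens" in attrs) &&
    "ai.usage.promptTokens" in attrs
  ) {
    attrs["gen_ai.usage.input_tokens"] = attrs["ai.usage.promptTokens"];
  }
  delete attrs["ai.usage.promptTokens"];
  delete attrs["gen_ai.usage.prompt_tokens"];
}

// `in` checks key presence, so a legitimate zero count is not treated as absent.
const attrs: Attrs = { "ai.usage.promptTokens": 0 };
normalizePromptTokens(attrs);
```

The completion-token path would mirror this with `output_tokens`/`completionTokens` keys.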
Important
Looks good to me! 👍
Reviewed 00e2529 in 2 minutes and 10 seconds. Click for details.
- Reviewed 101 lines of code in 3 files
- Skipped 1 file when reviewing.
- Skipped posting 5 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/ai-semantic-conventions/package.json:38
- Draft comment:
Added dependency '@opentelemetry/semantic-conventions'. Verify if this should instead be a peer dependency to prevent version conflicts. - Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% This comment clearly violates the rule "Do NOT ask the PR author to confirm their intention, to explain, to double-check things, to ensure the behavior is intended, to make sure their change is tested, or similar. If the comments starts with 'Verify that...' or 'Ensure that...', it is likely not useful." The comment literally starts with "Verify if..." and is asking the author to confirm whether their dependency choice is correct. It's also speculative - it suggests there "might" be version conflicts but doesn't demonstrate that there definitely are any. The comment doesn't provide strong evidence of an actual problem, just raises a hypothetical concern. Could there be a legitimate architectural concern here about peer dependencies vs regular dependencies that the tool is trying to surface? Perhaps in OpenTelemetry ecosystems, semantic-conventions packages are typically peer dependencies to avoid version mismatches across packages. Even if there's a valid architectural pattern being referenced, the comment is phrased as "Verify if..." which explicitly asks the author to confirm rather than stating a definite issue. Without seeing actual evidence of a problem or a clear standard being violated, this is speculative advice that violates the rules. This comment should be deleted because it explicitly asks the PR author to "verify" their choice, which violates the rule against asking authors to confirm their intentions. It's speculative about potential version conflicts without demonstrating an actual problem.
2. packages/ai-semantic-conventions/src/SemanticAttributes.ts:14
- Draft comment:
Importing from 'build/src/index-incubating' may be unstable; consider using a stable public API export if available. - Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 15% vs. threshold = 50% The comment is making a speculative suggestion - it says "may be unstable" and "consider using a stable public API if available". This violates the rule against speculative comments. The comment doesn't definitively know if there's a stable alternative, and it's asking the author to investigate rather than pointing out a clear issue. The author intentionally made this change to use these specific imports, and without seeing the package.json or knowing what alternatives exist, this is just speculation. The comment also doesn't provide concrete evidence that this is wrong or will cause problems. Perhaps the import path really is problematic and will break in production builds. The "build/src" path and "incubating" naming could indicate real stability concerns that the author should be aware of. Maybe this is common knowledge in the OpenTelemetry ecosystem. Even if there are potential stability concerns, the comment is still speculative ("may be unstable", "if available"). It doesn't provide definitive evidence of a problem or a concrete alternative. The author made this change deliberately, and without strong evidence that it's wrong, we should assume they knew what they were doing. This falls under "speculative comments" that should be removed. This comment should be deleted because it's speculative ("may be unstable", "if available") and doesn't provide definitive evidence of a problem or a concrete alternative solution. It violates the rule against speculative comments.
3. packages/ai-semantic-conventions/src/SemanticAttributes.ts:32
- Draft comment:
Mapping token attributes to imported constants improves consistency. Ensure this aligns with all downstream usages. - Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. The comment starts with a positive note about the change, which is informative. However, it then asks the author to ensure alignment with downstream usages, which is a general request for verification. This violates the rule against asking the author to ensure behavior is intended or tested.
4. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:321
- Draft comment:
Refactor in transformPromptTokens: the conditional now sets the new token key only if absent and then unconditionally cleans up legacy keys to avoid duplication. - Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative, describing what the refactor does without suggesting any changes or asking for clarification. It doesn't provide any actionable feedback or raise any concerns about the code.
5. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:331
- Draft comment:
Similarly, transformCompletionTokens now conditionally assigns the new key and removes legacy attributes; ensure downstream logic accepts this naming convention. - Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. The comment is asking the author to ensure that downstream logic accepts a naming convention change. This falls under asking the author to ensure behavior is intended or tested, which is against the rules.
Workflow ID: wflow_J0iXL0hrVxlmZMSh
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Important
Looks good to me! 👍
Reviewed 155e14a in 1 minute and 39 seconds. Click for details.
- Reviewed 14 lines of code in 1 file
- Skipped 0 files when reviewing.
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/ai-semantic-conventions/src/SemanticAttributes.ts:20
- Draft comment:
Consider moving the // @ts-ignore comment above the import statement rather than inside the destructuring block to ensure TypeScript suppression applies correctly. - Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 15% vs. threshold = 50% The comment is about code style/placement of the @ts-ignore directive. In TypeScript, @ts-ignore suppresses errors on the next line. When placed on line 20 inside the destructuring block, it would suppress errors on line 21 (the closing brace and from clause). If placed above line 17, it would suppress errors on the entire import statement. The current placement is actually unconventional - typically @ts-ignore is placed directly before the statement it's meant to suppress. However, this is a minor style issue and the code likely works either way. The comment is somewhat pedantic and doesn't point to a clear bug - it's more of a style preference. The rules say not to comment on things that are obvious or unimportant, and this seems to fall into that category. The placement might actually matter for TypeScript's error suppression to work correctly. If the @ts-ignore doesn't properly suppress the module resolution error, this could be a real issue. I might be underestimating the importance of correct @ts-ignore placement. While @ts-ignore placement can matter, the code as written likely works (the PR author presumably tested it). The comment uses "consider" and "ensure TypeScript suppression applies correctly" which is speculative language - it's not definitively stating there's a problem. This violates the rule against speculative comments. If the suppression wasn't working, the build would catch it. This comment should be deleted. It's speculative (using "consider" and "ensure"), suggests a style preference rather than a clear bug, and any actual TypeScript error would be caught by the build. The comment doesn't provide strong evidence that the current placement is incorrect.
2. packages/ai-semantic-conventions/src/SemanticAttributes.ts:21
- Draft comment:
Verify that the new module path '@opentelemetry/semantic-conventions/incubating' is stable and version-safe compared to the previous 'build/src/index-incubating' path. - Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is asking the PR author to verify the stability and version-safety of a new module path compared to a previous one. It is not making a specific code suggestion or pointing out a specific issue with the code. It falls under the category of asking the author to ensure something, which is against the rules.
Workflow ID: wflow_1ec5mk1zBMMtmVhD
Actionable comments posted: 0
🧹 Nitpick comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
790-802: Consider verifying legacy `LLM_USAGE_PROMPT_TOKENS` removal.

The test name mentions deleting `gen_ai.usage.prompt_tokens`, but the test doesn't assert that `LLM_USAGE_PROMPT_TOKENS` is removed after transformation. While the integration tests verify this, adding an assertion here would make the individual test more complete and self-documenting.

Apply this diff to add the assertion:

```diff
  transformLLMSpans(attributes);
  assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_INPUT_TOKENS], 50);
  assert.strictEqual(attributes["ai.usage.promptTokens"], undefined);
+ assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS], undefined);
  assert.strictEqual(attributes.someOtherAttr, "value");
```
829-844: Consider verifying legacy `LLM_USAGE_COMPLETION_TOKENS` removal.

The test name mentions deleting `gen_ai.usage.completion_tokens`, but the test doesn't assert that `LLM_USAGE_COMPLETION_TOKENS` is removed after transformation. While the integration tests verify this, adding an assertion here would improve test clarity.

Apply this diff to add the assertion:

```diff
  transformLLMSpans(attributes);
  assert.strictEqual(
    attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS],
    25,
  );
  assert.strictEqual(attributes["ai.usage.completionTokens"], undefined);
+ assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS], undefined);
  assert.strictEqual(attributes.someOtherAttr, "value");
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (3)
- `packages/ai-semantic-conventions/src/SemanticAttributes.ts` (2 hunks)
- `packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts` (1 hunks)
- `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts` (11 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (6)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🔇 Additional comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
870-929: LGTM! Total token calculation tests properly updated.

The tests correctly verify that total tokens are now calculated from `LLM_USAGE_INPUT_TOKENS` and `LLM_USAGE_OUTPUT_TOKENS`, and properly handle edge cases (missing tokens, string values). This aligns with the PR objective to use input/output tokens instead of prompt/completion tokens.
1061-1068: Excellent addition of assertions for legacy token attribute removal.

These new assertions directly verify that the legacy `LLM_USAGE_PROMPT_TOKENS` and `LLM_USAGE_COMPLETION_TOKENS` attributes are removed after transformation, which is the core objective of this PR (preventing duplicate token reporting). This ensures that only the new `input_tokens` and `output_tokens` attributes remain.

Also applies to: 1137-1144
Important
Looks good to me! 👍
Reviewed 9a4fc6c in 58 seconds. Click for details.
- Reviewed 13 lines of code in 1 file
- Skipped 0 files when reviewing.
- Skipped posting 1 draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/ai-semantic-conventions/src/SemanticAttributes.ts:20
- Draft comment:
Good change: switching to @ts-expect-error is preferred as it flags an error if the expected error doesn't occur. Be sure to update or remove the directive if the underlying import eventually resolves without error. - Reason this comment was not posted:
Comment looked like it was already resolved.
Workflow ID: wflow_a38zH3szd2lqJ7R1
Important
Looks good to me! 👍
Reviewed 8f8785f in 50 seconds. Click for details.
- Reviewed 34 lines of code in 1 file
- Skipped 0 files when reviewing.
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:324
- Draft comment:
The multi-line if condition in transformPromptTokens improves readability. Consider consolidating the legacy token mapping logic with its counterpart in transformCompletionTokens to reduce duplication, ensuring cleanup of legacy keys remains intact. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:341
- Draft comment:
The transformCompletionTokens function now uses a similar multi-line if condition which is clearer. Again, if possible, factor out the common legacy token normalization logic to avoid repeating similar code. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
Workflow ID: wflow_Rcj0YUCjxUxanC9z
nirga
left a comment
Maybe keep the testing of the old prompt tokens - just make sure that they don't exist
```js
transformLLMSpans(attributes);

assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS], 0);
```
test that PROMPT TOKENS doesn't exist as well
Important
Looks good to me! 👍
Reviewed 6531e69 in 1 minute and 39 seconds. Click for details.
- Reviewed 26 lines of code in 1 file
- Skipped 0 files when reviewing.
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:798
- Draft comment:
New assertion: Ensure duplicate prompt token attribute (LLM_USAGE_PROMPT_TOKENS) is removed after transformation. - Reason this comment was not posted:
Comment looked like it was already resolved.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:842
- Draft comment:
New assertion: Ensure duplicate completion token attribute (LLM_USAGE_COMPLETION_TOKENS) is removed after transformation. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_GjA8E1rfVxkou7LX
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
1022-1081: Add legacy attributes to input to properly test their removal.

The integration test verifies that legacy token attributes (`LLM_USAGE_PROMPT_TOKENS` and `LLM_USAGE_COMPLETION_TOKENS`) are undefined after transformation (lines 1069-1076), but these attributes are never included in the input. To properly test that the transformation removes legacy attributes when present, add them to the input. Apply this diff:

```diff
 it("should apply all attribute transformations", () => {
   const attributes = {
     "ai.response.text": "Hello!",
     "ai.prompt.messages": JSON.stringify([{ role: "user", content: "Hi" }]),
     "ai.usage.promptTokens": 10,
     "ai.usage.completionTokens": 5,
     "gen_ai.usage.input_tokens": 10,
     "gen_ai.usage.output_tokens": 5,
+    "gen_ai.usage.prompt_tokens": 10,
+    "gen_ai.usage.completion_tokens": 5,
     "ai.model.provider": "openai.chat",
     someOtherAttr: "value",
   };
```

Same issue applies to the integration test starting at line 1098.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts` (11 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (6)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
- `packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts` (1): `transformLLMSpans` (440-454)
- `packages/ai-semantic-conventions/src/SemanticAttributes.ts` (1): `SpanAttributes` (23-75)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and test
🔇 Additional comments (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
879-936: LGTM: Total tokens calculation tests correctly updated.

The tests now properly verify that total tokens are calculated from input and output tokens only when both are present, including edge cases (string values, missing tokens).
Important
Looks good to me! 👍
Reviewed 3045fb3 in 54 seconds. Click for details.
- Reviewed 36 lines of code in 1 file
- Skipped 0 files when reviewing.
- Skipped posting 1 draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:792
- Draft comment:
Adding legacy duplicate keys for prompt tokens ("gen_ai.usage.prompt_tokens") and for completion tokens ("gen_ai.usage.completion_tokens") in the test cases ensures that transformLLMSpans removes these older fields while retaining normalized ones. This improves test coverage for legacy scenarios. - Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative, explaining why certain test cases are added. It doesn't provide a suggestion, ask for confirmation, or point out a potential issue. It violates the rule against making purely informative comments.
Workflow ID: wflow_Y1Wt1i6kLjqwnmp5
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (1)
- `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts` (11 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
- `packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts` (1): `transformLLMSpans` (440-454)
- `packages/ai-semantic-conventions/src/SemanticAttributes.ts` (1): `SpanAttributes` (23-75)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and test
🔇 Additional comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
882-941: LGTM! Total tokens calculation correctly uses input/output tokens.

The tests properly verify that:

- Total tokens are calculated from `LLM_USAGE_INPUT_TOKENS` and `LLM_USAGE_OUTPUT_TOKENS`
- Total is only computed when both input and output are present
- String values are handled correctly

This aligns with the PR objective to normalize token attributes and avoid duplicates.
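The guard these tests exercise can be sketched as follows. This is a hypothetical standalone helper (`computeTotalTokens` is not the actual `transformLLMSpans` code), assuming the `gen_ai.usage.*` attribute keys described in the PR:

```typescript
// Sketch of the total-token guard the tests verify (hypothetical helper,
// not the real transformLLMSpans implementation).
type Attrs = Record<string, unknown>;

function computeTotalTokens(attributes: Attrs): void {
  const input = Number(attributes["gen_ai.usage.input_tokens"]);
  const output = Number(attributes["gen_ai.usage.output_tokens"]);
  // Only write a total when BOTH sides are numeric. Number(undefined) is
  // NaN, so a missing side skips the write; numeric strings coerce cleanly.
  if (!Number.isNaN(input) && !Number.isNaN(output)) {
    attributes["llm.usage.total_tokens"] = input + output;
  }
}

const attrs: Attrs = {
  "gen_ai.usage.input_tokens": "10", // string value, as the tests allow
  "gen_ai.usage.output_tokens": 5,
};
computeTotalTokens(attrs);
console.log(attrs["llm.usage.total_tokens"]); // 15
```

Note the guard intentionally writes nothing when only one side is present, matching the "total only when both exist" behavior the tests assert.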
1025-1221: LGTM! Integration tests comprehensively verify token normalization.

The integration tests properly verify:

- All transformations work together (response, prompts, tokens, vendor, metadata)
- Legacy token attributes (`ai.usage.promptTokens`, `ai.usage.completionTokens`, `LLM_USAGE_PROMPT_TOKENS`, `LLM_USAGE_COMPLETION_TOKENS`) are removed
- New standardized attributes (`LLM_USAGE_INPUT_TOKENS`, `LLM_USAGE_OUTPUT_TOKENS`) are correctly set
- Total tokens are computed from input/output tokens

The tests cover multiple scenarios (generateText, generateObject, with tools) and ensure end-to-end correctness.
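The legacy-to-standard mapping these scenarios cover can be sketched like this. The helper name `normalizeTokenAttrs` is hypothetical; the attribute keys follow the PR, but this is a simplified stand-in for the real `transformLLMSpans` logic:

```typescript
// Minimal sketch of the legacy -> standardized token mapping
// (hypothetical helper, not the actual SDK code).
type Attrs = Record<string, unknown>;

function normalizeTokenAttrs(attributes: Attrs): void {
  // Prefer existing gen_ai.* values; fall back to legacy ai.usage.* only
  // when the standardized key is absent.
  if (
    attributes["gen_ai.usage.input_tokens"] === undefined &&
    attributes["ai.usage.promptTokens"] !== undefined
  ) {
    attributes["gen_ai.usage.input_tokens"] = attributes["ai.usage.promptTokens"];
  }
  if (
    attributes["gen_ai.usage.output_tokens"] === undefined &&
    attributes["ai.usage.completionTokens"] !== undefined
  ) {
    attributes["gen_ai.usage.output_tokens"] = attributes["ai.usage.completionTokens"];
  }
  // Remove legacy keys so no duplicate token attributes remain on the span.
  delete attributes["ai.usage.promptTokens"];
  delete attributes["ai.usage.completionTokens"];
  delete attributes["gen_ai.usage.prompt_tokens"];
  delete attributes["gen_ai.usage.completion_tokens"];
}

const legacyOnly: Attrs = {
  "ai.usage.promptTokens": 7,
  "ai.usage.completionTokens": 3,
};
normalizeTokenAttrs(legacyOnly);
console.log(legacyOnly["gen_ai.usage.input_tokens"]); // 7
console.log(legacyOnly["gen_ai.usage.output_tokens"]); // 3
```

When both families are present, the `gen_ai.*` value wins and the legacy value is simply dropped, which is the duplicate-removal behavior the integration tests pin down.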
| it("should handle zero input tokens", () => { | ||
| const attributes = { | ||
| "ai.usage.promptTokens": 0, | ||
| "gen_ai.usage.input_tokens": 0, | ||
| "gen_ai.usage.prompt_tokens": 0, | ||
| }; | ||
|
|
||
| transformLLMSpans(attributes); | ||
|
|
||
| assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS], 0); | ||
| assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_INPUT_TOKENS], 0); | ||
| assert.strictEqual(attributes["ai.usage.promptTokens"], undefined); | ||
| }); |
Add assertion to verify legacy PROMPT_TOKENS attribute is removed.
The test includes gen_ai.usage.prompt_tokens in the input (line 824) but doesn't verify it's removed after transformation. For consistency with the main test (lines 802-805) and per the previous review comment, add an assertion to check that LLM_USAGE_PROMPT_TOKENS is undefined.
Apply this diff:

```diff
   assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_INPUT_TOKENS], 0);
   assert.strictEqual(attributes["ai.usage.promptTokens"], undefined);
+  assert.strictEqual(
+    attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS],
+    undefined,
+  );
 });
```

🤖 Prompt for AI Agents
In packages/traceloop-sdk/test/ai-sdk-transformations.test.ts around lines 820
to 831, the test for zero input tokens does not assert that the legacy
PROMPT_TOKENS attribute is removed; add an assertion after existing checks to
verify that attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS] is undefined so
the legacy gen_ai.usage.prompt_tokens is cleared by transformLLMSpans, mirroring
the main test's behavior.
| it("should handle zero output tokens", () => { | ||
| const attributes = { | ||
| "ai.usage.completionTokens": 0, | ||
| "gen_ai.usage.output_tokens": 0, | ||
| "gen_ai.usage.completion_tokens": 0, | ||
| }; | ||
|
|
||
| transformLLMSpans(attributes); | ||
|
|
||
| assert.strictEqual( | ||
| attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS], | ||
| 0, | ||
| ); | ||
| assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS], 0); | ||
| assert.strictEqual(attributes["ai.usage.completionTokens"], undefined); | ||
| }); |
Add assertion to verify legacy COMPLETION_TOKENS attribute is removed.
The test includes gen_ai.usage.completion_tokens in the input (line 872) but doesn't verify it's removed after transformation. For consistency with the main test (lines 850-853), add an assertion to check that LLM_USAGE_COMPLETION_TOKENS is undefined.
Apply this diff:

```diff
   assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS], 0);
   assert.strictEqual(attributes["ai.usage.completionTokens"], undefined);
+  assert.strictEqual(
+    attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS],
+    undefined,
+  );
 });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| it("should handle zero output tokens", () => { | |
| const attributes = { | |
| "ai.usage.completionTokens": 0, | |
| "gen_ai.usage.output_tokens": 0, | |
| "gen_ai.usage.completion_tokens": 0, | |
| }; | |
| transformLLMSpans(attributes); | |
| assert.strictEqual( | |
| attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS], | |
| 0, | |
| ); | |
| assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS], 0); | |
| assert.strictEqual(attributes["ai.usage.completionTokens"], undefined); | |
| }); | |
| it("should handle zero output tokens", () => { | |
| const attributes = { | |
| "ai.usage.completionTokens": 0, | |
| "gen_ai.usage.output_tokens": 0, | |
| "gen_ai.usage.completion_tokens": 0, | |
| }; | |
| transformLLMSpans(attributes); | |
| assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS], 0); | |
| assert.strictEqual(attributes["ai.usage.completionTokens"], undefined); | |
| assert.strictEqual( | |
| attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS], | |
| undefined, | |
| ); | |
| }); |
🤖 Prompt for AI Agents
In packages/traceloop-sdk/test/ai-sdk-transformations.test.ts around lines 868
to 879, the test "should handle zero output tokens" sets
gen_ai.usage.completion_tokens but does not assert that the legacy
COMPLETION_TOKENS attribute was removed; add an assertion after the existing
checks to verify attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS] is
undefined (matching the main test pattern), ensuring the legacy key is cleaned
up by transformLLMSpans.
Important
Normalize token attributes, remove legacy fields, and update tests for token handling.
- Normalizes token usage to `LLM_USAGE_INPUT_TOKENS` and `LLM_USAGE_OUTPUT_TOKENS` in `ai-sdk-transformations.ts`.
- Removes legacy `AI_USAGE_PROMPT_TOKENS` and `AI_USAGE_COMPLETION_TOKENS` attributes.
- Computes `LLM_USAGE_TOTAL_TOKENS` only if both input and output tokens are present.
- Adds `@opentelemetry/semantic-conventions` to `package.json`.
- Updates `ai-sdk-integration.test.ts` and `ai-sdk-transformations.test.ts` to check for new token attributes and ensure legacy fields are removed.

This description was created by
for 3045fb3. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit
Bug Fixes
New Features
Tests
Notes