
fix(vercel): remove duplicate token attributes (prompt/input and completion/output)#831

Merged
avivhalfon merged 11 commits into main from
ah/TLP-1192/fix-duplication
Nov 26, 2025

Conversation

Contributor

@avivhalfon avivhalfon commented Nov 23, 2025

Important

Normalize token attributes, remove legacy fields, and update tests for token handling.

  • Behavior:
    • Normalize token attributes to LLM_USAGE_INPUT_TOKENS and LLM_USAGE_OUTPUT_TOKENS in ai-sdk-transformations.ts.
    • Remove legacy token fields AI_USAGE_PROMPT_TOKENS and AI_USAGE_COMPLETION_TOKENS.
    • Calculate LLM_USAGE_TOTAL_TOKENS only if both input and output tokens are present.
  • Dependencies:
    • Add @opentelemetry/semantic-conventions to package.json.
  • Tests:
    • Update tests in ai-sdk-integration.test.ts and ai-sdk-transformations.test.ts to check for new token attributes and ensure legacy fields are removed.
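The behavior above can be sketched in a few lines. This is a hedged illustration only: the attribute strings are taken from the PR description, but the helper name and shape are illustrative, not the actual ai-sdk-transformations.ts code.

```typescript
// Sketch of the normalization described in the PR summary (illustrative).
type Attrs = Record<string, string | number | undefined>;

const AI_USAGE_PROMPT_TOKENS = "ai.usage.promptTokens";
const AI_USAGE_COMPLETION_TOKENS = "ai.usage.completionTokens";
const LLM_USAGE_INPUT_TOKENS = "gen_ai.usage.input_tokens";
const LLM_USAGE_OUTPUT_TOKENS = "gen_ai.usage.output_tokens";
const LLM_USAGE_TOTAL_TOKENS = "llm.usage.total_tokens";

function normalizeTokenAttributes(attrs: Attrs): Attrs {
  // Promote legacy keys only when the modern key is absent, then drop them.
  if (!(LLM_USAGE_INPUT_TOKENS in attrs) && AI_USAGE_PROMPT_TOKENS in attrs) {
    attrs[LLM_USAGE_INPUT_TOKENS] = attrs[AI_USAGE_PROMPT_TOKENS];
  }
  delete attrs[AI_USAGE_PROMPT_TOKENS];

  if (!(LLM_USAGE_OUTPUT_TOKENS in attrs) && AI_USAGE_COMPLETION_TOKENS in attrs) {
    attrs[LLM_USAGE_OUTPUT_TOKENS] = attrs[AI_USAGE_COMPLETION_TOKENS];
  }
  delete attrs[AI_USAGE_COMPLETION_TOKENS];

  // Total is written only when both input and output counts are present.
  const input = attrs[LLM_USAGE_INPUT_TOKENS];
  const output = attrs[LLM_USAGE_OUTPUT_TOKENS];
  if (input !== undefined && output !== undefined) {
    attrs[LLM_USAGE_TOTAL_TOKENS] = Number(input) + Number(output);
  }
  return attrs;
}
```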

This description was created by Ellipsis for 3045fb3. You can customize this summary. It will automatically update as commits are pushed.


Summary by CodeRabbit

  • Bug Fixes

    • Token handling now normalizes to modern gen_ai input/output attributes, removes legacy token fields, and computes total tokens only when both input and output counts exist to avoid miscounting.
  • New Features

    • Added standardized mappings for LLM input/output token attributes aligned with updated semantic conventions.
  • Tests

    • Updated tests to assert new input/output token keys, cleanup of legacy keys, and revised total-token calculations.
  • Notes

    • No public API changes.


Contributor

coderabbitai bot commented Nov 23, 2025

Walkthrough

Token normalization now prefers gen_ai.usage.input_tokens / gen_ai.usage.output_tokens, removes legacy ai.usage.* and old llm.usage.* duplicates, and computes llm.usage.total_tokens only when both input and output tokens are present. Semantic constants, tests, and a dependency were updated.

Changes

Cohort / File(s) Summary
AI SDK Token Transformations
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
transformPromptTokens/transformCompletionTokens now map legacy AI_USAGE_* to LLM_USAGE_INPUT_TOKENS/LLM_USAGE_OUTPUT_TOKENS only if targets are absent, delete legacy AI_USAGE_* and old LLM_USAGE_* duplicates, and calculate LLM_USAGE_TOTAL_TOKENS from LLM_USAGE_INPUT_TOKENS + LLM_USAGE_OUTPUT_TOKENS only when both exist.
Semantic Attributes
packages/ai-semantic-conventions/src/SemanticAttributes.ts
Imported incubating ATTR_GEN_AI_USAGE_INPUT_TOKENS and ATTR_GEN_AI_USAGE_OUTPUT_TOKENS and added LLM_USAGE_INPUT_TOKENS / LLM_USAGE_OUTPUT_TOKENS properties mapped to those constants.
Tests
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/traceloop-sdk/test/decorators.test.ts, packages/traceloop-sdk/test/ai-sdk-integration.test.ts
Updated fixtures and assertions to use gen_ai.usage.input_tokens / gen_ai.usage.output_tokens and corresponding LLM_USAGE_INPUT/OUTPUT constants; verify legacy keys removed and total token computation derives from input + output.
Package deps
packages/ai-semantic-conventions/package.json
Added dependency @opentelemetry/semantic-conventions to access incubating semantic constants.

Sequence Diagram(s)

sequenceDiagram
  participant Source as Trace Source
  participant Transformer as ai-sdk-transformations
  participant Span as Span Attributes

  Note over Source,Transformer: Incoming span attrs may include legacy `ai.usage.*`, old `llm.usage.*`, and/or new `gen_ai.usage.*`
  Source->>Transformer: deliver span attributes
  alt gen_ai input/output present
    Transformer->>Transformer: delete legacy `ai.*` and old `llm.*`
    Transformer->>Span: ensure `gen_ai.usage.input_tokens` & `gen_ai.usage.output_tokens`
    Transformer->>Transformer: compute total = input + output
  else gen_ai missing but legacy present
    Transformer->>Transformer: map `ai.usage.promptTokens` -> `gen_ai.usage.input_tokens` (if absent)
    Transformer->>Transformer: map `ai.usage.completionTokens` -> `gen_ai.usage.output_tokens` (if absent)
    Transformer->>Transformer: delete legacy `ai.*` and old `llm.*`
    Transformer->>Transformer: compute total = mapped input + mapped output
  end
  alt both input & output numeric
    Transformer->>Span: write `llm.usage.total_tokens`
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Inspect mapping/deletion ordering in ai-sdk-transformations.ts to ensure existing gen_ai.* values are not overwritten.
  • Validate calculateTotalTokens handles non-numeric and zero values and only writes total when both inputs exist.
  • Check the incubating imports and ts-expect-error usage in SemanticAttributes.ts for TypeScript compatibility.


Poem

🐰 Hop, hop — tokens neat and bright,
gen_ai leads by morning light,
Old keys tucked away with care,
Input + output make the pair,
A rabbit’s nibble — totals right.

Pre-merge checks and finishing touches

✅ Passed checks (5 passed)
  • Description Check: ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The PR title accurately describes the main objective: removing duplicate token attributes (prompt/input and completion/output) from the AI SDK transformations.
  • Linked Issues Check: ✅ Passed. The PR implements all coding requirements from TLP-1192: normalizes to input/output token names, removes legacy prompt/completion attributes, and computes total_tokens only when both exist.
  • Out of Scope Changes Check: ✅ Passed. All changes are directly aligned with TLP-1192 objectives: token attribute transformations, semantic conventions updates, test updates, and dependency addition are all in scope.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; check skipped.

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to 93b2388 in 1 minute and 47 seconds. Click for details.
  • Reviewed 56 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:323
  • Draft comment:
    Removed token assignment in transformPromptTokens/transformCompletionTokens now just delete duplicate keys. Ensure downstream logic doesn't rely on the removed duplicated attributes.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is suggesting to ensure that downstream logic doesn't rely on removed duplicated attributes. This is a request for confirmation or verification, which violates the rules. The comment does not provide a specific code suggestion or ask for a specific test to be written.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:348
  • Draft comment:
    calculateTotalTokens uses a truthy check (if(inputTokens && outputTokens)) which may skip calculation when token values are 0. Consider checking for undefined/null explicitly.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50% The comment is technically correct about the behavior of truthy checks with 0 values. However, looking at the diff, the truthy check if (inputTokens && outputTokens) was already present in the code before this PR (it was if (promptTokens && completionTokens) before). The PR only changed which attributes are being read, not the conditional logic itself. According to the rules, I should only keep comments that are about changes made in the diff. Since the truthy check logic was not changed (only the variable names/sources changed), this comment is about unchanged code logic and should be deleted. While the truthy check logic pattern existed before, one could argue that since the PR is touching this function and changing the attribute sources, it's a reasonable time to fix this potential bug. The comment is actionable and identifies a real issue where 0 tokens would be incorrectly handled. Even though the function was modified, the specific conditional logic that the comment addresses was not changed in this PR. The rules explicitly state to only comment on changes made by the diff. The truthy check existed before and continues to exist after - only the variable names changed. This is pre-existing code behavior, not something introduced or modified by this PR. This comment should be deleted because it addresses pre-existing code logic (the truthy check) that was not changed in this PR. The diff only changed which attributes are being read (from AI_USAGE_* to GEN_AI_USAGE_*), not the conditional logic itself.
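The zero-count pitfall the draft comment flags can be shown in a few lines. These helpers are illustrative only, not the SDK's calculateTotalTokens:

```typescript
// A truthy guard skips the calculation when either count is 0, since 0 is falsy.
function totalTruthy(input?: number, output?: number): number | undefined {
  if (input && output) return input + output;
  return undefined;
}

// An explicit undefined check treats 0 as a valid count.
function totalExplicit(input?: number, output?: number): number | undefined {
  if (input !== undefined && output !== undefined) return input + output;
  return undefined;
}
```

With a zero input count, `totalTruthy(0, 5)` yields undefined while `totalExplicit(0, 5)` yields 5, which is the discrepancy the comment describes.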

Workflow ID: wflow_VOgT69d504NTCJ9P

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d6ebc23 and 93b2388.

📒 Files selected for processing (1)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧠 Learnings (3)
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-63)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and test

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 705fd1e in 45 seconds. Click for details.
  • Reviewed 321 lines of code in 4 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. packages/traceloop-sdk/test/decorators.test.ts:515
  • Draft comment:
    Consider using assert.strictEqual instead of assert.equal to avoid unintended type coercion and maintain consistency.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/decorators.test.ts:40
  • Draft comment:
    The test setup and helper code is quite repetitive. Consider refactoring common initialization and helper routines to improve maintainability.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None

Workflow ID: wflow_xEFPo6JJdVO7RZvD


@nirga nirga changed the title from "fix(vercel): remove duplicate token attributes (prompt/input and completion/output)" to "feat(vercel): remove duplicate token attributes (prompt/input and completion/output)" on Nov 23, 2025
LLM_RESPONSE_MODEL: "gen_ai.response.model",
LLM_USAGE_PROMPT_TOKENS: "gen_ai.usage.prompt_tokens",
LLM_USAGE_COMPLETION_TOKENS: "gen_ai.usage.completion_tokens",
LLM_USAGE_INPUT_TOKENS: "gen_ai.usage.input_tokens",
Member


Take this from opentelemetry semantic conventions, no need to define them here
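The suggestion refers to the incubating constants exported by @opentelemetry/semantic-conventions. A minimal sketch of what that would look like, with the spec string values inlined so the example is self-contained (in the real code they would be imported, not redefined):

```typescript
// In the actual package these come from the incubating entry point, e.g.:
//   import {
//     ATTR_GEN_AI_USAGE_INPUT_TOKENS,
//     ATTR_GEN_AI_USAGE_OUTPUT_TOKENS,
//   } from "@opentelemetry/semantic-conventions/incubating";
// Inlined here with their spec values for a runnable sketch.
const ATTR_GEN_AI_USAGE_INPUT_TOKENS = "gen_ai.usage.input_tokens";
const ATTR_GEN_AI_USAGE_OUTPUT_TOKENS = "gen_ai.usage.output_tokens";

const SpanAttributes = {
  LLM_USAGE_INPUT_TOKENS: ATTR_GEN_AI_USAGE_INPUT_TOKENS,
  LLM_USAGE_OUTPUT_TOKENS: ATTR_GEN_AI_USAGE_OUTPUT_TOKENS,
} as const;
```

Exact entry point and constant names are version-dependent; verify against the installed @opentelemetry/semantic-conventions release.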

Contributor Author


wdym?

attributes[`${SpanAttributes.LLM_USAGE_COMPLETION_TOKENS}`];
const inputTokens = attributes[SpanAttributes.LLM_USAGE_INPUT_TOKENS];
const outputTokens = attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS];
const promptTokens = attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS];
Member


Why? If you deleted them they will never be here

Contributor Author


You're right.

attributes[AI_USAGE_PROMPT_TOKENS];
delete attributes[AI_USAGE_PROMPT_TOKENS];
if (SpanAttributes.LLM_USAGE_INPUT_TOKENS in attributes) {
if (AI_USAGE_PROMPT_TOKENS in attributes) {
Member


This is where the token data is - if you deleted it before setting the input tokens attribute you lose it forever

Contributor Author


Here we already have input

attributes[`${SpanAttributes.LLM_USAGE_PROMPT_TOKENS}`] =
attributes[AI_USAGE_PROMPT_TOKENS];
delete attributes[AI_USAGE_PROMPT_TOKENS];
if (SpanAttributes.LLM_USAGE_INPUT_TOKENS in attributes) {
Member


This is overly complex. Did you check whether the gen_ai usage input tokens attribute is already set by the Vercel AI SDK?

Contributor Author


Yes, the Vercel SDK sets both input usage and prompt tokens (a duplication) in text.generate,
but in run.ai it sets only prompt tokens, not input usage.
So the transform needs to check: when input is already present, keep only it; when only prompt is present, rename it to input.
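The two paths described above can be sketched as follows. This is a hypothetical helper illustrating the branch logic, not the actual transformPromptTokens implementation:

```typescript
type Attrs = Record<string, string | number | undefined>;

function transformPromptTokensSketch(attrs: Attrs): void {
  const INPUT = "gen_ai.usage.input_tokens";
  const LEGACY = "ai.usage.promptTokens";
  const OLD_PROMPT = "gen_ai.usage.prompt_tokens";

  if (!(INPUT in attrs) && LEGACY in attrs) {
    // Only the legacy key exists (the run.ai path): promote it before
    // deleting anything, otherwise the token count would be lost.
    attrs[INPUT] = attrs[LEGACY];
  }
  // On the text.generate path the modern key already exists, so the
  // legacy duplicates can simply be dropped.
  delete attrs[LEGACY];
  delete attrs[OLD_PROMPT];
}
```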

@avivhalfon avivhalfon force-pushed the ah/TLP-1192/fix-duplication branch from 705fd1e to e67a5fc on November 24, 2025, 07:26
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed e67a5fc in 50 seconds. Click for details.
  • Reviewed 346 lines of code in 5 files
  • Skipped 0 files when reviewing.
  • Skipped posting 5 draft comments. View those below.
1. packages/traceloop-sdk/test/decorators.test.ts:515
  • Draft comment:
    In the manual LLM instrumentation test, the assertion for LLM_USAGE_PROMPT_TOKENS compares to the string "15". Consider converting the token count to a number and using a numeric comparison to ensure consistency with token usage types.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/decorators.test.ts:622
  • Draft comment:
    Several tests compare JSON.stringify outputs (e.g. entity input/output) directly. Use deep equality on deserialized objects to avoid brittle tests if object property order changes.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/test/decorators.test.ts:410
  • Draft comment:
    In the test for 'should not log prompts if traceContent is disabled', additional assertions confirming that keys for entity input/output and prompt content are completely absent would improve clarity.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/test/decorators.test.ts:640
  • Draft comment:
    In the Vercel AI spans test, expected token values (input/output/total) are hardcoded. Verify these values either dynamically or document the rationale so future changes in underlying provider responses don't cause false failures.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/traceloop-sdk/test/decorators.test.ts:270
  • Draft comment:
    Test descriptions and inline comments are clear overall. Consider adding brief inline comments in decorator-based workflow tests to explain the purpose of chained entity name verification for maintainability.
  • Reason this comment was not posted:
    Confidence changes required: 30% <= threshold 50% None

Workflow ID: wflow_L15zXbvWMxfaHJ0b


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)

212-216: Consider verifying absence of legacy token attributes.

Same suggestion as the OpenAI test case above - consider adding assertions to verify that legacy duplicate attributes have been removed after transformation.

🧹 Nitpick comments (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)

144-148: Consider verifying absence of legacy token attributes.

The assertions correctly verify that the new gen_ai.usage.input_tokens and gen_ai.usage.output_tokens attributes exist after transformation. However, to more thoroughly validate the deduplication fix, consider also asserting that the legacy duplicate attributes (ai.usage.promptTokens and gen_ai.usage.prompt_tokens) have been removed.

Example:

     // Verify token usage - should be transformed to input/output tokens
     assert.ok(generateTextSpan.attributes["gen_ai.usage.input_tokens"]);
     assert.ok(generateTextSpan.attributes["gen_ai.usage.output_tokens"]);
     assert.ok(generateTextSpan.attributes["llm.usage.total_tokens"]);
+    // Verify legacy duplicates are removed
+    assert.strictEqual(generateTextSpan.attributes["ai.usage.promptTokens"], undefined);
+    assert.strictEqual(generateTextSpan.attributes["gen_ai.usage.prompt_tokens"], undefined);
+    assert.strictEqual(generateTextSpan.attributes["ai.usage.completionTokens"], undefined);
+    assert.strictEqual(generateTextSpan.attributes["gen_ai.usage.completion_tokens"], undefined);
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 705fd1e and e67a5fc.

📒 Files selected for processing (5)
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1 hunks)
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts (2 hunks)
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (11 hunks)
  • packages/traceloop-sdk/test/decorators.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/traceloop-sdk/test/decorators.test.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-65)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and test
🔇 Additional comments (3)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (3)

320-337: Transformation logic correctly handles dual-source token attributes.

The function properly addresses the deduplication issue by handling two scenarios:

  1. When gen_ai.usage.input_tokens already exists (Vercel AI SDK), it removes the duplicate prompt_tokens attributes
  2. When only legacy ai.usage.promptTokens exists (run.ai), it maps to gen_ai.usage.input_tokens

This ensures a single source of truth for input tokens while maintaining backward compatibility with different instrumentations.


339-356: LGTM: Completion tokens transformation mirrors prompt tokens logic.

The function correctly handles the same dual-source scenario for completion/output tokens, ensuring consistency with the prompt tokens transformation.


358-366: LGTM: Total tokens correctly calculated from normalized attributes.

The function correctly:

  • Reads from the normalized input_tokens and output_tokens attributes after transformation
  • Converts values to numbers to handle potential string types
  • Only calculates total when both components exist

The execution order in transformLLMSpans (lines 428-430) ensures token transformations complete before this calculation runs.

@avivhalfon avivhalfon force-pushed the ah/TLP-1192/fix-duplication branch from e67a5fc to 1a416a0 on November 24, 2025, 07:43
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 1a416a0 in 2 minutes and 13 seconds. Click for details.
  • Reviewed 355 lines of code in 5 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:359
  • Draft comment:
    The total token calculation uses 'if (inputTokens && outputTokens)' which will fail (skip calculation) if one value is 0, since 0 is falsy. Consider using an explicit check (e.g. !== undefined) so that zero values are correctly handled.
  • Reason this comment was not posted:
    Comment was on unchanged code.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:320
  • Draft comment:
    The transformPromptTokens (and similarly transformCompletionTokens) conditionally remove legacy token attributes if the new key is present. This dual behavior (keeping legacy fields for manual instrumentation while removing them for Vercel spans) may be confusing. Consider adding inline comments or documentation to clarify when each set is retained.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
3. packages/traceloop-sdk/test/decorators.test.ts:515
  • Draft comment:
    In the manual LLM instrumentation test, the assertion expects the legacy token attribute (llm.usage.prompt_tokens) to remain. This differs from the Vercel integration behavior. Consider documenting this divergence to avoid confusion.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/test/decorators.test.ts:680
  • Draft comment:
    Consider adding tests for scenarios where token counts are 0. This will ensure the total token calculation handles falsy (zero) values correctly.
  • Reason this comment was not posted:
    Comment was on unchanged code.

Workflow ID: wflow_Ojyj7GmLBt68VEuA

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)

789-826: Prompt token test description doesn’t match the fixture

The test name says “delete ai.usage.promptTokens and gen_ai.usage.prompt_tokens (keep input_tokens)”, but the fixture only includes:

  • "ai.usage.promptTokens"
  • "gen_ai.usage.input_tokens"

and never sets gen_ai.usage.prompt_tokens (SpanAttributes.LLM_USAGE_PROMPT_TOKENS). So the test doesn’t actually exercise deletion of the prompt_tokens variant.

Either:

  • Add attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS] to the fixture and assert it becomes undefined, or
  • Update the description to only mention ai.usage.promptTokens.

Same comment applies to the zero‑tokens variant below.


828-868: Completion token test description has the same mismatch

Similarly, the completion test title mentions deleting gen_ai.usage.completion_tokens, but the fixture only includes:

  • "ai.usage.completionTokens"
  • "gen_ai.usage.output_tokens"

No attribute keyed by SpanAttributes.LLM_USAGE_COMPLETION_TOKENS is ever set, so that part of the behavior isn’t validated.

As above, either add SpanAttributes.LLM_USAGE_COMPLETION_TOKENS to the fixture and assert it’s removed, or trim the description.

🧹 Nitpick comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)

870-929: Add total-token tests for zero-count cases

These tests cover:

  • Numeric input/output tokens (50/25 → 75).
  • String tokens ("50"/"25" → 75).
  • Missing input, missing output, and both missing.

They don’t cover cases where one or both token counts are 0, which is exactly where the current if (inputTokens && outputTokens) logic in calculateTotalTokens misbehaves.

After adjusting calculateTotalTokens to check for attribute presence instead of truthiness (see comment in ai-sdk-transformations.ts), consider adding tests like:

it("should calculate total when one token count is zero", () => {
  const attributes = {
    [SpanAttributes.LLM_USAGE_INPUT_TOKENS]: 0,
    [SpanAttributes.LLM_USAGE_OUTPUT_TOKENS]: 25,
  };

  transformLLMSpans(attributes);

  assert.strictEqual(
    attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS],
    25,
  );
});

it("should calculate total when both token counts are zero", () => {
  const attributes = {
    [SpanAttributes.LLM_USAGE_INPUT_TOKENS]: 0,
    [SpanAttributes.LLM_USAGE_OUTPUT_TOKENS]: 0,
  };

  transformLLMSpans(attributes);

  assert.strictEqual(
    attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS],
    0,
  );
});

This will lock in the intended semantics for zero-token spans.


1015-1149: Integration tests correctly assert canonical input/output tokens and legacy cleanup

The end‑to‑end tests for transformAiSdkAttributes now:

  • Provide both gen_ai.usage.input_tokens / output_tokens and ai.usage.*Tokens.
  • Assert that only SpanAttributes.LLM_USAGE_INPUT_TOKENS / LLM_USAGE_OUTPUT_TOKENS remain, with the correct totals.
  • Verify that legacy ai.usage.*Tokens and SpanAttributes.LLM_USAGE_PROMPT_TOKENS / LLM_USAGE_COMPLETION_TOKENS are gone.

That matches the new normalization behavior and the PR objective of eliminating duplicated token attributes. As a small enhancement, you might add one integration test where only ai.usage.promptTokens / ai.usage.completionTokens are present (no gen_ai.*) to explicitly cover the “migration-only” path.

As per coding guidelines, using SpanAttributes here keeps the tests aligned with the shared semantic conventions.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between e67a5fc and 1a416a0.

📒 Files selected for processing (5)
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1 hunks)
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts (2 hunks)
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (11 hunks)
  • packages/traceloop-sdk/test/decorators.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/traceloop-sdk/test/decorators.test.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧰 Additional context used
📓 Path-based instructions (3)
packages/ai-semantic-conventions/src/SemanticAttributes.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-65)
🔇 Additional comments (3)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)

30-31: New gen_ai input/output usage attributes correctly added

The new LLM_USAGE_INPUT_TOKENS / LLM_USAGE_OUTPUT_TOKENS constants match the gen_ai.usage.input_tokens / gen_ai.usage.output_tokens keys and keep all token attributes centralized in SemanticAttributes.ts, which is in line with how the rest of the SDK is structured.

As per coding guidelines, defining AI/LLM span attributes here is the correct approach.

packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (2)

320-336: Prompt token normalization logic looks good

When gen_ai.usage.input_tokens is already present, you drop both ai.usage.promptTokens and gen_ai.usage.prompt_tokens; otherwise you backfill input_tokens from ai.usage.promptTokens. Using in avoids problems with zero values being treated as absent.

Based on learnings, importing and using SpanAttributes instead of hardcoded strings is the right pattern here.


339-355: Completion token normalization logic looks good

This mirrors the prompt path: prefer gen_ai.usage.output_tokens and delete ai.usage.completionTokens / gen_ai.usage.completion_tokens, or map from ai.usage.completionTokens when output_tokens is missing. The behavior matches the PR goal of de-duplicating and normalizing token attributes.

Based on learnings, this keeps the AI SDK spans aligned with the shared semantic conventions.

Comment thread packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 00e2529 in 2 minutes and 10 seconds.
  • Reviewed 101 lines of code in 3 files
  • Skipped 1 files when reviewing.
  • Skipped posting 5 draft comments. View those below.
1. packages/ai-semantic-conventions/package.json:38
  • Draft comment:
    Added dependency '@opentelemetry/semantic-conventions'. Verify if this should instead be a peer dependency to prevent version conflicts.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% This comment clearly violates the rule "Do NOT ask the PR author to confirm their intention, to explain, to double-check things, to ensure the behavior is intended, to make sure their change is tested, or similar. If the comments starts with 'Verify that...' or 'Ensure that...', it is likely not useful." The comment literally starts with "Verify if..." and is asking the author to confirm whether their dependency choice is correct. It's also speculative - it suggests there "might" be version conflicts but doesn't demonstrate that there definitely are any. The comment doesn't provide strong evidence of an actual problem, just raises a hypothetical concern. Could there be a legitimate architectural concern here about peer dependencies vs regular dependencies that the tool is trying to surface? Perhaps in OpenTelemetry ecosystems, semantic-conventions packages are typically peer dependencies to avoid version mismatches across packages. Even if there's a valid architectural pattern being referenced, the comment is phrased as "Verify if..." which explicitly asks the author to confirm rather than stating a definite issue. Without seeing actual evidence of a problem or a clear standard being violated, this is speculative advice that violates the rules. This comment should be deleted because it explicitly asks the PR author to "verify" their choice, which violates the rule against asking authors to confirm their intentions. It's speculative about potential version conflicts without demonstrating an actual problem.
2. packages/ai-semantic-conventions/src/SemanticAttributes.ts:14
  • Draft comment:
    Importing from 'build/src/index-incubating' may be unstable; consider using a stable public API export if available.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 15% vs. threshold = 50% The comment is making a speculative suggestion - it says "may be unstable" and "consider using a stable public API if available". This violates the rule against speculative comments. The comment doesn't definitively know if there's a stable alternative, and it's asking the author to investigate rather than pointing out a clear issue. The author intentionally made this change to use these specific imports, and without seeing the package.json or knowing what alternatives exist, this is just speculation. The comment also doesn't provide concrete evidence that this is wrong or will cause problems. Perhaps the import path really is problematic and will break in production builds. The "build/src" path and "incubating" naming could indicate real stability concerns that the author should be aware of. Maybe this is common knowledge in the OpenTelemetry ecosystem. Even if there are potential stability concerns, the comment is still speculative ("may be unstable", "if available"). It doesn't provide definitive evidence of a problem or a concrete alternative. The author made this change deliberately, and without strong evidence that it's wrong, we should assume they knew what they were doing. This falls under "speculative comments" that should be removed. This comment should be deleted because it's speculative ("may be unstable", "if available") and doesn't provide definitive evidence of a problem or a concrete alternative solution. It violates the rule against speculative comments.
3. packages/ai-semantic-conventions/src/SemanticAttributes.ts:32
  • Draft comment:
    Mapping token attributes to imported constants improves consistency. Ensure this aligns with all downstream usages.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment starts with a positive note about the change, which is informative. However, it then asks the author to ensure alignment with downstream usages, which is a general request for verification. This violates the rule against asking the author to ensure behavior is intended or tested.
4. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:321
  • Draft comment:
    Refactor in transformPromptTokens: the conditional now sets the new token key only if absent and then unconditionally cleans up legacy keys to avoid duplication.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, describing what the refactor does without suggesting any changes or asking for clarification. It doesn't provide any actionable feedback or raise any concerns about the code.
5. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:331
  • Draft comment:
    Similarly, transformCompletionTokens now conditionally assigns the new key and removes legacy attributes; ensure downstream logic accepts this naming convention.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the author to ensure that downstream logic accepts a naming convention change. This falls under asking the author to ensure behavior is intended or tested, which is against the rules.

Workflow ID: wflow_J0iXL0hrVxlmZMSh


Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 155e14a in 1 minute and 39 seconds.
  • Reviewed 14 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. packages/ai-semantic-conventions/src/SemanticAttributes.ts:20
  • Draft comment:
    Consider moving the // @ts-ignore comment above the import statement rather than inside the destructuring block to ensure TypeScript suppression applies correctly.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 15% vs. threshold = 50% The comment is about code style/placement of the @ts-ignore directive. In TypeScript, @ts-ignore suppresses errors on the next line. When placed on line 20 inside the destructuring block, it would suppress errors on line 21 (the closing brace and from clause). If placed above line 17, it would suppress errors on the entire import statement. The current placement is actually unconventional - typically @ts-ignore is placed directly before the statement it's meant to suppress. However, this is a minor style issue and the code likely works either way. The comment is somewhat pedantic and doesn't point to a clear bug - it's more of a style preference. The rules say not to comment on things that are obvious or unimportant, and this seems to fall into that category. The placement might actually matter for TypeScript's error suppression to work correctly. If the @ts-ignore doesn't properly suppress the module resolution error, this could be a real issue. I might be underestimating the importance of correct @ts-ignore placement. While @ts-ignore placement can matter, the code as written likely works (the PR author presumably tested it). The comment uses "consider" and "ensure TypeScript suppression applies correctly" which is speculative language - it's not definitively stating there's a problem. This violates the rule against speculative comments. If the suppression wasn't working, the build would catch it. This comment should be deleted. It's speculative (using "consider" and "ensure"), suggests a style preference rather than a clear bug, and any actual TypeScript error would be caught by the build. The comment doesn't provide strong evidence that the current placement is incorrect.
2. packages/ai-semantic-conventions/src/SemanticAttributes.ts:21
  • Draft comment:
    Verify that the new module path '@opentelemetry/semantic-conventions/incubating' is stable and version-safe compared to the previous 'build/src/index-incubating' path.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is asking the PR author to verify the stability and version-safety of a new module path compared to a previous one. It is not making a specific code suggestion or pointing out a specific issue with the code. It falls under the category of asking the author to ensure something, which is against the rules.

Workflow ID: wflow_1ec5mk1zBMMtmVhD


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)

790-802: Consider verifying legacy LLM_USAGE_PROMPT_TOKENS removal.

The test name mentions deleting gen_ai.usage.prompt_tokens, but the test doesn't assert that LLM_USAGE_PROMPT_TOKENS is removed after transformation. While the integration tests verify this, adding an assertion here would make the individual test more complete and self-documenting.

Apply this diff to add the assertion:

     transformLLMSpans(attributes);

     assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_INPUT_TOKENS], 50);
     assert.strictEqual(attributes["ai.usage.promptTokens"], undefined);
+    assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS], undefined);
     assert.strictEqual(attributes.someOtherAttr, "value");

829-844: Consider verifying legacy LLM_USAGE_COMPLETION_TOKENS removal.

The test name mentions deleting gen_ai.usage.completion_tokens, but the test doesn't assert that LLM_USAGE_COMPLETION_TOKENS is removed after transformation. While the integration tests verify this, adding an assertion here would improve test clarity.

Apply this diff to add the assertion:

     transformLLMSpans(attributes);

     assert.strictEqual(
       attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS],
       25,
     );
     assert.strictEqual(attributes["ai.usage.completionTokens"], undefined);
+    assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS], undefined);
     assert.strictEqual(attributes.someOtherAttr, "value");
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 00e2529 and ca097a2.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (3)
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts (2 hunks)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1 hunks)
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (11 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (6)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🔇 Additional comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)

870-929: LGTM! Total token calculation tests properly updated.

The tests correctly verify that total tokens are now calculated from LLM_USAGE_INPUT_TOKENS and LLM_USAGE_OUTPUT_TOKENS, and properly handle edge cases (missing tokens, string values). This aligns with the PR objective to use input/output tokens instead of prompt/completion tokens.


1061-1068: Excellent addition of assertions for legacy token attribute removal.

These new assertions directly verify that the legacy LLM_USAGE_PROMPT_TOKENS and LLM_USAGE_COMPLETION_TOKENS attributes are removed after transformation, which is the core objective of this PR (preventing duplicate token reporting). This ensures that only the new input_tokens and output_tokens attributes remain.

Also applies to: 1137-1144

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 9a4fc6c in 58 seconds.
  • Reviewed 13 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
1. packages/ai-semantic-conventions/src/SemanticAttributes.ts:20
  • Draft comment:
    Good change: switching to @ts-expect-error is preferred as it flags an error if the expected error doesn't occur. Be sure to update or remove the directive if the underlying import eventually resolves without error.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.

Workflow ID: wflow_a38zH3szd2lqJ7R1


Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 8f8785f in 50 seconds.
  • Reviewed 34 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:324
  • Draft comment:
    The multi-line if condition in transformPromptTokens improves readability. Consider consolidating the legacy token mapping logic with its counterpart in transformCompletionTokens to reduce duplication, ensuring cleanup of legacy keys remains intact.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:341
  • Draft comment:
    The transformCompletionTokens function now uses a similar multi-line if condition which is clearer. Again, if possible, factor out the common legacy token normalization logic to avoid repeating similar code.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None

Workflow ID: wflow_Rcj0YUCjxUxanC9z


Member

@nirga nirga left a comment


Maybe keep the testing of the old prompt tokens — just make sure that they don't exist:


transformLLMSpans(attributes);

assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS], 0);
Member


test that PROMPT TOKENS doesn't exist as well

@nirga nirga changed the title feat(vercel): remove duplicate token attributes (prompt/input and completion/output) fix(vercel): remove duplicate token attributes (prompt/input and completion/output) Nov 25, 2025
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 6531e69 in 1 minute and 39 seconds.
  • Reviewed 26 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:798
  • Draft comment:
    New assertion: Ensure duplicate prompt token attribute (LLM_USAGE_PROMPT_TOKENS) is removed after transformation.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:842
  • Draft comment:
    New assertion: Ensure duplicate completion token attribute (LLM_USAGE_COMPLETION_TOKENS) is removed after transformation.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_GjA8E1rfVxkou7LX


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)

1022-1081: Add legacy attributes to input to properly test their removal.

The integration test verifies that legacy token attributes (LLM_USAGE_PROMPT_TOKENS and LLM_USAGE_COMPLETION_TOKENS) are undefined after transformation (lines 1069-1076), but these attributes are never included in the input. To properly test that the transformation removes legacy attributes when present, add them to the input:

Apply this diff:

 it("should apply all attribute transformations", () => {
   const attributes = {
     "ai.response.text": "Hello!",
     "ai.prompt.messages": JSON.stringify([{ role: "user", content: "Hi" }]),
     "ai.usage.promptTokens": 10,
     "ai.usage.completionTokens": 5,
     "gen_ai.usage.input_tokens": 10,
     "gen_ai.usage.output_tokens": 5,
+    "gen_ai.usage.prompt_tokens": 10,
+    "gen_ai.usage.completion_tokens": 5,
     "ai.model.provider": "openai.chat",
     someOtherAttr: "value",
   };

Same issue applies to the integration test starting at line 1098.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 8f8785f and 6531e69.

📒 Files selected for processing (1)
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (11 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (6)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
  • transformLLMSpans (440-454)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (23-75)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and test
🔇 Additional comments (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)

879-936: LGTM: Total tokens calculation tests correctly updated.

The tests now properly verify that total tokens are calculated from input and output tokens only when both are present, including edge cases (string values, missing tokens).
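The guarded calculation these tests exercise can be sketched as follows. This is illustrative only — `computeTotalTokens` and the attribute shape are assumptions, not the SDK's actual implementation in ai-sdk-transformations.ts:

```typescript
type Attrs = Record<string, string | number | undefined>;

// Compute llm.usage.total_tokens only when both sides are present;
// string counts are coerced to numbers before adding.
function computeTotalTokens(attrs: Attrs): void {
  const input = attrs["gen_ai.usage.input_tokens"];
  const output = attrs["gen_ai.usage.output_tokens"];
  if (input === undefined || output === undefined) return; // skip partial data
  attrs["llm.usage.total_tokens"] = Number(input) + Number(output);
}

const both: Attrs = {
  "gen_ai.usage.input_tokens": "10", // string value edge case from the tests
  "gen_ai.usage.output_tokens": 5,
};
computeTotalTokens(both); // total becomes 15

const partial: Attrs = { "gen_ai.usage.input_tokens": 10 };
computeTotalTokens(partial); // no total is set
```

The early return is what prevents a misleading total when only one side of the usage data was reported.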

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 3045fb3 in 54 seconds. Click for details.
  • Reviewed 36 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:792
  • Draft comment:
    Adding legacy duplicate keys for prompt tokens ("gen_ai.usage.prompt_tokens") and for completion tokens ("gen_ai.usage.completion_tokens") in the test cases ensures that transformLLMSpans removes these older fields while retaining normalized ones. This improves test coverage for legacy scenarios.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, explaining why certain test cases are added. It doesn't provide a suggestion, ask for confirmation, or point out a potential issue. It violates the rule against making purely informative comments.

Workflow ID: wflow_Y1Wt1i6kLjqwnmp5


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 6531e69 and 3045fb3.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (1)
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (11 hunks)
🔇 Additional comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)

882-941: LGTM! Total tokens calculation correctly uses input/output tokens.

The tests properly verify that:

  • Total tokens are calculated from LLM_USAGE_INPUT_TOKENS and LLM_USAGE_OUTPUT_TOKENS
  • Total is only computed when both input and output are present
  • String values are handled correctly

This aligns with the PR objective to normalize token attributes and avoid duplicates.


1025-1221: LGTM! Integration tests comprehensively verify token normalization.

The integration tests properly verify:

  • All transformations work together (response, prompts, tokens, vendor, metadata)
  • Legacy token attributes (ai.usage.promptTokens, ai.usage.completionTokens, LLM_USAGE_PROMPT_TOKENS, LLM_USAGE_COMPLETION_TOKENS) are removed
  • New standardized attributes (LLM_USAGE_INPUT_TOKENS, LLM_USAGE_OUTPUT_TOKENS) are correctly set
  • Total tokens are computed from input/output tokens

The tests cover multiple scenarios (generateText, generateObject, with tools) and ensure end-to-end correctness.
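The end-to-end behavior these integration tests verify — normalize, drop legacy duplicates, derive the total — can be sketched with a small stand-in transform. `transform` below is hypothetical; the real logic lives in transformLLMSpans in ai-sdk-transformations.ts:

```typescript
import { strictEqual } from "node:assert";

type Attrs = Record<string, string | number | undefined>;

// Stand-in for transformLLMSpans: move legacy keys onto normalized ones,
// drop duplicates, and derive the total only when both counts exist.
function transform(attrs: Attrs): void {
  const moves: Array<[legacy: string, normalized: string]> = [
    ["ai.usage.promptTokens", "gen_ai.usage.input_tokens"],
    ["ai.usage.completionTokens", "gen_ai.usage.output_tokens"],
  ];
  for (const [legacy, normalized] of moves) {
    if (attrs[normalized] === undefined && attrs[legacy] !== undefined) {
      attrs[normalized] = attrs[legacy];
    }
    delete attrs[legacy];
  }
  delete attrs["gen_ai.usage.prompt_tokens"];
  delete attrs["gen_ai.usage.completion_tokens"];

  const input = attrs["gen_ai.usage.input_tokens"];
  const output = attrs["gen_ai.usage.output_tokens"];
  if (input !== undefined && output !== undefined) {
    attrs["llm.usage.total_tokens"] = Number(input) + Number(output);
  }
}

const attrs: Attrs = {
  "ai.usage.promptTokens": 10,
  "gen_ai.usage.prompt_tokens": 10,
  "ai.usage.completionTokens": 5,
  "gen_ai.usage.completion_tokens": 5,
};
transform(attrs);

// Normalized keys set, legacy duplicates gone, total derived.
strictEqual(attrs["gen_ai.usage.input_tokens"], 10);
strictEqual(attrs["gen_ai.usage.output_tokens"], 5);
strictEqual(attrs["llm.usage.total_tokens"], 15);
strictEqual(attrs["ai.usage.promptTokens"], undefined);
strictEqual(attrs["gen_ai.usage.prompt_tokens"], undefined);
```

Seeding the input with the legacy keys, as the review comments suggest, is what makes the "duplicates removed" assertions meaningful.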

Comment on lines +820 to 831
it("should handle zero input tokens", () => {
const attributes = {
"ai.usage.promptTokens": 0,
"gen_ai.usage.input_tokens": 0,
"gen_ai.usage.prompt_tokens": 0,
};

transformLLMSpans(attributes);

assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS], 0);
assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_INPUT_TOKENS], 0);
assert.strictEqual(attributes["ai.usage.promptTokens"], undefined);
});
Contributor


⚠️ Potential issue | 🟡 Minor

Add assertion to verify legacy PROMPT_TOKENS attribute is removed.

The test includes gen_ai.usage.prompt_tokens in the input (line 824) but doesn't verify it's removed after transformation. For consistency with the main test (lines 802-805) and per the previous review comment, add an assertion to check that LLM_USAGE_PROMPT_TOKENS is undefined.

Apply this diff:

     assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_INPUT_TOKENS], 0);
     assert.strictEqual(attributes["ai.usage.promptTokens"], undefined);
+    assert.strictEqual(
+      attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS],
+      undefined,
+    );
   });
🤖 Prompt for AI Agents
In packages/traceloop-sdk/test/ai-sdk-transformations.test.ts around lines 820
to 831, the test for zero input tokens does not assert that the legacy
PROMPT_TOKENS attribute is removed; add an assertion after existing checks to
verify that attributes[SpanAttributes.LLM_USAGE_PROMPT_TOKENS] is undefined so
the legacy gen_ai.usage.prompt_tokens is cleared by transformLLMSpans, mirroring
the main test's behavior.

Comment on lines +868 to 879
it("should handle zero output tokens", () => {
const attributes = {
"ai.usage.completionTokens": 0,
"gen_ai.usage.output_tokens": 0,
"gen_ai.usage.completion_tokens": 0,
};

transformLLMSpans(attributes);

assert.strictEqual(
attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS],
0,
);
assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS], 0);
assert.strictEqual(attributes["ai.usage.completionTokens"], undefined);
});
Contributor


⚠️ Potential issue | 🟡 Minor

Add assertion to verify legacy COMPLETION_TOKENS attribute is removed.

The test includes gen_ai.usage.completion_tokens in the input (line 872) but doesn't verify it's removed after transformation. For consistency with the main test (lines 850-853), add an assertion to check that LLM_USAGE_COMPLETION_TOKENS is undefined.

Apply this diff:

     assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS], 0);
     assert.strictEqual(attributes["ai.usage.completionTokens"], undefined);
+    assert.strictEqual(
+      attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS],
+      undefined,
+    );
   });
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

    it("should handle zero output tokens", () => {
      const attributes = {
        "ai.usage.completionTokens": 0,
        "gen_ai.usage.output_tokens": 0,
        "gen_ai.usage.completion_tokens": 0,
      };

      transformLLMSpans(attributes);

      assert.strictEqual(
        attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS],
        0,
      );
      assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS], 0);
      assert.strictEqual(attributes["ai.usage.completionTokens"], undefined);
    });

After:

    it("should handle zero output tokens", () => {
      const attributes = {
        "ai.usage.completionTokens": 0,
        "gen_ai.usage.output_tokens": 0,
        "gen_ai.usage.completion_tokens": 0,
      };

      transformLLMSpans(attributes);

      assert.strictEqual(attributes[SpanAttributes.LLM_USAGE_OUTPUT_TOKENS], 0);
      assert.strictEqual(attributes["ai.usage.completionTokens"], undefined);
      assert.strictEqual(
        attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS],
        undefined,
      );
    });
🤖 Prompt for AI Agents
In packages/traceloop-sdk/test/ai-sdk-transformations.test.ts around lines 868
to 879, the test "should handle zero output tokens" sets
gen_ai.usage.completion_tokens but does not assert that the legacy
COMPLETION_TOKENS attribute was removed; add an assertion after the existing
checks to verify attributes[SpanAttributes.LLM_USAGE_COMPLETION_TOKENS] is
undefined (matching the main test pattern), ensuring the legacy key is cleaned
up by transformLLMSpans.

@avivhalfon avivhalfon merged commit b326268 into main Nov 26, 2025
8 checks passed
@avivhalfon avivhalfon deleted the ah/TLP-1192/fix-duplication branch November 26, 2025 14:32