
fix(ai-sdk): Add conversion to opentelemetry semantic convention #845

Merged
nina-kollman merged 20 commits into main from nk/ai_sdk_to_conv
Dec 14, 2025

Conversation

@nina-kollman
Contributor

@nina-kollman nina-kollman commented Dec 10, 2025

Fixes TLP-1235


Important

Consolidates tracing attributes under GenAI namespace, updates tests, and upgrades dependencies to standardize AI SDK tracing with OpenTelemetry conventions.

  • Behavior:
    • Consolidates tracing attributes under GenAI namespace in SemanticAttributes.ts.
    • Normalizes GenAI attributes for prompts, messages, completions, tool calls, provider/model, operation/conversation/agent metadata.
    • Removes legacy LLM/old GenAI keys.
  • Tests:
    • Updates tests in agent_decorator.test.ts, ai-sdk-agent-integration.test.ts, ai-sdk-integration.test.ts, ai-sdk-transformations.test.ts, and decorators.test.ts to reflect new GenAI attributes.
    • Expands coverage for prompts/messages, tool calls, provider/model normalization, finish reasons, usage tokens, and edge cases.
  • Chores:
    • Upgrades @opentelemetry/semantic-conventions to ^1.38.0 in package.json files.
    • Updates TypeScript module/test configs in various package.json files.

This description was created by Ellipsis for d7df478. You can customize this summary. It will automatically update as commits are pushed.


Summary by CodeRabbit

  • New Features

    • Consolidated tracing attributes under GenAI namespace; added GenAI usage tokens for cache creation, cache read, and reasoning.
  • Behavior

    • Traces now emit normalized GenAI attributes for prompts, messages, completions, tool calls, provider/model, operation/conversation/agent metadata; legacy LLM/old GenAI keys removed; totals computed from GenAI usage tokens.
  • Tests

    • Expanded coverage for prompts/messages, tool calls, provider/model normalization, finish reasons, usage tokens and edge cases.
  • Chores

    • Upgraded OpenTelemetry semantic-conventions and updated TypeScript module/test configs.

✏️ Tip: You can customize this high-level summary in your review settings.

@coderabbitai
Contributor

coderabbitai bot commented Dec 10, 2025

Walkthrough

Replaced many LLM_* and legacy GEN_AI_* span attribute constants with incubating ATTR_GEN_AI_* symbols, added three GEN_AI_USAGE_* tokens, introduced GenAI-focused transformers in the traceloop SDK, updated instrumentations and tests to use the incubating constants, and bumped @opentelemetry/semantic-conventions along with TypeScript moduleResolution and test tsconfig settings.

Changes

Cohort / File(s) Summary
Semantic conventions
packages/ai-semantic-conventions/src/SemanticAttributes.ts
Removed many public LLM_* and legacy GEN_AI_AGENT_NAME constants; removed some incubating imports; added/exposed new GEN_AI_USAGE_* tokens and a comment marking attributes not yet in core conventions.
Traceloop SDK — transformers & tracing
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts, .../decorators.ts, .../manual.ts, .../span-processor.ts, .../tracing.ts
Added GEN_AI transformers (operation name, model mapping, finishReason normalization, tool-call & conversation propagation, response metadata), migrated agent-name and other attributes to ATTR_GEN_AI_*, and merged an import for Context. No exported signatures changed.
Traceloop SDK tests
packages/traceloop-sdk/test/*
Updated assertions to use ATTR_GEN_AI_* constants; added/expanded tests for prompts/messages parsing, tool calls, provider/model mapping, finishReason normalization, operation-name derivation, conversationId resolution, and end‑to‑end transformation scenarios.
Vendor instrumentations (core)
packages/instrumentation-openai/**, packages/instrumentation-anthropic/**, packages/instrumentation-bedrock/**, packages/instrumentation-cohere/**, packages/instrumentation-together/**, packages/instrumentation-langchain/**, packages/instrumentation-llamaindex/**, packages/instrumentation-vertexai/**, packages/instrumentation-pinecone/**, packages/instrumentation-mcp/**, packages/instrumentation-chromadb/**
Replaced vendor instrumentation attribute keys from LLM_* to ATTR_GEN_AI_* across span start/end, prompts, completions, request/response fields, tool/function metadata, and usage tokens (streaming and non‑streaming). Control flow and public APIs unchanged.
Instrumentation tests
packages/instrumentation-*/tests/*
Updated many test assertions and imports to use incubating ATTR_GEN_AI_* constants (prompts, completions, system, model, request params, usage tokens); added required imports.
OpenAI image wrappers
packages/instrumentation-openai/src/image-wrappers.ts
Switched prompt/completion/system and usage token keys to ATTR_GEN_AI_* and updated token calculation/enrichment to reference the new keys.
Traceloop decorators / manual usage
packages/traceloop-sdk/src/lib/tracing/decorators.ts, .../manual.ts, .../span-processor.ts
Switched agent-name read/write/propagation to ATTR_GEN_AI_AGENT_NAME and updated related propagation logic and imports.
Dependency & TypeScript config
multiple package.json, tsconfig*.json, tsconfig.base.json (e.g., packages/*/{package.json,tsconfig.json}, packages/*/tsconfig.test.json)
Bumped @opentelemetry/semantic-conventions from ^1.36.0 to ^1.38.0 across packages; added package-specific tsconfig.test.json files and switched test scripts to use them; added/changed TypeScript moduleResolution/module options (e.g., bundler, node16, node).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Areas to focus on:
    • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts — attribute ordering, side effects, legacy-attribute removal, and compatibility short-circuits.
    • packages/ai-semantic-conventions/src/SemanticAttributes.ts — exported constant names and alignment with incubating API.
    • Instrumentation streaming vs non‑streaming branches — prompt/completion indexing and token aggregation consistency.
    • Tests importing incubating constants — verify imports match bumped semantic-conventions and moduleResolution settings.
    • packages/traceloop-sdk/src/lib/tracing/span-processor.ts and decorators/manual — symmetric read/write of ATTR_GEN_AI_AGENT_NAME across propagation.

Poem

🐇 I hop through spans with whiskers bright,
I swap the keys and set them right,
Prompts and tokens, neatly shown,
Gen_AI names now call my own,
I nibble traces, then bound home.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
  • Linked Issues check (❓ Inconclusive): The linked issue TLP-1235 ('Add set_conversation_id to the SDK') lacks sufficient detail to fully validate against the PR's comprehensive changes spanning the semantic attribute migration. Expand the issue description to clarify specific requirements for conversation ID handling, attribute transformations, and compatibility expectations.
✅ Passed checks (3 passed)
  • Title check (✅ Passed): The title 'fix(ai-sdk): Add conversion to opentelemetry semantic convention' accurately summarizes the main change: migrating AI SDK attributes to OpenTelemetry semantic conventions.
  • Out of Scope Changes check (✅ Passed): All changes focus on migrating LLM_* attributes to ATTR_GEN_AI_* semantic conventions, adding conversation ID support, and updating configurations; no unrelated modifications detected.
  • Description check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta):
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch nk/ai_sdk_to_conv

Comment @coderabbitai help to get the list of available commands and usage tips.

@nina-kollman nina-kollman marked this pull request as ready for review December 10, 2025 10:00
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to 4047a13 in 44 seconds.
  • Reviewed 850 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:5
  • Draft comment:
    Unused import detected: 'ASSOCATION_PROPERTIES_KEY' is imported but never used. Consider removing it to keep the test file clean.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:40
  • Draft comment:
    The test file is very comprehensive but quite long. Consider splitting tests by logical grouping (e.g., response, prompt, token, metadata, tool calls) into separate files or using nested describe blocks to improve readability and maintainability.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:424
  • Draft comment:
    There is repeated usage of JSON.stringify and JSON.parse in multiple tests. Consider extracting helper functions for common patterns (e.g. for creating test messages and verifying transformed prompts) to reduce duplication and ease future updates.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_Ll4qRIXPDJZ9JlNI

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 3429596 in 1 minute and 33 seconds.
  • Reviewed 173 lines of code in 3 files
  • Skipped 1 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/ai-semantic-conventions/package.json:38
  • Draft comment:
    Bump dependency version for @opentelemetry/semantic-conventions to ^1.38.0. Verify this upgrade is compatible with downstream consumers.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is about a dependency version change and asks the author to verify compatibility with downstream consumers. According to the rules, comments on dependency changes and requests for verification are not allowed.
2. packages/ai-semantic-conventions/src/SemanticAttributes.ts:45
  • Draft comment:
    Refactored attribute mapping: using incubating constants and adding legacy aliases. Ensure these mappings remain consistent with the updated OpenTelemetry semantic conventions.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative and asks the author to ensure consistency with OpenTelemetry semantic conventions, which violates the rules against asking the author to ensure behavior is intended or tested.
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:377
  • Draft comment:
    Updated token attribute mappings in transformPromptTokens and transformCompletionTokens. Confirm that legacy keys are correctly removed and new keys (ATTR_GEN_AI_USAGE_PROMPT_TOKENS / ATTR_GEN_AI_USAGE_OUTPUT_TOKENS) are used consistently.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
4. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:744
  • Draft comment:
    Consider adding unit tests to verify that the new semantic conversion and legacy key cleanup are working as expected.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None

Workflow ID: wflow_EpSgJ8azMSuHBtlQ

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)

454-462: Use ATTR_GEN_AI_USAGE_PROMPT_TOKENS instead of ATTR_GEN_AI_USAGE_INPUT_TOKENS in calculateTotalTokens.

Line 455 reads ATTR_GEN_AI_USAGE_INPUT_TOKENS, but transformPromptTokens() (line 384) sets ATTR_GEN_AI_USAGE_PROMPT_TOKENS. Since INPUT_TOKENS is never populated, the total tokens calculation will always fail for the input side. Change line 455 to read ATTR_GEN_AI_USAGE_PROMPT_TOKENS to match what is actually set.
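A minimal sketch of the suggested fix, with attribute keys inlined as plain strings for illustration (the real code reads the exported constants, and handles string-valued counts as the tests do):

```typescript
const ATTR_GEN_AI_USAGE_PROMPT_TOKENS = "gen_ai.usage.prompt_tokens";
const ATTR_GEN_AI_USAGE_OUTPUT_TOKENS = "gen_ai.usage.output_tokens";
const LLM_USAGE_TOTAL_TOKENS = "llm.usage.total_tokens";

// Reads the same key that transformPromptTokens() writes, so the total
// is actually computed instead of silently skipped when only the
// prompt-token key is populated.
function calculateTotalTokens(attributes: Record<string, unknown>): void {
  const input = Number(attributes[ATTR_GEN_AI_USAGE_PROMPT_TOKENS]);
  const output = Number(attributes[ATTR_GEN_AI_USAGE_OUTPUT_TOKENS]);
  if (!Number.isNaN(input) && !Number.isNaN(output)) {
    attributes[LLM_USAGE_TOTAL_TOKENS] = input + output;
  }
}
```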

🧹 Nitpick comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)

313-347: Consider extracting duplicate message processing logic.

The message processing code (lines 313-347) duplicates the logic from lines 280-304 in the AI_PROMPT_MESSAGES handling above. Both blocks iterate messages, process content, set LLM_PROMPTS attributes, and build LLM_INPUT_MESSAGES.

Consider extracting this into a shared helper function:

+const processMessagesArray = (
+  messages: Array<{ role: string; content: any }>,
+  attributes: Record<string, any>,
+): void => {
+  const inputMessages: any[] = [];
+
+  messages.forEach((msg: { role: string; content: any }, index: number) => {
+    const processedContent = processMessageContent(msg.content);
+    attributes[`${SpanAttributes.LLM_PROMPTS}.${index}.content`] = processedContent;
+    attributes[`${SpanAttributes.LLM_PROMPTS}.${index}.role`] = msg.role;
+
+    inputMessages.push({
+      role: msg.role,
+      parts: [{ type: TYPE_TEXT, content: processedContent }],
+    });
+  });
+
+  if (inputMessages.length > 0) {
+    attributes[SpanAttributes.LLM_INPUT_MESSAGES] = JSON.stringify(inputMessages);
+  }
+};

Then use it in both places:

 if (AI_PROMPT_MESSAGES in attributes) {
   try {
     // ... JSON parsing ...
     const messages = JSON.parse(jsonString);
-    const inputMessages: any[] = [];
-    messages.forEach((msg, index) => { ... });
-    if (inputMessages.length > 0) { ... }
+    processMessagesArray(messages, attributes);
     delete attributes[AI_PROMPT_MESSAGES];
   } catch { ... }
 }

 if (AI_PROMPT in attributes) {
   try {
     const promptData = JSON.parse(attributes[AI_PROMPT] as string);
     if (promptData.messages && Array.isArray(promptData.messages)) {
-      const messages = promptData.messages;
-      const inputMessages: any[] = [];
-      messages.forEach((msg, index) => { ... });
-      if (inputMessages.length > 0) { ... }
+      processMessagesArray(promptData.messages, attributes);
       delete attributes[AI_PROMPT];
     }
     // ...
   } catch { ... }
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 4047a13 and 3429596.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (3)
  • packages/ai-semantic-conventions/package.json (1 hunks)
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (9 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
packages/*/package.json

📄 CodeRabbit inference engine (CLAUDE.md)

Use workspace:* for intra-repo package dependencies in package.json

Files:

  • packages/ai-semantic-conventions/package.json
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/ai-semantic-conventions/src/SemanticAttributes.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/ai-semantic-conventions/package.json
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/ai-semantic-conventions/package.json
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/package.json : When adding a new instrumentation package, add it to the main SDK dependencies

Applied to files:

  • packages/ai-semantic-conventions/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-136)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and test
🔇 Additional comments (14)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (2)

45-100: LGTM! Good backward compatibility design.

The SpanAttributes exports and legacy aliases are well-structured. The legacy aliases (e.g., LLM_SYSTEM, LLM_REQUEST_MODEL) correctly map to the new ATTR_GEN_AI_* constants, maintaining backward compatibility while aligning with OpenTelemetry semantic conventions.


17-43: No action needed — all imported attributes are available in @opentelemetry/semantic-conventions v1.38.0.

The ATTR_GEN_AI_* constants you've imported from the /incubating path all exist in version 1.38.0. These are experimental/development conventions, and the @ts-expect-error directive is appropriate since TypeScript's type definitions may not recognize the incubating entrypoint, but the imports work correctly at runtime.

packages/ai-semantic-conventions/package.json (1)

38-38: Version bump aligns with new attribute support.

The upgrade to ^1.38.0 supports the new ATTR_GEN_AI_* constants imported in SemanticAttributes.ts. The verification requested in the previous file will confirm availability.

packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (11)

375-388: LGTM! Correctly migrates to new token attribute.

The function properly maps AI_USAGE_PROMPT_TOKENS to ATTR_GEN_AI_USAGE_PROMPT_TOKENS and cleans up legacy attributes.


390-403: LGTM! Correctly migrates to new output token attribute.

The function properly maps AI_USAGE_COMPLETION_TOKENS to ATTR_GEN_AI_USAGE_OUTPUT_TOKENS and cleans up legacy attributes.


464-486: LGTM! Provider name extraction is correct.

Lines 471-473 properly extract the base provider name (e.g., "openai" from "openai.chat") and set GEN_AI_PROVIDER_NAME, while maintaining existing vendor mapping for LLM_SYSTEM.
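The extraction described here can be sketched as follows. This is a simplified stand-in for part of transformVendor; the real function also applies a vendor mapping for LLM_SYSTEM.

```typescript
// Derives the base provider name from an AI SDK vendor string,
// e.g. "openai.chat" -> "openai". Falls back to the raw value
// when there is no dot-separated suffix.
function extractProviderName(vendor: string): string {
  return vendor.split(".")[0] || vendor;
}
```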


488-509: LGTM! Operation name mapping is correct.

The function properly derives GEN_AI_OPERATION_NAME from span names, mapping text/object generation to "chat" and tool-related spans to "execute_tool".
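Based on the mapping summarized here, the derivation might look like the sketch below. The span-name substrings ("generateText", "toolCall", etc.) are assumptions drawn from the AI SDK's telemetry naming, not copied from the implementation.

```typescript
// Maps AI SDK span names to GenAI operation names, per the behavior
// summarized above: text/object generation -> "chat",
// tool-related spans -> "execute_tool".
function deriveOperationName(spanName: string): string | undefined {
  if (
    spanName.includes("generateText") ||
    spanName.includes("streamText") ||
    spanName.includes("generateObject") ||
    spanName.includes("streamObject")
  ) {
    return "chat";
  }
  if (spanName.includes("toolCall")) {
    return "execute_tool";
  }
  return undefined;
}
```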


511-517: LGTM! Model ID transformation is correct.

The function properly maps ai.model.id to LLM_REQUEST_MODEL and cleans up the source attribute.


519-531: LGTM! Finish reason transformation is correct.

The function properly converts ai.response.finishReason to array format for OpenTelemetry standard GEN_AI_RESPONSE_FINISH_REASONS, handling both single values and existing arrays.
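The single-value-or-array handling described here can be sketched as:

```typescript
// Normalizes a finish reason to the array form required by
// gen_ai.response.finish_reasons, accepting either a single value
// or an already-array value.
function normalizeFinishReasons(value: unknown): string[] {
  return Array.isArray(value) ? value.map(String) : [String(value)];
}
```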


533-561: LGTM! Tool call attribute transformation is correct.

The function properly mirrors AI SDK tool call attributes to OpenTelemetry format. The comments correctly note that some attributes are preserved for later cleanup by transformToolCalls.


563-573: LGTM! Conversation ID extraction is correct.

The function properly prioritizes conversationId over sessionId when setting GEN_AI_CONVERSATION_ID.
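The prioritization can be sketched as below; the property names are assumed from the description above, and resolveConversationId is a hypothetical stand-in for the extraction logic.

```typescript
// Resolves gen_ai.conversation.id, preferring an explicit
// conversationId over a sessionId.
function resolveConversationId(attrs: {
  conversationId?: string;
  sessionId?: string;
}): string | undefined {
  return attrs.conversationId ?? attrs.sessionId;
}
```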


575-589: LGTM! Response metadata transformation is correct.

The function properly maps ai.response.model to LLM_RESPONSE_MODEL and ai.response.id to GEN_AI_RESPONSE_ID, cleaning up source attributes.


667-688: LGTM! Transformation pipeline is properly extended.

The new transformation functions are correctly integrated into transformLLMSpans, and the order is logical. The comment on line 684 accurately notes that transformVendor also sets GEN_AI_PROVIDER_NAME.


690-711: LGTM! Tool name cleanup is correct.

Line 708 properly deletes ai.toolCall.name after setting TRACELOOP_ENTITY_NAME, completing the deferred cleanup mentioned in transformToolCallAttributes.

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 671117b in 1 minute and 14 seconds.
  • Reviewed 56 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2068
  • Draft comment:
    Multiple tests in the provider name suite have duplicate/conflicting assertions—for example, one assert expects the transformed provider to be “openai” while the next expects “OpenAI” (and similar for azure-openai and anthropic). Please clarify the intended normalization (e.g. should it be consistently capitalized?) and update the tests accordingly.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_uwlNoD2ug5Dil12Q

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 8528f24 in 1 minute and 16 seconds.
  • Reviewed 123 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:459
  • Draft comment:
    In transformVendor, the initial assignment using the base provider name (extracted via vendor.split(".")[0]) is later overwritten with mappedVendor || vendor. Consider using the base name as the fallback (e.g. mappedVendor || providerName) to ensure standardized naming if no mapping is found.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:310
  • Draft comment:
    Several inline comments have been removed (e.g. in transformPrompts, transformPromptTokens, etc.). This cleanup is fine if the function names and structure are clear; just ensure that any needed context for future maintainers isn’t lost.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None

Workflow ID: wflow_JOzFiXyb3jM2sdfE

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (9)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (5)

560-617: ai.prompt.tools tests now expect ATTR_GEN_AI_TOOL_NAME but implementation never sets it for tools

Here you expect:

attributes[`${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.0.name`] === "getWeather";

but transformTools only writes SpanAttributes.LLM_REQUEST_FUNCTIONS.* keys and never sets any ${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.* attributes for ai.prompt.tools. As a result, these new expectations will fail.

Consider either:

  • Updating transformTools to also populate the new GenAI tool-definition attributes, e.g.:
-          if (tool.name) {
-            attributes[
-              `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.name`
-            ] = tool.name;
-          }
+          if (tool.name) {
+            attributes[
+              `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.name`
+            ] = tool.name;
+            attributes[
+              `${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.${index}.name`
+            ] = tool.name;
+          }

(and similarly for description/parameters), or

  • Keeping these tests on SpanAttributes.LLM_REQUEST_FUNCTIONS only, and adding separate tests for the OpenTelemetry ATTR_GEN_AI_TOOL_NAME behavior where it is actually set (tool-call attributes).

849-940: Prompt/completion token tests match current cleanup behavior but reveal loss of data when only ai.usage.* is present

The expectations around keeping ATTR_GEN_AI_USAGE_INPUT_TOKENS/OUTPUT_TOKENS and deleting ai.usage.* and ATTR_GEN_AI_USAGE_PROMPT_TOKENS / ATTR_GEN_AI_USAGE_COMPLETION_TOKENS are consistent with transformPromptTokens/transformCompletionTokens and will pass as long as gen_ai.usage.input_tokens / output_tokens are already set.

Note though that in scenarios where only ai.usage.promptTokens / ai.usage.completionTokens are present and no gen_ai.usage.* attributes exist, the current transformers will delete the legacy attributes without preserving their values anywhere. If that’s not desired, we should adjust the implementation and add tests for that case.


942-1001: Total-token tests mostly match implementation; missing edge case for zero tokens

The tests for calculating LLM_USAGE_TOTAL_TOKENS from ATTR_GEN_AI_USAGE_INPUT_TOKENS and *_OUTPUT_TOKENS (both numeric and string) align with calculateTotalTokens. However, with both values equal to 0, the current implementation will skip setting LLM_USAGE_TOTAL_TOKENS because it checks if (inputTokens && outputTokens); there’s no test covering that case.

You may want to add a test where both are zero and, if desired, adjust the implementation to treat zero as valid and still emit a total.
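A sketch of the suggested adjustment, using a null check so that zero counts as a valid value; totalTokens is a hypothetical stand-in for the relevant part of calculateTotalTokens:

```typescript
// Emits a total even when either count is 0, skipping only when a
// value is absent. A truthiness check like `if (input && output)`
// would incorrectly drop zero counts.
function totalTokens(
  input: number | undefined,
  output: number | undefined,
): number | undefined {
  if (input != null && output != null) {
    return input + output;
  }
  return undefined;
}
```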


1003-1083: Existing vendor tests still assert human‑readable names; see conflict with new provider‑name tests below

These tests verify that ATTR_GEN_AI_PROVIDER_NAME ends up as "OpenAI", "Azure", "Anthropic", or "", matching the old VENDOR_MAPPING behavior. On their own they are consistent with the current transformVendor.

However, the new describe("transformLLMSpans - provider name") block later in this file expects different values (base provider IDs like "openai", "azure-openai", "anthropic") for the same attribute, which cannot be simultaneously true. The semantics for ATTR_GEN_AI_PROVIDER_NAME need to be clarified and the tests made consistent.


1-2483: Prettier formatting failure

CI reports a Prettier formatting error for this file. After finalizing test changes, please run:

pnpm prettier --write packages/traceloop-sdk/test/ai-sdk-transformations.test.ts

to satisfy the formatting check.

packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (4)

219-263: transformTools still only fills LLM_REQUEST_FUNCTIONS; no Gen‑AI tool‑definition attributes

This transformer maps ai.prompt.tools exclusively into SpanAttributes.LLM_REQUEST_FUNCTIONS.*. Given the new tests that expect ${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.${index}.name etc. for tools, you likely need to extend this function to populate those Gen‑AI attributes as well.

For example:

if (tool && typeof tool === "object") {
  if (tool.name) {
    attributes[
      `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.name`
    ] = tool.name;
+   attributes[
+     `${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.${index}.name`
+   ] = tool.name;
  }
  if (tool.description) {
    attributes[
      `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.description`
    ] = tool.description;
+   attributes[
+     `${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.${index}.description`
+   ] = tool.description;
  }
  if (tool.parameters) {
    const params =
      typeof tool.parameters === "string"
        ? tool.parameters
        : JSON.stringify(tool.parameters);
    attributes[
      `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.parameters`
    ] = params;
+   attributes[
+     `${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.${index}.parameters`
+   ] = params;
  }
}

This would bring the implementation in line with the new tests and Gen‑AI semantic attributes.


454-462: Total‑token calculation ignores valid zero values

calculateTotalTokens currently does:

const inputTokens = attributes[SpanAttributes.ATTR_GEN_AI_USAGE_INPUT_TOKENS];
const outputTokens = attributes[SpanAttributes.ATTR_GEN_AI_USAGE_OUTPUT_TOKENS];

if (inputTokens && outputTokens) {
  attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] =
    Number(inputTokens) + Number(outputTokens);
}

This means that when either count is 0, the total is not computed, even though 0 is a valid token count.

A safer implementation would explicitly check for presence and numeric convertibility, for example:

const inputTokens = attributes[SpanAttributes.ATTR_GEN_AI_USAGE_INPUT_TOKENS];
const outputTokens = attributes[SpanAttributes.ATTR_GEN_AI_USAGE_OUTPUT_TOKENS];

- if (inputTokens && outputTokens) {
-   attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] =
-     Number(inputTokens) + Number(outputTokens);
- }
+ const input = Number(inputTokens);
+ const output = Number(outputTokens);
+ if (!Number.isNaN(input) && !Number.isNaN(output)) {
+   attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] = input + output;
+ }

so that 0 + 0 correctly yields a total of 0.


464-485: Provider/vendor mapping currently overwrites extracted provider name and conflicts with tests

transformVendor first extracts a base provider name:

const providerName = vendor.split(".")[0];
attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = providerName;

and then later overwrites that same attribute with mappedVendor || vendor:

attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = mappedVendor || vendor;

So the final value is always the mapped human‑readable vendor (“OpenAI”, “Azure”, “Anthropic”, etc.) or the raw input string, never the base ID ("openai", "azure-openai", "anthropic").

Given the new tests under describe("transformLLMSpans - provider name") that assert base provider IDs, and the existing tests that assert vendor display names, this implementation cannot satisfy both.

You should decide:

  • Whether ATTR_GEN_AI_PROVIDER_NAME should hold the base ID (e.g., "openai") or the display name (e.g., "OpenAI"), and
  • Whether a second attribute (e.g., ATTR_GEN_AI_SYSTEM) should carry the vendor‑mapped name if both are needed.

Then adjust transformVendor and the tests accordingly—for example, one possible approach:

if (typeof vendor === "string" && vendor.length > 0) {
  const providerName = vendor.split(".")[0];
  attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = providerName;
  for (const prefix of Object.keys(VENDOR_MAPPING)) {
    if (vendor.startsWith(prefix)) {
      mappedVendor = VENDOR_MAPPING[prefix];
      break;
    }
  }
}
-attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = mappedVendor || vendor;
+if (mappedVendor) {
+  attributes[SpanAttributes.ATTR_GEN_AI_SYSTEM] = mappedVendor;
+}

or another consistent scheme that matches your chosen semantics.
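To make the two candidate semantics concrete, here is a small illustration with pure helpers; the mapping entries are assumptions based on the tests quoted above, not the actual VENDOR_MAPPING:

```typescript
// Illustrative only: contrast base provider IDs with mapped display names.
const VENDOR_MAPPING: Record<string, string> = {
  "azure-openai": "Azure",
  openai: "OpenAI",
  anthropic: "Anthropic",
};

// Candidate 1: base provider ID, e.g. "openai.chat" -> "openai".
function baseProviderId(vendor: string): string {
  return vendor.split(".")[0];
}

// Candidate 2: human-readable display name via longest-prefix match,
// falling back to the raw vendor string when nothing matches.
function displayName(vendor: string): string {
  const prefixes = Object.keys(VENDOR_MAPPING).sort(
    (a, b) => b.length - a.length,
  );
  for (const prefix of prefixes) {
    if (vendor.startsWith(prefix)) {
      return VENDOR_MAPPING[prefix];
    }
  }
  return vendor;
}
```

Whichever scheme is chosen, only one of these values can live in ATTR_GEN_AI_PROVIDER_NAME; the other would need its own attribute.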


1-746: Prettier formatting failure

CI indicates a Prettier issue in this file as well. After finalizing logic changes, run:

pnpm prettier --write packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts

to clear the formatting check.

🧹 Nitpick comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)

667-688: Ordering of transformers in transformLLMSpans is reasonable, but consider dependency on token transforms

The call sequence (operation name/model ID → responses → prompts/tools → token transforms → provider metadata → finish reason/response metadata → total tokens → vendor mapping → conversation ID → tool‑call attributes → telemetry metadata) is coherent.

Just be aware that:

  • calculateTotalTokens depends on ATTR_GEN_AI_USAGE_INPUT_TOKENS/OUTPUT_TOKENS already being set; combined with the current transformPromptTokens/transformCompletionTokens, spans that only have ai.usage.* will not get a total.
  • transformToolCallAttributes runs before transformTelemetryMetadata, ensuring agent spans see both Gen‑AI tool attributes and raw tool args/result.

Once the token‑handling tweaks above are applied, the overall pipeline should behave as intended.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 3429596 and 671117b.

📒 Files selected for processing (3)
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (19 hunks)
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (45 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/ai-semantic-conventions/src/SemanticAttributes.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
🧠 Learnings (9)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:58:25.583Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/client/dataset/column.ts:41-47
Timestamp: 2025-08-12T13:58:25.583Z
Learning: In packages/traceloop-sdk/src/lib/client/dataset/column.ts, when the user mentioned "its working there is transform func", they were confirming that the response transformer is functioning correctly and converting snake_case API responses to camelCase, which means the Column class should access camelCase properties (createdAt, updatedAt) from the transformed ColumnResponse data.

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
  • transformLLMSpans (667-688)
🪛 GitHub Actions: CI
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts

[error] 1-1: Prettier formatting check failed. Run 'pnpm prettier --write' to fix code style issues in this file.

packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts

[error] 1-1: Prettier formatting check failed. Run 'pnpm prettier --write' to fix code style issues in this file.

🔇 Additional comments (18)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)

17-71: ATTR_GEN_AI_ imports and SpanAttributes re‑export look consistent*

The new ATTR_GEN_AI_* imports and their exposure via SpanAttributes are coherent and align with the goal of centralizing AI/LLM span attribute constants in this module. No issues spotted here.

packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (9)

13-222: Response text/object and basic prompt-message expectations align with implementation

The assertions around mapping ai.response.* and ai.prompt.messages into ATTR_GEN_AI_COMPLETION / ATTR_GEN_AI_PROMPT plus input/output messages match the current transformer logic and cover a good range of edge cases (empty strings, JSON objects, arrays, invalid JSON).


423-533: New ai.prompt (object) tests match the added transformPrompts behavior

The tests for ai.prompt with prompt and messages correctly reflect the new branch that parses promptData, fills ATTR_GEN_AI_PROMPT.* entries, and populates ATTR_GEN_AI_INPUT_MESSAGES while cleaning up the original attribute. These should pass with the current implementation.


1283-1635: Gen‑AI input/output messages tests are in line with current transformers

The new tests for ATTR_GEN_AI_INPUT_MESSAGES and ATTR_GEN_AI_OUTPUT_MESSAGES (for text, tool calls, object responses, and complex multi‑turn conversations) accurately reflect how transformResponse*, transformPrompts, and processMessageContent construct these message arrays. They provide solid coverage for varied content shapes.


1638-1914: Telemetry metadata and agent‑related tests align with transformTelemetryMetadata behavior

The expectations around turning ai.telemetry.metadata.* into association properties, stringifying non‑string values, ignoring null/undefined, and setting ATTR_GEN_AI_AGENT_NAME plus Traceloop span kind/entity name for root agent spans all look correct relative to transformTelemetryMetadata. These tests should help prevent regressions in agent attribution.


1917-1957: Tool‑span attribute tests remain compatible with added tool‑call transformers

This block still checks only the Traceloop entity input/output and span‑kind behavior for tool spans. With transformToolCallAttributes now also setting OpenTelemetry tool‑call attributes, these assertions remain valid and non‑conflicting.


2116-2144: Model‑ID mapping tests look correct

The tests for mapping ai.model.id into ATTR_GEN_AI_REQUEST_MODEL and deleting the original attribute mirror transformModelId’s behavior and should pass as written.


2146-2204: Finish‑reason tests correctly exercise new array‑based attribute

The finish‑reason tests match transformFinishReason (string vs array, multiple allowed values, and the no‑attribute case). They appear consistent and comprehensive.


2206-2286: Tool‑call attribute tests align with transformToolCallAttributes

These tests for mapping ai.toolCall.* into ATTR_GEN_AI_TOOL_NAME / ATTR_GEN_AI_TOOL_CALL_* while selectively deleting only ai.toolCall.id are consistent with the new transformer and should pass. Good coverage of presence/absence of ID and the no‑tool‑call case.


2289-2341: Conversation‑ID tests match transformConversationId’s precedence rules

The expectations around preferring conversationId over sessionId, falling back to sessionId, and doing nothing when neither is present are all aligned with the helper’s current behavior.
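That precedence rule can be sketched as follows. The helper name and the shape of the metadata argument are assumptions for illustration; the actual transformer reads these values from span attributes:

```typescript
// Hypothetical sketch of the precedence rule: conversationId wins over
// sessionId; when neither is present, no attribute is emitted.
const GEN_AI_CONVERSATION_ID = "gen_ai.conversation.id";

function resolveConversationId(
  attributes: Record<string, unknown>,
  metadata: { conversationId?: string; sessionId?: string },
): void {
  const id = metadata.conversationId ?? metadata.sessionId;
  if (id !== undefined) {
    attributes[GEN_AI_CONVERSATION_ID] = id;
  }
}
```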

packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (8)

74-164: Response text/object/tool‑calls transformers correctly emit Gen‑AI completion and output messages

The updated transformResponseText, transformResponseObject, and transformResponseToolCalls functions now write to ATTR_GEN_AI_COMPLETION.* and ATTR_GEN_AI_OUTPUT_MESSAGES in addition to cleaning up the original AI SDK attributes. The logic for building ROLE_ASSISTANT output messages and tool‑call parts is coherent and matches the expectations in the tests.


166-217: processMessageContent behavior is sensible for mixed content and matches tests

The function’s handling of arrays (filtering type === "text"), objects, and JSON‑stringified arrays is consistent with how the prompt/message tests are written. It falls back to stringifying complex/mixed content, which is acceptable given the need to preserve non‑text payloads.


265-373: Prompt transformations correctly emit Gen‑AI prompt and input‑message attributes

Both branches of transformPrompts (ai.prompt.messages and ai.prompt with messages or prompt) now:

  • Use processMessageContent to normalize content.
  • Write to ATTR_GEN_AI_PROMPT.{i}.content / .role.
  • Build OpenTelemetry gen_ai.input.messages under ATTR_GEN_AI_INPUT_MESSAGES.
  • Remove the original AI SDK attributes once processed.

This matches the updated tests and looks solid for common and complex prompt shapes.


488-589: New helper transformers (operation name, model ID, finish reason, tool‑call attributes, conversation ID, response metadata) look correct in isolation

  • transformOperationName sensibly maps generate/stream spans to "chat" and tool spans to "execute_tool".
  • transformModelId and transformResponseMetadata cleanly move ai.model.id and ai.response.* into the new Gen‑AI attributes and delete the originals.
  • transformFinishReason normalizes ai.response.finishReason to the array form expected by the semantic conventions.
  • transformToolCallAttributes mirrors ai.toolCall.* into the OTel attributes while keeping args/result around for later Traceloop handling.
  • transformConversationId applies a simple, reasonable precedence (conversationId over sessionId).

These helpers match the new tests (aside from the response‑model test still asserting LLM_RESPONSE_MODEL) and slot well into the main pipeline.


591-665: Telemetry‑metadata transform cleanly adds Gen‑AI agent attributes and reuses Gen‑AI messages for entity I/O

transformTelemetryMetadata now:

  • Uses getAgentNameFromAttributes and sets ATTR_GEN_AI_AGENT_NAME when present.
  • For top‑level spans (agent name span or top AI spans), sets Traceloop span kind and entity name to AGENT.
  • Reuses ATTR_GEN_AI_INPUT_MESSAGES / ATTR_GEN_AI_OUTPUT_MESSAGES (or tool args/result) to populate TRACELOOP_ENTITY_INPUT/OUTPUT.
  • Converts all ai.telemetry.metadata.* values to string association properties and deletes the originals.

This aligns with the telemetry metadata tests and keeps Gen‑AI and Traceloop concepts consistent.


690-709: Tool‑span cleanup remains compatible with new tool‑call attributes

transformToolCalls still moves ai.toolCall.args/result into Traceloop entity input/output and deletes those raw attributes, while transformToolCallAttributes has already copied the values into the Gen‑AI tool‑call attributes. Deleting ai.toolCall.name here is fine, since the OTel tool name has already been set earlier.


724-746: Span‑name transformation logic unchanged and compatible with new operation‑name helper

transformAiSdkSpanNames continues to rewrite tool and AI SDK span names based on HANDLED_SPAN_NAMES and agent metadata. With transformOperationName now handling semantic gen_ai.operation.name, there’s a clear separation between human‑readable span names and semantic attributes. No issues here.


740-746: End‑to‑end span attribute transformation wiring looks correct

transformAiSdkSpanAttributes now simply orchestrates transformLLMSpans followed by transformToolCalls for spans from the "ai" instrumentation scope. Given the new helpers and tests, this is an appropriate composition point.

Comment thread packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
Comment thread packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
Comment thread packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
Comment thread packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (2)

455-475: Provider name fallback uses full vendor string instead of extracted base name.

Lines 461-462 extract the base provider name (e.g., "openai" from "openai.chat"), but line 472 falls back to the full vendor string when no mapping is found:

const providerName = vendor.split(".")[0];  // "openai"
attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = providerName; // Set but overwritten
// ... mapping logic ...
attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = mappedVendor || vendor; // Uses full "openai.chat" as fallback

For unmapped providers (e.g., "custom.service"), this produces ATTR_GEN_AI_PROVIDER_NAME = "custom.service" instead of the extracted base "custom". The OpenTelemetry semantic convention expects simple provider identifiers.

Additionally, line 462 is redundant since it's immediately overwritten by line 472.

 const transformVendor = (attributes: Record<string, any>): void => {
   if (AI_MODEL_PROVIDER in attributes) {
     const vendor = attributes[AI_MODEL_PROVIDER];
 
     let mappedVendor = null;
     if (typeof vendor === "string" && vendor.length > 0) {
       const providerName = vendor.split(".")[0];
-      attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = providerName;
-
+
       for (const prefix of Object.keys(VENDOR_MAPPING)) {
         if (vendor.startsWith(prefix)) {
           mappedVendor = VENDOR_MAPPING[prefix];
           break;
         }
       }
+
+      attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = mappedVendor || providerName;
+    } else {
+      attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = vendor;
     }
 
-    attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = mappedVendor || vendor;
     delete attributes[AI_MODEL_PROVIDER];
   }
 };

1-1: Fix Prettier formatting issues.

The CI pipeline detected code style issues. Please run Prettier to format the file:

npx prettier --write packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧹 Nitpick comments (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (2)

500-572: Consider moving inline constants to file-level for consistency.

Functions transformModelId, transformFinishReason, and transformResponseMetadata define constants inline:

  • Line 501: AI_MODEL_ID
  • Line 509: AI_RESPONSE_FINISH_REASON
  • Lines 559-560: AI_RESPONSE_MODEL, AI_RESPONSE_ID

For consistency with other constants defined at lines 26-42 (e.g., AI_RESPONSE_TEXT, AI_PROMPT_MESSAGES), consider moving these to the top of the file.


313-343: Consider extracting message processing logic to reduce nesting.

The AI_PROMPT handling for messages (lines 313-343) duplicates logic from AI_PROMPT_MESSAGES handling (lines 266-308). Both contain similar forEach loops processing messages.

Consider extracting the message processing logic into a helper function to reduce duplication and improve readability:

const processMessages = (messages: any[], attributes: Record<string, any>) => {
  const inputMessages: any[] = [];
  messages.forEach((msg: { role: string; content: any }, index: number) => {
    const processedContent = processMessageContent(msg.content);
    attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.${index}.content`] = processedContent;
    attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.${index}.role`] = msg.role;
    
    inputMessages.push({
      role: msg.role,
      parts: [{ type: TYPE_TEXT, content: processedContent }],
    });
  });
  
  if (inputMessages.length > 0) {
    attributes[SpanAttributes.ATTR_GEN_AI_INPUT_MESSAGES] = JSON.stringify(inputMessages);
  }
};

Then use in both locations:

if (AI_PROMPT_MESSAGES in attributes) {
  const messages = JSON.parse(attributes[AI_PROMPT_MESSAGES]);
  processMessages(messages, attributes);
  delete attributes[AI_PROMPT_MESSAGES];
}

if (AI_PROMPT in attributes) {
  const promptData = JSON.parse(attributes[AI_PROMPT]);
  if (promptData.messages && Array.isArray(promptData.messages)) {
    processMessages(promptData.messages, attributes);
    delete attributes[AI_PROMPT];
  }
  // ... rest of logic
}
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 671117b and 8528f24.

📒 Files selected for processing (1)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (19 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
🪛 GitHub Actions: CI
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts

[warning] 1-1: Code style issues detected by Prettier.

Comment thread: packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed ff1d6a8 in 1 minute and 26 seconds.
  • Reviewed 1131 lines of code in 5 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2110
  • Draft comment:
    The provider name tests include duplicate assertions that expect the same attribute to equal both a lowercase value (e.g. 'openai') and a mapped standardized value (e.g. 'OpenAI'). This is contradictory. Please clarify the intended final value and remove the redundant/conflicting assertion.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:427
  • Draft comment:
    Overall, the tests are comprehensive and well‐structured, covering a wide range of transformation scenarios. Consider adding brief inline comments to each describe block summarizing the expected transformation behavior to improve future maintainability.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None
3. packages/instrumentation-bedrock/src/instrumentation.ts:370
  • Draft comment:
    Typo in the comment: "The format is removing when we are setting span attribute" does not read clearly. Consider rephrasing it to something like "The format is removed when setting the span attribute" or "Formatting is removed when setting the span attribute" for clarity.
  • Reason this comment was not posted:
    Comment was on unchanged code.

Workflow ID: wflow_tzoMbBDFfyez6jBe

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/instrumentation-together/test/instrumentation.test.ts (1)

563-590: Contradictory assertions on the same attribute index.

Lines 563-590 assert that ATTR_GEN_AI_COMPLETION.0.function_call.name equals both "get_current_weather" (line 565) and "get_tomorrow_weather" (line 581). A single attribute cannot have two different values simultaneously—the second assertion will always fail since it tests the same key.

If the intent is to verify two tool calls, the second should reference index 1:

     assert.strictEqual(
       completionSpan.attributes[
-        `${SpanAttributes.ATTR_GEN_AI_COMPLETION}.0.function_call.name`
+        `${SpanAttributes.ATTR_GEN_AI_COMPLETION}.1.function_call.name`
       ],
       "get_tomorrow_weather",
     );
     assert.deepEqual(
       JSON.parse(
         completionSpan.attributes[
-          `${SpanAttributes.ATTR_GEN_AI_COMPLETION}.0.function_call.arguments`
+          `${SpanAttributes.ATTR_GEN_AI_COMPLETION}.1.function_call.arguments`
         ]! as string,
       ),
       { location: "Chicago, IL", unit: "fahrenheit" },
     );
packages/instrumentation-bedrock/src/instrumentation.ts (1)

159-164: Inconsistent attribute keys between span start and end.

_startSpan sets LLM_SYSTEM, LLM_REQUEST_MODEL, and LLM_RESPONSE_MODEL (lines 160-162), but _endSpan checks for ATTR_GEN_AI_SYSTEM (line 201) and reads ATTR_GEN_AI_RESPONSE_MODEL (line 203). Since these are different string values, the condition at line 201 will always be false, causing token/completion attributes to never be set.

       attributes = {
-        [SpanAttributes.LLM_SYSTEM]: "AWS",
-        [SpanAttributes.LLM_REQUEST_MODEL]: model,
-        [SpanAttributes.LLM_RESPONSE_MODEL]: input.modelId,
+        [SpanAttributes.ATTR_GEN_AI_SYSTEM]: "AWS",
+        [SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL]: model,
+        [SpanAttributes.ATTR_GEN_AI_RESPONSE_MODEL]: input.modelId,
         [SpanAttributes.LLM_REQUEST_TYPE]: LLMRequestTypeValues.COMPLETION,
       };
♻️ Duplicate comments (4)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (3)

2110-2171: Provider-name tests contain contradictory assertions.

Each test asserts that ATTR_GEN_AI_PROVIDER_NAME equals both a lowercase ID (e.g., "openai") and a capitalized vendor name (e.g., "OpenAI") for the same attribute. This is impossible—an attribute can only hold one value.

Looking at transformVendor in the implementation (lines 459-480), it first sets the provider name to the prefix (providerName), then overwrites it with the mapped vendor or original value. Only the final value will exist.

Remove the duplicate assertions and keep only the expected final value (the mapped vendor name).


2431-2449: Test uses non-existent LLM_RESPONSE_MODEL attribute.

Line 2440 asserts SpanAttributes.LLM_RESPONSE_MODEL, but this key doesn't exist in the SpanAttributes object (see packages/ai-semantic-conventions/src/SemanticAttributes.ts). The correct key is ATTR_GEN_AI_RESPONSE_MODEL.

       assert.strictEqual(
-        attributes[SpanAttributes.LLM_RESPONSE_MODEL],
+        attributes[SpanAttributes.ATTR_GEN_AI_RESPONSE_MODEL],
         "gpt-4o",
       );

2469-2543: Complete transformation test has contradictory provider assertions.

Lines 2506-2513 assert ATTR_GEN_AI_PROVIDER_NAME equals both "OpenAI" and "openai", which is impossible.

packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)

373-397: Token transforms delete canonical attributes, causing data loss.

transformPromptTokens (lines 373-384):

  1. Copies ai.usage.promptTokens → ATTR_GEN_AI_USAGE_PROMPT_TOKENS (line 378)
  2. Deletes both the source and the target (lines 382-383)

transformCompletionTokens (lines 386-397):

  1. Copies ai.usage.completionTokens → ATTR_GEN_AI_USAGE_OUTPUT_TOKENS (line 391)
  2. Deletes ATTR_GEN_AI_USAGE_COMPLETION_TOKENS (line 396) — this is a different key, so OUTPUT_TOKENS survives

Additionally, calculateTotalTokens (lines 448-456) reads ATTR_GEN_AI_USAGE_INPUT_TOKENS, but transformPromptTokens writes to ATTR_GEN_AI_USAGE_PROMPT_TOKENS (different key). The prompt token value is never available for total calculation.

 const transformPromptTokens = (attributes: Record<string, any>): void => {
   if (
-    !(SpanAttributes.ATTR_GEN_AI_USAGE_PROMPT_TOKENS in attributes) &&
+    !(SpanAttributes.ATTR_GEN_AI_USAGE_INPUT_TOKENS in attributes) &&
     AI_USAGE_PROMPT_TOKENS in attributes
   ) {
-    attributes[SpanAttributes.ATTR_GEN_AI_USAGE_PROMPT_TOKENS] =
+    attributes[SpanAttributes.ATTR_GEN_AI_USAGE_INPUT_TOKENS] =
       attributes[AI_USAGE_PROMPT_TOKENS];
   }

   delete attributes[AI_USAGE_PROMPT_TOKENS];
-  delete attributes[SpanAttributes.ATTR_GEN_AI_USAGE_PROMPT_TOKENS];
 };
🧹 Nitpick comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)

459-480: Provider name is set twice, second assignment overwrites first.

Line 466 sets ATTR_GEN_AI_PROVIDER_NAME to the raw provider prefix (e.g., "openai"), then lines 476-477 overwrite it with either the mapped vendor name (e.g., "OpenAI") or the original provider value.

The first assignment at line 466 is always overwritten and serves no purpose. If you intend to keep both a raw provider ID and a human-readable vendor name, use separate attributes.

 const transformVendor = (attributes: Record<string, any>): void => {
   if (AI_MODEL_PROVIDER in attributes) {
     const vendor = attributes[AI_MODEL_PROVIDER];

     let mappedVendor = null;
     if (typeof vendor === "string" && vendor.length > 0) {
-      const providerName = vendor.split(".")[0];
-      attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = providerName;
-
       for (const prefix of Object.keys(VENDOR_MAPPING)) {
         if (vendor.startsWith(prefix)) {
           mappedVendor = VENDOR_MAPPING[prefix];
           break;
         }
       }
     }

     attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] =
       mappedVendor || vendor;
     delete attributes[AI_MODEL_PROVIDER];
   }
 };
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 8528f24 and ff1d6a8.

📒 Files selected for processing (5)
  • packages/instrumentation-anthropic/src/instrumentation.ts (5 hunks)
  • packages/instrumentation-bedrock/src/instrumentation.ts (15 hunks)
  • packages/instrumentation-together/test/instrumentation.test.ts (12 hunks)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (19 hunks)
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (45 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
packages/instrumentation-*/**

📄 CodeRabbit inference engine (CLAUDE.md)

Place each provider integration in its own package under packages/instrumentation-[provider]/

Files:

  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
packages/instrumentation-*/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/instrumentation-*/**/*.{ts,tsx}: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Files:

  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧠 Learnings (12)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls

Applied to files:

  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:58:25.583Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/client/dataset/column.ts:41-47
Timestamp: 2025-08-12T13:58:25.583Z
Learning: In packages/traceloop-sdk/src/lib/client/dataset/column.ts, when the user mentioned "its working there is transform func", they were confirming that the response transformer is functioning correctly and converting snake_case API responses to camelCase, which means the Column class should access camelCase properties (createdAt, updatedAt) from the transformed ColumnResponse data.

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately

Applied to files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Applied to files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
🧬 Code graph analysis (3)
packages/instrumentation-bedrock/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
  • transformLLMSpans (656-677)
packages/instrumentation-together/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
🪛 GitHub Actions: CI
packages/instrumentation-bedrock/src/instrumentation.ts

[warning] 1-1: Prettier formatting issues detected in this file. Run 'pnpm prettier --write' to fix code style issues.

packages/instrumentation-together/test/instrumentation.test.ts

[warning] 1-1: Prettier formatting issues detected in this file. Run 'pnpm prettier --write' to fix code style issues.

packages/instrumentation-anthropic/src/instrumentation.ts

[warning] 1-1: Prettier formatting issues detected in this file. Run 'pnpm prettier --write' to fix code style issues.

🔇 Additional comments (38)
packages/instrumentation-together/test/instrumentation.test.ts (6)

126-186: LGTM – attribute key migration for function calling test.

The attribute key updates (ATTR_GEN_AI_PROMPT, ATTR_GEN_AI_TOOL_NAME, ATTR_GEN_AI_COMPLETION, ATTR_GEN_AI_USAGE_*) align with the new OpenTelemetry GenAI semantic conventions. The assertions remain structurally correct.


206-224: LGTM – chat test attribute migration.

Attribute references updated consistently to the new ATTR_GEN_AI_* keys.


256-280: LGTM – streaming chat test attribute migration.

Correctly migrated to new semantic attribute keys for prompts, completions, and token usage.


302-331: LGTM – completion test attribute migration.

Attribute keys properly updated throughout the completion assertions.


358-393: LGTM – streaming completion test attribute migration.

Attribute key migration is consistent with other tests.


432-493: LGTM – tool calling test attribute migration.

All attribute references updated to the new ATTR_GEN_AI_* naming convention.

packages/instrumentation-bedrock/src/instrumentation.ts (7)

201-207: Dependent on fixing the attribute key mismatch above.

Once the _startSpan keys are corrected to use ATTR_GEN_AI_*, this conditional check and model reassignment will work as intended.


219-241: LGTM – streaming response token usage migration.

Token attributes correctly use ATTR_GEN_AI_USAGE_PROMPT_TOKENS and ATTR_GEN_AI_USAGE_COMPLETION_TOKENS.


298-316: LGTM – ai21 vendor request attributes.

Request and prompt attributes properly migrated to ATTR_GEN_AI_* keys.


318-336: LGTM – amazon vendor request attributes.

Attribute key migration is consistent.


337-377: LGTM – anthropic vendor request attributes.

Both messages API and legacy prompt format handling properly use ATTR_GEN_AI_PROMPT keys.


378-410: LGTM – cohere and meta vendor request attributes.

Attribute migrations are consistent with the new naming scheme.


422-544: LGTM – response attribute migrations for all vendors.

Completion and token usage attributes consistently use ATTR_GEN_AI_COMPLETION and ATTR_GEN_AI_USAGE_* keys across all vendor branches.

packages/instrumentation-anthropic/src/instrumentation.ts (3)

206-230: LGTM – system and request attribute migration.

The attribute keys are correctly migrated to ATTR_GEN_AI_SYSTEM, ATTR_GEN_AI_REQUEST_MODEL, ATTR_GEN_AI_REQUEST_TEMPERATURE, ATTR_GEN_AI_REQUEST_TOP_P, and ATTR_GEN_AI_REQUEST_MAX_TOKENS.


241-272: LGTM – prompt attribute migration.

System prompt and message handling correctly use ATTR_GEN_AI_PROMPT keys with proper indexing.


480-522: LGTM – response and completion attribute migration.

Response model, token usage, finish reason, and completion content attributes are all correctly migrated to ATTR_GEN_AI_* keys.

packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (15)

22-61: LGTM – response text transformation tests.

Tests correctly verify the new ATTR_GEN_AI_COMPLETION attribute keys and cleanup of original attributes.


64-95: LGTM – response object transformation tests.

Attribute key migrations are correctly tested.


97-183: LGTM – response tool calls transformation tests.

Tests properly verify tool call attributes under ATTR_GEN_AI_COMPLETION.


185-424: LGTM – prompt messages transformation tests.

Comprehensive coverage of prompt transformation including edge cases (object content, arrays, JSON strings, escape sequences).


426-561: LGTM – single prompt and messages array transformation tests.

Tests cover both legacy prompt format and new messages array format with ATTR_GEN_AI_INPUT_MESSAGES.


563-851: LGTM – tools transformation tests.

Tests properly verify tool attributes including edge cases (missing properties, string format, mixed formats).


853-952: LGTM – token transformation tests.

Tests correctly verify that legacy tokens are cleaned up and ATTR_GEN_AI_USAGE_INPUT_TOKENS/OUTPUT_TOKENS are preserved.


954-1014: LGTM – total tokens calculation tests.

Tests verify calculation from ATTR_GEN_AI_USAGE_INPUT_TOKENS and OUTPUT_TOKENS.


1016-1111: LGTM – vendor transformation tests.

Tests cover various provider mappings and edge cases.


1329-1682: LGTM – gen_ai input/output messages tests.

Comprehensive tests for the new OpenTelemetry standard message format.


1684-1963: LGTM – telemetry metadata and agent transformation tests.

Tests properly verify association properties and agent attribute handling.


2008-2107: LGTM – operation name transformation tests.

Tests cover all operation name mappings (chat, execute_tool).


2174-2262: LGTM – model id and finish reason transformation tests.

Tests correctly verify ATTR_GEN_AI_REQUEST_MODEL and ATTR_GEN_AI_RESPONSE_FINISH_REASONS.


2264-2345: LGTM – tool call attributes transformation tests.

Tests verify OpenTelemetry standard tool call attributes.


2347-2400: LGTM – conversation id transformation tests.

Tests verify precedence of conversationId over sessionId.

packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (7)

74-96: LGTM – transformResponseText migration.

Correctly uses ATTR_GEN_AI_COMPLETION and ATTR_GEN_AI_OUTPUT_MESSAGES.


98-120: LGTM – transformResponseObject migration.

Attribute keys properly migrated.


122-167: LGTM – transformResponseToolCalls migration.

Tool call attributes correctly use ATTR_GEN_AI_COMPLETION paths.


268-371: LGTM – transformPrompts migration.

Prompt attributes correctly use ATTR_GEN_AI_PROMPT and ATTR_GEN_AI_INPUT_MESSAGES.


482-576: LGTM – new transformation functions.

transformOperationName, transformModelId, transformFinishReason, transformToolCallAttributes, transformConversationId, and transformResponseMetadata are well-structured and handle their transformations correctly.


578-654: LGTM – transformTelemetryMetadata update.

Correctly references ATTR_GEN_AI_AGENT_NAME and the new input/output messages attributes.


656-677: LGTM – transformLLMSpans orchestration.

New transformers are properly integrated into the transformation pipeline.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/instrumentation-vertexai/tests/gemini.test.ts (1)

87-108: Add missing SpanAttributes import from @traceloop/ai-semantic-conventions.

The test uses SpanAttributes.ATTR_GEN_AI_SYSTEM, SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL, SpanAttributes.ATTR_GEN_AI_REQUEST_TOP_P, SpanAttributes.ATTR_GEN_AI_RESPONSE_MODEL, and SpanAttributes.ATTR_GEN_AI_REQUEST_MAX_TOKENS (lines 87, 90, 94, 100, 149, 152, 156, 160, 166) but SpanAttributes is not imported. Add the following import at the top of the file:

import { SpanAttributes } from "@traceloop/ai-semantic-conventions";
🧹 Nitpick comments (16)
packages/instrumentation-cohere/tests/rerank.test.ts (1)

130-139: Minor: duplicate model assertion and completion attribute checks

The completion assertions using ${SpanAttributes.ATTR_GEN_AI_COMPLETION}.0.relevanceScore and .0.content correctly reflect the response shape and new Gen‑AI completion schema—these look solid.

You also assert ATTR_GEN_AI_REQUEST_MODEL twice (lines 111 and 130) with the same expectation. That duplication is harmless but could be removed to keep the test focused.

packages/instrumentation-cohere/tests/chat.test.ts (2)

138-141: Duplicate assertion for ATTR_GEN_AI_REQUEST_MODEL.

This assertion is already performed on lines 104-107 with the same expected value. Consider removing this duplicate to reduce test noise.

-    assert.strictEqual(
-      attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-      params?.model ?? "command",
-    );

238-241: Same duplicate assertion as in the first test.

This ATTR_GEN_AI_REQUEST_MODEL assertion duplicates lines 204-207. Consider removing for consistency with the suggested fix in the first test.

-    assert.strictEqual(
-      attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-      params?.model ?? "command",
-    );
packages/instrumentation-anthropic/test/instrumentation.test.ts (1)

106-153: Gen‑AI attribute migration and token accounting look correct

The assertions on ATTR_GEN_AI_REQUEST_MODEL, ATTR_GEN_AI_RESPONSE_MODEL, ATTR_GEN_AI_REQUEST_MAX_TOKENS, prompt/completion paths, and prompt/completion token usage correctly target the new Gen‑AI attributes while still validating LLM_USAGE_TOTAL_TOKENS as the sum. This is a good, backwards‑compatible check that the new fields are populated consistently.

You can optionally simplify chatSpan.attributes[${SpanAttributes.ATTR_GEN_AI_...}] to chatSpan.attributes[SpanAttributes.ATTR_GEN_AI_...] since the constants are already strings; same comment applies to similar uses below. Based on learnings, this also keeps tests aligned with the central semantic-attribute definitions.

packages/instrumentation-bedrock/tests/cohere.test.ts (1)

184-290: Streaming Bedrock Cohere test validates GEN AI usage tokens and content with new keys

Streaming assertions now use ATTR_GEN_AI_* for system, request parameters, prompts, completions, and prompt/completion token usage (including the amazon-bedrock-invocationMetrics path), while keeping total tokens on LLM_USAGE_TOTAL_TOKENS. This gives good coverage of the new naming without changing behavior.

You might optionally drop one of the duplicate ATTR_GEN_AI_REQUEST_MODEL assertions to reduce noise, but it’s not functionally necessary.

packages/instrumentation-openai/test/instrumentation.test.ts (1)

752-920: Skipped image edit/variation and token-coverage tests updated to new GEN AI usage/prompt/completion keys

All skipped image tests now reference ATTR_GEN_AI_SYSTEM, ATTR_GEN_AI_PROMPT, ATTR_GEN_AI_COMPLETION, and ATTR_GEN_AI_USAGE_COMPLETION_TOKENS (plus total tokens) instead of legacy LLM keys. This prepares them for future activation without further changes.

If you plan to unskip these later, consider centralizing the expected token counts (e.g., 1056, 4160) in constants to make future spec or pricing changes easier to adapt.

packages/instrumentation-bedrock/tests/anthropic.test.ts (2)

166-169: Remove duplicate assertion for ATTR_GEN_AI_REQUEST_MODEL.

This assertion is identical to the one on lines 141-144. The duplicate check adds no value and clutters the test.

-    assert.strictEqual(
-      attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-      model,
-    );

249-252: Remove duplicate assertion for ATTR_GEN_AI_REQUEST_MODEL.

Same issue as in the non-streaming test—this assertion duplicates lines 224-227.

-        assert.strictEqual(
-          attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-          model,
-        );
packages/instrumentation-bedrock/tests/amazon.test.ts (2)

167-170: Remove duplicate assertion for ATTR_GEN_AI_REQUEST_MODEL.

This assertion is identical to the one on lines 143-146.

-    assert.strictEqual(
-      attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-      model,
-    );

261-264: Remove duplicate assertion for ATTR_GEN_AI_REQUEST_MODEL.

Same duplicate pattern in the streaming test—this repeats lines 237-240.

-        assert.strictEqual(
-          attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-          model,
-        );
packages/instrumentation-openai/src/image-wrappers.ts (3)

137-173: Centralize remaining gen_ai.* request attributes in SemanticAttributes

The new usages of ATTR_GEN_AI_REQUEST_MODEL and ATTR_GEN_AI_PROMPT look correct, but in this function (and similar ones below) you still hardcode AI attribute keys like:

  • gen_ai.request.image.size
  • gen_ai.request.image.quality
  • gen_ai.request.image.style
  • gen_ai.request.image.count

For consistency with the rest of the Gen AI semconv migration, and to follow the repo convention of defining AI/LLM span attributes in @traceloop/ai-semantic-conventions, consider adding constants for these keys to SpanAttributes and using them here instead of string literals. The same applies to later occurrences in this file, e.g., image count/size and gen_ai.response.revised_prompt / gen_ai.request.type.

Based on learnings, these AI/LLM attributes should live in SemanticAttributes.ts.
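A sketch of what such constants could look like; the constant names are illustrative suggestions, not the package's current API:

```typescript
// Hypothetical additions to SpanAttributes in ai-semantic-conventions,
// replacing the string literals currently hardcoded in image-wrappers.ts.
const ImageSpanAttributes = {
  ATTR_GEN_AI_REQUEST_IMAGE_SIZE: "gen_ai.request.image.size",
  ATTR_GEN_AI_REQUEST_IMAGE_QUALITY: "gen_ai.request.image.quality",
  ATTR_GEN_AI_REQUEST_IMAGE_STYLE: "gen_ai.request.image.style",
  ATTR_GEN_AI_REQUEST_IMAGE_COUNT: "gen_ai.request.image.count",
  ATTR_GEN_AI_RESPONSE_REVISED_PROMPT: "gen_ai.response.revised_prompt",
} as const;

// Call sites then reference the constant instead of a string literal:
const attributes: Record<string, string | number> = {};
attributes[ImageSpanAttributes.ATTR_GEN_AI_REQUEST_IMAGE_SIZE] = "1024x1024";
attributes[ImageSpanAttributes.ATTR_GEN_AI_REQUEST_IMAGE_COUNT] = 1;
console.log(attributes["gen_ai.request.image.size"]);
```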


284-390: Token usage + completion attributes are correct; consider also populating input/output aliases

The migration to:

  • SpanAttributes.ATTR_GEN_AI_USAGE_COMPLETION_TOKENS
  • SpanAttributes.ATTR_GEN_AI_USAGE_PROMPT_TOKENS
  • SpanAttributes.ATTR_GEN_AI_COMPLETION (with .0.content / .0.role)

looks consistent with the new Gen AI semconv and preserves SpanAttributes.LLM_USAGE_TOTAL_TOKENS for backward compatibility.

If you want full alignment with the newer token fields exposed in SpanAttributes, it might be worth also:

  • Mapping prompt tokens to SpanAttributes.ATTR_GEN_AI_USAGE_INPUT_TOKENS
  • Mapping completion tokens to SpanAttributes.ATTR_GEN_AI_USAGE_OUTPUT_TOKENS

so downstream consumers that key off input_tokens / output_tokens don’t have to special-case images vs text. This can reuse the same estimatedPromptTokens / completionTokens already computed here.

Please double-check against the current OpenTelemetry Gen AI semantic conventions for whether input_tokens / output_tokens are expected aliases for prompt/completion in your version.
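A minimal sketch of emitting both key sets from the same computed counts; the attribute key strings follow the Gen AI naming used in this PR, but the helper itself is hypothetical:

```typescript
// Sketch: populate the legacy prompt/completion keys and the newer
// input/output aliases from the same token counts, so downstream consumers
// keyed off input_tokens / output_tokens need no image-specific handling.
function setTokenUsage(
  attributes: Record<string, number>,
  estimatedPromptTokens: number,
  completionTokens: number,
): void {
  attributes["gen_ai.usage.prompt_tokens"] = estimatedPromptTokens;
  attributes["gen_ai.usage.completion_tokens"] = completionTokens;
  // Aliases for consumers expecting the newer field names:
  attributes["gen_ai.usage.input_tokens"] = estimatedPromptTokens;
  attributes["gen_ai.usage.output_tokens"] = completionTokens;
  attributes["llm.usage.total_tokens"] =
    estimatedPromptTokens + completionTokens;
}

const attrs: Record<string, number> = {};
setTokenUsage(attrs, 12, 34);
console.log(attrs["gen_ai.usage.input_tokens"], attrs["llm.usage.total_tokens"]);
```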


399-585: Span gen_ai.system attribute looks good; consider also setting provider/operation attributes

Using SpanAttributes.ATTR_GEN_AI_SYSTEM with "OpenAI" on the image spans (wrapImageGeneration, wrapImageEdit, wrapImageVariation) is aligned with the new semconv and the repo’s Gen AI attribute strategy.

Two optional improvements:

  1. Provider/operation attributes
    Since SpanAttributes also exposes ATTR_GEN_AI_PROVIDER_NAME and ATTR_GEN_AI_OPERATION_NAME, consider setting them here with values like:

    • provider: "openai"
    • operation: "images.generate", "images.edit", "images.createVariation"

    This would make these spans line up better with the rest of the SDK’s Gen AI telemetry.
  2. Consistent gen_ai.request.type handling
    You still use hardcoded gen_ai.request.type strings here; if this field is part of your AI/LLM schema rather than purely local, it may be worth promoting them to SpanAttributes constants (same remark as for other gen_ai.* strings in this file).

Please verify against your existing instrumentations (e.g., text/chat wrappers) that the provider/operation values and casing ("OpenAI" vs "openai") are consistent across the codebase.
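A sketch of improvement 1; the operation names are the values suggested above, not confirmed semconv strings, and the helper is hypothetical:

```typescript
// Sketch: set provider/operation alongside gen_ai.system on image spans.
type ImageOperation =
  | "images.generate"
  | "images.edit"
  | "images.createVariation";

function imageSpanAttributes(operation: ImageOperation): Record<string, string> {
  return {
    "gen_ai.system": "OpenAI",
    "gen_ai.provider.name": "openai",
    "gen_ai.operation.name": operation,
  };
}

console.log(imageSpanAttributes("images.edit")["gen_ai.operation.name"]);
```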

packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (3)

451-537: Prompt + input-messages tests look consistent with new Gen AI schema

The new ai.prompt tests with messages arrays correctly assert both ATTR_GEN_AI_PROMPT and ATTR_GEN_AI_INPUT_MESSAGES, and they keep using SpanAttributes constants instead of raw strings, which aligns with the ai-semantic-conventions contract. No issues here from a behavior or maintainability standpoint.

If you want to go further, you could add a small negative test where promptData.messages is malformed (e.g., not an array) to assert we don’t accidentally emit a broken gen_ai.input.messages. Based on learnings, this keeps SDK behavior predictable around partially invalid AI SDK payloads.
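A sketch of that negative test; `transformPrompt` is a stand-in for the SDK's real transformation logic, not its actual implementation:

```typescript
// Sketch: only emit gen_ai.input.messages when ai.prompt parses to an
// object whose messages field is actually an array.
function transformPrompt(aiPrompt: string): Record<string, string> {
  const out: Record<string, string> = {};
  try {
    const parsed = JSON.parse(aiPrompt);
    if (Array.isArray(parsed?.messages)) {
      out["gen_ai.input.messages"] = JSON.stringify(parsed.messages);
    }
  } catch {
    // Malformed payloads produce no gen_ai.input.messages at all.
  }
  return out;
}

// Malformed: messages is an object, not an array.
const malformed = transformPrompt(JSON.stringify({ messages: { role: "user" } }));
console.log("gen_ai.input.messages" in malformed);
```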


853-897: Token-normalization tests are good; consider adding ai.usage‑only coverage

These suites nicely enforce that ATTR_GEN_AI_USAGE_INPUT_TOKENS / ATTR_GEN_AI_USAGE_OUTPUT_TOKENS are the canonical fields and that legacy ai.usage.promptTokens / ai.usage.completionTokens are removed while LLM_USAGE_TOTAL_TOKENS is derived from the Gen AI attributes.

One gap: there’s no explicit test for the case where only ai.usage.promptTokens / ai.usage.completionTokens are present and gen_ai.usage.input_tokens / gen_ai.usage.output_tokens are absent. If we still rely on the transformer to backfill Gen AI usage fields from the AI SDK ones (for older SDKs/instrumentations), adding such tests would guard that behavior from regressions.

Also applies to: 899-943, 945-1004
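A sketch of the missing coverage; `normalizeUsage` is a stand-in for the SDK transformer's backfill behavior, not its actual implementation:

```typescript
// Sketch: when only legacy ai.usage.* keys are present, backfill the
// Gen AI usage fields, drop the legacy keys, and derive the total.
function normalizeUsage(attrs: Record<string, number>): Record<string, number> {
  const out = { ...attrs };
  if (
    out["gen_ai.usage.input_tokens"] === undefined &&
    out["ai.usage.promptTokens"] !== undefined
  ) {
    out["gen_ai.usage.input_tokens"] = out["ai.usage.promptTokens"];
  }
  if (
    out["gen_ai.usage.output_tokens"] === undefined &&
    out["ai.usage.completionTokens"] !== undefined
  ) {
    out["gen_ai.usage.output_tokens"] = out["ai.usage.completionTokens"];
  }
  delete out["ai.usage.promptTokens"];
  delete out["ai.usage.completionTokens"];
  out["llm.usage.total_tokens"] =
    (out["gen_ai.usage.input_tokens"] ?? 0) +
    (out["gen_ai.usage.output_tokens"] ?? 0);
  return out;
}

const result = normalizeUsage({
  "ai.usage.promptTokens": 10,
  "ai.usage.completionTokens": 5,
});
console.log(result["gen_ai.usage.input_tokens"], result["llm.usage.total_tokens"]);
```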


2488-2563: End‑to‑end “complete transformation” scenario is well‑structured

This scenario test now validates a coherent end state for all the new Gen AI attributes, including operation name, request/response model, provider+system, finish reasons, response ID, conversation ID, and association properties, while ensuring all original ai.* fields are removed. It also resolves the previous conflict about provider name expectations. Looks good as an integration-style guardrail for future refactors.

If you later expand message normalization, you might extend this test to also assert ATTR_GEN_AI_INPUT_MESSAGES / ATTR_GEN_AI_OUTPUT_MESSAGES for a truly full-path check.
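A sketch of that extension; the expected message payloads are illustrative, and the assertion helper is hypothetical:

```typescript
// Sketch: extend the scenario test to also check the normalized
// input/output message attributes against known expected values.
const expected = {
  "gen_ai.input.messages": JSON.stringify([{ role: "user", content: "Hi" }]),
  "gen_ai.output.messages": JSON.stringify([
    { role: "assistant", content: "Hello!" },
  ]),
};

function assertMessages(attributes: Record<string, string>): boolean {
  return (
    attributes["gen_ai.input.messages"] === expected["gen_ai.input.messages"] &&
    attributes["gen_ai.output.messages"] === expected["gen_ai.output.messages"]
  );
}

console.log(assertMessages({ ...expected }));
```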

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ff1d6a8 and ac489cb.

📒 Files selected for processing (32)
  • packages/instrumentation-anthropic/test/instrumentation.test.ts (6 hunks)
  • packages/instrumentation-bedrock/src/instrumentation.ts (16 hunks)
  • packages/instrumentation-bedrock/tests/ai21.test.ts (2 hunks)
  • packages/instrumentation-bedrock/tests/amazon.test.ts (4 hunks)
  • packages/instrumentation-bedrock/tests/anthropic.test.ts (2 hunks)
  • packages/instrumentation-bedrock/tests/cohere.test.ts (2 hunks)
  • packages/instrumentation-bedrock/tests/meta.test.ts (4 hunks)
  • packages/instrumentation-cohere/src/instrumentation.ts (11 hunks)
  • packages/instrumentation-cohere/tests/chat.test.ts (8 hunks)
  • packages/instrumentation-cohere/tests/generate.test.ts (8 hunks)
  • packages/instrumentation-cohere/tests/rerank.test.ts (2 hunks)
  • packages/instrumentation-langchain/src/callback_handler.ts (6 hunks)
  • packages/instrumentation-langchain/test/instrumentation.test.ts (2 hunks)
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts (4 hunks)
  • packages/instrumentation-openai/src/image-wrappers.ts (14 hunks)
  • packages/instrumentation-openai/src/instrumentation.ts (7 hunks)
  • packages/instrumentation-openai/test/instrumentation.test.ts (20 hunks)
  • packages/instrumentation-together/src/instrumentation.ts (7 hunks)
  • packages/instrumentation-together/test/instrumentation.test.ts (12 hunks)
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (7 hunks)
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (3 hunks)
  • packages/instrumentation-vertexai/tests/gemini.test.ts (2 hunks)
  • packages/instrumentation-vertexai/tests/palm2.test.ts (2 hunks)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (18 hunks)
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts (1 hunks)
  • packages/traceloop-sdk/src/lib/tracing/manual.ts (4 hunks)
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts (3 hunks)
  • packages/traceloop-sdk/test/agent_decorator.test.ts (3 hunks)
  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (6 hunks)
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts (6 hunks)
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (41 hunks)
  • packages/traceloop-sdk/test/decorators.test.ts (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧰 Additional context used
📓 Path-based instructions (4)
packages/instrumentation-*/**

📄 CodeRabbit inference engine (CLAUDE.md)

Place each provider integration in its own package under packages/instrumentation-[provider]/

Files:

  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
packages/instrumentation-*/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/instrumentation-*/**/*.{ts,tsx}: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Files:

  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/traceloop-sdk/test/decorators.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/test/decorators.test.ts
🧠 Learnings (13)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/traceloop-sdk/test/decorators.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/traceloop-sdk/test/decorators.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/traceloop-sdk/test/decorators.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/traceloop-sdk/test/decorators.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/traceloop-sdk/test/decorators.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls

Applied to files:

  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Applied to files:

  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately

Applied to files:

  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition

Applied to files:

  • packages/instrumentation-anthropic/test/instrumentation.test.ts
📚 Learning: 2025-08-12T13:58:25.583Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/client/dataset/column.ts:41-47
Timestamp: 2025-08-12T13:58:25.583Z
Learning: In packages/traceloop-sdk/src/lib/client/dataset/column.ts, when the user mentioned "its working there is transform func", they were confirming that the response transformer is functioning correctly and converting snake_case API responses to camelCase, which means the Column class should access camelCase properties (createdAt, updatedAt) from the transformed ColumnResponse data.

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (25)
packages/instrumentation-vertexai/tests/palm2.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/traceloop-sdk/src/lib/tracing/manual.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-anthropic/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-cohere/tests/rerank.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/traceloop-sdk/test/agent_decorator.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-bedrock/tests/amazon.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-bedrock/tests/meta.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-langchain/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-openai/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-openai/src/image-wrappers.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/traceloop-sdk/src/lib/tracing/decorators.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-bedrock/tests/cohere.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
  • transformLLMSpans (660-681)
packages/instrumentation-langchain/src/callback_handler.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-cohere/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-bedrock/tests/anthropic.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-cohere/tests/generate.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-vertexai/tests/gemini.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-bedrock/tests/ai21.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/traceloop-sdk/test/decorators.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-cohere/tests/chat.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and test

Comment thread packages/instrumentation-cohere/src/instrumentation.ts Outdated
Comment thread packages/instrumentation-cohere/src/instrumentation.ts Outdated
Comment thread packages/instrumentation-cohere/src/instrumentation.ts
Comment thread packages/instrumentation-cohere/src/instrumentation.ts Outdated
Comment thread packages/instrumentation-cohere/tests/generate.test.ts Outdated
Comment thread packages/instrumentation-vertexai/tests/palm2.test.ts Outdated
Copy link
Copy Markdown
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed ac489cb in 10 minutes and 48 seconds. Click for details.
  • Reviewed 4065 lines of code in 32 files
  • Skipped 0 files when reviewing.
  • Skipped posting 8 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:12
  • Draft comment:
    Very comprehensive tests covering transformation of response texts, objects, tool calls, tokens, vendor strings, and metadata. Consider adding additional edge-case tests for unexpected types (e.g. numeric tokens as strings, deeply nested JSON structures) for extra robustness.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:264
  • Draft comment:
    The tests for prompt message transformation (both from 'ai.prompt.messages' and 'ai.prompt') look good. Consider extracting helper functions if the assertions become repetitive in future tests.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None
3. packages/traceloop-sdk/test/decorators.test.ts:40
  • Draft comment:
    Decorator tests are extensive and clearly verify workflow, agent, and task span attribute propagation. For maintainability, consider creating small helpers to compare common attribute sets to reduce repetition.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/test/decorators.test.ts:180
  • Draft comment:
    The tests for agent decoration and propagation to child spans are very detailed. Ensure that, as more decorators are added, similar edge cases (e.g., missing metadata, empty inputs) are covered.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/traceloop-sdk/test/ai-sdk-integration.test.ts:90
  • Draft comment:
    Integration tests verify that spans from multiple providers (OpenAI, Google, Anthropic) are correctly transformed. Consider adding commentary on why certain token attributes are expected (input/output tokens), which may help maintainers.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None
6. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2420
  • Draft comment:
    Tests for vendor/provider transformations are well structured. Verify that the mapping logic (e.g. for 'azure-openai' and 'openai') is future-proofed against provider naming changes. Consider referencing the VENDOR_MAPPING in a comment for maintainability.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
7. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2510
  • Draft comment:
    The complete scenario test combines multiple transformations (model, vendor, tokens, metadata). This test is complex and valuable; consider splitting into sub-tests if it grows further or adding inline comments to document each transformation section.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
8. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2460
  • Draft comment:
    Typographical Suggestion: The string "gpt-4o" appears potentially mistyped. Did you mean "gpt-4"?
  • Reason this comment was not posted:
    Comment was on unchanged code.

Workflow ID: wflow_pWzA5IBV6lzAdb8x

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/instrumentation-together/test/instrumentation.test.ts (1)

137-162: Instrumentation not fully migrated to Gen AI semantic conventions.

The instrumentation source code uses legacy LLM_REQUEST_FUNCTIONS for function/tool calls (lines 243-264 in instrumentation.ts) despite Gen AI equivalents being defined in SemanticAttributes.ts (ATTR_GEN_AI_TOOL_NAME, ATTR_GEN_AI_TOOL_CALL_ARGUMENTS, etc.).

Additionally, token usage attributes show inconsistent migration:

  • LLM_USAGE_TOTAL_TOKENS is still emitted (line 509)
  • But ATTR_GEN_AI_USAGE_COMPLETION_TOKENS and ATTR_GEN_AI_USAGE_PROMPT_TOKENS are also emitted (lines 513, 517)

This directly contradicts the PR objective to "rename and unify Gen AI semantic attributes to the ATTR_GEN_AI_* naming scheme." Update the instrumentation to consistently use the Gen AI attribute names for function/tool calls and clarify the token usage strategy (whether to deprecate LLM_USAGE_TOTAL_TOKENS entirely).
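A minimal sketch of what a consistent migration could look like for tool calls (constant values and the attribute layout here are assumptions following the `gen_ai.*` naming scheme, not the actual package source):

```typescript
// Hypothetical sketch: emit tool-call data under indexed GenAI completion
// attributes instead of the legacy LLM_REQUEST_FUNCTIONS keys.
// The constant values below are assumed, not copied from SemanticAttributes.ts.
const SpanAttributes = {
  ATTR_GEN_AI_COMPLETION: "gen_ai.completion",
};

interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

// Build indexed attributes for all tool calls of one completion choice.
function toolCallAttributes(
  choiceIndex: number,
  calls: ToolCall[],
): Record<string, string> {
  const attrs: Record<string, string> = {};
  calls.forEach((call, i) => {
    const prefix = `${SpanAttributes.ATTR_GEN_AI_COMPLETION}.${choiceIndex}.tool_calls.${i}`;
    attrs[`${prefix}.name`] = call.name;
    attrs[`${prefix}.arguments`] = JSON.stringify(call.arguments);
  });
  return attrs;
}

const attrs = toolCallAttributes(0, [
  { name: "get_weather", arguments: { city: "Paris" } },
]);
```

The same helper shape would let the instrumentation drop the legacy function-call keys in one place instead of per call site.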

♻️ Duplicate comments (2)
packages/instrumentation-cohere/src/instrumentation.ts (2)

219-227: Remove duplicate request model assignment (or use for response model).

attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL] = model; is executed twice back‑to‑back, which is redundant and likely an oversight.

Consider either:

  • Removing the second line, or
  • Changing the second line to set ATTR_GEN_AI_RESPONSE_MODEL if you intend to track both request and response models here.

244-275: Use .content instead of .user for prompt body fields.

These prompt attributes use the suffix .user:

  • Completion prompt: `${SpanAttributes.ATTR_GEN_AI_PROMPT}.0.user`
  • Chat last user message: `...${len}.user`
  • Rerank query: `${SpanAttributes.ATTR_GEN_AI_PROMPT}.0.user`

Other instrumentations (Bedrock, LangChain, OpenAI) and the GEN_AI conventions use .content for the message body, so this is inconsistent and will surprise consumers.

Recommend changing all three to .content, e.g.:

- attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.0.user`] = params.prompt;
+ attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.0.content`] = params.prompt;

- `${SpanAttributes.ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.user`
+ `${SpanAttributes.ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.content`

- attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.0.user`] = params.query;
+ attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.0.content`] = params.query;
🧹 Nitpick comments (2)
packages/instrumentation-openai/src/image-wrappers.ts (2)

137-174: Image generation request: switch to GenAI model/prompt attributes looks correct

Using SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL for params.model and ATTR_GEN_AI_PROMPT.0.[content/role] for the single text prompt matches the new GenAI naming and the indexed prompt schema; no functional issues here. As a follow-up, it would be good to promote the gen_ai.request.image.* keys (size/quality/style/count) used in this function into shared semantic-conventions constants so AI attributes aren’t hardcoded in multiple places.
As per coding guidelines, these AI attributes should ideally be defined centrally in the semantic-conventions package.


344-347: Image response: completion messages migrated to ATTR_GEN_AI_COMPLETION consistently

All branches now populate the assistant response under SpanAttributes.ATTR_GEN_AI_COMPLETION.0.[content/role], regardless of whether the image is uploaded, fetched and re-uploaded, or referenced by the original URL, which keeps completion structure uniform across scenarios. If you want to reduce duplication later, the repeated JSON envelope + attribute writes could be pushed into a small helper, but that's optional.

Also applies to: 368-373, 376-381, 384-389

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ac489cb and 7f9e86c.

📒 Files selected for processing (10)
  • packages/instrumentation-anthropic/src/instrumentation.ts (5 hunks)
  • packages/instrumentation-bedrock/src/instrumentation.ts (16 hunks)
  • packages/instrumentation-cohere/src/instrumentation.ts (11 hunks)
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts (4 hunks)
  • packages/instrumentation-openai/src/image-wrappers.ts (14 hunks)
  • packages/instrumentation-openai/src/instrumentation.ts (7 hunks)
  • packages/instrumentation-together/src/instrumentation.ts (7 hunks)
  • packages/instrumentation-together/test/instrumentation.test.ts (12 hunks)
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (7 hunks)
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
🧰 Additional context used
📓 Path-based instructions (3)
packages/instrumentation-*/**

📄 CodeRabbit inference engine (CLAUDE.md)

Place each provider integration in its own package under packages/instrumentation-[provider]/

Files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
packages/instrumentation-*/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/instrumentation-*/**/*.{ts,tsx}: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
🧠 Learnings (9)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls

Applied to files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Applied to files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately

Applied to files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Applied to files:

  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-openai/src/instrumentation.ts
🧬 Code graph analysis (7)
packages/instrumentation-together/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-cohere/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-openai/src/image-wrappers.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-anthropic/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-openai/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
packages/instrumentation-bedrock/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (45-111)
🔇 Additional comments (13)
packages/instrumentation-together/test/instrumentation.test.ts (1)

34-34: LGTM!

The import correctly follows the coding guidelines by importing semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings.

packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (2)

140-177: Gen AI attribute migration in _startSpan is consistent and uses shared constants

The migration from LLM_* to SpanAttributes.ATTR_GEN_AI_* for system, model, generation config, and prompt attributes looks internally consistent, and you’re correctly using the shared constants from @traceloop/ai-semantic-conventions instead of hard‑coded strings. Prompt indexing (system at 0, then contents) and role/content handling are preserved and sound.

As per coding guidelines, this aligns instrumentation attributes with the central semantic-conventions module.


232-261: Usage and completion attributes now align with Gen AI semantics without breaking legacy usage

Updating usage and completion attributes to SpanAttributes.ATTR_GEN_AI_USAGE_COMPLETION_TOKENS, SpanAttributes.ATTR_GEN_AI_USAGE_PROMPT_TOKENS, and SpanAttributes.ATTR_GEN_AI_COMPLETION.* while keeping LLM_USAGE_TOTAL_TOKENS for backward compatibility is a clean migration. Token counts and candidate finish reasons/contents are wired through correctly, guarded by _shouldSendPrompts().
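The dual-emission strategy this comment approves can be sketched in isolation (attribute string values and the usage-metadata shape are illustrative assumptions, not the actual Vertex AI instrumentation code):

```typescript
// Hypothetical sketch of the token-usage migration pattern: emit the new
// GenAI prompt/completion token attributes while still reporting the legacy
// total-tokens key for backward compatibility. Constant values are assumed.
const ATTR_GEN_AI_USAGE_PROMPT_TOKENS = "gen_ai.usage.prompt_tokens";
const ATTR_GEN_AI_USAGE_COMPLETION_TOKENS = "gen_ai.usage.completion_tokens";
const LLM_USAGE_TOTAL_TOKENS = "llm.usage.total_tokens"; // legacy key (assumed)

interface UsageMetadata {
  promptTokenCount?: number;
  candidatesTokenCount?: number;
}

function usageAttributes(usage: UsageMetadata): Record<string, number> {
  const attrs: Record<string, number> = {};
  // Guard with typeof checks so absent counters never produce NaN totals.
  if (typeof usage.promptTokenCount === "number") {
    attrs[ATTR_GEN_AI_USAGE_PROMPT_TOKENS] = usage.promptTokenCount;
  }
  if (typeof usage.candidatesTokenCount === "number") {
    attrs[ATTR_GEN_AI_USAGE_COMPLETION_TOKENS] = usage.candidatesTokenCount;
  }
  if (
    typeof usage.promptTokenCount === "number" &&
    typeof usage.candidatesTokenCount === "number"
  ) {
    attrs[LLM_USAGE_TOTAL_TOKENS] =
      usage.promptTokenCount + usage.candidatesTokenCount;
  }
  return attrs;
}

const out = usageAttributes({ promptTokenCount: 12, candidatesTokenCount: 30 });
```

Keeping the total behind both guards means a partially populated usage object emits whatever per-direction counts exist without fabricating an aggregate.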

packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (1)

152-227: Vertex AI GEN_AI attribute migration looks correct and consistent.

The span start/end logic uses the new ATTR_GEN_AI_* attributes for system, model, request params, prompts, completions, and usage tokens while keeping LLM_TOP_K/LLM_USAGE_TOTAL_TOKENS for legacy fields. The prompt/completion indexing (.0.role/.0.content) matches the shared conventions and other instrumentations. Based on learnings, this keeps attribute definitions centralized in @traceloop/ai-semantic-conventions and preserves token extraction behavior.

Also applies to: 261-357

packages/instrumentation-cohere/src/instrumentation.ts (1)

356-401: Cohere response GEN_AI attributes are wired correctly.

The rerank/chat/generate response helpers now emit ATTR_GEN_AI_COMPLETION.* and ATTR_GEN_AI_USAGE_* attributes in a way that mirrors other providers (finish_reason, role, content, and token usage). The conditional prompt guarding with _shouldSendPrompts() is preserved, and total tokens continue to be reported via LLM_USAGE_TOTAL_TOKENS. Based on learnings, this keeps response extraction aligned with the shared semantic conventions.

Also applies to: 403-486, 488-556

packages/instrumentation-bedrock/src/instrumentation.ts (1)

146-183: Bedrock GEN_AI attributes and streaming aggregation look sound.

The Bedrock instrumentation now correctly uses ATTR_GEN_AI_SYSTEM, request/response model, request hyperparameters, prompt/completion attributes, and ATTR_GEN_AI_USAGE_* tokens across all vendors (ai21/amazon/anthropic/cohere/meta), while retaining LLM_USAGE_TOTAL_TOKENS and LLM_TOP_K. The streaming reassembly logic was updated to read/write ${SpanAttributes.ATTR_GEN_AI_COMPLETION}.0.content and still accumulates full streamed content safely under the prompt‑gating guard.

Also applies to: 185-291, 293-557

packages/instrumentation-anthropic/src/instrumentation.ts (1)

190-288: Anthropic GEN_AI migration is consistent and preserves behavior.

Anthropic spans now use ATTR_GEN_AI_SYSTEM, ATTR_GEN_AI_REQUEST_*, and ATTR_GEN_AI_PROMPT for inputs, and populate ATTR_GEN_AI_RESPONSE_MODEL, ATTR_GEN_AI_USAGE_PROMPT_TOKENS/_COMPLETION_TOKENS, plus ATTR_GEN_AI_COMPLETION.* for finish_reason/role/content. Streaming aggregation feeds the same _endSpan path, so semantics stay aligned across streaming and non‑streaming calls. Based on learnings, this satisfies the requirement to extract request/response data and token usage using centralized semantic attributes.

Also applies to: 473-538

packages/instrumentation-openai/src/instrumentation.ts (1)

279-402: OpenAI GEN_AI attributes are wired correctly and consistent across chat/completion.

The OpenAI instrumentation now sets ATTR_GEN_AI_SYSTEM based on provider detection, uses ATTR_GEN_AI_REQUEST_* for model and hyperparameters, and emits prompts under ATTR_GEN_AI_PROMPT.{idx}.role/content. On completion, it records ATTR_GEN_AI_RESPONSE_MODEL, ATTR_GEN_AI_USAGE_PROMPT_TOKENS/_COMPLETION_TOKENS plus LLM_USAGE_TOTAL_TOKENS, and maps finish_reason/role/content, function calls, and tool calls into ATTR_GEN_AI_COMPLETION.{idx}.*. This matches the shared semantic conventions while preserving existing behavior and token enrichment.

Also applies to: 652-744

packages/instrumentation-openai/src/image-wrappers.ts (5)

181-200: Image edit request: GenAI model and text prompt attributes are wired correctly

Mapping the edit model to SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL and recording the text prompt as ATTR_GEN_AI_PROMPT.0.[content/role="user"] is consistent with the generation path and the new GenAI schema. No further changes needed here.


219-223: Image edit: second prompt entry for uploaded image is consistent

Storing the uploaded image reference as ATTR_GEN_AI_PROMPT.1.content (with a multimodal JSON envelope) and ATTR_GEN_AI_PROMPT.1.role = "user" cleanly preserves the ordering of text (index 0) then image (index 1) and matches how other multimodal prompts are represented.


240-243: Image variation request: GenAI model and image prompt attributes look good

Using SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL for variations and representing the source image as ATTR_GEN_AI_PROMPT.0.[content/role="user"] keeps this operation aligned with the other image endpoints and the GenAI prompt convention. Implementation here looks correct.

Also applies to: 270-274


293-327: Image response: GenAI usage tokens plus total tokens are handled correctly

Completion usage is now emitted via SpanAttributes.ATTR_GEN_AI_USAGE_COMPLETION_TOKENS, with ATTR_GEN_AI_USAGE_PROMPT_TOKENS set only when prompt-enrichment is enabled, while SpanAttributes.LLM_USAGE_TOTAL_TOKENS still carries the aggregate count. The control flow (including the try/catch fallback) ensures total tokens are always populated when images are returned, and the logic around estimatedPromptTokens and completionTokens is sound.


413-418: Image spans: system attribute migration to ATTR_GEN_AI_SYSTEM is consistent

Setting SpanAttributes.ATTR_GEN_AI_SYSTEM = "OpenAI" on the generate, edit, and variation spans aligns these wrappers with the new GenAI system attribute while preserving the existing "gen_ai.request.type" classifier. This keeps span metadata consistent across all three image operations.

Also applies to: 472-478, 539-544


@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 7f9e86c in 10 minutes and 1 second. Click for details.
  • Reviewed 463 lines of code in 9 files
  • Skipped 0 files when reviewing.
  • Skipped posting 5 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-cohere/src/instrumentation.ts:220
  • Draft comment:
    Duplicate assignment of the ATTR_GEN_AI_REQUEST_MODEL attribute in _startSpan. Remove the redundant assignment.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/instrumentation-vertexai/src/vertexai-instrumentation.ts:190
  • Draft comment:
    Consider replacing 'new Promise((resolve) => resolve(result))' with 'Promise.resolve(result)' for conciseness.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/instrumentation-vertexai/src/vertexai-instrumentation.ts:280
  • Draft comment:
    The deep nested optional chaining used to extract token usage metadata is complex. Consider refactoring for improved readability.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/instrumentation-vertexai/src/vertexai-instrumentation.ts:285
  • Draft comment:
    Type checking with 'typeof ... === "number"' for token count fields is correct but could benefit from an explanatory comment.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/instrumentation-together/src/instrumentation.ts:145
  • Draft comment:
    Consider using Promise.resolve(result) in the _wrapPromise implementation for consistency and clarity.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None

Workflow ID: wflow_bl0TD3JoJMbfmExY



@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 0d82d9e in 3 minutes and 15 seconds. Click for details.
  • Reviewed 12 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-cohere/src/instrumentation.ts:223
  • Draft comment:
    Duplicate assignment of the model attribute removed. Avoid redundant code to prevent confusion and potential side-effects.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.

Workflow ID: wflow_Seqs7aJFqGsEWXJs


Comment thread packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts Outdated
Comment thread packages/ai-semantic-conventions/src/SemanticAttributes.ts
// Find matching vendor prefix in mapping
let mappedVendor = null;
if (typeof vendor === "string" && vendor.length > 0) {
const providerName = vendor.split(".")[0];
Contributor


why?

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (4)
packages/instrumentation-cohere/tests/generate.test.ts (2)

125-139: Trivial assertions on undefined optional parameters.

The test asserts that ATTR_GEN_AI_REQUEST_TOP_P, presencePenalty, and frequencyPenalty span attributes equal the corresponding params values, but these params are undefined (not set in the params object at lines 94-99). These assertions pass trivially as undefined === undefined, without validating the instrumentation's optional parameter handling.

The test should conditionally assert these attributes only when the corresponding params are defined.


205-211: Same trivial assertions in streaming test.

This streaming test variant has the same issue with undefined optional parameters (params.p, params.presencePenalty, params.frequencyPenalty) being compared to span attributes without guard checks.

packages/instrumentation-cohere/tests/chat.test.ts (2)

133-147: Trivial assertions on undefined optional parameters.

Similar to the issue in generate.test.ts, this test asserts that ATTR_GEN_AI_REQUEST_TOP_P, presencePenalty, and frequencyPenalty span attributes equal the params values, but these params are undefined (not set in the params object at lines 94-106). The assertions pass trivially without validating instrumentation behavior.

Add conditional checks to only assert these attributes when the corresponding params are defined.


233-247: Same undefined parameter assertions in streaming test.

This streaming test variant has the same issue with undefined optional parameters being compared to span attributes without proper guards.

🧹 Nitpick comments (11)
packages/traceloop-sdk/src/lib/tracing/span-processor.ts (1)

18-20: Agent name handling via GenAI semantic attribute is correct; consider centralizing the constant

The updated logic cleanly wires agent names through the new attribute:

  • On span start, the active AGENT_NAME_KEY is written to the span via the GenAI agent attribute.
  • On span end, that same attribute is used to:
    • Cache the agent name per trace ID, and
    • Propagate it to downstream spans on the same trace when they lack an explicit agent name.

This achieves the PR’s goal of using the new semantic convention while preserving the existing propagation behavior.

To keep AI/LLM attribute definitions centralized (and allow future changes in one place), consider re-exporting and importing ATTR_GEN_AI_AGENT_NAME via @traceloop/ai-semantic-conventions instead of the OTEL package directly, in line with the existing pattern for SpanAttributes. Based on learnings, this keeps all AI-related attribute wiring going through ai-semantic-conventions.

Also applies to: 190-196, 280-291
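
The cache-and-propagate behavior described above can be sketched as follows (a simplified, hypothetical shape; the real processor operates on OpenTelemetry spans, not plain attribute maps):

```typescript
// Simplified sketch of per-trace agent-name propagation. On span end,
// a span that carries the GenAI agent attribute caches it for its trace;
// later spans on the same trace that lack the attribute inherit it.
const ATTR_AGENT_NAME = "gen_ai.agent.name";
const agentNameByTrace = new Map<string, string>();

function onSpanEnd(traceId: string, attrs: Record<string, unknown>): void {
  const name = attrs[ATTR_AGENT_NAME];
  if (typeof name === "string") {
    // Cache the agent name per trace ID...
    agentNameByTrace.set(traceId, name);
  } else if (agentNameByTrace.has(traceId)) {
    // ...and propagate it to downstream spans without an explicit name.
    attrs[ATTR_AGENT_NAME] = agentNameByTrace.get(traceId);
  }
}
```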

packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (6)

876-920: Consider asserting cleanup of deprecated gen_ai.usage.prompt_tokens alias

These tests validate keeping gen_ai.usage.input_tokens via ATTR_GEN_AI_USAGE_INPUT_TOKENS and deleting ai.usage.promptTokens, which matches the desired migration. If the intent is to fully converge on the new OTEL naming, it may be worth explicitly asserting that the legacy gen_ai.usage.prompt_tokens alias is also removed when present (similar to how you drop ai.usage.promptTokens).

For example:

   it("should handle zero input tokens", () => {
     const attributes = {
       "ai.usage.promptTokens": 0,
       "gen_ai.usage.input_tokens": 0,
       "gen_ai.usage.prompt_tokens": 0,
     };

@@
   transformLLMSpans(attributes);

   assert.strictEqual(
     attributes[ATTR_GEN_AI_USAGE_INPUT_TOKENS],
     0,
   );
-  assert.strictEqual(attributes["ai.usage.promptTokens"], undefined);
+  assert.strictEqual(attributes["ai.usage.promptTokens"], undefined);
+  // Optional: ensure legacy alias is not kept
+  assert.strictEqual(attributes["gen_ai.usage.prompt_tokens"], undefined);
 });

This would lock in the cleanup behavior and prevent regressions if the transformer stops dropping the alias.


922-966: Mirror prompt-token alias cleanup for completion tokens

Same idea as prompt tokens: these tests confirm that ai.usage.completionTokens is dropped and gen_ai.usage.output_tokens survives via ATTR_GEN_AI_USAGE_OUTPUT_TOKENS. If you also want to ensure any legacy gen_ai.usage.completion_tokens alias doesn’t linger, you could add an assertion in the zero-token case:

   it("should handle zero output tokens", () => {
     const attributes = {
       "ai.usage.completionTokens": 0,
       "gen_ai.usage.output_tokens": 0,
       "gen_ai.usage.completion_tokens": 0,
     };

@@
   transformLLMSpans(attributes);

   assert.strictEqual(
     attributes[ATTR_GEN_AI_USAGE_OUTPUT_TOKENS],
     0,
   );
-  assert.strictEqual(attributes["ai.usage.completionTokens"], undefined);
+  assert.strictEqual(attributes["ai.usage.completionTokens"], undefined);
+  // Optional: ensure legacy alias is not kept
+  assert.strictEqual(attributes["gen_ai.usage.completion_tokens"], undefined);
 });

Not required, but it would more tightly specify the migration behavior.


2050-2150: Operation-name mapping tests are consistent with OTEL GenAI conventions

Mapping:

  • ai.generateText / ai.streamText / ai.generateObject / ai.streamObject → ATTR_GEN_AI_OPERATION_NAME = "chat", and
  • ai.toolCall / *.tool → "execute_tool",

matches the GenAI semconv’s recommended gen_ai.operation.name values for chat and tool execution. (opentelemetry.io)

You might optionally add a future test if the AI SDK ever introduces explicit “embeddings” or separate “text_completion” operations, but for the current scope this looks correct and complete.
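
For reference, the mapping under test can be sketched as a small lookup plus a suffix check (span names taken from the tests above; this is not the transformer's actual code):

```typescript
// Minimal sketch of the operation-name mapping the tests describe:
// AI SDK generate/stream spans map to "chat", tool spans to "execute_tool".
const OPERATION_BY_SPAN: Record<string, string> = {
  "ai.generateText": "chat",
  "ai.streamText": "chat",
  "ai.generateObject": "chat",
  "ai.streamObject": "chat",
  "ai.toolCall": "execute_tool",
};

function genAiOperationName(spanName: string): string | undefined {
  if (spanName in OPERATION_BY_SPAN) return OPERATION_BY_SPAN[spanName];
  // Tool-execution child spans such as "myTool.tool" also map to execute_tool.
  if (spanName.endsWith(".tool")) return "execute_tool";
  return undefined;
}
```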


2246-2304: Finish-reason tests look good; consider one more edge case

You cover scalar string, string-array, multiple allowed values, and “not present” for ai.response.finishReason → ATTR_GEN_AI_RESPONSE_FINISH_REASONS. That’s solid.

If you want to be extra-defensive, you could add a small case for an empty string or null value to pin down behavior there, but it’s not strictly necessary given current usage.
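
A defensive normalization covering those extra edge cases could look like this (a hypothetical sketch mirroring the tested behavior, not the transformer's implementation):

```typescript
// Sketch of finish-reason normalization onto an array-valued attribute,
// treating empty strings, null, and absent values as "no finish reasons".
function normalizeFinishReasons(value: unknown): string[] | undefined {
  if (typeof value === "string") {
    return value.length > 0 ? [value] : undefined;
  }
  if (Array.isArray(value)) {
    const reasons = value.filter(
      (v): v is string => typeof v === "string" && v.length > 0,
    );
    return reasons.length > 0 ? reasons : undefined;
  }
  return undefined; // covers null / not present
}
```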


2390-2442: Conversation-id tests don’t assert metadata cleanup / association properties

The new transformLLMSpans - conversation id tests verify ATTR_GEN_AI_CONVERSATION_ID derivation from conversationId and sessionId, including precedence, which is good.

If the intent is to treat ai.telemetry.metadata.conversationId consistently with the other metadata keys (i.e., also mirror it into TRACELOOP_ASSOCIATION_PROPERTIES and remove the original metadata attribute), the tests currently wouldn’t catch regressions there. You could tighten the expectations, for example:

 it("should transform conversationId from metadata", () => {
   const attributes = {
     "ai.telemetry.metadata.conversationId": "conv_123",
   };

   transformLLMSpans(attributes);

   assert.strictEqual(
     attributes[ATTR_GEN_AI_CONVERSATION_ID],
     "conv_123",
   );
+
+  // Optional: if conversationId should also be an association property
+  // and the raw metadata key removed (like other telemetry metadata):
+  // assert.strictEqual(
+  //   attributes[
+  //     `${SpanAttributes.TRACELOOP_ASSOCIATION_PROPERTIES}.conversationId`
+  //   ],
+  //   "conv_123",
+  // );
+  // assert.strictEqual(
+  //   attributes["ai.telemetry.metadata.conversationId"],
+  //   undefined,
+  // );
 });

Same idea could be applied to the “sessionId fallback” and “prefer conversationId over sessionId” cases if that invariance is desired.
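
The precedence those tests assert can be sketched as a small resolver (hypothetical; conversationId wins over the sessionId fallback when both metadata keys are present):

```typescript
// Sketch of conversation-id resolution from AI SDK telemetry metadata:
// prefer an explicit conversationId, fall back to sessionId.
function resolveConversationId(
  attributes: Record<string, unknown>,
): string | undefined {
  const conv = attributes["ai.telemetry.metadata.conversationId"];
  if (typeof conv === "string") return conv;
  const session = attributes["ai.telemetry.metadata.sessionId"];
  return typeof session === "string" ? session : undefined;
}
```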


2511-2586: End-to-end “complete transformation” test is strong; minor opportunity around conversation metadata

This scenario ties together operation name, request/response model, provider, finish reasons, response ID, conversation ID, and association properties, which is great for guarding the full transformLLMSpans pipeline.

If ai.telemetry.metadata.conversationId is also supposed to be turned into an association property (similar to userId / sessionId in the earlier metadata tests), you might extend the expectations here to cover that too; currently only userId association is asserted:

   // Check metadata transformation
   assert.strictEqual(
     attributes[`${SpanAttributes.TRACELOOP_ASSOCIATION_PROPERTIES}.userId`],
     "user_789",
   );
+
+  // Optional: if conversationId should also appear as an association property
+  // assert.strictEqual(
+  //   attributes[
+  //     `${SpanAttributes.TRACELOOP_ASSOCIATION_PROPERTIES}.conversationId`
+  //   ],
+  //   "conv_456",
+  // );

Otherwise, the coverage here looks very good.

packages/instrumentation-together/src/instrumentation.ts (1)

253-278: Duplicate iteration over params.tools.

The code iterates over params.tools twice (lines 253-263 and 264-278), setting the same attributes. This appears to be redundant code that sets identical values.

Remove the duplicate loop:

           params.tools?.forEach((func, index) => {
             attributes[
               `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.name`
             ] = func.function?.name;
             attributes[
               `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.description`
             ] = func.function?.description;
             attributes[
               `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.arguments`
             ] = JSON.stringify(func.function?.parameters);
           });
-          params.tools?.forEach((tool, index) => {
-            if (!tool.function) {
-              return;
-            }
-
-            attributes[
-              `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.name`
-            ] = tool.function.name;
-            attributes[
-              `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.description`
-            ] = tool.function.description;
-            attributes[
-              `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.arguments`
-            ] = JSON.stringify(tool.function.parameters);
-          });
packages/instrumentation-anthropic/test/instrumentation.test.ts (1)

257-354: Thinking / beta messages coverage is well-shaped; consider centralizing keys later

The beta “thinking” test correctly uses the new GenAI request/response attributes and verifies that completion content includes both thinking and text blocks, plus token usage. The literal "llm.request.thinking.*" keys are fine for now, but once these stabilize it may be worth adding them to SpanAttributes so both instrumentation and tests can reference shared constants.
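
If the keys do stabilize, centralizing them could be as simple as a shared prefix constant plus a key builder (hypothetical names; nothing like this exists in SemanticAttributes.ts yet):

```typescript
// Hypothetical constants that could live in SemanticAttributes.ts so the
// instrumentation and its tests share one definition of the thinking keys
// instead of repeating "llm.request.thinking.*" literals.
const LLM_REQUEST_THINKING = "llm.request.thinking";

function thinkingAttr(suffix: string): string {
  return `${LLM_REQUEST_THINKING}.${suffix}`;
}
```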

packages/instrumentation-bedrock/tests/anthropic.test.ts (1)

191-287: Streamed Bedrock Anthropic test: attributes OK, but async pattern should await assertions

The streaming test’s assertions use the right ATTR_GEN_AI_* keys and map Bedrock invocation metrics to GenAI usage tokens plus LLM_USAGE_TOTAL_TOKENS correctly. However, the inner async IIFE isn’t awaited, so failures may surface as late unhandled rejections instead of proper test failures. Inlining the for await loop or awaiting the IIFE would make this deterministic.

-    const response = await bedrockRuntimeClient.send(command);
-    if (response.body) {
-      (async function () {
-        let content = "";
-        let result;
-        for await (const value of response.body!) {
+    const response = await bedrockRuntimeClient.send(command);
+    if (response.body) {
+      let content = "";
+      let result;
+      for await (const value of response.body!) {
           const jsonString = new TextDecoder().decode(value.chunk?.bytes);
           const parsedResponse = JSON.parse(jsonString);
@@
-        if ("amazon-bedrock-invocationMetrics" in result) {
+        if ("amazon-bedrock-invocationMetrics" in result) {
           assert.strictEqual(
             attributes[ATTR_GEN_AI_USAGE_PROMPT_TOKENS],
             result["amazon-bedrock-invocationMetrics"]["inputTokenCount"],
           );
@@
           );
-        }
-      })();
+        }
+      }
     }
packages/instrumentation-openai/src/image-wrappers.ts (1)

292-335: Optional: consider also setting a GenAI response-model attribute for images

If the OTEL GenAI conventions treat image models the same as text models, you may want to also populate ATTR_GEN_AI_RESPONSE_MODEL (e.g., from params.model) in image spans, similar to the text/chat instrumentation. This would make querying by response model consistent across modalities, but it’s not required for correctness.
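
As a sketch of that suggestion (an assumption about the desired behavior, not the current implementation), the image span path would simply mirror the chat/text path for the response-model attribute:

```typescript
// Sketch: also record the GenAI response model on image spans when the
// request model is known, mirroring the text/chat instrumentation.
const ATTR_GEN_AI_RESPONSE_MODEL = "gen_ai.response.model"; // local literal for the sketch

function imageSpanAttributes(params: { model?: string }): Record<string, string> {
  const attrs: Record<string, string> = {};
  if (params.model) {
    attrs[ATTR_GEN_AI_RESPONSE_MODEL] = params.model;
  }
  return attrs;
}
```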

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 0d82d9e and 93009b1.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (62)
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
  • packages/ai-semantic-conventions/tsconfig.json (1 hunks)
  • packages/instrumentation-anthropic/package.json (1 hunks)
  • packages/instrumentation-anthropic/src/instrumentation.ts (6 hunks)
  • packages/instrumentation-anthropic/test/instrumentation.test.ts (7 hunks)
  • packages/instrumentation-anthropic/tsconfig.json (1 hunks)
  • packages/instrumentation-bedrock/package.json (1 hunks)
  • packages/instrumentation-bedrock/src/instrumentation.ts (17 hunks)
  • packages/instrumentation-bedrock/tests/ai21.test.ts (3 hunks)
  • packages/instrumentation-bedrock/tests/amazon.test.ts (5 hunks)
  • packages/instrumentation-bedrock/tests/anthropic.test.ts (3 hunks)
  • packages/instrumentation-bedrock/tests/cohere.test.ts (3 hunks)
  • packages/instrumentation-bedrock/tests/meta.test.ts (5 hunks)
  • packages/instrumentation-bedrock/tsconfig.json (1 hunks)
  • packages/instrumentation-chromadb/package.json (1 hunks)
  • packages/instrumentation-chromadb/tsconfig.json (1 hunks)
  • packages/instrumentation-cohere/package.json (1 hunks)
  • packages/instrumentation-cohere/src/instrumentation.ts (12 hunks)
  • packages/instrumentation-cohere/tests/chat.test.ts (9 hunks)
  • packages/instrumentation-cohere/tests/generate.test.ts (9 hunks)
  • packages/instrumentation-cohere/tests/rerank.test.ts (3 hunks)
  • packages/instrumentation-cohere/tsconfig.json (1 hunks)
  • packages/instrumentation-langchain/package.json (1 hunks)
  • packages/instrumentation-langchain/src/callback_handler.ts (7 hunks)
  • packages/instrumentation-langchain/test/instrumentation.test.ts (3 hunks)
  • packages/instrumentation-langchain/tsconfig.json (1 hunks)
  • packages/instrumentation-llamaindex/package.json (1 hunks)
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts (5 hunks)
  • packages/instrumentation-llamaindex/tsconfig.json (1 hunks)
  • packages/instrumentation-mcp/package.json (1 hunks)
  • packages/instrumentation-mcp/tsconfig.json (1 hunks)
  • packages/instrumentation-openai/package.json (1 hunks)
  • packages/instrumentation-openai/src/image-wrappers.ts (15 hunks)
  • packages/instrumentation-openai/src/instrumentation.ts (8 hunks)
  • packages/instrumentation-openai/test/instrumentation.test.ts (21 hunks)
  • packages/instrumentation-openai/tsconfig.json (1 hunks)
  • packages/instrumentation-pinecone/package.json (1 hunks)
  • packages/instrumentation-pinecone/tsconfig.json (1 hunks)
  • packages/instrumentation-qdrant/tsconfig.json (1 hunks)
  • packages/instrumentation-together/package.json (1 hunks)
  • packages/instrumentation-together/src/instrumentation.ts (8 hunks)
  • packages/instrumentation-together/test/instrumentation.test.ts (13 hunks)
  • packages/instrumentation-together/tsconfig.json (1 hunks)
  • packages/instrumentation-vertexai/package.json (1 hunks)
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (8 hunks)
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (4 hunks)
  • packages/instrumentation-vertexai/tests/gemini.test.ts (2 hunks)
  • packages/instrumentation-vertexai/tests/palm2.test.ts (2 hunks)
  • packages/instrumentation-vertexai/tsconfig.json (1 hunks)
  • packages/traceloop-sdk/package.json (1 hunks)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (19 hunks)
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts (2 hunks)
  • packages/traceloop-sdk/src/lib/tracing/manual.ts (5 hunks)
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts (4 hunks)
  • packages/traceloop-sdk/src/lib/tracing/tracing.ts (1 hunks)
  • packages/traceloop-sdk/test/agent_decorator.test.ts (4 hunks)
  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (7 hunks)
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts (7 hunks)
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (42 hunks)
  • packages/traceloop-sdk/test/decorators.test.ts (8 hunks)
  • packages/traceloop-sdk/tsconfig.json (1 hunks)
  • tsconfig.base.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (14)
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/traceloop-sdk/test/decorators.test.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
🧰 Additional context used
📓 Path-based instructions (7)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/traceloop-sdk/src/lib/tracing/tracing.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/traceloop-sdk/src/lib/tracing/tracing.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/instrumentation-*/**

📄 CodeRabbit inference engine (CLAUDE.md)

Place each provider integration in its own package under packages/instrumentation-[provider]/

Files:

  • packages/instrumentation-vertexai/package.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/package.json
  • packages/instrumentation-cohere/tsconfig.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-chromadb/tsconfig.json
  • packages/instrumentation-qdrant/tsconfig.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-bedrock/tsconfig.json
  • packages/instrumentation-chromadb/package.json
  • packages/instrumentation-together/tsconfig.json
  • packages/instrumentation-llamaindex/tsconfig.json
  • packages/instrumentation-openai/package.json
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-openai/tsconfig.json
  • packages/instrumentation-vertexai/tsconfig.json
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-together/package.json
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-mcp/tsconfig.json
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-anthropic/tsconfig.json
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/tsconfig.json
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
  • packages/instrumentation-pinecone/tsconfig.json
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
packages/*/package.json

📄 CodeRabbit inference engine (CLAUDE.md)

Use workspace:* for intra-repo package dependencies in package.json

Files:

  • packages/instrumentation-vertexai/package.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-anthropic/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-chromadb/package.json
  • packages/instrumentation-openai/package.json
  • packages/instrumentation-langchain/package.json
  • packages/traceloop-sdk/package.json
  • packages/instrumentation-together/package.json
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-bedrock/package.json
packages/instrumentation-*/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/instrumentation-*/**/*.{ts,tsx}: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Files:

  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
packages/traceloop-sdk/package.json

📄 CodeRabbit inference engine (CLAUDE.md)

When adding a new instrumentation package, add it to the main SDK dependencies

Files:

  • packages/traceloop-sdk/package.json
packages/ai-semantic-conventions/src/SemanticAttributes.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
🧠 Learnings (17)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/instrumentation-vertexai/package.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/package.json
  • packages/instrumentation-cohere/tsconfig.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-chromadb/tsconfig.json
  • packages/instrumentation-qdrant/tsconfig.json
  • packages/instrumentation-pinecone/package.json
  • packages/traceloop-sdk/tsconfig.json
  • packages/instrumentation-bedrock/tsconfig.json
  • packages/instrumentation-chromadb/package.json
  • packages/instrumentation-together/tsconfig.json
  • packages/instrumentation-llamaindex/tsconfig.json
  • packages/instrumentation-openai/package.json
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-openai/tsconfig.json
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-vertexai/tsconfig.json
  • packages/traceloop-sdk/package.json
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-together/package.json
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-mcp/tsconfig.json
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-mcp/package.json
  • packages/ai-semantic-conventions/tsconfig.json
  • tsconfig.base.json
  • packages/instrumentation-anthropic/tsconfig.json
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/traceloop-sdk/src/lib/tracing/tracing.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/tsconfig.json
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/instrumentation-pinecone/tsconfig.json
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-openai/package.json
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/ai-semantic-conventions/tsconfig.json
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/instrumentation-vertexai/package.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-chromadb/tsconfig.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-chromadb/package.json
  • packages/instrumentation-together/tsconfig.json
  • packages/instrumentation-llamaindex/tsconfig.json
  • packages/instrumentation-openai/package.json
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-openai/tsconfig.json
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-vertexai/tsconfig.json
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-together/package.json
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-anthropic/tsconfig.json
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/tsconfig.json
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/instrumentation-vertexai/package.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-anthropic/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/traceloop-sdk/tsconfig.json
  • packages/instrumentation-chromadb/package.json
  • packages/instrumentation-openai/package.json
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-langchain/package.json
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/traceloop-sdk/package.json
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-together/package.json
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/traceloop-sdk/src/lib/tracing/tracing.ts
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/traceloop-sdk/tsconfig.json
  • packages/instrumentation-llamaindex/tsconfig.json
  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/traceloop-sdk/package.json
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/traceloop-sdk/src/lib/tracing/tracing.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Applied to files:

  • packages/instrumentation-vertexai/package.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/package.json
  • packages/instrumentation-cohere/tsconfig.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-chromadb/tsconfig.json
  • packages/instrumentation-qdrant/tsconfig.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-bedrock/tsconfig.json
  • packages/instrumentation-chromadb/package.json
  • packages/instrumentation-together/tsconfig.json
  • packages/instrumentation-llamaindex/tsconfig.json
  • packages/instrumentation-openai/package.json
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-openai/tsconfig.json
  • packages/instrumentation-vertexai/tsconfig.json
  • packages/traceloop-sdk/package.json
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-together/package.json
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-mcp/tsconfig.json
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-anthropic/tsconfig.json
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-langchain/tsconfig.json
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-pinecone/tsconfig.json
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/package.json : When adding a new instrumentation package, add it to the main SDK dependencies

Applied to files:

  • packages/instrumentation-vertexai/package.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-anthropic/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/traceloop-sdk/tsconfig.json
  • packages/instrumentation-chromadb/package.json
  • packages/instrumentation-openai/package.json
  • packages/instrumentation-langchain/package.json
  • packages/traceloop-sdk/package.json
  • packages/instrumentation-together/package.json
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-bedrock/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls

Applied to files:

  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-anthropic/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-chromadb/package.json
  • packages/instrumentation-together/tsconfig.json
  • packages/instrumentation-openai/package.json
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-openai/tsconfig.json
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-together/package.json
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-anthropic/tsconfig.json
  • packages/instrumentation-cohere/tests/generate.test.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/tsconfig.json
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately

Applied to files:

  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-openai/tsconfig.json
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-anthropic/tsconfig.json
  • packages/instrumentation-cohere/tests/generate.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition

Applied to files:

  • packages/instrumentation-cohere/tsconfig.json
  • packages/instrumentation-chromadb/tsconfig.json
  • packages/instrumentation-bedrock/tsconfig.json
  • packages/instrumentation-together/tsconfig.json
  • packages/instrumentation-llamaindex/tsconfig.json
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-openai/tsconfig.json
  • packages/instrumentation-mcp/tsconfig.json
  • packages/instrumentation-langchain/tsconfig.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/src/lib/node-server-sdk.ts : Register new instrumentation packages during SDK initialization

Applied to files:

  • packages/traceloop-sdk/tsconfig.json
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/traceloop-sdk/package.json
📚 Learning: 2025-07-28T14:30:22.017Z
Learnt from: galkleinman
Repo: traceloop/openllmetry-js PR: 620
File: packages/instrumentation-llamaindex/package.json:43-45
Timestamp: 2025-07-28T14:30:22.017Z
Learning: Lerna automatically converts `workspace:*` dependencies to actual published versions during the `lerna publish` command, so `workspace:*` in package.json files is safe and expected in Lerna-managed monorepos.

Applied to files:

  • packages/traceloop-sdk/package.json
📚 Learning: 2025-08-10T06:42:06.412Z
Learnt from: nirga
Repo: traceloop/openllmetry-js PR: 623
File: packages/traceloop-sdk/src/lib/images/image-uploader.ts:71-78
Timestamp: 2025-08-10T06:42:06.412Z
Learning: In the Traceloop SDK's ImageUploader class (packages/traceloop-sdk/src/lib/images/image-uploader.ts), the Authorization header with bearer token IS required when uploading image data to the URL returned by the Traceloop API's `/v2/traces/{traceId}/spans/{spanId}/images` endpoint. This is not a pre-signed URL pattern but a Traceloop-specific API endpoint that maintains the same authentication requirements.

Applied to files:

  • packages/instrumentation-openai/src/image-wrappers.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:58:25.583Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/client/dataset/column.ts:41-47
Timestamp: 2025-08-12T13:58:25.583Z
Learning: In packages/traceloop-sdk/src/lib/client/dataset/column.ts, when the user mentioned "its working there is transform func", they were confirming that the response transformer is functioning correctly and converting snake_case API responses to camelCase, which means the Column class should access camelCase properties (createdAt, updatedAt) from the transformed ColumnResponse data.

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (10)
packages/instrumentation-anthropic/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (18-59)
packages/traceloop-sdk/test/agent_decorator.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (18-59)
packages/instrumentation-langchain/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (18-59)
packages/instrumentation-together/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (18-59)
packages/instrumentation-bedrock/tests/ai21.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (18-59)
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (18-59)
packages/instrumentation-bedrock/tests/anthropic.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (18-59)
packages/instrumentation-cohere/tests/generate.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (18-59)
packages/instrumentation-openai/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (18-59)
packages/instrumentation-cohere/tests/chat.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (18-59)


@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 93009b1 in 9 minutes and 20 seconds. Click for details.
  • Reviewed 6432 lines of code in 48 files
  • Skipped 15 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. tsconfig.base.json:1
  • Draft comment:
    The tsconfig.base.json has a well-configured set of compiler options for strict TypeScript development. The strict settings (e.g., 'strict', 'strictNullChecks', 'noImplicitReturns') ensure good type safety, and 'resolveJsonModule' enables importing JSON modules. The inclusion of 'moduleResolution': 'node' and 'esModuleInterop': true are also best practices. No issues found.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None
2. packages/instrumentation-bedrock/src/instrumentation.ts:390
  • Draft comment:
    There appears to be a typographical error in the comment on line 390. The phrase "The format is removing when we are setting span attribute" might be intended to read "The format is removed when setting the span attribute" or similar. Please review and adjust for clarity.
  • Reason this comment was not posted:
    Comment was on unchanged code.
3. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:28
  • Draft comment:
    Typo: It seems ASSOCATION_PROPERTIES_KEY is misspelled. Consider renaming it to ASSOCIATION_PROPERTIES_KEY.
  • Reason this comment was not posted:
    Comment was on unchanged code.
4. packages/traceloop-sdk/test/decorators.test.ts:578
  • Draft comment:
    Typo suggestion: The test string "Tell me a joke about Typescript" might need to be "Tell me a joke about TypeScript" with a capital 'S' for consistency.
  • Reason this comment was not posted:
    Comment was on unchanged code.

Workflow ID: wflow_TvZWWutMajl8O8P5

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed c331f0b in 5 minutes and 31 seconds. Click for details.
  • Reviewed 12 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 0 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.

Workflow ID: wflow_SJHzPlVB79x9yD4s

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed d2921fc in 2 minutes and 39 seconds. Click for details.
  • Reviewed 252 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 6 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:11
  • Draft comment:
    Added import for ATTR_GEN_AI_PROVIDER_NAME; ensure this attribute exists in the dependency and matches the new semantic conventions.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the author to ensure that an attribute exists in a dependency and matches new semantic conventions. This falls under asking the author to ensure something is correct, which is against the rules.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:97
  • Draft comment:
    Reformatted the role assignment in transformResponseText into a single line for better readability; no functional change observed.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it only describes a change made for readability without suggesting any action or raising any concerns. It doesn't provide any actionable feedback or ask for confirmation on any specific aspect of the code.
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:300
  • Draft comment:
    Consolidated multi‐line role assignment in transformPrompts; the change improves clarity without affecting functionality.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it praises the change without providing any actionable feedback or suggestions. It doesn't align with the rules for useful comments, which should either provide a suggestion, ask for clarification on specific code changes, or point out potential issues.
4. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:480
  • Draft comment:
    Introduced providerName extraction and assignment to ATTR_GEN_AI_PROVIDER_NAME in transformVendor. Verify that vendor strings starting with a dot are handled as expected.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to verify that vendor strings starting with a dot are handled as expected. This falls under the category of asking the author to ensure behavior is intended, which is against the rules. Therefore, this comment should not be approved.
5. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:525
  • Draft comment:
    Reformatted the finish reason assignment using a ternary operator for improved clarity; this is purely a formatting change.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it only describes a formatting change without suggesting any improvements or identifying potential issues. It doesn't provide any actionable feedback or specific suggestions for the PR author.
6. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:677
  • Draft comment:
    Overall, the updates consolidate legacy mappings into normalized GenAI attributes per the new opentelemetry semantic conventions. The reformatting across transformation functions improves readability without impacting functionality.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_7baJY2DJjBMg6IcZ

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)

328-356: Consider extracting common message processing logic.

This block duplicates the message-processing logic from lines 282-323 (handling AI_PROMPT_MESSAGES). Both iterate over messages, call processMessageContent, set ATTR_GEN_AI_PROMPT attributes, and build inputMessages. Consider extracting a shared helper function to reduce duplication and improve maintainability.
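The shared helper suggested above could look something like the following sketch. The `Message` shape, the content-processing callback, and the `gen_ai.prompt.*` attribute prefix are assumptions for illustration, not the repository's actual API:

```typescript
// Hedged sketch of a shared message-processing helper: both call
// sites iterate messages, process their content, and set indexed
// prompt attributes on the span's attribute map.
interface Message {
  role: string;
  content: unknown;
}

function setPromptAttributes(
  attrs: Record<string, unknown>,
  messages: Message[],
  processContent: (content: unknown) => string,
): void {
  messages.forEach((msg, i) => {
    // Index each message under the gen_ai.prompt.<n>.* namespace
    attrs[`gen_ai.prompt.${i}.role`] = msg.role;
    attrs[`gen_ai.prompt.${i}.content`] = processContent(msg.content);
  });
}
```

Both call sites would then pass their own content processor, keeping the iteration and attribute-naming logic in one place.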

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between c331f0b and d2921fc.

📒 Files selected for processing (1)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (18 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and test
🔇 Additional comments (7)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (7)

6-26: LGTM! Proper use of OpenTelemetry GenAI semantic conventions.

The imports correctly use the standardized ATTR_GEN_AI_* constants from @opentelemetry/semantic-conventions/incubating, aligning with the PR's goal to adopt OpenTelemetry semantic conventions.


380-402: LGTM! Token transforms correctly preserve canonical attributes.

The fix from previous reviews is properly applied. Both functions now:

  1. Check whether the canonical GEN_AI_USAGE_* attributes are absent
  2. Copy from legacy ai.usage.* keys if present
  3. Delete only the legacy keys (not the canonical ones)

This ensures calculateTotalTokens can read the preserved canonical attributes.
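The preserve-then-delete behavior described in steps 1–3 can be sketched as follows. The attribute keys are illustrative stand-ins for the real constants, not the repository's exact strings:

```typescript
// Hedged sketch: copy legacy AI SDK usage keys onto canonical GenAI
// attributes only when the canonical key is absent, then delete only
// the legacy key so the canonical value survives for later readers.
type UsageAttrs = Record<string, unknown>;

function normalizeUsage(attrs: UsageAttrs): UsageAttrs {
  const mappings: [legacy: string, canonical: string][] = [
    ["ai.usage.promptTokens", "gen_ai.usage.input_tokens"],
    ["ai.usage.completionTokens", "gen_ai.usage.output_tokens"],
  ];
  for (const [legacy, canonical] of mappings) {
    if (attrs[canonical] === undefined && attrs[legacy] !== undefined) {
      attrs[canonical] = attrs[legacy];
    }
    // Remove the legacy key regardless; never delete the canonical one
    delete attrs[legacy];
  }
  return attrs;
}
```

Note that when both keys are present, the canonical value wins and the legacy value is simply discarded.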


463-488: LGTM! Vendor transformation correctly handles provider name extraction.

The function properly:

  • Extracts the provider name prefix (before first dot) for compound identifiers like "openai.chat"
  • Falls back to the full vendor string if no dot exists
  • Maps known prefixes to standardized vendor names via VENDOR_MAPPING
  • Sets both ATTR_GEN_AI_SYSTEM (mapped vendor) and ATTR_GEN_AI_PROVIDER_NAME (extracted prefix)
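The prefix extraction and mapping described in these bullets can be sketched as below. The `VENDOR_MAPPING` entries are hypothetical examples, not the repository's actual table:

```typescript
// Hedged sketch: take the substring before the first dot
// ("openai.chat" -> "openai"), fall back to the full vendor string
// when no dot exists, then map known prefixes to standardized names.
const VENDOR_MAPPING: Record<string, string> = {
  openai: "OpenAI",
  anthropic: "Anthropic",
};

function extractProviderName(vendor: string): {
  system: string; // mapped vendor, for ATTR_GEN_AI_SYSTEM
  provider: string; // raw extracted prefix, for ATTR_GEN_AI_PROVIDER_NAME
} {
  const dot = vendor.indexOf(".");
  const prefix = dot === -1 ? vendor : vendor.slice(0, dot);
  return { system: VENDOR_MAPPING[prefix] ?? prefix, provider: prefix };
}
```

A vendor string that starts with a dot would yield an empty prefix here, which is the edge case an earlier draft comment asked about.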

490-532: LGTM! New transformer functions are well-structured.

The added transformers properly normalize AI SDK attributes to OpenTelemetry GenAI semantic conventions:

  • transformOperationName: Correctly derives operation type ("chat" vs "execute_tool") from span name patterns
  • transformModelId: Clean rename of ai.model.id → ATTR_GEN_AI_REQUEST_MODEL
  • transformFinishReason: Properly normalizes to array format
  • transformConversationId: Appropriate fallback from conversationId to sessionId
  • transformResponseMetadata: Straightforward response model/ID mapping

Also applies to: 557-581


660-681: LGTM! Orchestrator properly integrates all transformations.

The transformLLMSpans function correctly:

  • Invokes new transformers (operation name, model ID, finish reason, response metadata, conversation ID, tool call attributes) in the appropriate sequence
  • Maintains existing transformations for responses, prompts, tools, tokens, and provider metadata
  • Coordinates the complete migration to GenAI semantic conventions

235-279: Verify migration of LLM_REQUEST_FUNCTIONS to GenAI semantic conventions.

This function still uses SpanAttributes.LLM_REQUEST_FUNCTIONS while the PR migrates to GEN_AI_* semantic conventions. Confirm whether a GenAI equivalent exists (e.g., ATTR_GEN_AI_REQUEST_TOOLS or similar) and should be used here, or if this legacy attribute is intentionally retained for backward compatibility.


453-461: Verify GenAI semantic convention for total tokens attribute.

This function reads from ATTR_GEN_AI_USAGE_INPUT_TOKENS and ATTR_GEN_AI_USAGE_OUTPUT_TOKENS but writes to SpanAttributes.LLM_USAGE_TOTAL_TOKENS. Check if a GenAI equivalent (e.g., ATTR_GEN_AI_USAGE_TOTAL_TOKENS) exists in @traceloop/ai-semantic-conventions and should be used for consistency with the migration to GenAI semantic conventions.
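For reference, the read/write mismatch being flagged looks roughly like this. The key strings are stand-ins for the constants; the string-coercion behavior mirrors what the tests elsewhere in this PR exercise.

```typescript
// Sketch: total tokens are computed from the GenAI input/output attributes,
// but currently written back under the legacy llm.* namespace.
function calculateTotalTokens(attrs: Record<string, unknown>): void {
  const input = Number(attrs["gen_ai.usage.input_tokens"]);
  const output = Number(attrs["gen_ai.usage.output_tokens"]);
  if (!Number.isNaN(input) && !Number.isNaN(output)) {
    // The review asks whether this should be a gen_ai.usage.total_tokens key instead.
    attrs["llm.usage.total_tokens"] = input + output;
  }
}

const attrsExample: Record<string, unknown> = {
  "gen_ai.usage.input_tokens": 10,
  "gen_ai.usage.output_tokens": "5", // string values are coerced via Number()
};
calculateTotalTokens(attrsExample);
```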

Comment thread packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 5890edb in 2 minutes and 45 seconds. Click for details.
  • Reviewed 3131 lines of code in 33 files
  • Skipped 0 files when reviewing.
  • Skipped posting 8 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/decorators.test.ts:50
  • Draft comment:
    The test suite for decorators is very comprehensive. Consider adding a comment near the shared initialization (line 76) to explain the purpose of using shared initialization with Polly – it helps readers understand the testing context.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:35
  • Draft comment:
    The transformation tests are very detailed. It might be useful to add a couple of edge-case tests that cover scenarios where input attributes are empty objects or unexpected types, to further enforce robustness of the transformation function.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/test/ai-sdk-integration.test.ts:64
  • Draft comment:
    Good use of environment variable setup for replay mode. Consider adding inline comments on why certain API keys are set to dummy values for clarity.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts:60
  • Draft comment:
    Tests for agent integration and association properties are thorough. A minor suggestion: document expected structure of association properties in a comment for future maintainers.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:904
  • Draft comment:
    The total tokens calculation tests handle both numeric and string token values. It could be beneficial to include a test for negative or non-numeric string values (if applicable), to ensure robustness in type conversion.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
6. packages/traceloop-sdk/test/decorators.test.ts:288
  • Draft comment:
    The decorator tests check for proper span attribute propagation. It might help to add a brief comment on the purpose of the ‘withWorkflow’ vs. ‘withAgent’ tests, so new contributors understand the distinction.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
7. packages/instrumentation-bedrock/src/instrumentation.ts:375
  • Draft comment:
    Typographical note: The comment 'The format is removing when we are setting span attribute' may be clearer if reworded. Consider changing it to 'The format is being removed when setting the span attribute'.
  • Reason this comment was not posted:
    Comment was on unchanged code.
8. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2271
  • Draft comment:
    Typo alert: The string "gpt-4o" may be a typo. Was it meant to be "gpt-4"?
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_s2mtQDIcV5NUexGx

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (9)
packages/instrumentation-cohere/src/instrumentation.ts (1)

412-445: Namespace mixing for token attributes—this is intentional but inconsistent. Token usage attributes are currently split across gen_ai.usage.* (prompt/completion) and llm.usage.total_tokens. Tests confirm this is expected behavior, but it conflicts with the intent to consolidate under GenAI semantic conventions. The proposed fix is valid: import ATTR_GEN_AI_USAGE_TOTAL_TOKENS from @opentelemetry/semantic-conventions/incubating (available in v1.38.0) instead of using SpanAttributes.LLM_USAGE_TOTAL_TOKENS.

This pattern affects all instrumentation packages (anthropic, openai, cohere, bedrock, etc.), so consider a systemic fix: add GEN_AI_USAGE_TOTAL_TOKENS = "gen_ai.usage.total_tokens" to packages/ai-semantic-conventions/src/SemanticAttributes.ts and update all instrumentations to import and use it consistently. Alternatively, standardize all token attributes to the llm.usage.* namespace if that is the intended stable convention.

Also applies to: lines 486–515 (and the same pattern in other instrumentations).

packages/instrumentation-openai/src/instrumentation.ts (1)

685-723: Record tool_calls with correct GenAI semconv structure.

The OpenTelemetry GenAI semantic conventions specify tool call attributes must use gen_ai.completion.${index}.tool_calls.${toolIndex}.function.name and .function.arguments (nested under function), not directly under tool_calls.${toolIndex}. Additionally, tool call id and type attributes should be recorded at gen_ai.completion.${index}.tool_calls.${toolIndex}.id and .type.

Update to match the spec:

  • Nest name and arguments under .function
  • Add .id and .type from toolCall.id and toolCall.type
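The nested key layout being requested can be sketched like this; the toolCall shape is a hypothetical mirror of an OpenAI chat-completion tool call, and the keys are built by hand for illustration.

```typescript
// Tool-call attributes nested under .function, plus .id and .type,
// per the GenAI semconv structure the review cites.
interface ToolCall {
  id: string;
  type: string;
  function: { name: string; arguments: string };
}

function toolCallAttributes(
  completionIndex: number,
  toolIndex: number,
  toolCall: ToolCall,
): Record<string, string> {
  const base = `gen_ai.completion.${completionIndex}.tool_calls.${toolIndex}`;
  return {
    [`${base}.id`]: toolCall.id,
    [`${base}.type`]: toolCall.type,
    [`${base}.function.name`]: toolCall.function.name,
    [`${base}.function.arguments`]: toolCall.function.arguments,
  };
}

const toolAttrs = toolCallAttributes(0, 0, {
  id: "call_1",
  type: "function",
  function: { name: "get_weather", arguments: '{"city":"Paris"}' },
});
```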
packages/instrumentation-openai/src/image-wrappers.ts (4)

292-334: calculateImageGenerationTokens() docs mention gpt-image-1, but code never handles it.
Right now any model === "gpt-image-1" falls into the default fallback (1056), contradicting the comment table and producing wrong “completion tokens”.

 function calculateImageGenerationTokens(
   params: any,
   imageCount: number,
 ): number {
   const size = params?.size || "1024x1024";
   const model = params?.model || "dall-e-2";
   const quality = params?.quality || "standard";
@@
-  } else {
-    // Default fallback for unknown models
-    tokensPerImage = 1056;
-  }
+  } else if (model === "gpt-image-1") {
+    // gpt-image-1 costs depend on quality and size (per doc table in the comment above)
+    const gptImage1Costs: Record<string, Record<string, number>> = {
+      low: {
+        "1024x1024": 272,
+        "1024x1536": 408,
+        "1536x1024": 400,
+      },
+      medium: {
+        "1024x1024": 1056,
+        "1024x1536": 1584,
+        "1536x1024": 1568,
+      },
+      high: {
+        "1024x1024": 4160,
+        "1024x1536": 6240,
+        "1536x1024": 6208,
+      },
+    };
+    tokensPerImage =
+      gptImage1Costs[quality]?.[size] || gptImage1Costs["medium"]["1024x1024"];
+  } else {
+    // Default fallback for unknown models
+    tokensPerImage = 1056;
+  }

539-587: wrapImageVariation: sync branch never sets request attributes (missing prompt/image metadata).
Even if the return value is “probably always a Promise”, this is a real functional gap if the sync branch ever triggers.

       } else {
         try {
+          // If sync path is reachable, at least capture request attributes.
+          // Note: this is async today due to uploadCallback usage; if you keep a sync path, consider a sync-only variant.
+          void setImageVariationRequestAttributes(span, params, uploadCallback).catch((e) =>
+            console.error("Error setting image variation request attributes:", e),
+          );
           return response;
         } catch (error) {
           span.recordException(error as Error);
           throw error;
         } finally {
           span.end();
         }
       }

472-525: Remove the unreachable else branch in wrapImageEdit—the sync path creates a race condition that, while currently unreachable (OpenAI SDK always returns a Promise), should not exist as dead code.

OpenAI's image methods (images.generate, images.edit, images.createVariation) always return a Promise—they are never synchronous. The else branch at lines 516–525 is dead code that would end the span before setRequestAttributesPromise completes if somehow reached. The Promise path itself properly awaits the promise before ending (lines 494, 511), so the current code is safe in practice. However, the unreachable else branch should be removed to avoid confusion and eliminate the latent race condition.


151-174: Define and import image attribute constants from @traceloop/ai-semantic-conventions.

The hardcoded strings gen_ai.request.image.size, gen_ai.request.image.quality, gen_ai.request.image.style, and gen_ai.request.image.count violate the repo guideline: "Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings."

Add these constants to packages/ai-semantic-conventions/src/SemanticAttributes.ts:

  • GEN_AI_REQUEST_IMAGE_SIZE
  • GEN_AI_REQUEST_IMAGE_QUALITY
  • GEN_AI_REQUEST_IMAGE_STYLE
  • GEN_AI_REQUEST_IMAGE_COUNT

Then import and use them in image-wrappers.ts (lines 156, 160, 164, 168) instead of hardcoded string keys.

packages/instrumentation-bedrock/src/instrumentation.ts (1)

229-251: Replace incorrect constant names: ATTR_GEN_AI_USAGE_PROMPT_TOKENS and ATTR_GEN_AI_USAGE_COMPLETION_TOKENS do not exist in @opentelemetry/semantic-conventions v1.38.0

These constants are not exported by @opentelemetry/semantic-conventions/incubating v1.38.0. The correct constant names are:

  • ATTR_GEN_AI_USAGE_INPUT_TOKENS (for prompt/input tokens)
  • ATTR_GEN_AI_USAGE_OUTPUT_TOKENS (for completion/output tokens)

Update the import statement (lines 46–47) and all usages at lines 231, 237, 445–446, 513–514, 528–529.

packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (1)

171-227: Avoid setting span attributes to undefined (invalid OTel attribute value).

Line 179–182 (model can be undefined) and the numeric parameter reads (Line 185–201) can also resolve to undefined depending on payload shape. Guard before assigning.

-const model = params.endpoint.split("/").pop();
-attributes[ATTR_GEN_AI_REQUEST_MODEL] = model;
-attributes[ATTR_GEN_AI_RESPONSE_MODEL] = model;
+const model = params.endpoint.split("/").pop();
+if (model) {
+  attributes[ATTR_GEN_AI_REQUEST_MODEL] = model;
+  attributes[ATTR_GEN_AI_RESPONSE_MODEL] = model;
+}

-if (params?.parameters.structValue?.fields?.maxOutputTokens.numberValue) {
+const maxOutputTokens =
+  params?.parameters.structValue?.fields?.maxOutputTokens.numberValue;
+if (typeof maxOutputTokens === "number") {
   attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] =
-    params?.parameters.structValue?.fields?.maxOutputTokens.numberValue;
+    maxOutputTokens;
 }
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)

156-169: Guard generationConfig fields before setting attrs (avoid undefined AttributeValue).

On Line 161–166, max_output_tokens / temperature / top_p can be absent. Prefer typeof === "number" checks (and only set when defined).
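A minimal sketch of that guard pattern, using illustrative attribute keys (not the exact constants):

```typescript
// Only set attributes for fields that are actually present, so no span
// attribute is ever assigned undefined (an invalid OTel attribute value).
interface GenerationConfig {
  max_output_tokens?: number;
  temperature?: number;
  top_p?: number;
}

function configAttributes(cfg: GenerationConfig): Record<string, number> {
  const attrs: Record<string, number> = {};
  if (typeof cfg.max_output_tokens === "number") {
    attrs["gen_ai.request.max_tokens"] = cfg.max_output_tokens;
  }
  if (typeof cfg.temperature === "number") {
    attrs["gen_ai.request.temperature"] = cfg.temperature;
  }
  if (typeof cfg.top_p === "number") {
    attrs["gen_ai.request.top_p"] = cfg.top_p;
  }
  return attrs;
}

const partial = configAttributes({ temperature: 0.2 }); // only temperature is set
```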

♻️ Duplicate comments (2)
packages/instrumentation-cohere/src/instrumentation.ts (1)

252-278: Prompt message body key should be .content, not .user (consistency + likely consumer expectations).
You already use .content for chat history entries, but completion/chat final user message + rerank query still use .user.

This matches prior review feedback on this file.

         if (type === "completion" && "prompt" in params) {
           attributes[`${ATTR_GEN_AI_PROMPT}.0.role`] = "user";
-          attributes[`${ATTR_GEN_AI_PROMPT}.0.user`] = params.prompt;
+          attributes[`${ATTR_GEN_AI_PROMPT}.0.content`] = params.prompt;
         } else if (type === "chat" && "message" in params) {
           params.chatHistory?.forEach((msg, index) => {
             attributes[`${ATTR_GEN_AI_PROMPT}.${index}.role`] = msg.role;
             if (msg.role !== "TOOL") {
               attributes[`${ATTR_GEN_AI_PROMPT}.${index}.content`] =
                 msg.message;
             }
           });

           attributes[
             `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.role`
           ] = "user";
           attributes[
-            `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.user`
+            `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.content`
           ] = params.message;
         } else if (type === "rerank" && "query" in params) {
           attributes[`${ATTR_GEN_AI_PROMPT}.0.role`] = "user";
-          attributes[`${ATTR_GEN_AI_PROMPT}.0.user`] = params.query;
+          attributes[`${ATTR_GEN_AI_PROMPT}.0.content`] = params.query;
           params.documents.forEach((doc, index) => {
             attributes[`documents.${index}.index`] =
               typeof doc === "string" ? doc : doc.text;
           });
         }
packages/instrumentation-vertexai/tests/palm2.test.ts (1)

17-28: Fix missing imports for ATTR_GEN_AI_* and avoid hardcoded llm.* attribute keys.
As written, ATTR_GEN_AI_SYSTEM / ATTR_GEN_AI_REQUEST_MODEL / etc. are undefined in this file, and "llm.request.type" / "llm.top_k" should use constants (per repo guidelines and the earlier review).

 import { context } from "@opentelemetry/api";
 import { AsyncHooksContextManager } from "@opentelemetry/context-async-hooks";
 import { AIPlatformInstrumentation } from "../src/aiplatform-instrumentation";
 import * as assert from "assert";
+import { SpanAttributes } from "@traceloop/ai-semantic-conventions";
+import {
+  ATTR_GEN_AI_REQUEST_MODEL,
+  ATTR_GEN_AI_REQUEST_TOP_P,
+  ATTR_GEN_AI_RESPONSE_MODEL,
+  ATTR_GEN_AI_SYSTEM,
+} from "@opentelemetry/semantic-conventions/incubating";
 import {
   BasicTracerProvider,
   InMemorySpanExporter,
   SimpleSpanProcessor,
 } from "@opentelemetry/sdk-trace-base";
-    assert.strictEqual(attributes["llm.request.type"], "completion");
+    assert.strictEqual(attributes[SpanAttributes.LLM_REQUEST_TYPE], "completion");
 ...
-    assert.strictEqual(attributes["llm.top_k"], parameter.topK);
+    assert.strictEqual(attributes[SpanAttributes.LLM_TOP_K], parameter.topK);

(Apply the same replacements in the second test block.)

Also applies to: 95-103, 168-178

🧹 Nitpick comments (20)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (5)

197-411: Prompt messages parsing/flattening tests are thorough (object content, arrays, mixed content, escaping, invalid JSON).
One small gotcha: several cases don’t assert removal of ai.prompt.messages after transform (some do); if deletion is part of the contract, consider making that consistent across cases.


824-862: Token tests correctly assert “delete legacy AI SDK keys, keep normalized input_tokens”.
Consider adding one explicit fallback test for the case where only ai.usage.promptTokens exists (no gen_ai.usage.input_tokens) to document whether you want to (a) drop it, or (b) map it into input_tokens.


864-902: Completion tokens: same comment as prompt tokens—tests are consistent with “keep output_tokens”.


1594-1869: Telemetry metadata → association properties tests are valuable, but one expectation is lossy ("[object Object]").
If feasible, prefer JSON-stringifying object metadata values in the transformer and update the test accordingly; current behavior makes debugging/analytics harder.


2292-2351: “Complete transformation” scenario is a good integration-style assertion set for the new attribute suite.
Minor: if tokens are intentionally only computed from gen_ai.usage.{input,output}_tokens, consider adding them here (or explicitly assert they remain unset) to make the scenario’s token expectations unambiguous.

packages/instrumentation-openai/src/instrumentation.ts (2)

347-398: Function/tool metadata is still on llm.request.functions.* — decide if you want it migrated/normalized too
Right now prompts/completions move to gen_ai.*, but params.functions/params.tools still emit SpanAttributes.LLM_REQUEST_FUNCTIONS.*. If this is intentional (no GenAI equivalent), consider at least documenting it in-code; otherwise, map to the new convention and/or define new constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts. Based on learnings/coding guidelines, prefer centralizing attribute keys in @traceloop/ai-semantic-conventions where possible.


667-681: Emit GenAI “total tokens” too (you currently only emit prompt/completion tokens under GenAI)
If ATTR_GEN_AI_USAGE_TOTAL_TOKENS exists for your semconv version, set it alongside the legacy SpanAttributes.LLM_USAGE_TOTAL_TOKENS for a smoother migration.

 import {
   ATTR_GEN_AI_COMPLETION,
   ATTR_GEN_AI_PROMPT,
@@
   ATTR_GEN_AI_USAGE_COMPLETION_TOKENS,
   ATTR_GEN_AI_USAGE_PROMPT_TOKENS,
+  ATTR_GEN_AI_USAGE_TOTAL_TOKENS,
 } from "@opentelemetry/semantic-conventions/incubating";
@@
       if (result.usage) {
         span.setAttribute(
           SpanAttributes.LLM_USAGE_TOTAL_TOKENS,
           result.usage?.total_tokens,
         );
+        span.setAttribute(
+          ATTR_GEN_AI_USAGE_TOTAL_TOKENS,
+          result.usage?.total_tokens,
+        );
packages/instrumentation-openai/src/image-wrappers.ts (2)

350-390: fetch(firstImage.url) has no response.ok check + no timeout; can hang or upload garbage.
At minimum, check response.ok before reading arrayBuffer(). Consider an AbortController timeout since this is instrumentation code on the request path.

-        const response = await fetch(firstImage.url);
+        const response = await fetch(firstImage.url);
+        if (!response.ok) {
+          throw new Error(`Failed to fetch image: ${response.status} ${response.statusText}`);
+        }
         const arrayBuffer = await response.arrayBuffer();

190-231: The prompt structure violates OpenTelemetry GenAI semantic conventions for multi-modal input.

Per the OpenTelemetry GenAI specification, multi-modal user input (text + image) must be represented as multiple parts within a single message, not as separate message entries. The current code uses .0 for text and .1 for image, each with their own role assignment, which treats them as two separate messages. This deviates from the semantic convention that expects:

  • A single message at index .0 with role="user"
  • Multiple parts within that message (one text part, one image part)

Consolidate the text prompt and image into a single .0 entry with a multi-part content array that includes both the text and image components, following the parts-based structure defined in the OpenTelemetry GenAI input messages schema.
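The consolidated shape could look roughly like this; the exact part field names ("type", "text", "source") are an assumption modeled on the parts-based schema, not the convention's verbatim field list.

```typescript
// One user message at index 0 whose content is an array of parts
// (text + image), instead of two separate message entries.
const promptAttributes: Record<string, string> = {
  "gen_ai.prompt.0.role": "user",
  "gen_ai.prompt.0.content": JSON.stringify([
    { type: "text", text: "Make this photo brighter" },
    { type: "image", source: "uploaded-image-ref" }, // hypothetical reference
  ]),
};

const parts = JSON.parse(promptAttributes["gen_ai.prompt.0.content"]);
```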

packages/instrumentation-bedrock/src/instrumentation.ts (1)

37-48: Guard ATTR_GEN_AI_RESPONSE_MODEL before parsing vendor/model in _endSpan (prevents setting empty/incorrect response model when span attributes are incomplete).

-        if (ATTR_GEN_AI_SYSTEM in attributes) {
-          const modelId = attributes[ATTR_GEN_AI_RESPONSE_MODEL] as string;
+        if (ATTR_GEN_AI_SYSTEM in attributes) {
+          const modelId = attributes[ATTR_GEN_AI_RESPONSE_MODEL];
+          if (typeof modelId !== "string" || modelId.length === 0) {
+            // best-effort: still end span, but skip vendor/model normalization
+            span.setStatus({ code: SpanStatusCode.OK });
+            span.end();
+            return;
+          }
           const { modelVendor, model } = this._extractVendorAndModel(modelId);

Also applies to: 171-176, 213-218

packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)

17-32: Consider adding provenance for custom GEN_AI_USAGE_* keys (and keep this list intentionally small).

A short comment/link to the upstream semconv issue/spec (or rationale) would help justify why these live here given the repo direction to use official constants directly. (Based on past review comment + coding guidelines tension.)

packages/instrumentation-openai/test/instrumentation.test.ts (2)

38-45: Consider re-exporting GenAI constants from @traceloop/ai-semantic-conventions to avoid direct incubating coupling in instrumentation packages/tests.
This repo’s guideline prefers importing semantic attribute constants from @traceloop/ai-semantic-conventions; depending directly on @opentelemetry/semantic-conventions/incubating can introduce churn when the incubating entrypoint changes. Based on learnings/coding guidelines, consider re-exporting/pinning these keys centrally and importing from there.


672-686: Avoid hardcoded "gen_ai.request.*" strings when constants exist (or define missing ones in SpanAttributes).
The image assertions still use "gen_ai.request.type", "gen_ai.request.image.size", and "gen_ai.request.image.count" as raw strings; consider using constants (or adding them to packages/ai-semantic-conventions/src/SemanticAttributes.ts) for consistency.

Also applies to: 682-686

packages/instrumentation-vertexai/tests/gemini.test.ts (1)

91-98: Optional: migrate remaining "gen_ai.prompt.*" / "gen_ai.completion.*" string keys to constants for consistency.
Not required, but it’d match the direction taken in the other instrumentation tests.

Also applies to: 145-155

packages/instrumentation-cohere/tests/rerank.test.ts (2)

28-33: Incubating constants import: ensure workspace pins a semantic-conventions version that exports these.
Given @opentelemetry/semantic-conventions/incubating is an external API surface, make sure the monorepo dependency and TS moduleResolution settings support this entrypoint.


116-120: Drop duplicate request model assertion (same check twice).
Minor cleanup: lines 116-120 and 133-135 assert the same ATTR_GEN_AI_REQUEST_MODEL.

Also applies to: 133-135

packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (2)

35-46: Consider re-exporting ATTR_GEN_AI_* from @traceloop/ai-semantic-conventions to avoid OTel-incubating imports scattered across instrumentations.

This repo’s guideline prefers importing semantic attribute constants from @traceloop/ai-semantic-conventions; if the intent is to standardize on OTel’s incubating constants, a small re-export layer would keep call sites consistent and reduce future churn. As per coding guidelines, ...

-} from "@opentelemetry/semantic-conventions/incubating";
+} from "@traceloop/ai-semantic-conventions";

284-367: Normalize request/response model consistently (short name vs full resource) or document why they differ.

On Line 286–287 you set ATTR_GEN_AI_RESPONSE_MODEL from result[0].model, which may not match the request-side model derived from the endpoint (Line 179–182). Consider parsing the response model similarly (or storing both in different attrs if that’s the intent).

packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)

35-46: Same constants-surface concern as other instrumentations (prefer single import surface).

If the repo is standardizing on OTel incubating attrs, consider re-exporting these from @traceloop/ai-semantic-conventions and importing from there for consistency. As per coding guidelines, ...

packages/instrumentation-bedrock/tests/meta.test.ts (1)

145-165: Drop duplicate ATTR_GEN_AI_REQUEST_MODEL assertions to reduce noise.

You assert attributes[ATTR_GEN_AI_REQUEST_MODEL] twice (Line 151 and Line 163; similarly Line 224 and Line 239). Consider removing the duplicates.

Also applies to: 219-244

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d2921fc and 5890edb.

📒 Files selected for processing (33)
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
  • packages/instrumentation-anthropic/src/instrumentation.ts (6 hunks)
  • packages/instrumentation-anthropic/test/instrumentation.test.ts (7 hunks)
  • packages/instrumentation-bedrock/src/instrumentation.ts (16 hunks)
  • packages/instrumentation-bedrock/tests/ai21.test.ts (3 hunks)
  • packages/instrumentation-bedrock/tests/amazon.test.ts (5 hunks)
  • packages/instrumentation-bedrock/tests/anthropic.test.ts (3 hunks)
  • packages/instrumentation-bedrock/tests/cohere.test.ts (3 hunks)
  • packages/instrumentation-bedrock/tests/meta.test.ts (5 hunks)
  • packages/instrumentation-cohere/src/instrumentation.ts (12 hunks)
  • packages/instrumentation-cohere/tests/chat.test.ts (9 hunks)
  • packages/instrumentation-cohere/tests/generate.test.ts (9 hunks)
  • packages/instrumentation-cohere/tests/rerank.test.ts (3 hunks)
  • packages/instrumentation-langchain/src/callback_handler.ts (7 hunks)
  • packages/instrumentation-langchain/test/instrumentation.test.ts (3 hunks)
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts (5 hunks)
  • packages/instrumentation-openai/src/image-wrappers.ts (15 hunks)
  • packages/instrumentation-openai/src/instrumentation.ts (8 hunks)
  • packages/instrumentation-openai/test/instrumentation.test.ts (21 hunks)
  • packages/instrumentation-together/src/instrumentation.ts (8 hunks)
  • packages/instrumentation-together/test/instrumentation.test.ts (13 hunks)
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (8 hunks)
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (4 hunks)
  • packages/instrumentation-vertexai/tests/gemini.test.ts (2 hunks)
  • packages/instrumentation-vertexai/tests/palm2.test.ts (2 hunks)
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts (2 hunks)
  • packages/traceloop-sdk/src/lib/tracing/manual.ts (5 hunks)
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts (4 hunks)
  • packages/traceloop-sdk/test/agent_decorator.test.ts (4 hunks)
  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (7 hunks)
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts (7 hunks)
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (42 hunks)
  • packages/traceloop-sdk/test/decorators.test.ts (8 hunks)
🚧 Files skipped from review as they are similar to previous changes (11)
  • packages/instrumentation-anthropic/test/instrumentation.test.ts
  • packages/instrumentation-langchain/test/instrumentation.test.ts
  • packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
  • packages/traceloop-sdk/test/decorators.test.ts
  • packages/traceloop-sdk/src/lib/tracing/decorators.ts
  • packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
  • packages/instrumentation-anthropic/src/instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
  • packages/instrumentation-together/test/instrumentation.test.ts
  • packages/instrumentation-bedrock/tests/anthropic.test.ts
  • packages/instrumentation-cohere/tests/generate.test.ts
🧰 Additional context used
📓 Path-based instructions (5)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
packages/instrumentation-*/**

📄 CodeRabbit inference engine (CLAUDE.md)

Place each provider integration in its own package under packages/instrumentation-[provider]/

Files:

  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
packages/instrumentation-*/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/instrumentation-*/**/*.{ts,tsx}: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Files:

  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
packages/ai-semantic-conventions/src/SemanticAttributes.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
🧠 Learnings (15)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-vertexai/tests/palm2.test.ts
  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/test/agent_decorator.test.ts
  • packages/traceloop-sdk/src/lib/tracing/manual.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/rerank.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-langchain/src/callback_handler.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/span-processor.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-13T08:00:42.668Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/interfaces/dataset.interface.ts:27-27
Timestamp: 2025-08-13T08:00:42.668Z
Learning: In packages/traceloop-sdk/src/lib/interfaces/dataset.interface.ts, when the user said "there is no date" in response to a comment about removing "date" from column type unions, they may have been referring to local changes not yet reflected in the PR branch, as the code still contained "date" in the union types at lines 27 and 43.

Applied to files:

  • packages/traceloop-sdk/src/lib/tracing/manual.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls

Applied to files:

  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-bedrock/tests/amazon.test.ts
  • packages/instrumentation-vertexai/tests/gemini.test.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-openai/src/image-wrappers.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/cohere.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Applied to files:

  • packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/instrumentation-bedrock/src/instrumentation.ts
  • packages/instrumentation-openai/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/meta.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately

Applied to files:

  • packages/instrumentation-together/src/instrumentation.ts
  • packages/instrumentation-bedrock/tests/ai21.test.ts
  • packages/instrumentation-cohere/src/instrumentation.ts
📚 Learning: 2025-08-10T06:42:06.412Z
Learnt from: nirga
Repo: traceloop/openllmetry-js PR: 623
File: packages/traceloop-sdk/src/lib/images/image-uploader.ts:71-78
Timestamp: 2025-08-10T06:42:06.412Z
Learning: In the Traceloop SDK's ImageUploader class (packages/traceloop-sdk/src/lib/images/image-uploader.ts), the Authorization header with bearer token IS required when uploading image data to the URL returned by the Traceloop API's `/v2/traces/{traceId}/spans/{spanId}/images` endpoint. This is not a pre-signed URL pattern but a Traceloop-specific API endpoint that maintains the same authentication requirements.

Applied to files:

  • packages/instrumentation-openai/src/image-wrappers.ts
📚 Learning: 2025-08-12T13:58:25.583Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/client/dataset/column.ts:41-47
Timestamp: 2025-08-12T13:58:25.583Z
Learning: In packages/traceloop-sdk/src/lib/client/dataset/column.ts, when the user mentioned "its working there is transform func", they were confirming that the response transformer is functioning correctly and converting snake_case API responses to camelCase, which means the Column class should access camelCase properties (createdAt, updatedAt) from the transformed ColumnResponse data.

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition

Applied to files:

  • packages/instrumentation-cohere/src/instrumentation.ts
🧬 Code graph analysis (12)
packages/traceloop-sdk/test/agent_decorator.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
packages/traceloop-sdk/src/lib/tracing/manual.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
packages/instrumentation-cohere/tests/rerank.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
packages/instrumentation-langchain/src/callback_handler.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
  • transformLLMSpans (660-681)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
packages/instrumentation-openai/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
packages/instrumentation-bedrock/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
packages/instrumentation-openai/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
packages/instrumentation-cohere/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
packages/instrumentation-bedrock/tests/cohere.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
packages/instrumentation-cohere/tests/chat.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-58)
🔇 Additional comments (39)
packages/instrumentation-langchain/src/callback_handler.ts (4)

24-32: LGTM - Proper use of OpenTelemetry semantic conventions.

The imports correctly source the new ATTR_GEN_AI_* constants from @opentelemetry/semantic-conventions/incubating while retaining custom attributes from @traceloop/ai-semantic-conventions. This follows the coding guidelines for importing AI/LLM semantic attribute constants.


73-90: LGTM - Chat model attributes correctly migrated.

The migration from LLM_SYSTEM and LLM_PROMPTS to ATTR_GEN_AI_SYSTEM and ATTR_GEN_AI_PROMPT is correct. The indexed attribute pattern for multi-message prompts is appropriate for LangChain's message array structure.


114-126: LGTM - Completion model attributes correctly migrated.

Consistent migration with handleChatModelStart. The "user" role for completion prompts is appropriate since these are direct text prompts rather than structured chat messages.


148-207: LGTM - Response and usage attributes correctly migrated.

The completion content, model names, and token usage attributes are all properly migrated to the new ATTR_GEN_AI_* conventions. Correctly retains SpanAttributes.LLM_USAGE_TOTAL_TOKENS since there's no standard GenAI equivalent for total tokens in the OpenTelemetry semantic conventions. Both usage and tokenUsage compatibility paths are consistently updated.

packages/traceloop-sdk/src/lib/tracing/span-processor.ts (2)

193-193: LGTM: Agent name propagation logic is correct.

The migration from the legacy constant to ATTR_GEN_AI_AGENT_NAME is implemented consistently across all usage points. The agent name propagation logic correctly:

  • Sets the attribute on span start from context or AI SDK metadata
  • Propagates the agent name from parent spans when not directly available
  • Maintains a TTL-based cache for efficient propagation

Also applies to: 275-293
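
The TTL-based propagation cache mentioned above can be sketched as a small helper (hypothetical; the SDK's actual span-processor cache may be structured differently):

```typescript
// Hypothetical sketch of a TTL-based agent-name cache keyed by span id,
// used to propagate gen_ai.agent.name from parent to child spans.
type CacheEntry = { agentName: string; expiresAt: number };

class AgentNameCache {
  private entries = new Map<string, CacheEntry>();

  constructor(private ttlMs: number) {}

  set(spanId: string, agentName: string): void {
    this.entries.set(spanId, { agentName, expiresAt: Date.now() + this.ttlMs });
  }

  get(spanId: string): string | undefined {
    const entry = this.entries.get(spanId);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(spanId); // evict stale entries lazily on read
      return undefined;
    }
    return entry.agentName;
  }
}
```

Lazy eviction on read keeps the cache bounded without a background timer, which fits a span processor's synchronous hot path.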


18-18: No changes needed. The import at line 18 correctly uses @opentelemetry/semantic-conventions/incubating for standard OpenTelemetry semantic attributes. The coding guideline applies only to custom Traceloop-specific attributes defined in SpanAttributes; ATTR_GEN_AI_AGENT_NAME is a standard OpenTelemetry convention and is not available in the Traceloop package. This import pattern is consistent across the codebase.

Likely an incorrect or invalid review comment.

packages/instrumentation-together/src/instrumentation.ts (3)

28-39: LGTM! New semantic convention imports are correct.

The imports from @opentelemetry/semantic-conventions/incubating correctly bring in the new ATTR_GEN_AI_* constants that align with official OpenTelemetry semantic conventions for GenAI telemetry.


201-284: No changes needed—these are intentionally defined Traceloop semantic conventions, not legacy attributes.

The attributes using SpanAttributes.LLM_* (LLM_REQUEST_TYPE, LLM_FREQUENCY_PENALTY, LLM_PRESENCE_PENALTY, LLM_REQUEST_FUNCTIONS) are defined in @traceloop/ai-semantic-conventions specifically for LLM attributes not yet included in OpenTelemetry's incubating semantic conventions. They are used consistently across all instrumentations in this repo and are not deprecated. The code correctly uses both OpenTelemetry's ATTR_GEN_AI_* constants and Traceloop's custom SpanAttributes.LLM_* constants as intended.


509-524: The code is correct as-is. SpanAttributes.LLM_USAGE_TOTAL_TOKENS is a custom attribute defined in @traceloop/ai-semantic-conventions and does not have an ATTR_GEN_AI_* equivalent in OpenTelemetry semantic conventions. The codebase intentionally uses a mixed approach: OpenTelemetry's ATTR_GEN_AI_* constants for standard generative AI attributes (completion/prompt tokens) and Traceloop's custom SpanAttributes.LLM_* constants for additional attributes like total tokens. This pattern is consistent across all instrumentation packages.

Likely an incorrect or invalid review comment.

packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (16)

36-82: Response text → gen_ai.completion.* assertions look consistent and preserve unrelated attrs.


84-115: Response object → gen_ai.completion.* assertions look consistent.


117-195: Tool-calls → gen_ai.completion.0.tool_calls.* coverage is solid (incl. invalid JSON no-op).


413-508: ai.prompt (single + messages-array) coverage is good, including gen_ai.input.messages emission.
Double-check intent around escaping: the “single prompt” case expects the escaped string to remain escaped, while the messages test expects \n to be unescaped—if the transformer is meant to normalize both, align expectations accordingly.
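
A minimal sketch of why the two expectations can legitimately differ (assumed behavior; the transformer's actual paths may normalize differently): a raw prompt string carries its escape sequences verbatim, while message content that goes through JSON.parse has them unescaped as a side effect of parsing.

```typescript
// Single-prompt path: the raw string is stored as-is, so a literal
// backslash-n stays a two-character escape sequence.
const rawPrompt = "line one\\nline two";
const storedPrompt = rawPrompt; // no unescaping applied

// Messages path: JSON.parse converts the \n escape inside the JSON
// payload into a real newline character.
const messagesJson = '[{"role":"user","content":"line one\\nline two"}]';
const messages = JSON.parse(messagesJson) as { role: string; content: string }[];
```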


904-963: Total token calculation tests are clear, including string-number inputs.
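
The coercion those tests exercise can be sketched as follows (hypothetical helper; the transformer's actual function may differ in naming and edge-case handling):

```typescript
// Sketch: compute a total token count from prompt and completion counts
// that may arrive as numbers or numeric strings; return undefined when
// neither input is usable.
function totalTokens(
  prompt: number | string | undefined,
  completion: number | string | undefined,
): number | undefined {
  const p = Number(prompt); // Number(undefined) and Number("abc") are NaN
  const c = Number(completion);
  if (Number.isNaN(p) && Number.isNaN(c)) return undefined;
  return (Number.isNaN(p) ? 0 : p) + (Number.isNaN(c) ? 0 : c);
}
```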


965-1053: Provider/system mapping expectations are now consistent (provider_name as normalized id, system as display name).


1055-1241: End-to-end “apply all transformations” assertions are coherent (new gen_ai keys + legacy deletions).


1243-1592: gen_ai.input.messages / gen_ai.output.messages serialization tests provide strong regression coverage (text, tool calls, object responses, mixed content).


1786-1868: Agent detection tests clearly distinguish root-span vs child-span behavior.


1913-1995: Operation-name mapping coverage is straightforward and matches expected span-name patterns.


1997-2038: Provider-name tests match the “normalized provider id + mapped system” contract.


2040-2062: Model id → gen_ai.request.model test is clear and checks deletion of the legacy key.


2064-2121: Finish-reason normalization to array is well-covered (scalar, array, missing).
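
The normalization those cases cover can be sketched as (hypothetical helper; the transformer's actual implementation may differ):

```typescript
// Sketch: normalize a finish reason to the array shape the tests expect.
// Scalars are wrapped, arrays pass through, and missing values stay unset.
function normalizeFinishReasons(value: unknown): string[] | undefined {
  if (value === undefined || value === null) return undefined;
  if (Array.isArray(value)) return value.map(String);
  return [String(value)];
}
```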


2123-2186: Tool-call attribute mapping to OTel keys is well asserted (and the “keep args/result for Traceloop” nuance is documented).


2188-2232: Conversation-id precedence (conversationId over sessionId) is correctly tested.


2234-2290: Response metadata (ai.response.model/id) → gen_ai.response.* coverage is consistent, including combined case.

packages/traceloop-sdk/test/agent_decorator.test.ts (4)

144-155: Test assertions correctly updated to use OpenTelemetry semantic conventions.

The test assertions have been properly updated to use the new ATTR_GEN_AI_AGENT_NAME and ATTR_GEN_AI_PROMPT constants from the official OpenTelemetry semantic conventions. The template literal syntax for nested attributes (e.g., ${ATTR_GEN_AI_PROMPT}.0.role) is correct.


205-216: Test assertions correctly updated to use OpenTelemetry semantic conventions.

The decoration syntax test has been properly updated to use the new attribute constants, maintaining consistency with the withAgent syntax test.


267-286: Manual LLM instrumentation test correctly updated.

The test for manual LLM instrumentation using withLLMCall has been properly updated to verify the new OpenTelemetry semantic convention attributes, including ATTR_GEN_AI_AGENT_NAME, ATTR_GEN_AI_REQUEST_MODEL, and ATTR_GEN_AI_PROMPT.


271-271: Line 271 correctly uses the constant SpanAttributes.LLM_REQUEST_TYPE.

The code follows the established pattern for AI/LLM semantic attributes. LLM_REQUEST_TYPE ("llm.request.type") is intentionally defined as a custom attribute in the Traceloop ai-semantic-conventions package and is not equivalent to OpenTelemetry's gen_ai.* attributes. No migration is needed.

packages/instrumentation-openai/src/instrumentation.ts (1)

28-39: Use standard GenAI semantic conventions attributes; ATTR_GEN_AI_PROMPT and ATTR_GEN_AI_SYSTEM are deprecated in v1.38.0

The imports from @opentelemetry/semantic-conventions/incubating v1.38.0 are available. However, two attributes are deprecated:

  • ATTR_GEN_AI_PROMPT ("gen_ai.prompt") is deprecated; use gen_ai.input.messages instead
  • ATTR_GEN_AI_SYSTEM ("gen_ai.system") is deprecated; use gen_ai.provider.name instead

Additionally, ATTR_GEN_AI_USAGE_TOTAL_TOKENS is available in v1.38.0 but not used in the code; consider adding it for complete token usage tracking alongside prompt and completion tokens.
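
A migration sketch for the two deprecated keys named above (the replacement attribute strings are the documented gen_ai successors; the lookup helper itself is hypothetical):

```typescript
// Sketch: map deprecated gen_ai attribute keys to their documented
// replacements; unknown keys pass through unchanged.
const DEPRECATED_TO_REPLACEMENT: Record<string, string> = {
  "gen_ai.prompt": "gen_ai.input.messages",
  "gen_ai.system": "gen_ai.provider.name",
};

function migrateAttributeKey(key: string): string {
  return DEPRECATED_TO_REPLACEMENT[key] ?? key;
}
```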

packages/instrumentation-openai/test/instrumentation.test.ts (1)

158-175: Assertion migrations look consistent with the new GenAI attribute namespace.
The updated key paths (${ATTR_GEN_AI_PROMPT}.*, ${ATTR_GEN_AI_COMPLETION}.*, and usage tokens) match the intended convention shape and keep the test intent intact.

Also applies to: 197-218, 240-266, 281-289, 310-318, 398-458, 496-557, 626-654, 672-705

packages/instrumentation-bedrock/tests/cohere.test.ts (2)

29-39: ATTR_GEN_AI_* import looks fine; just verify the incubating entrypoint is supported by your pinned version.


147-178: GenAI attribute assertions look consistent across non-stream and streaming paths.

Also applies to: 209-267

packages/instrumentation-bedrock/tests/amazon.test.ts (2)

29-39: ATTR_GEN_AI_* import looks fine; ensure @opentelemetry/semantic-conventions/incubating is available in the pinned version.


149-195: GenAI attribute migrations in assertions look correct and consistent with the PR direction.

Also applies to: 228-301

packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)

243-275: LGTM: completion + usage attrs mapped cleanly to ${ATTR_GEN_AI_COMPLETION}.* and ATTR_GEN_AI_USAGE_*.

The mapping is consistent with the new convention and keeps total tokens on SpanAttributes.LLM_USAGE_TOTAL_TOKENS.

packages/instrumentation-bedrock/tests/meta.test.ts (2)

29-39: Pin/verify incubating semantic convention constants compatibility with the repo’s OTel versions.

These tests now rely on @opentelemetry/semantic-conventions/incubating exports; please confirm the project’s resolved @opentelemetry/semantic-conventions version provides these constants in CI.


191-290: Streaming test: verify span is finished before reading getFinishedSpans() inside the chunk loop.

If the Bedrock instrumentation ends the span only when the stream completes, memoryExporter.getFinishedSpans() may be empty or stale mid-stream. If you’ve seen flakes, prefer asserting after the stream finishes (or wait/poll until span count is non-zero).
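
One way to avoid the mid-stream race is to poll until the exporter reports finished spans before asserting (a sketch; `getFinishedSpans` is assumed to match InMemorySpanExporter's API, and the timeout values are arbitrary):

```typescript
// Sketch: poll a finished-spans accessor until it returns at least one
// span or the deadline passes, then hand the spans back to the test.
async function waitForSpans(
  getFinishedSpans: () => unknown[],
  timeoutMs = 2000,
  intervalMs = 25,
): Promise<unknown[]> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const spans = getFinishedSpans();
    if (spans.length > 0) return spans;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for finished spans");
}
```

In the streaming test this would replace the in-loop read: consume the stream to completion, then `await waitForSpans(() => memoryExporter.getFinishedSpans())` before making assertions.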

packages/instrumentation-cohere/tests/chat.test.ts (1)

28-37: Verify incubating convention imports are stable (tests + TS resolution).

Same concern as other packages: ensure @opentelemetry/semantic-conventions/incubating is available under the repo’s TS config and dependency graph.

Comment on lines +29 to +37
import {
ATTR_GEN_AI_COMPLETION,
ATTR_GEN_AI_PROMPT,
ATTR_GEN_AI_REQUEST_MAX_TOKENS,
ATTR_GEN_AI_REQUEST_MODEL,
ATTR_GEN_AI_REQUEST_TEMPERATURE,
ATTR_GEN_AI_REQUEST_TOP_P,
ATTR_GEN_AI_SYSTEM,
} from "@opentelemetry/semantic-conventions/incubating";
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n packages/instrumentation-bedrock/tests/ai21.test.ts | sed -n '25,180p'

Repository: traceloop/openllmetry-js

Length of output: 6230


🏁 Script executed:

# Check what's being imported and used
rg "ATTR_GEN_AI" packages/instrumentation-bedrock/tests/ai21.test.ts -A 2 -B 2

Repository: traceloop/openllmetry-js

Length of output: 1694


🌐 Web query:

@opentelemetry/semantic-conventions v1.38.0 ATTR_GEN_AI_RESPONSE_MODEL incubating export

💡 Result:

Yes — the GenAI attribute gen_ai.response.model is defined in the GenAI semantic conventions and is available from the incubating entry of @opentelemetry/semantic-conventions (the incubating entry re-exports unstable conventions such as GenAI attributes). See the semantic-conventions docs and the GenAI pages for the attribute name/description. [1][2][3]

Sources:

  • @opentelemetry/semantic-conventions package docs (incubating entry explanation). [1]
  • OpenTelemetry GenAI semantic conventions (gen_ai.response.model). [2]
  • opentelemetry-js semconv releases (v1.38.0 series). [3]

🏁 Script executed:

# Check what's exported from @traceloop/ai-semantic-conventions
rg "ATTR_GEN_AI_RESPONSE_MODEL|export.*RESPONSE_MODEL" packages/ -t ts -t tsx

Repository: traceloop/openllmetry-js

Length of output: 95


🏁 Script executed:

# Look at the actual @traceloop/ai-semantic-conventions package
find packages -name "*semantic-conventions*" -type d

Repository: traceloop/openllmetry-js

Length of output: 100


🏁 Script executed:

# Check the traceloop ai-semantic-conventions exports
cat -n packages/traceloop-sdk/node_modules/@traceloop/ai-semantic-conventions/index.d.ts 2>/dev/null | head -100 || \
find packages -path "*/ai-semantic-conventions*" -name "*.d.ts" -o -name "index.ts" | head -5 | xargs cat -n

Repository: traceloop/openllmetry-js

Length of output: 50


🏁 Script executed:

# Check the ai-semantic-conventions package structure
ls -la packages/ai-semantic-conventions/

Repository: traceloop/openllmetry-js

Length of output: 616


🏁 Script executed:

# Read the main export file
cat -n packages/ai-semantic-conventions/src/index.ts | head -150

Repository: traceloop/openllmetry-js

Length of output: 999


🏁 Script executed:

# Read the SemanticAttributes file
cat -n packages/ai-semantic-conventions/src/SemanticAttributes.ts | head -200

Repository: traceloop/openllmetry-js

Length of output: 5339


🏁 Script executed:

# Search for RESPONSE_MODEL usage in the codebase to see if it's used elsewhere
rg "ATTR_GEN_AI_RESPONSE_MODEL" packages/

Repository: traceloop/openllmetry-js

Length of output: 4593


🏁 Script executed:

# Check the OpenTelemetry semantic conventions package to verify ATTR_GEN_AI_RESPONSE_MODEL exists
find packages -path "*node_modules/@opentelemetry/semantic-conventions*" -name "*.d.ts" | head -3 | xargs grep "RESPONSE_MODEL" 2>/dev/null | head -10

Repository: traceloop/openllmetry-js

Length of output: 50


Add missing import and fix duplicate assertion at line 171

Line 171 incorrectly asserts ATTR_GEN_AI_REQUEST_MODEL (same as line 151); the response model should use ATTR_GEN_AI_RESPONSE_MODEL instead. Add the missing import and correct the assertion:

 import {
   ATTR_GEN_AI_COMPLETION,
   ATTR_GEN_AI_PROMPT,
   ATTR_GEN_AI_REQUEST_MAX_TOKENS,
   ATTR_GEN_AI_REQUEST_MODEL,
   ATTR_GEN_AI_REQUEST_TEMPERATURE,
   ATTR_GEN_AI_REQUEST_TOP_P,
+  ATTR_GEN_AI_RESPONSE_MODEL,
   ATTR_GEN_AI_SYSTEM,
 } from "@opentelemetry/semantic-conventions/incubating";
-    assert.strictEqual(attributes[ATTR_GEN_AI_REQUEST_MODEL], model);
+    assert.strictEqual(attributes[ATTR_GEN_AI_RESPONSE_MODEL], model);

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/instrumentation-bedrock/tests/ai21.test.ts around lines 29 to 37 and
line 171, the test is missing the ATTR_GEN_AI_RESPONSE_MODEL import and
currently asserts ATTR_GEN_AI_REQUEST_MODEL again at line 171 (duplicating line
151). Add an import for ATTR_GEN_AI_RESPONSE_MODEL from
"@opentelemetry/semantic-conventions/incubating" alongside the existing imports,
then change the assertion at line 171 to assert ATTR_GEN_AI_RESPONSE_MODEL (the
response model) instead of ATTR_GEN_AI_REQUEST_MODEL.

Comment on lines +37 to +47
import {
ATTR_GEN_AI_COMPLETION,
ATTR_GEN_AI_PROMPT,
ATTR_GEN_AI_REQUEST_MAX_TOKENS,
ATTR_GEN_AI_REQUEST_MODEL,
ATTR_GEN_AI_REQUEST_TEMPERATURE,
ATTR_GEN_AI_REQUEST_TOP_P,
ATTR_GEN_AI_SYSTEM,
ATTR_GEN_AI_USAGE_COMPLETION_TOKENS,
ATTR_GEN_AI_USAGE_PROMPT_TOKENS,
} from "@opentelemetry/semantic-conventions/incubating";

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, find the traceloop/ai-semantic-conventions package and check what it exports
find . -path "*/ai-semantic-conventions/src/SemanticAttributes.ts" -o -path "*/ai-semantic-conventions/package.json" | head -20

Repository: traceloop/openllmetry-js

Length of output: 176


🏁 Script executed:

fd -t f "SemanticAttributes.ts" --max-one-result

Repository: traceloop/openllmetry-js

Length of output: 301


🏁 Script executed:

find . -type f -name "package.json" | xargs grep -l "ai-semantic-conventions" | head -5

Repository: traceloop/openllmetry-js

Length of output: 309


🏁 Script executed:

# Check the root package.json for @opentelemetry/semantic-conventions version
cat package.json | grep -A2 "semantic-conventions"

Repository: traceloop/openllmetry-js

Length of output: 50


🏁 Script executed:

cat packages/ai-semantic-conventions/src/SemanticAttributes.ts

Repository: traceloop/openllmetry-js

Length of output: 4520


🏁 Script executed:

cat packages/ai-semantic-conventions/package.json

Repository: traceloop/openllmetry-js

Length of output: 1105


🏁 Script executed:

cat packages/instrumentation-cohere/package.json | grep -A5 -B5 "dependencies"

Repository: traceloop/openllmetry-js

Length of output: 394


🏁 Script executed:

# Check what the cohere instrumentation currently imports and uses
head -60 packages/instrumentation-cohere/src/instrumentation.ts | tail -30

Repository: traceloop/openllmetry-js

Length of output: 1039


🏁 Script executed:

cat packages/ai-semantic-conventions/src/index.ts

Repository: traceloop/openllmetry-js

Length of output: 845


🏁 Script executed:

# Check another instrumentation to see if it also imports from incubating
head -50 packages/instrumentation-anthropic/src/instrumentation.ts

Repository: traceloop/openllmetry-js

Length of output: 1620


🏁 Script executed:

# Search for where total tokens is used in cohere instrumentation
grep -n "total_tokens\|TOTAL_TOKENS" packages/instrumentation-cohere/src/instrumentation.ts | head -20

Repository: traceloop/openllmetry-js

Length of output: 402


🏁 Script executed:

# Check if ATTR_GEN_AI_USAGE_TOTAL_TOKENS is used anywhere in the cohere instrumentation
grep -n "ATTR_GEN_AI_USAGE_TOTAL_TOKENS\|ATTR_GEN_AI.*TOKENS" packages/instrumentation-cohere/src/instrumentation.ts

Repository: traceloop/openllmetry-js

Length of output: 463


🏁 Script executed:

# Check if other instrumentations have the same import pattern
grep -r "from \"@opentelemetry/semantic-conventions/incubating\"" packages/instrumentation-*/src/ | wc -l

Repository: traceloop/openllmetry-js

Length of output: 70


🏁 Script executed:

# Look at what's actually exported from @opentelemetry/semantic-conventions in node_modules or check semantic-conventions docs
# Let's check if there's any documentation or README in the repo about this choice
find packages/instrumentation-cohere -name "README*" -o -name "CONTRIBUTING*"

Repository: traceloop/openllmetry-js

Length of output: 109


🏁 Script executed:

cat packages/instrumentation-cohere/README.md

Repository: traceloop/openllmetry-js

Length of output: 2207


🏁 Script executed:

# Check if there are any open issues or comments in SemanticAttributes about re-exporting GenAI constants
grep -n "GenAI\|ATTR_GEN_AI\|re-export" packages/ai-semantic-conventions/src/SemanticAttributes.ts

Repository: traceloop/openllmetry-js

Length of output: 50


Import ATTR_GEN_AI_* constants from @traceloop/ai-semantic-conventions to align with repo conventions.

The instrumentation currently imports semantic attribute constants like ATTR_GEN_AI_PROMPT, ATTR_GEN_AI_COMPLETION, etc. from @opentelemetry/semantic-conventions/incubating. Per coding guidelines and learnings, these should be imported from @traceloop/ai-semantic-conventions to maintain a stable, centralized contract across instrumentations. This requires adding these re-exports to the @traceloop/ai-semantic-conventions package's SemanticAttributes.ts (if not already present) and updating the import statement.

🤖 Prompt for AI Agents
In packages/instrumentation-cohere/src/instrumentation.ts around lines 37 to 47,
the ATTR_GEN_AI_* constants are imported from
@opentelemetry/semantic-conventions/incubating but should come from
@traceloop/ai-semantic-conventions; update the import to pull
ATTR_GEN_AI_PROMPT, ATTR_GEN_AI_COMPLETION, ATTR_GEN_AI_REQUEST_*,
ATTR_GEN_AI_USAGE_* and any other used ATTR_GEN_AI_* symbols from
@traceloop/ai-semantic-conventions, and if those symbols are not yet exported
from that package add/re-export them in @traceloop/ai-semantic-conventions'
SemanticAttributes.ts (or index) so the instrumentation imports compile and
follow repo conventions.
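The centralization the review asks for can be sketched as a plain constants module. This is a hypothetical illustration, not the actual contents of @traceloop/ai-semantic-conventions: the attribute strings mirror the GenAI convention, and in the real package these would be `export const` declarations re-exported from @opentelemetry/semantic-conventions/incubating.

```typescript
// Sketch of a centralized SemanticAttributes module (values are assumptions
// mirroring the GenAI convention strings). In the actual file these would be
// `export const` declarations re-exported for all instrumentations to share.
const ATTR_GEN_AI_PROMPT = "gen_ai.prompt";
const ATTR_GEN_AI_COMPLETION = "gen_ai.completion";
const ATTR_GEN_AI_REQUEST_MODEL = "gen_ai.request.model";
const ATTR_GEN_AI_USAGE_INPUT_TOKENS = "gen_ai.usage.input_tokens";

// An instrumentation then imports these names from one stable package instead
// of reaching into the incubating entry point directly.
const attributes: Record<string, string> = {
  [ATTR_GEN_AI_REQUEST_MODEL]: "command",
};
```

The benefit is a single place to bump when the incubating conventions rename a key, rather than touching every instrumentation.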

Comment on lines 229 to 250
const attributes: Attributes = {
[SpanAttributes.LLM_SYSTEM]: "Cohere",
[ATTR_GEN_AI_SYSTEM]: "Cohere",
[SpanAttributes.LLM_REQUEST_TYPE]: this._getLlmRequestTypeByMethod(type),
};

try {
const model = params.model ?? "command";
attributes[SpanAttributes.LLM_REQUEST_MODEL] = model;
attributes[ATTR_GEN_AI_REQUEST_MODEL] = model;

if (!("query" in params)) {
attributes[SpanAttributes.LLM_REQUEST_TOP_P] = params.p;
attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.p;
attributes[SpanAttributes.LLM_TOP_K] = params.k;
attributes[SpanAttributes.LLM_REQUEST_TEMPERATURE] = params.temperature;
attributes[ATTR_GEN_AI_REQUEST_TEMPERATURE] = params.temperature;
attributes[SpanAttributes.LLM_FREQUENCY_PENALTY] =
params.frequencyPenalty;
attributes[SpanAttributes.LLM_PRESENCE_PENALTY] =
params.presencePenalty;
attributes[SpanAttributes.LLM_REQUEST_MAX_TOKENS] = params.maxTokens;
attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] = params.maxTokens;
} else {
attributes["topN"] = params["topN"];
attributes["maxChunksPerDoc"] = params["maxChunksPerDoc"];
}

⚠️ Potential issue | 🟡 Minor

Avoid setting possibly-undefined attribute values on span start.
Assignments like attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.p (and similar) can end up as undefined if the SDK request didn’t include a value, which is not a valid OpenTelemetry attribute value.

       if (!("query" in params)) {
-        attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.p;
+        if (params.p !== undefined) attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.p;
         attributes[SpanAttributes.LLM_TOP_K] = params.k;
-        attributes[ATTR_GEN_AI_REQUEST_TEMPERATURE] = params.temperature;
+        if (params.temperature !== undefined)
+          attributes[ATTR_GEN_AI_REQUEST_TEMPERATURE] = params.temperature;
         attributes[SpanAttributes.LLM_FREQUENCY_PENALTY] =
           params.frequencyPenalty;
         attributes[SpanAttributes.LLM_PRESENCE_PENALTY] =
           params.presencePenalty;
-        attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] = params.maxTokens;
+        if (params.maxTokens !== undefined)
+          attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] = params.maxTokens;
       } else {
         attributes["topN"] = params["topN"];
         attributes["maxChunksPerDoc"] = params["maxChunksPerDoc"];
       }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/instrumentation-cohere/src/instrumentation.ts around lines 229 to
250, the code unconditionally assigns request fields (e.g., params.p, params.k,
params.temperature, params.frequencyPenalty, params.presencePenalty,
params.maxTokens, and in the else branch params.topN and params.maxChunksPerDoc)
into the attributes object which can result in undefined values being set on
span start; update the code to only set each attribute when its corresponding
params field is not undefined (e.g., check params.p !== undefined before
assigning ATTR_GEN_AI_REQUEST_TOP_P) so only valid OpenTelemetry attribute
values are added.
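The guard pattern the review asks for can be factored into a small helper. This is a hypothetical sketch (the helper name and the `Record`-based attribute type are not from the codebase), showing how a nullish check keeps `0` while skipping `undefined`:

```typescript
// Hypothetical helper (not from the codebase): only set an attribute when its
// value is present, since undefined is not a valid OpenTelemetry attribute value.
type AttributeValue = string | number | boolean;

function setIfDefined(
  attrs: Record<string, AttributeValue>,
  key: string,
  value: AttributeValue | undefined | null,
): void {
  if (value != null) attrs[key] = value;
}

const attrs: Record<string, AttributeValue> = {};
setIfDefined(attrs, "gen_ai.request.top_p", undefined); // skipped entirely
setIfDefined(attrs, "gen_ai.request.temperature", 0); // kept: 0 is a valid value
```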

Comment on lines 119 to 130
    assert.strictEqual(
      attributes[
-        `${SpanAttributes.LLM_PROMPTS}.${params.chatHistory?.length ?? 0}.role`
+        `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.role`
      ],
      "user",
    );
    assert.strictEqual(
      attributes[
-        `${SpanAttributes.LLM_PROMPTS}.${params.chatHistory?.length ?? 0}.user`
+        `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.user`
      ],
      params.message,
    );

⚠️ Potential issue | 🟠 Major

Bug: ${ATTR_GEN_AI_PROMPT}.*.user should almost certainly be .*.content.

Lines 126–130 and 222–227 use .user, while the rest of the codebase uses .content for prompt message bodies.

- `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.user`
+ `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.content`

Also applies to: 216-227

🤖 Prompt for AI Agents
In packages/instrumentation-cohere/tests/chat.test.ts around lines 119–130 and
216–227, the test asserts use the attribute key suffix `.user` for prompt
message bodies but the codebase standard (and other tests) use `.content`;
change the assertions to use `.content` instead of `.user` and assert the value
equals params.message (or the expected message content) so the attribute key
matches the rest of the codebase.

Comment on lines +121 to 125
-    assert.strictEqual(
-      attributes[`${SpanAttributes.LLM_PROMPTS}.0.role`],
-      "user",
-    );
+    assert.strictEqual(attributes[`${ATTR_GEN_AI_PROMPT}.0.role`], "user");
     assert.strictEqual(
-      attributes[`${SpanAttributes.LLM_PROMPTS}.0.user`],
+      attributes[`${ATTR_GEN_AI_PROMPT}.0.user`],
       params.query,
     );

⚠️ Potential issue | 🟠 Major

Likely wrong attribute key: ${ATTR_GEN_AI_PROMPT}.0.user should probably be ${ATTR_GEN_AI_PROMPT}.0.content.
Right now the test asserts params.query under a non-standard key; this likely won’t match what the transformer/instrumentation emits (and is inconsistent with other tests).

-    assert.strictEqual(
-      attributes[`${ATTR_GEN_AI_PROMPT}.0.user`],
-      params.query,
-    );
+    assert.strictEqual(
+      attributes[`${ATTR_GEN_AI_PROMPT}.0.content`],
+      params.query,
+    );
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    assert.strictEqual(attributes[`${ATTR_GEN_AI_PROMPT}.0.role`], "user");
-    assert.strictEqual(
-      attributes[`${ATTR_GEN_AI_PROMPT}.0.user`],
-      params.query,
-    );
+    assert.strictEqual(attributes[`${ATTR_GEN_AI_PROMPT}.0.role`], "user");
+    assert.strictEqual(
+      attributes[`${ATTR_GEN_AI_PROMPT}.0.content`],
+      params.query,
+    );
🤖 Prompt for AI Agents
In packages/instrumentation-cohere/tests/rerank.test.ts around lines 121-125,
the test asserts the attribute key `${ATTR_GEN_AI_PROMPT}.0.user` which is
incorrect; it should assert `${ATTR_GEN_AI_PROMPT}.0.content`. Update the
assertion to check attributes[`${ATTR_GEN_AI_PROMPT}.0.content`] equals
params.query (keeping the role assertion as-is) so the test matches the
transformer/instrumentation output and other tests.
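The indexed-attribute convention these tests rely on can be made explicit with a small key builder. This is an illustrative sketch (the helper `promptAttr` is hypothetical; only the `gen_ai.prompt.<index>.<field>` shape comes from the tests above): message bodies live under `.content`, not `.user`.

```typescript
// Sketch of the flattened prompt-attribute convention used in these tests.
// The base key string is an assumption mirroring ATTR_GEN_AI_PROMPT.
const ATTR_GEN_AI_PROMPT = "gen_ai.prompt";

// Hypothetical helper: builds `gen_ai.prompt.<index>.<field>` keys so tests
// and instrumentation agree on the suffix ("role" / "content").
function promptAttr(index: number, field: "role" | "content"): string {
  return `${ATTR_GEN_AI_PROMPT}.${index}.${field}`;
}

const attributes: Record<string, string> = {
  [promptAttr(0, "role")]: "user",
  [promptAttr(0, "content")]: "What is the capital of France?",
};
```

A shared builder like this would also make the `.user`-vs-`.content` mismatch a compile-time impossibility rather than a test-time surprise.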

Comment on lines 3 to +11
import { SpanAttributes } from "@traceloop/ai-semantic-conventions";
import {
ATTR_GEN_AI_COMPLETION,
ATTR_GEN_AI_PROMPT,
ATTR_GEN_AI_REQUEST_MODEL,
ATTR_GEN_AI_SYSTEM,
ATTR_GEN_AI_USAGE_COMPLETION_TOKENS,
ATTR_GEN_AI_USAGE_PROMPT_TOKENS,
} from "@opentelemetry/semantic-conventions/incubating";

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, check the package.json to see which version of @opentelemetry/semantic-conventions is installed
cat packages/instrumentation-openai/package.json | grep -A 2 "@opentelemetry/semantic-conventions"

Repository: traceloop/openllmetry-js

Length of output: 208


🏁 Script executed:

# Search for available constants in the codebase that reference semantic-conventions
find . -name "*.json" -path "*/node_modules/@opentelemetry/semantic-conventions*" -type f 2>/dev/null | head -5

Repository: traceloop/openllmetry-js

Length of output: 50


🏁 Script executed:

# Check the actual image-wrappers.ts file to see the current state
cat -n packages/instrumentation-openai/src/image-wrappers.ts | head -100

Repository: traceloop/openllmetry-js

Length of output: 4005


🌐 Web query:

@opentelemetry/semantic-conventions incubating GenAI usage attributes ATTR_GEN_AI_USAGE_TOTAL_TOKENS

💡 Result:

Yes — the incubating GenAI semantic conventions define a total-tokens attribute named gen_ai.usage.total_tokens (the JS constant is typically exposed by @opentelemetry/semantic-conventions as ATTR_GEN_AI_USAGE_TOTAL_TOKENS). See the GenAI attribute registry and spans/metrics docs for the GenAI conventions. [1][2]

Sources:

  • OpenTelemetry GenAI attribute registry / GenAI docs (lists gen_ai.usage.* including gen_ai.usage.total_tokens). [1]
  • OpenTelemetry GenAI spans/metrics semantic conventions pages (usage attributes for input/output/total tokens). [2]

🏁 Script executed:

# Find the setImageGenerationResponseAttributes function to see where total tokens are set
grep -n "setImageGenerationResponseAttributes\|LLM_USAGE_TOTAL_TOKENS" packages/instrumentation-openai/src/image-wrappers.ts

Repository: traceloop/openllmetry-js

Length of output: 543


🏁 Script executed:

# Get more context around where total tokens are being set
sed -n '200,350p' packages/instrumentation-openai/src/image-wrappers.ts

Repository: traceloop/openllmetry-js

Length of output: 4035


🏁 Script executed:

# Also check if there are any hardcoded image request attributes
grep -n "gen_ai.request.image\|image.size\|image.quality" packages/instrumentation-openai/src/image-wrappers.ts

Repository: traceloop/openllmetry-js

Length of output: 573


🌐 Web query:

@opentelemetry/semantic-conventions 1.38.0 gen_ai.request.image constants ATTR_GEN_AI_REQUEST_IMAGE

💡 Result:

The constant ATTR_GEN_AI_REQUEST_IMAGE corresponds to the attribute name "gen_ai.request.image". [1][2]

Sources:

  • OpenTelemetry GenAI attribute list (gen_ai.request.* includes image-related attributes). [1]
  • @opentelemetry/semantic-conventions JS module docs (package defines semantic-convention attribute constants). [2]

References:
[1] https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-events/
[2] https://open-telemetry.github.io/opentelemetry-js/modules/_opentelemetry_semantic-conventions.html


🏁 Script executed:

# Check if the traceloop ai-semantic-conventions package has constants for these
cat packages/ai-semantic-conventions/src/SemanticAttributes.ts | head -100

Repository: traceloop/openllmetry-js

Length of output: 4224


🏁 Script executed:

# Look for any exports related to image attributes
grep -r "image.size\|image.quality\|image.style\|image.count" packages/ai-semantic-conventions/

Repository: traceloop/openllmetry-js

Length of output: 50


🏁 Script executed:

# Search for ATTR_GEN_AI_USAGE_TOTAL_TOKENS in the OpenTelemetry package files
find node_modules/@opentelemetry/semantic-conventions -name "*.d.ts" -o -name "*.js" 2>/dev/null | xargs grep -l "ATTR_GEN_AI_USAGE_TOTAL_TOKENS" 2>/dev/null | head -5

Repository: traceloop/openllmetry-js

Length of output: 50


🏁 Script executed:

# If that doesn't work, check the index file or incubating module
cat node_modules/@opentelemetry/semantic-conventions/build/src/metrics/incubating.d.ts 2>/dev/null | grep -A 2 -B 2 "ATTR_GEN_AI_USAGE" | head -50

Repository: traceloop/openllmetry-js

Length of output: 50


🏁 Script executed:

# Also check the attributes incubating module
cat node_modules/@opentelemetry/semantic-conventions/build/src/attribute-keys/incubating.d.ts 2>/dev/null | grep "GEN_AI_USAGE" | head -20

Repository: traceloop/openllmetry-js

Length of output: 50


🏁 Script executed:

# List the structure of the semantic-conventions package
ls -la node_modules/@opentelemetry/semantic-conventions/build/src/ 2>/dev/null | head -20

Repository: traceloop/openllmetry-js

Length of output: 50


🏁 Script executed:

# Check what's actually exported in the incubating index
cat node_modules/@opentelemetry/semantic-conventions/build/src/incubating.d.ts 2>/dev/null | grep "USAGE" | head -30

Repository: traceloop/openllmetry-js

Length of output: 50


🏁 Script executed:

# Try a broader search for any TOTAL_TOKENS constant
grep -r "TOTAL_TOKENS" node_modules/@opentelemetry/semantic-conventions/build/ 2>/dev/null | head -20

Repository: traceloop/openllmetry-js

Length of output: 50


Use ATTR_GEN_AI_USAGE_TOTAL_TOKENS consistently with other GenAI attributes.

You're mixing legacy SpanAttributes.LLM_USAGE_TOTAL_TOKENS with new ATTR_GEN_AI_USAGE_COMPLETION_TOKENS and ATTR_GEN_AI_USAGE_PROMPT_TOKENS in setImageGenerationResponseAttributes() (lines 325, 328, 331). This creates inconsistent attribute namespaces in spans: GenAI prompt/completion tokens under the new key, but total tokens under the old LLM key—breaking downstream queries/dashboards that expect all usage attributes in one namespace.

Import and use ATTR_GEN_AI_USAGE_TOTAL_TOKENS (if available in your @opentelemetry/semantic-conventions version) to replace SpanAttributes.LLM_USAGE_TOTAL_TOKENS, or define the constant in @traceloop/ai-semantic-conventions if the OTel package doesn't expose it yet.
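The single-namespace goal can be sketched as follows. This is a hedged illustration, not the package's actual code: the key strings are assumptions mirroring the GenAI convention names used elsewhere in this PR, and the helper is hypothetical.

```typescript
// Sketch: keep the total in the same gen_ai.* namespace as its parts so
// downstream queries touch one attribute family. Key strings are assumptions.
const ATTR_GEN_AI_USAGE_PROMPT_TOKENS = "gen_ai.usage.prompt_tokens";
const ATTR_GEN_AI_USAGE_COMPLETION_TOKENS = "gen_ai.usage.completion_tokens";
const ATTR_GEN_AI_USAGE_TOTAL_TOKENS = "gen_ai.usage.total_tokens";

// Hypothetical helper: emits prompt, completion, and derived total tokens.
function setUsage(
  attrs: Record<string, number>,
  promptTokens: number,
  completionTokens: number,
): void {
  attrs[ATTR_GEN_AI_USAGE_PROMPT_TOKENS] = promptTokens;
  attrs[ATTR_GEN_AI_USAGE_COMPLETION_TOKENS] = completionTokens;
  attrs[ATTR_GEN_AI_USAGE_TOTAL_TOKENS] = promptTokens + completionTokens;
}

const usage: Record<string, number> = {};
setUsage(usage, 12, 30);
```

Deriving the total from the two parts also matches the PR summary ("totals computed from GenAI usage tokens") when the provider response does not report a total itself.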

Comment on lines 312 to 327
const attributes: Attributes = {
[SpanAttributes.LLM_SYSTEM]: provider,
[ATTR_GEN_AI_SYSTEM]: provider,
[SpanAttributes.LLM_REQUEST_TYPE]: type,
};

try {
attributes[SpanAttributes.LLM_REQUEST_MODEL] = params.model;
attributes[ATTR_GEN_AI_REQUEST_MODEL] = params.model;
if (params.max_tokens) {
attributes[SpanAttributes.LLM_REQUEST_MAX_TOKENS] = params.max_tokens;
attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] = params.max_tokens;
}
if (params.temperature) {
attributes[SpanAttributes.LLM_REQUEST_TEMPERATURE] = params.temperature;
attributes[ATTR_GEN_AI_REQUEST_TEMPERATURE] = params.temperature;
}
if (params.top_p) {
attributes[SpanAttributes.LLM_REQUEST_TOP_P] = params.top_p;
attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.top_p;
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Don’t drop valid “0” request params (max_tokens / temperature / top_p)
Current truthy checks will skip setting attributes when values are 0. Prefer nullish checks.

-      if (params.max_tokens) {
+      if (params.max_tokens != null) {
         attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] = params.max_tokens;
       }
-      if (params.temperature) {
+      if (params.temperature != null) {
         attributes[ATTR_GEN_AI_REQUEST_TEMPERATURE] = params.temperature;
       }
-      if (params.top_p) {
+      if (params.top_p != null) {
         attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.top_p;
       }
🤖 Prompt for AI Agents
In packages/instrumentation-openai/src/instrumentation.ts around lines 312 to
327, the current truthy checks (if (params.max_tokens), if (params.temperature),
if (params.top_p)) skip valid zero values; change these to nullish/explicit
undefined checks so 0 is preserved — e.g., test params.max_tokens !== undefined
&& params.max_tokens !== null (or use params.max_tokens ??) before assigning
ATTR_GEN_AI_REQUEST_MAX_TOKENS, and do the same for temperature and top_p so
zero values are recorded.
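The truthy-vs-nullish distinction is easy to demonstrate in isolation. This sketch (attribute keys illustrative, not the instrumentation's code) shows both guards receiving `temperature: 0`: only the nullish check records it, and both correctly skip an absent `top_p`.

```typescript
// Why truthy checks drop 0: both guards below see temperature = 0, but only
// the nullish (!= null) check records it. Attribute keys are illustrative.
const params: { temperature?: number; top_p?: number } = { temperature: 0 };

const truthyAttrs: Record<string, number> = {};
if (params.temperature) {
  truthyAttrs["gen_ai.request.temperature"] = params.temperature; // never runs
}

const nullishAttrs: Record<string, number> = {};
if (params.temperature != null) {
  nullishAttrs["gen_ai.request.temperature"] = params.temperature; // records 0
}
if (params.top_p != null) {
  nullishAttrs["gen_ai.request.top_p"] = params.top_p; // skipped: absent
}
```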

Comment on lines +87 to 94
     assert.strictEqual(attributes[ATTR_GEN_AI_SYSTEM], "Google");
     assert.strictEqual(attributes["llm.request.type"], "completion");
-    assert.strictEqual(attributes["gen_ai.request.model"], model);
-    assert.strictEqual(attributes["gen_ai.request.top_p"], 0.9);
+    assert.strictEqual(attributes[ATTR_GEN_AI_REQUEST_MODEL], model);
+    assert.strictEqual(attributes[ATTR_GEN_AI_REQUEST_TOP_P], 0.9);
     assert.strictEqual(attributes["gen_ai.prompt.0.content"], prompt);
     assert.strictEqual(attributes["gen_ai.prompt.0.role"], "user");
-    assert.strictEqual(attributes["gen_ai.response.model"], model);
+    assert.strictEqual(attributes[ATTR_GEN_AI_RESPONSE_MODEL], model);
     assert.strictEqual(attributes["gen_ai.completion.0.role"], "model");

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the test file to see current state of imports and attribute usage
cat -n packages/instrumentation-vertexai/tests/gemini.test.ts | head -100

Repository: traceloop/openllmetry-js

Length of output: 4209


🏁 Script executed:

# Also check the semantic attributes file
cat packages/ai-semantic-conventions/src/SemanticAttributes.ts

Repository: traceloop/openllmetry-js

Length of output: 4520


🏁 Script executed:

# Check if there's an index file that re-exports these constants
cat packages/ai-semantic-conventions/src/index.ts

Repository: traceloop/openllmetry-js

Length of output: 845


🏁 Script executed:

# Search for ATTR_GEN_AI_SYSTEM or similar constant definitions in the codebase
rg "ATTR_GEN_AI_SYSTEM|ATTR_GEN_AI_REQUEST_MODEL" --type ts --type tsx

Repository: traceloop/openllmetry-js

Length of output: 95


🏁 Script executed:

# Check if these constants might be imported from OpenTelemetry in other files
rg "@opentelemetry/semantic-conventions" --type ts --type tsx -A 3

Repository: traceloop/openllmetry-js

Length of output: 95


🏁 Script executed:

# Search for gen_ai. pattern usage in the instrumentation source code
rg "gen_ai\." packages/instrumentation-vertexai/src --type ts -B 2 -A 2

Repository: traceloop/openllmetry-js

Length of output: 50


🏁 Script executed:

# Search more broadly for ATTR_GEN_AI constants
rg "ATTR_GEN_AI"

Repository: traceloop/openllmetry-js

Length of output: 50381


🏁 Script executed:

# Check the vertexai instrumentation source code
cat packages/instrumentation-vertexai/src/vertexai-instrumentation.ts | head -150

Repository: traceloop/openllmetry-js

Length of output: 4427


🏁 Script executed:

# Look for how attributes are set in the instrumentation
rg "gen_ai\." packages/instrumentation-vertexai/src -B 2 -A 2 | head -100

Repository: traceloop/openllmetry-js

Length of output: 50


Add missing imports for ATTR_GEN_AI_* constants—test code won't compile without them.

Import from @opentelemetry/semantic-conventions/incubating:

 import type * as vertexAiImport from "@google-cloud/vertexai";
+import {
+  ATTR_GEN_AI_REQUEST_MAX_TOKENS,
+  ATTR_GEN_AI_REQUEST_MODEL,
+  ATTR_GEN_AI_REQUEST_TOP_P,
+  ATTR_GEN_AI_RESPONSE_MODEL,
+  ATTR_GEN_AI_SYSTEM,
+} from "@opentelemetry/semantic-conventions/incubating";

Also applies to: 140-148

🤖 Prompt for AI Agents
In packages/instrumentation-vertexai/tests/gemini.test.ts around lines 87-94
(and also apply the same change at 140-148), the test references ATTR_GEN_AI_*
constants but they are not imported; add an import from
"@opentelemetry/semantic-conventions/incubating" bringing in the required
ATTR_GEN_AI_SYSTEM, ATTR_GEN_AI_REQUEST_MODEL, ATTR_GEN_AI_REQUEST_TOP_P,
ATTR_GEN_AI_RESPONSE_MODEL (and any other ATTR_GEN_AI_* constants used in the
file) and update the top-of-file imports accordingly so the test compiles.

Comment on lines +8 to +16
import {
ATTR_GEN_AI_AGENT_NAME,
ATTR_GEN_AI_COMPLETION,
ATTR_GEN_AI_PROMPT,
ATTR_GEN_AI_REQUEST_MODEL,
ATTR_GEN_AI_RESPONSE_MODEL,
ATTR_GEN_AI_USAGE_INPUT_TOKENS,
ATTR_GEN_AI_USAGE_OUTPUT_TOKENS,
} from "@opentelemetry/semantic-conventions/incubating";

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n packages/traceloop-sdk/src/lib/tracing/manual.ts | head -160

Repository: traceloop/openllmetry-js

Length of output: 5439


🏁 Script executed:

# Check what's actually exported from @opentelemetry/semantic-conventions
rg "ATTR_GEN_AI_AGENT_NAME|ATTR_GEN_AI_USAGE_INPUT_TOKENS|ATTR_GEN_AI_USAGE_OUTPUT_TOKENS" packages/ --type ts --type tsx

Repository: traceloop/openllmetry-js

Length of output: 95


🏁 Script executed:

# Find references to the constants mentioned in the imports
rg "ATTR_GEN_AI_" packages/traceloop-sdk/src/lib/tracing/manual.ts -A 2 -B 2

Repository: traceloop/openllmetry-js

Length of output: 1815


🌐 Web query:

@opentelemetry/api startActiveSpan signature and options parameter format for setting span attributes

💡 Result:

Signature (JS/TS)

  • tracer.startActiveSpan(name: string, fn: (span: Span) => T): T
  • tracer.startActiveSpan(name: string, options: SpanOptions, fn: (span: Span) => T): T
    (there is also tracer.startSpan(name: string, options?: SpanOptions, context?: Context) which uses the same SpanOptions shape). [1][2]

SpanOptions (for setting attributes)

  • The options object accepts an attributes field: attributes?: Attributes. Attributes is a map of key -> AttributeValue. In the JS implementation Attributes is typed as a plain object where keys are strings and values are AttributeValue. AttributeValue is a scalar or array of scalars (string | number | boolean | Array<string|number|boolean>). You can also set kind?: SpanKind and startTime?: TimeInput in the same options object. [2][3]

Example

  • TypeScript example setting attributes when starting an active span:
    const result = tracer.startActiveSpan('opName', { attributes: { 'http.method': 'GET', 'app.user_id': 123 } }, (span) => {
    // span is active here
    span.end();
    return something;
    }); [2][3]

Sources

  • Tracer API / startActiveSpan (opentelemetry-js docs). [1]
  • SDK Span / SpanOptions & Attributes types (opentelemetry-js sources / typings). [2]
  • OpenTelemetry Tracing spec (attributes rules & sampling note). [3]

🌐 Web query:

@opentelemetry/semantic-conventions incubating v1.38.0 ATTR_GEN_AI constants AGENT_NAME and USAGE token attributes

💡 Result:

Short answer: Yes — the GenAI semantic conventions in the incubating entry (semconv v1.38.0) define the agent-name attribute and the token-usage metric/attributes.

Key names you’ll see in the incubating semconv docs and v1.38.0 release:

  • gen_ai.agent.name (agent name attribute). [1][2]
  • gen_ai.client.token.usage (token-usage histogram metric). Required/associated attributes include gen_ai.token.type and provider/model attributes; token-count attributes used by some provider conventions include gen_ai.usage.input_tokens and gen_ai.usage.output_tokens. [1][3]

Sources:

  1. OpenTelemetry GenAI spans (agent name + agent spans). [opentelemetry.io]. [2]
  2. OpenTelemetry GenAI metrics (gen_ai.client.token.usage and gen_ai.token.type). [opentelemetry.io]. [1]
  3. OpenTelemetry semantic-conventions v1.38.0 release notes (v1.38.0). [github.com]. [4]

References:
[1] https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-metrics/
[2] https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-agent-spans/
[3] https://opentelemetry.io/docs/specs/semconv/gen-ai/openai/
[4] https://github.com/open-telemetry/semantic-conventions/releases/tag/v1.38.0


🏁 Script executed:

cat -n packages/traceloop-sdk/src/lib/tracing/manual.ts | sed -n '160,200p'

Repository: traceloop/openllmetry-js

Length of output: 1453


Fix startActiveSpan options shape and import semantic constants from correct source

At line 153 in withVectorDBCall, the options object is missing the attributes wrapper. Per the OpenTelemetry API, attributes must be nested under an attributes key:

  return getTracer().startActiveSpan(
    `${vendor}.${type}`,
-   { [SpanAttributes.LLM_REQUEST_TYPE]: type },
+   { attributes: { [SpanAttributes.LLM_REQUEST_TYPE]: type } },
    entityContext,
    (span: Span) => {

Additionally, import ATTR_GEN_AI_AGENT_NAME, ATTR_GEN_AI_USAGE_INPUT_TOKENS, and ATTR_GEN_AI_USAGE_OUTPUT_TOKENS from @traceloop/ai-semantic-conventions (lines 8-16) instead of @opentelemetry/semantic-conventions/incubating.

Committable suggestion skipped: line range outside the PR's diff.
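The options-shape bug can be shown without the OpenTelemetry SDK. The interface below is a minimal sketch assumed to mirror the `SpanOptions` type from @opentelemetry/api (only the fields relevant here): an attribute map passed bare is silently ignored, while the same map nested under `attributes` is picked up.

```typescript
// Minimal sketch of the SpanOptions shape (assumption: mirrors @opentelemetry/api).
type AttributeValue = string | number | boolean;

interface SpanOptions {
  attributes?: Record<string, AttributeValue>;
  kind?: number;
}

const LLM_REQUEST_TYPE = "llm.request.type"; // illustrative key

// Wrong: a bare attribute map where SpanOptions is expected — the tracer
// would find no `attributes` field and set nothing on the span.
const broken = { [LLM_REQUEST_TYPE]: "vector_db" } as SpanOptions;

// Right: the map nested under the `attributes` key.
const options: SpanOptions = {
  attributes: { [LLM_REQUEST_TYPE]: "vector_db" },
};
```

Because `SpanOptions` fields are all optional, TypeScript does not flag the bare-map form, which is why this class of bug survives compilation.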


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
packages/instrumentation-langchain/tsconfig.test.json (1)

1-9: Consider noEmit (or separate outDir) for test tsconfig; references: [] may be redundant.

If this config is only used to typecheck/run tests (e.g., via ts-mocha/ts-node), setting "noEmit": true can avoid generating dist/test/** artifacts (or alternatively use a dedicated outDir like dist-test). Also, an explicit empty "references": [] usually isn’t needed unless tooling expects the key. Based on learnings, use ts-mocha as the TypeScript test runner for package tests.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 5890edb and 1b1d804.

📒 Files selected for processing (24)
  • packages/instrumentation-anthropic/package.json (2 hunks)
  • packages/instrumentation-anthropic/tsconfig.test.json (1 hunks)
  • packages/instrumentation-bedrock/package.json (2 hunks)
  • packages/instrumentation-bedrock/tsconfig.test.json (1 hunks)
  • packages/instrumentation-chromadb/package.json (2 hunks)
  • packages/instrumentation-chromadb/tsconfig.test.json (1 hunks)
  • packages/instrumentation-cohere/package.json (2 hunks)
  • packages/instrumentation-cohere/tsconfig.test.json (1 hunks)
  • packages/instrumentation-langchain/package.json (2 hunks)
  • packages/instrumentation-langchain/tsconfig.test.json (1 hunks)
  • packages/instrumentation-llamaindex/package.json (2 hunks)
  • packages/instrumentation-llamaindex/tsconfig.test.json (1 hunks)
  • packages/instrumentation-mcp/package.json (2 hunks)
  • packages/instrumentation-mcp/tsconfig.test.json (1 hunks)
  • packages/instrumentation-openai/package.json (2 hunks)
  • packages/instrumentation-openai/tsconfig.test.json (1 hunks)
  • packages/instrumentation-pinecone/package.json (2 hunks)
  • packages/instrumentation-pinecone/tsconfig.test.json (1 hunks)
  • packages/instrumentation-qdrant/package.json (1 hunks)
  • packages/instrumentation-qdrant/tsconfig.test.json (1 hunks)
  • packages/instrumentation-together/package.json (2 hunks)
  • packages/instrumentation-together/tsconfig.test.json (1 hunks)
  • packages/instrumentation-vertexai/package.json (2 hunks)
  • packages/instrumentation-vertexai/tsconfig.test.json (1 hunks)
✅ Files skipped from review due to trivial changes (5)
  • packages/instrumentation-llamaindex/tsconfig.test.json
  • packages/instrumentation-openai/tsconfig.test.json
  • packages/instrumentation-cohere/tsconfig.test.json
  • packages/instrumentation-mcp/tsconfig.test.json
  • packages/instrumentation-bedrock/tsconfig.test.json
🚧 Files skipped from review as they are similar to previous changes (5)
  • packages/instrumentation-vertexai/package.json
  • packages/instrumentation-chromadb/package.json
  • packages/instrumentation-together/package.json
  • packages/instrumentation-openai/package.json
  • packages/instrumentation-anthropic/package.json
🧰 Additional context used
📓 Path-based instructions (2)
packages/instrumentation-*/**

📄 CodeRabbit inference engine (CLAUDE.md)

Place each provider integration in its own package under packages/instrumentation-[provider]/

Files:

  • packages/instrumentation-anthropic/tsconfig.test.json
  • packages/instrumentation-qdrant/tsconfig.test.json
  • packages/instrumentation-pinecone/tsconfig.test.json
  • packages/instrumentation-vertexai/tsconfig.test.json
  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-qdrant/package.json
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-langchain/tsconfig.test.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-together/tsconfig.test.json
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-chromadb/tsconfig.test.json
packages/*/package.json

📄 CodeRabbit inference engine (CLAUDE.md)

Use workspace:* for intra-repo package dependencies in package.json

Files:

  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-qdrant/package.json
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-mcp/package.json
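For illustration, the workspace:* guideline above corresponds to a dependency entry like the following (a sketch, not a copy of any package in the repo):

```json
{
  "dependencies": {
    "@traceloop/ai-semantic-conventions": "workspace:*"
  }
}
```

The `workspace:*` protocol makes the package manager resolve the dependency to the local workspace copy during development, and rewrite it to the published version on release.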
🧠 Learnings (12)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/instrumentation-anthropic/tsconfig.test.json
  • packages/instrumentation-qdrant/tsconfig.test.json
  • packages/instrumentation-pinecone/tsconfig.test.json
  • packages/instrumentation-vertexai/tsconfig.test.json
  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-qdrant/package.json
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-langchain/tsconfig.test.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-together/tsconfig.test.json
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Applied to files:

  • packages/instrumentation-anthropic/tsconfig.test.json
  • packages/instrumentation-qdrant/tsconfig.test.json
  • packages/instrumentation-pinecone/tsconfig.test.json
  • packages/instrumentation-vertexai/tsconfig.test.json
  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-langchain/tsconfig.test.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-together/tsconfig.test.json
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Use ts-mocha as the TypeScript test runner for package tests

Applied to files:

  • packages/instrumentation-anthropic/tsconfig.test.json
  • packages/instrumentation-qdrant/tsconfig.test.json
  • packages/instrumentation-vertexai/tsconfig.test.json
  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-qdrant/package.json
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-langchain/tsconfig.test.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-together/tsconfig.test.json
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/instrumentation-anthropic/tsconfig.test.json
  • packages/instrumentation-qdrant/tsconfig.test.json
  • packages/instrumentation-pinecone/tsconfig.test.json
  • packages/instrumentation-vertexai/tsconfig.test.json
  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-langchain/tsconfig.test.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-together/tsconfig.test.json
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls

Applied to files:

  • packages/instrumentation-anthropic/tsconfig.test.json
  • packages/instrumentation-qdrant/tsconfig.test.json
  • packages/instrumentation-pinecone/tsconfig.test.json
  • packages/instrumentation-vertexai/tsconfig.test.json
  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-qdrant/package.json
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-langchain/tsconfig.test.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-together/tsconfig.test.json
  • packages/instrumentation-mcp/package.json
  • packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition

Applied to files:

  • packages/instrumentation-qdrant/tsconfig.test.json
  • packages/instrumentation-pinecone/tsconfig.test.json
  • packages/instrumentation-vertexai/tsconfig.test.json
  • packages/instrumentation-langchain/tsconfig.test.json
  • packages/instrumentation-together/tsconfig.test.json
  • packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately

Applied to files:

  • packages/instrumentation-pinecone/tsconfig.test.json
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-langchain/tsconfig.test.json
  • packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them

Applied to files:

  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-mcp/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/package.json : When adding a new instrumentation package, add it to the main SDK dependencies

Applied to files:

  • packages/instrumentation-langchain/package.json
  • packages/instrumentation-bedrock/package.json
  • packages/instrumentation-llamaindex/package.json
  • packages/instrumentation-cohere/package.json
  • packages/instrumentation-pinecone/package.json
  • packages/instrumentation-mcp/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/instrumentation-llamaindex/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/instrumentation-llamaindex/package.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and test
🔇 Additional comments (18)
packages/instrumentation-vertexai/tsconfig.test.json (1)

1-9: Configuration looks good.

This TypeScript test configuration file is appropriate and correctly structured. It properly extends the base configuration and includes both source and test files for compilation.

packages/instrumentation-anthropic/tsconfig.test.json (1)

1-9: Configuration looks good.

This test TypeScript configuration properly extends the base config (which includes the updated moduleResolution: node per the PR objectives) and correctly includes both source and test files. The setup aligns with ts-mocha test runner expectations and follows the repo's established patterns.

packages/instrumentation-llamaindex/package.json (1)

13-13: Changes align with PR objectives for OpenTelemetry semantic conventions migration.

The test script update to tsconfig.test.json and the version bump to @opentelemetry/semantic-conventions ^1.38.0 are consistent with the broader repository alignment to newer OpenTelemetry semantic conventions, and the workspace:* usage for @traceloop/ai-semantic-conventions follows coding guidelines correctly.

Verify that tsconfig.test.json exists in this package and is properly configured for TypeScript test compilation. If it doesn't exist or has not been added, it should be created with test-specific configurations.

Also applies to: 43-43

packages/instrumentation-together/tsconfig.test.json (1)

1-9: Configuration looks good.

This test TypeScript configuration follows the established pattern for per-package test configurations and correctly extends the base config. The includes cover both source and test files, and the compiler options are reasonable.

packages/instrumentation-qdrant/tsconfig.test.json (1)

1-9: Configuration looks good.

The test configuration correctly extends the base tsconfig and includes both src and test files with appropriate compiler options for test builds.

packages/instrumentation-qdrant/package.json (2)

40-46: LGTM on dependency configuration.

The package correctly uses workspace:* for the internal @traceloop/ai-semantic-conventions dependency and maintains proper OpenTelemetry and instrumentation dependencies. This aligns with the coding guidelines and PR objectives for GenAI semantic convention alignment.


13-13: No issues found. The tsconfig.test.json file already exists in packages/instrumentation-qdrant/, and this change aligns with the consistent pattern established across all other instrumentation packages in the monorepo.

packages/instrumentation-chromadb/tsconfig.test.json (1)

1-9: Configuration looks good.

The test TypeScript configuration is correctly structured to compile both source and test files with appropriate output and root directory settings. The extension of the base config and explicit file inclusion patterns are sound.

packages/instrumentation-mcp/package.json (3)

13-13: Test configuration updated to use separate tsconfig.

The test script now references tsconfig.test.json instead of tsconfig.json, aligning with the PR's TypeScript configuration standardization. Ensure that tsconfig.test.json exists in the package root and is properly configured for the test environment.


44-44: Semantic conventions version updated to support new GenAI attributes.

The bump from ^1.36.0 to ^1.38.0 aligns with the PR's goal to adopt OpenTelemetry's new ATTR_GEN_AI_* semantic attributes. Verify that the MCP instrumentation implementation uses the new semantic convention constants from @opentelemetry/semantic-conventions rather than hardcoding attribute strings.


48-59: Inconsistency between summary and provided package.json.

The AI summary claims that @opentelemetry/semantic-conventions was updated in devDependencies from ^1.36.0 to ^1.38.0, but the provided code does not show @opentelemetry/semantic-conventions in the devDependencies section, nor is there a change marker (~) on any devDependencies line. Additionally, @opentelemetry/semantic-conventions should only be a runtime dependency, not a dev dependency. Please clarify whether this was an unintended summary error or if the code snapshot is incomplete.

packages/instrumentation-cohere/package.json (1)

44-44: Semantic conventions version bump is necessary for new GenAI attributes.

The dependency bump from ^1.36.0 to ^1.38.0 enables the new ATTR_GEN_AI_* constants required by this PR. This aligns with the migration to OpenTelemetry semantic conventions for GenAI telemetry.

Please confirm that the instrumentation code in this package has been updated to use the new ATTR_GEN_AI_* attributes from the bumped version of @opentelemetry/semantic-conventions.

packages/instrumentation-pinecone/tsconfig.test.json (1)

1-9: Test configuration is correct and consistent with established patterns.

The tsconfig.test.json file properly extends tsconfig.base.json (which includes moduleResolution: node), uses the correct test directory reference, and matches the structure used across all other instrumentation packages in the repository.

packages/instrumentation-pinecone/package.json (1)

40-47: The @opentelemetry/semantic-conventions v1.38.0 bump is safe. This package uses @traceloop/ai-semantic-conventions for semantic attributes and does not depend on the k8s metrics that changed in v1.38.0. No compatibility issues with the current OpenTelemetry package versions.

packages/instrumentation-bedrock/package.json (2)

13-13: The test script change to use tsconfig.test.json is properly supported. The config file exists and is correctly configured to include both src and test directories, with appropriate compiler options. This change is consistent across all instrumentation packages in the repository.


44-44: Semantic conventions version bump is appropriate and consistently applied.

The @opentelemetry/semantic-conventions dependency was updated to ^1.38.0 across all instrumentation packages, aligning with the PR's goal of adopting newer OpenTelemetry semantic conventions for GenAI telemetry. The bedrock instrumentation correctly uses semantic convention constants (ATTR_GEN_AI_*, SpanAttributes) throughout its request and response handling for all supported vendors (ai21, amazon, anthropic, cohere, meta), with no hardcoded attribute strings.
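As a standalone sketch of what "constants instead of hardcoded strings" looks like (the constants are inlined here for illustration; in the codebase they are imported from @opentelemetry/semantic-conventions or @traceloop/ai-semantic-conventions, and the string values below match the OpenTelemetry GenAI conventions):

```typescript
// Assumed inline stand-ins for the imported ATTR_GEN_AI_* constants.
const ATTR_GEN_AI_USAGE_INPUT_TOKENS = "gen_ai.usage.input_tokens";
const ATTR_GEN_AI_USAGE_OUTPUT_TOKENS = "gen_ai.usage.output_tokens";

// Setting usage on a span-like attribute bag via the constants:
const attributes: Record<string, number> = {};
attributes[ATTR_GEN_AI_USAGE_INPUT_TOKENS] = 120;
attributes[ATTR_GEN_AI_USAGE_OUTPUT_TOKENS] = 45;

// Total token usage derived from the GenAI input/output counts,
// as the PR summary describes for computed totals.
const totalTokens =
  attributes[ATTR_GEN_AI_USAGE_INPUT_TOKENS] +
  attributes[ATTR_GEN_AI_USAGE_OUTPUT_TOKENS];
console.log(totalTokens); // 165
```

Using the exported constants keeps attribute keys in sync with the semantic-conventions version pinned in package.json, so a convention rename surfaces as a compile error rather than a silently stale string.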

packages/instrumentation-langchain/package.json (2)

13-13: No action needed—tsconfig.test.json is properly configured and test files exist.

The test script change is correctly set up. The tsconfig.test.json properly extends the base config, includes both source and test files, and the test glob pattern matches the existing test file location.


45-45: @opentelemetry/semantic-conventions@^1.38.0 is compatible with your OTel stack.

Version 1.38.0 has no peer dependencies and requires only Node >=14, which matches your engines constraint. The dependency is safely compatible with the @opentelemetry/api (^1.9.0), @opentelemetry/core (^2.0.1), and @opentelemetry/instrumentation (^0.203.0) versions used in this package. The codebase correctly uses both @opentelemetry/semantic-conventions/incubating for standard GenAI attributes and @traceloop/ai-semantic-conventions for custom semantic definitions.

@nina-kollman nina-kollman merged commit 8e05349 into main Dec 14, 2025
8 checks passed

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed d7df478 in 6 minutes and 17 seconds. Click for details.
  • Reviewed 37 lines of code in 1 file
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts:503
  • Draft comment:
    Good update: Replacing SpanAttributes.GEN_AI_AGENT_NAME with ATTR_GEN_AI_AGENT_NAME across outer, inner, tool, and child spans aligns the test with the new opentelemetry semantic conventions.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, explaining a change that aligns with new conventions. It doesn't provide a suggestion, ask for confirmation, or highlight a potential issue.

Workflow ID: wflow_Pqscx7LFOwM34CIU

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

