fix(ai-sdk): Add conversion to opentelemetry semantic convention #845
nina-kollman merged 20 commits into main from
Conversation
Walkthrough: Replaced many LLM_* and legacy GEN_AI_* span attribute constants with incubating ATTR_GEN_AI_* constants.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks: ❌ 1 warning, 1 inconclusive · ✅ 3 passed
Important
Looks good to me! 👍
Reviewed everything up to 4047a13 in 44 seconds. Click for details.
- Reviewed 850 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:5
- Draft comment:
Unused import detected: 'ASSOCATION_PROPERTIES_KEY' is imported but never used. Consider removing it to keep the test file clean. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:40
- Draft comment:
The test file is very comprehensive but quite long. Consider splitting tests by logical grouping (e.g., response, prompt, token, metadata, tool calls) into separate files or using nested describe blocks to improve readability and maintainability. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:424
- Draft comment:
There is repeated usage of JSON.stringify and JSON.parse in multiple tests. Consider extracting helper functions for common patterns (e.g. for creating test messages and verifying transformed prompts) to reduce duplication and ease future updates. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_Ll4qRIXPDJZ9JlNI
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Important
Looks good to me! 👍
Reviewed 3429596 in 1 minute and 33 seconds. Click for details.
- Reviewed 173 lines of code in 3 files
- Skipped 1 file when reviewing
- Skipped posting 4 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/ai-semantic-conventions/package.json:38
- Draft comment:
Bump dependency version for @opentelemetry/semantic-conventions to ^1.38.0. Verify this upgrade is compatible with downstream consumers. - Reason this comment was not posted:
Comment did not seem useful. Confidence that it is useful: 0% (threshold: 50%). This comment is about a dependency version change and asks the author to verify compatibility with downstream consumers. According to the rules, comments on dependency changes and requests for verification are not allowed.
2. packages/ai-semantic-conventions/src/SemanticAttributes.ts:45
- Draft comment:
Refactored attribute mapping: using incubating constants and adding legacy aliases. Ensure these mappings remain consistent with the updated OpenTelemetry semantic conventions. - Reason this comment was not posted:
Comment did not seem useful. Confidence that it is useful: 0% (threshold: 50%). This comment is purely informative and asks the author to ensure consistency with OpenTelemetry semantic conventions, which violates the rules against asking the author to ensure behavior is intended or tested.
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:377
- Draft comment:
Updated token attribute mappings in transformPromptTokens and transformCompletionTokens. Confirm that legacy keys are correctly removed and new keys (ATTR_GEN_AI_USAGE_PROMPT_TOKENS / ATTR_GEN_AI_USAGE_OUTPUT_TOKENS) are used consistently. - Reason this comment was not posted:
Comment looked like it was already resolved.
4. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:744
- Draft comment:
Consider adding unit tests to verify that the new semantic conversion and legacy key cleanup are working as expected. - Reason this comment was not posted:
Confidence changes required: 50% (threshold: 50%). None
Workflow ID: wflow_EpSgJ8azMSuHBtlQ
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
There was a problem hiding this comment.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
454-462: Use `ATTR_GEN_AI_USAGE_PROMPT_TOKENS` instead of `ATTR_GEN_AI_USAGE_INPUT_TOKENS` in `calculateTotalTokens`.

Line 455 reads `ATTR_GEN_AI_USAGE_INPUT_TOKENS`, but `transformPromptTokens()` (line 384) sets `ATTR_GEN_AI_USAGE_PROMPT_TOKENS`. Since `INPUT_TOKENS` is never populated, the total-token calculation will always fail for the input side. Change line 455 to read `ATTR_GEN_AI_USAGE_PROMPT_TOKENS` to match what is actually set.
🧹 Nitpick comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
313-347: Consider extracting duplicate message processing logic.

The message processing code (lines 313-347) duplicates the logic from lines 280-304 in the `AI_PROMPT_MESSAGES` handling above. Both blocks iterate messages, process content, set `LLM_PROMPTS` attributes, and build `LLM_INPUT_MESSAGES`. Consider extracting this into a shared helper function:

```diff
+const processMessagesArray = (
+  messages: Array<{ role: string; content: any }>,
+  attributes: Record<string, any>,
+): void => {
+  const inputMessages: any[] = [];
+
+  messages.forEach((msg: { role: string; content: any }, index: number) => {
+    const processedContent = processMessageContent(msg.content);
+    attributes[`${SpanAttributes.LLM_PROMPTS}.${index}.content`] = processedContent;
+    attributes[`${SpanAttributes.LLM_PROMPTS}.${index}.role`] = msg.role;
+
+    inputMessages.push({
+      role: msg.role,
+      parts: [{ type: TYPE_TEXT, content: processedContent }],
+    });
+  });
+
+  if (inputMessages.length > 0) {
+    attributes[SpanAttributes.LLM_INPUT_MESSAGES] = JSON.stringify(inputMessages);
+  }
+};
```

Then use it in both places:

```diff
 if (AI_PROMPT_MESSAGES in attributes) {
   try {
     // ... JSON parsing ...
     const messages = JSON.parse(jsonString);
-    const inputMessages: any[] = [];
-    messages.forEach((msg, index) => { ... });
-    if (inputMessages.length > 0) { ... }
+    processMessagesArray(messages, attributes);
     delete attributes[AI_PROMPT_MESSAGES];
   } catch { ... }
 }

 if (AI_PROMPT in attributes) {
   try {
     const promptData = JSON.parse(attributes[AI_PROMPT] as string);
     if (promptData.messages && Array.isArray(promptData.messages)) {
-      const messages = promptData.messages;
-      const inputMessages: any[] = [];
-      messages.forEach((msg, index) => { ... });
-      if (inputMessages.length > 0) { ... }
+      processMessagesArray(promptData.messages, attributes);
       delete attributes[AI_PROMPT];
     }
     // ...
   } catch { ... }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (3)
- packages/ai-semantic-conventions/package.json (1 hunks)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (9 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
packages/*/package.json
📄 CodeRabbit inference engine (CLAUDE.md)
Use workspace:* for intra-repo package dependencies in package.json
Files:
packages/ai-semantic-conventions/package.json
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/ai-semantic-conventions/src/SemanticAttributes.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
- packages/ai-semantic-conventions/package.json
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
- packages/ai-semantic-conventions/package.json
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/package.json : When adding a new instrumentation package, add it to the main SDK dependencies
Applied to files:
packages/ai-semantic-conventions/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.tspackages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
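As a rough illustration of what that learning describes, here is a hedged sketch of a recursive snake_case-to-camelCase converter; the real `transformApiResponse` in `response-transformer.ts` may differ in edge-case handling, and the sketch's names are illustrative:

```typescript
// Convert a single snake_case key to camelCase ("created_at" -> "createdAt").
function snakeToCamel(key: string): string {
  return key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());
}

// Recursively rewrite every object key in an API response payload.
function transformApiResponseSketch(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(transformApiResponseSketch);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [
        snakeToCamel(k),
        transformApiResponseSketch(v),
      ]),
    );
  }
  return value; // primitives pass through unchanged
}
```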
🧬 Code graph analysis (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (45-136)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and test
🔇 Additional comments (14)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (2)
45-100: LGTM! Good backward compatibility design.

The SpanAttributes exports and legacy aliases are well-structured. The legacy aliases (e.g., `LLM_SYSTEM`, `LLM_REQUEST_MODEL`) correctly map to the new `ATTR_GEN_AI_*` constants, maintaining backward compatibility while aligning with OpenTelemetry semantic conventions.
17-43: No action needed; all imported attributes are available in @opentelemetry/semantic-conventions v1.38.0.

The `ATTR_GEN_AI_*` constants you've imported from the `/incubating` path all exist in version 1.38.0. These are experimental/development conventions, and the `@ts-expect-error` directive is appropriate since TypeScript's type definitions may not recognize the incubating entrypoint, but the imports work correctly at runtime.

packages/ai-semantic-conventions/package.json (1)
38-38: Version bump aligns with new attribute support.

The upgrade to `^1.38.0` supports the new `ATTR_GEN_AI_*` constants imported in SemanticAttributes.ts. The verification requested in the previous file will confirm availability.

packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (11)
375-388: LGTM! Correctly migrates to new token attribute.

The function properly maps `AI_USAGE_PROMPT_TOKENS` to `ATTR_GEN_AI_USAGE_PROMPT_TOKENS` and cleans up legacy attributes.
390-403: LGTM! Correctly migrates to new output token attribute.

The function properly maps `AI_USAGE_COMPLETION_TOKENS` to `ATTR_GEN_AI_USAGE_OUTPUT_TOKENS` and cleans up legacy attributes.
464-486: LGTM! Provider name extraction is correct.

Lines 471-473 properly extract the base provider name (e.g., "openai" from "openai.chat") and set `GEN_AI_PROVIDER_NAME`, while maintaining existing vendor mapping for `LLM_SYSTEM`.
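The extraction described here amounts to taking everything before the first dot; a minimal sketch (function name hypothetical):

```typescript
// "openai.chat" -> "openai"; a bare "anthropic" passes through unchanged.
function baseProviderName(vendor: string): string {
  return vendor.split(".")[0];
}
```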
488-509: LGTM! Operation name mapping is correct.

The function properly derives `GEN_AI_OPERATION_NAME` from span names, mapping text/object generation to "chat" and tool-related spans to "execute_tool".
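A sketch of that kind of mapping follows; the span-name prefixes below are assumptions for illustration, not the PR's actual patterns:

```typescript
// Hypothetical span-name patterns; the real ones live in the PR's
// operation-name transformer.
function operationName(spanName: string): string | undefined {
  if (
    spanName.startsWith("ai.generateText") ||
    spanName.startsWith("ai.generateObject")
  ) {
    return "chat";
  }
  if (spanName.includes("toolCall")) {
    return "execute_tool";
  }
  return undefined; // leave unrelated spans untouched
}
```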
511-517: LGTM! Model ID transformation is correct.

The function properly maps `ai.model.id` to `LLM_REQUEST_MODEL` and cleans up the source attribute.
519-531: LGTM! Finish reason transformation is correct.

The function properly converts `ai.response.finishReason` to array format for the OpenTelemetry standard `GEN_AI_RESPONSE_FINISH_REASONS`, handling both single values and existing arrays.
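The single-value-to-array normalization can be sketched as (helper name hypothetical):

```typescript
// Wrap a scalar finish reason in an array; pass an existing array through.
function toFinishReasons(value: string | string[]): string[] {
  return Array.isArray(value) ? value : [value];
}
```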
533-561: LGTM! Tool call attribute transformation is correct.

The function properly mirrors AI SDK tool call attributes to OpenTelemetry format. The comments correctly note that some attributes are preserved for later cleanup by `transformToolCalls`.
563-573: LGTM! Conversation ID extraction is correct.

The function properly prioritizes `conversationId` over `sessionId` when setting `GEN_AI_CONVERSATION_ID`.
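A sketch of that priority, assuming metadata keys of the form `ai.telemetry.metadata.*` (key names illustrative):

```typescript
// Prefer conversationId; fall back to sessionId when it is absent.
function conversationId(
  attrs: Record<string, string>,
): string | undefined {
  return (
    attrs["ai.telemetry.metadata.conversationId"] ??
    attrs["ai.telemetry.metadata.sessionId"]
  );
}
```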
575-589: LGTM! Response metadata transformation is correct.

The function properly maps `ai.response.model` to `LLM_RESPONSE_MODEL` and `ai.response.id` to `GEN_AI_RESPONSE_ID`, cleaning up source attributes.
667-688: LGTM! Transformation pipeline is properly extended.

The new transformation functions are correctly integrated into `transformLLMSpans`, and the order is logical. The comment on line 684 accurately notes that `transformVendor` also sets `GEN_AI_PROVIDER_NAME`.
690-711: LGTM! Tool name cleanup is correct.

Line 708 properly deletes `ai.toolCall.name` after setting `TRACELOOP_ENTITY_NAME`, completing the deferred cleanup mentioned in `transformToolCallAttributes`.
Important
Looks good to me! 👍
Reviewed 671117b in 1 minute and 14 seconds. Click for details.
- Reviewed 56 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2068
- Draft comment:
Multiple tests in the provider name suite have duplicate/conflicting assertions; for example, one assert expects the transformed provider to be "openai" while the next expects "OpenAI" (and similar for azure-openai and anthropic). Please clarify the intended normalization (e.g. should it be consistently capitalized?) and update the tests accordingly. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_uwlNoD2ug5Dil12Q
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Important
Looks good to me! 👍
Reviewed 8528f24 in 1 minute and 16 seconds. Click for details.
- Reviewed 123 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:459
- Draft comment:
In `transformVendor`, the initial assignment using the base provider name (extracted via `vendor.split(".")[0]`) is later overwritten with `mappedVendor || vendor`. Consider using the base name as the fallback (e.g. `mappedVendor || providerName`) to ensure standardized naming if no mapping is found. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:310
- Draft comment:
Several inline comments have been removed (e.g. in transformPrompts, transformPromptTokens, etc.). This cleanup is fine if the function names and structure are clear; just ensure that any needed context for future maintainers isn’t lost. - Reason this comment was not posted:
Confidence changes required: 0% (threshold: 50%). None
Workflow ID: wflow_JOzFiXyb3jM2sdfE
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (5)
560-617: `ai.prompt.tools` tests now expect `ATTR_GEN_AI_TOOL_NAME`, but the implementation never sets it for tools

Here you expect:

```ts
attributes[`${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.0.name`] === "getWeather";
```

but `transformTools` only writes `SpanAttributes.LLM_REQUEST_FUNCTIONS.*` keys and never sets any `${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.*` attributes for `ai.prompt.tools`. As a result, these new expectations will fail. Consider either:

- Updating `transformTools` to also populate the new GenAI tool-definition attributes, e.g.:

```diff
-      if (tool.name) {
-        attributes[
-          `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.name`
-        ] = tool.name;
-      }
+      if (tool.name) {
+        attributes[
+          `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.name`
+        ] = tool.name;
+        attributes[
+          `${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.${index}.name`
+        ] = tool.name;
+      }
```

(and similarly for description/parameters), or

- Keeping these tests on `SpanAttributes.LLM_REQUEST_FUNCTIONS` only, and adding separate tests for the OpenTelemetry `ATTR_GEN_AI_TOOL_NAME` behavior where it is actually set (tool-call attributes).
849-940: Prompt/completion token tests match current cleanup behavior but reveal loss of data when only `ai.usage.*` is present

The expectations around keeping `ATTR_GEN_AI_USAGE_INPUT_TOKENS`/`OUTPUT_TOKENS` and deleting `ai.usage.*` and `ATTR_GEN_AI_USAGE_PROMPT_TOKENS`/`ATTR_GEN_AI_USAGE_COMPLETION_TOKENS` are consistent with `transformPromptTokens`/`transformCompletionTokens` and will pass as long as `gen_ai.usage.input_tokens`/`output_tokens` are already set.

Note though that in scenarios where only `ai.usage.promptTokens`/`ai.usage.completionTokens` are present and no `gen_ai.usage.*` attributes exist, the current transformers will delete the legacy attributes without preserving their values anywhere. If that's not desired, we should adjust the implementation and add tests for that case.
942-1001: Total-token tests mostly match implementation; missing edge case for zero tokens

The tests for calculating `LLM_USAGE_TOTAL_TOKENS` from `ATTR_GEN_AI_USAGE_INPUT_TOKENS` and `*_OUTPUT_TOKENS` (both numeric and string) align with `calculateTotalTokens`. However, with both values equal to `0`, the current implementation will skip setting `LLM_USAGE_TOTAL_TOKENS` because it checks `if (inputTokens && outputTokens)`; there's no test covering that case.

You may want to add a test where both are zero and, if desired, adjust the implementation to treat zero as valid and still emit a total.
1003-1083: Existing vendor tests still assert human-readable names; see conflict with new provider-name tests below

These tests verify that `ATTR_GEN_AI_PROVIDER_NAME` ends up as "OpenAI", "Azure", "Anthropic", or "", matching the old `VENDOR_MAPPING` behavior. On their own they are consistent with the current `transformVendor`.

However, the new `describe("transformLLMSpans - provider name")` block later in this file expects different values (base provider IDs like "openai", "azure-openai", "anthropic") for the same attribute, which cannot be simultaneously true. The semantics for `ATTR_GEN_AI_PROVIDER_NAME` need to be clarified and the tests made consistent.
1-2483: Prettier formatting failure

CI reports a Prettier formatting error for this file. After finalizing test changes, please run:

```sh
pnpm prettier --write packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
```

to satisfy the formatting check.
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (4)
219-263: `transformTools` still only fills LLM_REQUEST_FUNCTIONS; no Gen-AI tool-definition attributes

This transformer maps `ai.prompt.tools` exclusively into `SpanAttributes.LLM_REQUEST_FUNCTIONS.*`. Given the new tests that expect `${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.${index}.name` etc. for tools, you likely need to extend this function to populate those Gen-AI attributes as well.

For example:

```diff
 if (tool && typeof tool === "object") {
   if (tool.name) {
     attributes[
       `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.name`
     ] = tool.name;
+    attributes[
+      `${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.${index}.name`
+    ] = tool.name;
   }
   if (tool.description) {
     attributes[
       `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.description`
     ] = tool.description;
+    attributes[
+      `${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.${index}.description`
+    ] = tool.description;
   }
   if (tool.parameters) {
     const params =
       typeof tool.parameters === "string"
         ? tool.parameters
         : JSON.stringify(tool.parameters);
     attributes[
       `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.parameters`
     ] = params;
+    attributes[
+      `${SpanAttributes.ATTR_GEN_AI_TOOL_NAME}.${index}.parameters`
+    ] = params;
   }
 }
```

This would bring the implementation in line with the new tests and Gen-AI semantic attributes.
454-462: Total-token calculation ignores valid zero values

`calculateTotalTokens` currently does:

```ts
const inputTokens = attributes[SpanAttributes.ATTR_GEN_AI_USAGE_INPUT_TOKENS];
const outputTokens = attributes[SpanAttributes.ATTR_GEN_AI_USAGE_OUTPUT_TOKENS];
if (inputTokens && outputTokens) {
  attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] =
    Number(inputTokens) + Number(outputTokens);
}
```

This means that when either count is `0` (number) the total is not computed, even though `0` is a valid value.

A safer implementation would explicitly check for presence and numeric convertibility, for example:

```diff
 const inputTokens = attributes[SpanAttributes.ATTR_GEN_AI_USAGE_INPUT_TOKENS];
 const outputTokens = attributes[SpanAttributes.ATTR_GEN_AI_USAGE_OUTPUT_TOKENS];
-if (inputTokens && outputTokens) {
-  attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] =
-    Number(inputTokens) + Number(outputTokens);
-}
+const input = Number(inputTokens);
+const output = Number(outputTokens);
+if (!Number.isNaN(input) && !Number.isNaN(output)) {
+  attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] = input + output;
+}
```

so that `0 + 0` correctly yields a total of `0`.
464-485: Provider/vendor mapping currently overwrites extracted provider name and conflicts with tests

`transformVendor` first extracts a base provider name:

```ts
const providerName = vendor.split(".")[0];
attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = providerName;
```

and then later overwrites that same attribute with `mappedVendor || vendor`:

```ts
attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = mappedVendor || vendor;
```

So the final value is always the mapped human-readable vendor ("OpenAI", "Azure", "Anthropic", etc.) or the raw input string, never the base ID ("openai", "azure-openai", "anthropic").

Given the new tests under `describe("transformLLMSpans - provider name")` that assert base provider IDs, and the existing tests that assert vendor display names, this implementation cannot satisfy both.

You should decide:

- Whether `ATTR_GEN_AI_PROVIDER_NAME` should hold the base ID (e.g., "openai") or the display name (e.g., "OpenAI"), and
- Whether a second attribute (e.g., `ATTR_GEN_AI_SYSTEM`) should carry the vendor-mapped name if both are needed.

Then adjust `transformVendor` and the tests accordingly; for example, one possible approach:

```diff
 if (typeof vendor === "string" && vendor.length > 0) {
   const providerName = vendor.split(".")[0];
   attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = providerName;
   for (const prefix of Object.keys(VENDOR_MAPPING)) {
     if (vendor.startsWith(prefix)) {
       mappedVendor = VENDOR_MAPPING[prefix];
       break;
     }
   }
 }
-attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = mappedVendor || vendor;
+if (mappedVendor) {
+  attributes[SpanAttributes.ATTR_GEN_AI_SYSTEM] = mappedVendor;
+}
```

or another consistent scheme that matches your chosen semantics.
1-746: Prettier formatting failure

CI indicates a Prettier issue in this file as well. After finalizing logic changes, run:

```sh
pnpm prettier --write packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
```

to clear the formatting check.
🧹 Nitpick comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
667-688: Ordering of transformers in `transformLLMSpans` is reasonable, but consider the dependency on token transforms

The call sequence (operation name/model ID → responses → prompts/tools → token transforms → provider metadata → finish reason/response metadata → total tokens → vendor mapping → conversation ID → tool-call attributes → telemetry metadata) is coherent.

Just be aware that:

- `calculateTotalTokens` depends on `ATTR_GEN_AI_USAGE_INPUT_TOKENS`/`OUTPUT_TOKENS` already being set; combined with the current `transformPromptTokens`/`transformCompletionTokens`, spans that only have `ai.usage.*` will not get a total.
- `transformToolCallAttributes` runs before `transformTelemetryMetadata`, ensuring agent spans see both Gen-AI tool attributes and raw tool args/result.

Once the token-handling tweaks above are applied, the overall pipeline should behave as intended.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (19 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (45 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/ai-semantic-conventions/src/SemanticAttributes.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
🧠 Learnings (9)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:58:25.583Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/client/dataset/column.ts:41-47
Timestamp: 2025-08-12T13:58:25.583Z
Learning: In packages/traceloop-sdk/src/lib/client/dataset/column.ts, when the user mentioned "its working there is transform func", they were confirming that the response transformer is functioning correctly and converting snake_case API responses to camelCase, which means the Column class should access camelCase properties (createdAt, updatedAt) from the transformed ColumnResponse data.
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (45-111)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
transformLLMSpans (667-688)
🪛 GitHub Actions: CI
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
[error] 1-1: Prettier formatting check failed. Run 'pnpm prettier --write' to fix code style issues in this file.
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
[error] 1-1: Prettier formatting check failed. Run 'pnpm prettier --write' to fix code style issues in this file.
🔇 Additional comments (18)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
17-71: ATTR_GEN_AI_* imports and SpanAttributes re-export look consistent
The new ATTR_GEN_AI_* imports and their exposure via SpanAttributes are coherent and align with the goal of centralizing AI/LLM span attribute constants in this module. No issues spotted here.
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (9)
13-222: Response text/object and basic prompt-message expectations align with implementation
The assertions around mapping ai.response.* and ai.prompt.messages into ATTR_GEN_AI_COMPLETION / ATTR_GEN_AI_PROMPT plus input/output messages match the current transformer logic and cover a good range of edge cases (empty strings, JSON objects, arrays, invalid JSON).
423-533: New ai.prompt (object) tests match the added transformPrompts behavior
The tests for ai.prompt with prompt and messages correctly reflect the new branch that parses promptData, fills ATTR_GEN_AI_PROMPT.* entries, and populates ATTR_GEN_AI_INPUT_MESSAGES while cleaning up the original attribute. These should pass with the current implementation.
1283-1635: Gen-AI input/output messages tests are in line with current transformers
The new tests for ATTR_GEN_AI_INPUT_MESSAGES and ATTR_GEN_AI_OUTPUT_MESSAGES (for text, tool calls, object responses, and complex multi-turn conversations) accurately reflect how transformResponse*, transformPrompts, and processMessageContent construct these message arrays. They provide solid coverage for varied content shapes.
1638-1914: Telemetry metadata and agent-related tests align with transformTelemetryMetadata behavior
The expectations around turning ai.telemetry.metadata.* into association properties, stringifying non-string values, ignoring null/undefined, and setting ATTR_GEN_AI_AGENT_NAME plus Traceloop span kind/entity name for root agent spans all look correct relative to transformTelemetryMetadata. These tests should help prevent regressions in agent attribution.
1917-1957: Tool-span attribute tests remain compatible with added tool-call transformers
This block still checks only the Traceloop entity input/output and span-kind behavior for tool spans. With transformToolCallAttributes now also setting OpenTelemetry tool-call attributes, these assertions remain valid and non-conflicting.
2116-2144: Model-ID mapping tests look correct
The tests for mapping ai.model.id into ATTR_GEN_AI_REQUEST_MODEL and deleting the original attribute mirror transformModelId's behavior and should pass as written.
2146-2204: Finish-reason tests correctly exercise new array-based attribute
The finish-reason tests match transformFinishReason (string vs array, multiple allowed values, and the no-attribute case). They appear consistent and comprehensive.
2206-2286: Tool-call attribute tests align with transformToolCallAttributes
These tests for mapping ai.toolCall.* into ATTR_GEN_AI_TOOL_NAME / ATTR_GEN_AI_TOOL_CALL_* while selectively deleting only ai.toolCall.id are consistent with the new transformer and should pass. Good coverage of presence/absence of ID and the no-tool-call case.
2289-2341: Conversation-ID tests match transformConversationId's precedence rules
The expectations around preferring conversationId over sessionId, falling back to sessionId, and doing nothing when neither is present are all aligned with the helper's current behavior.
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (8)
74-164: Response text/object/tool-calls transformers correctly emit Gen-AI completion and output messages
The updated transformResponseText, transformResponseObject, and transformResponseToolCalls functions now write to ATTR_GEN_AI_COMPLETION.* and ATTR_GEN_AI_OUTPUT_MESSAGES in addition to cleaning up the original AI SDK attributes. The logic for building ROLE_ASSISTANT output messages and tool-call parts is coherent and matches the expectations in the tests.
166-217: processMessageContent behavior is sensible for mixed content and matches tests
The function's handling of arrays (filtering type === "text"), objects, and JSON-stringified arrays is consistent with how the prompt/message tests are written. It falls back to stringifying complex/mixed content, which is acceptable given the need to preserve non-text payloads.
265-373: Prompt transformations correctly emit Gen-AI prompt and input-message attributes
Both branches of transformPrompts (ai.prompt.messages and ai.prompt with messages or prompt) now:
- Use processMessageContent to normalize content.
- Write to ATTR_GEN_AI_PROMPT.{i}.content / .role.
- Build OpenTelemetry gen_ai.input.messages under ATTR_GEN_AI_INPUT_MESSAGES.
- Remove the original AI SDK attributes once processed.
This matches the updated tests and looks solid for common and complex prompt shapes.
488-589: New helper transformers (operation name, model ID, finish reason, tool-call attributes, conversation ID, response metadata) look correct in isolation
- transformOperationName sensibly maps generate/stream spans to "chat" and tool spans to "execute_tool".
- transformModelId and transformResponseMetadata cleanly move ai.model.id and ai.response.* into the new Gen-AI attributes and delete the originals.
- transformFinishReason normalizes ai.response.finishReason to the array form expected by the semantic conventions.
- transformToolCallAttributes mirrors ai.toolCall.* into the OTel attributes while keeping args/result around for later Traceloop handling.
- transformConversationId applies a simple, reasonable precedence (conversationId over sessionId).
These helpers match the new tests (aside from the response-model test still asserting LLM_RESPONSE_MODEL) and slot well into the main pipeline.
591-665: Telemetry‑metadata transform cleanly adds Gen‑AI agent attributes and reuses Gen‑AI messages for entity I/O
transformTelemetryMetadata now:
- Uses getAgentNameFromAttributes and sets ATTR_GEN_AI_AGENT_NAME when present.
- For top-level spans (agent name span or top AI spans), sets Traceloop span kind and entity name to AGENT.
- Reuses ATTR_GEN_AI_INPUT_MESSAGES / ATTR_GEN_AI_OUTPUT_MESSAGES (or tool args/result) to populate TRACELOOP_ENTITY_INPUT/OUTPUT.
- Converts all ai.telemetry.metadata.* values to string association properties and deletes the originals.
This aligns with the telemetry metadata tests and keeps Gen-AI and Traceloop concepts consistent.
690-709: Tool‑span cleanup remains compatible with new tool‑call attributes
transformToolCalls still moves ai.toolCall.args/result into Traceloop entity input/output and deletes those raw attributes, while transformToolCallAttributes has already copied the values into the Gen-AI tool-call attributes. Deleting ai.toolCall.name here is fine, since the OTel tool name has already been set earlier.
724-746: Span‑name transformation logic unchanged and compatible with new operation‑name helper
transformAiSdkSpanNames continues to rewrite tool and AI SDK span names based on HANDLED_SPAN_NAMES and agent metadata. With transformOperationName now handling the semantic gen_ai.operation.name, there's a clear separation between human-readable span names and semantic attributes. No issues here.
740-746: End‑to‑end span attribute transformation wiring looks correct
transformAiSdkSpanAttributes now simply orchestrates transformLLMSpans followed by transformToolCalls for spans from the "ai" instrumentation scope. Given the new helpers and tests, this is an appropriate composition point.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (2)
455-475: Provider name fallback uses full vendor string instead of extracted base name.
Lines 461-462 extract the base provider name (e.g., "openai" from "openai.chat"), but line 472 falls back to the full vendor string when no mapping is found:

    const providerName = vendor.split(".")[0]; // "openai"
    attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = providerName; // Set but overwritten
    // ... mapping logic ...
    attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = mappedVendor || vendor; // Uses full "openai.chat" as fallback

For unmapped providers (e.g., "custom.service"), this produces ATTR_GEN_AI_PROVIDER_NAME = "custom.service" instead of the extracted base "custom". The OpenTelemetry semantic convention expects simple provider identifiers.
Additionally, line 462 is redundant since it's immediately overwritten by line 472.
    const transformVendor = (attributes: Record<string, any>): void => {
      if (AI_MODEL_PROVIDER in attributes) {
        const vendor = attributes[AI_MODEL_PROVIDER];
        let mappedVendor = null;
        if (typeof vendor === "string" && vendor.length > 0) {
          const providerName = vendor.split(".")[0];
    -     attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = providerName;
    -
          for (const prefix of Object.keys(VENDOR_MAPPING)) {
            if (vendor.startsWith(prefix)) {
              mappedVendor = VENDOR_MAPPING[prefix];
              break;
            }
          }
    +
    +     attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = mappedVendor || providerName;
    +   } else {
    +     attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = vendor;
        }
    -   attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = mappedVendor || vendor;
        delete attributes[AI_MODEL_PROVIDER];
      }
    };
1-1: Fix Prettier formatting issues.
The CI pipeline detected code style issues. Please run Prettier to format the file:
npx prettier --write packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧹 Nitpick comments (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (2)
500-572: Consider moving inline constants to file-level for consistency.
Functions transformModelId, transformFinishReason, and transformResponseMetadata define constants inline:
- Line 501: AI_MODEL_ID
- Line 509: AI_RESPONSE_FINISH_REASON
- Lines 559-560: AI_RESPONSE_MODEL, AI_RESPONSE_ID
For consistency with other constants defined at lines 26-42 (e.g., AI_RESPONSE_TEXT, AI_PROMPT_MESSAGES), consider moving these to the top of the file.
AI_PROMPThandling for messages (lines 313-343) duplicates logic fromAI_PROMPT_MESSAGEShandling (lines 266-308). Both contain similar forEach loops processing messages.Consider extracting the message processing logic into a helper function to reduce duplication and improve readability:
    const processMessages = (messages: any[], attributes: Record<string, any>) => {
      const inputMessages: any[] = [];
      messages.forEach((msg: { role: string; content: any }, index: number) => {
        const processedContent = processMessageContent(msg.content);
        attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.${index}.content`] = processedContent;
        attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.${index}.role`] = msg.role;
        inputMessages.push({
          role: msg.role,
          parts: [{ type: TYPE_TEXT, content: processedContent }],
        });
      });
      if (inputMessages.length > 0) {
        attributes[SpanAttributes.ATTR_GEN_AI_INPUT_MESSAGES] = JSON.stringify(inputMessages);
      }
    };

Then use in both locations:
    if (AI_PROMPT_MESSAGES in attributes) {
      const messages = JSON.parse(attributes[AI_PROMPT_MESSAGES]);
      processMessages(messages, attributes);
      delete attributes[AI_PROMPT_MESSAGES];
    }
    if (AI_PROMPT in attributes) {
      const promptData = JSON.parse(attributes[AI_PROMPT]);
      if (promptData.messages && Array.isArray(promptData.messages)) {
        processMessages(promptData.messages, attributes);
        delete attributes[AI_PROMPT];
      }
      // ... rest of logic
    }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (19 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
🪛 GitHub Actions: CI
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
[warning] 1-1: Code style issues detected by Prettier.
Important
Looks good to me! 👍
Reviewed ff1d6a8 in 1 minute and 26 seconds. Click for details.
- Reviewed 1131 lines of code in 5 files
- Skipped 0 files when reviewing.
- Skipped posting 3 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2110
- Draft comment:
The provider name tests include duplicate assertions that expect the same attribute to equal both a lowercase value (e.g. 'openai') and a mapped standardized value (e.g. 'OpenAI'). This is contradictory. Please clarify the intended final value and remove the redundant/conflicting assertion.
- Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:427
- Draft comment:
Overall, the tests are comprehensive and well-structured, covering a wide range of transformation scenarios. Consider adding brief inline comments to each describe block summarizing the expected transformation behavior to improve future maintainability.
- Reason this comment was not posted:
Confidence changes required: 50% <= threshold 50%
3. packages/instrumentation-bedrock/src/instrumentation.ts:370
- Draft comment:
Typo in the comment: "The format is removing when we are setting span attribute" does not read clearly. Consider rephrasing it to something like "The format is removed when setting the span attribute" or "Formatting is removed when setting the span attribute" for clarity.
- Reason this comment was not posted:
Comment was on unchanged code.
Workflow ID: wflow_tzoMbBDFfyez6jBe
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/instrumentation-together/test/instrumentation.test.ts (1)
563-590: Contradictory assertions on the same attribute index.
Lines 563-590 assert that ATTR_GEN_AI_COMPLETION.0.function_call.name equals both "get_current_weather" (line 565) and "get_tomorrow_weather" (line 581). A single attribute cannot have two different values simultaneously; the second assertion will always fail since it tests the same key.
If the intent is to verify two tool calls, the second should reference index 1:

    assert.strictEqual(
      completionSpan.attributes[
    -   `${SpanAttributes.ATTR_GEN_AI_COMPLETION}.0.function_call.name`
    +   `${SpanAttributes.ATTR_GEN_AI_COMPLETION}.1.function_call.name`
      ],
      "get_tomorrow_weather",
    );
    assert.deepEqual(
      JSON.parse(
        completionSpan.attributes[
    -     `${SpanAttributes.ATTR_GEN_AI_COMPLETION}.0.function_call.arguments`
    +     `${SpanAttributes.ATTR_GEN_AI_COMPLETION}.1.function_call.arguments`
        ]! as string,
      ),
      { location: "Chicago, IL", unit: "fahrenheit" },
    );

packages/instrumentation-bedrock/src/instrumentation.ts (1)
159-164: Inconsistent attribute keys between span start and end.
_startSpan sets LLM_SYSTEM, LLM_REQUEST_MODEL, and LLM_RESPONSE_MODEL (lines 160-162), but _endSpan checks for ATTR_GEN_AI_SYSTEM (line 201) and reads ATTR_GEN_AI_RESPONSE_MODEL (line 203). Since these are different string values, the condition at line 201 will always be false, causing token/completion attributes to never be set.

    attributes = {
    - [SpanAttributes.LLM_SYSTEM]: "AWS",
    - [SpanAttributes.LLM_REQUEST_MODEL]: model,
    - [SpanAttributes.LLM_RESPONSE_MODEL]: input.modelId,
    + [SpanAttributes.ATTR_GEN_AI_SYSTEM]: "AWS",
    + [SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL]: model,
    + [SpanAttributes.ATTR_GEN_AI_RESPONSE_MODEL]: input.modelId,
      [SpanAttributes.LLM_REQUEST_TYPE]: LLMRequestTypeValues.COMPLETION,
    };
♻️ Duplicate comments (4)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (3)
2110-2171: Provider-name tests contain contradictory assertions.
Each test asserts that ATTR_GEN_AI_PROVIDER_NAME equals both a lowercase ID (e.g., "openai") and a capitalized vendor name (e.g., "OpenAI") for the same attribute. This is impossible; an attribute can only hold one value.
Looking at transformVendor in the implementation (lines 459-480), it first sets the provider name to the prefix (providerName), then overwrites it with the mapped vendor or original value. Only the final value will exist.
Remove the duplicate assertions and keep only the expected final value (the mapped vendor name).
2431-2449: Test uses non-existent LLM_RESPONSE_MODEL attribute.
Line 2440 asserts SpanAttributes.LLM_RESPONSE_MODEL, but this key doesn't exist in the SpanAttributes object (see packages/ai-semantic-conventions/src/SemanticAttributes.ts). The correct key is ATTR_GEN_AI_RESPONSE_MODEL.

    assert.strictEqual(
    - attributes[SpanAttributes.LLM_RESPONSE_MODEL],
    + attributes[SpanAttributes.ATTR_GEN_AI_RESPONSE_MODEL],
      "gpt-4o",
    );
2469-2543: Complete transformation test has contradictory provider assertions.
Lines 2506-2513 assert ATTR_GEN_AI_PROVIDER_NAME equals both "OpenAI" and "openai", which is impossible.
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
373-397: Token transforms delete canonical attributes, causing data loss.
transformPromptTokens (lines 373-384):
- Copies ai.usage.promptTokens → ATTR_GEN_AI_USAGE_PROMPT_TOKENS (line 378)
- Deletes both the source and the target (lines 382-383)
transformCompletionTokens (lines 386-397):
- Copies ai.usage.completionTokens → ATTR_GEN_AI_USAGE_OUTPUT_TOKENS (line 391)
- Deletes ATTR_GEN_AI_USAGE_COMPLETION_TOKENS (line 396), which is a different key, so OUTPUT_TOKENS survives
Additionally, calculateTotalTokens (lines 448-456) reads ATTR_GEN_AI_USAGE_INPUT_TOKENS, but transformPromptTokens writes to ATTR_GEN_AI_USAGE_PROMPT_TOKENS (different key). The prompt token value is never available for total calculation.

    const transformPromptTokens = (attributes: Record<string, any>): void => {
      if (
    -   !(SpanAttributes.ATTR_GEN_AI_USAGE_PROMPT_TOKENS in attributes) &&
    +   !(SpanAttributes.ATTR_GEN_AI_USAGE_INPUT_TOKENS in attributes) &&
        AI_USAGE_PROMPT_TOKENS in attributes
      ) {
    -   attributes[SpanAttributes.ATTR_GEN_AI_USAGE_PROMPT_TOKENS] =
    +   attributes[SpanAttributes.ATTR_GEN_AI_USAGE_INPUT_TOKENS] =
          attributes[AI_USAGE_PROMPT_TOKENS];
      }
      delete attributes[AI_USAGE_PROMPT_TOKENS];
    - delete attributes[SpanAttributes.ATTR_GEN_AI_USAGE_PROMPT_TOKENS];
    };
🧹 Nitpick comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
459-480: Provider name is set twice, second assignment overwrites first.
Line 466 sets ATTR_GEN_AI_PROVIDER_NAME to the raw provider prefix (e.g., "openai"), then lines 476-477 overwrite it with either the mapped vendor name (e.g., "OpenAI") or the original provider value.
The first assignment at line 466 is always overwritten and serves no purpose. If you intend to keep both a raw provider ID and a human-readable vendor name, use separate attributes.

    const transformVendor = (attributes: Record<string, any>): void => {
      if (AI_MODEL_PROVIDER in attributes) {
        const vendor = attributes[AI_MODEL_PROVIDER];
        let mappedVendor = null;
        if (typeof vendor === "string" && vendor.length > 0) {
    -     const providerName = vendor.split(".")[0];
    -     attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = providerName;
    -
          for (const prefix of Object.keys(VENDOR_MAPPING)) {
            if (vendor.startsWith(prefix)) {
              mappedVendor = VENDOR_MAPPING[prefix];
              break;
            }
          }
        }
        attributes[SpanAttributes.ATTR_GEN_AI_PROVIDER_NAME] = mappedVendor || vendor;
        delete attributes[AI_MODEL_PROVIDER];
      }
    };
📜 Review details
📒 Files selected for processing (5)
- packages/instrumentation-anthropic/src/instrumentation.ts (5 hunks)
- packages/instrumentation-bedrock/src/instrumentation.ts (15 hunks)
- packages/instrumentation-together/test/instrumentation.test.ts (12 hunks)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (19 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (45 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
packages/instrumentation-*/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place each provider integration in its own package under packages/instrumentation-[provider]/
Files:
packages/instrumentation-bedrock/src/instrumentation.ts
packages/instrumentation-together/test/instrumentation.test.ts
packages/instrumentation-anthropic/src/instrumentation.ts
packages/instrumentation-*/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/instrumentation-*/**/*.{ts,tsx}: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Files:
packages/instrumentation-bedrock/src/instrumentation.ts
packages/instrumentation-together/test/instrumentation.test.ts
packages/instrumentation-anthropic/src/instrumentation.ts
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/instrumentation-bedrock/src/instrumentation.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/instrumentation-together/test/instrumentation.test.ts
packages/instrumentation-anthropic/src/instrumentation.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧠 Learnings (12)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/instrumentation-bedrock/src/instrumentation.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/instrumentation-together/test/instrumentation.test.ts
packages/instrumentation-anthropic/src/instrumentation.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/instrumentation-bedrock/src/instrumentation.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/instrumentation-together/test/instrumentation.test.ts
packages/instrumentation-anthropic/src/instrumentation.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/instrumentation-bedrock/src/instrumentation.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/instrumentation-together/test/instrumentation.test.ts
packages/instrumentation-anthropic/src/instrumentation.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls
Applied to files:
packages/instrumentation-bedrock/src/instrumentation.ts
packages/instrumentation-together/test/instrumentation.test.ts
packages/instrumentation-anthropic/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/instrumentation-together/test/instrumentation.test.ts
packages/instrumentation-anthropic/src/instrumentation.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:58:25.583Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/client/dataset/column.ts:41-47
Timestamp: 2025-08-12T13:58:25.583Z
Learning: In packages/traceloop-sdk/src/lib/client/dataset/column.ts, when the user mentioned "its working there is transform func", they were confirming that the response transformer is functioning correctly and converting snake_case API responses to camelCase, which means the Column class should access camelCase properties (createdAt, updatedAt) from the transformed ColumnResponse data.
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately
Applied to files:
packages/instrumentation-together/test/instrumentation.test.ts
packages/instrumentation-anthropic/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Applied to files:
packages/instrumentation-together/test/instrumentation.test.ts
packages/instrumentation-anthropic/src/instrumentation.ts
🧬 Code graph analysis (3)
packages/instrumentation-bedrock/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
`SpanAttributes` (45-111)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
`SpanAttributes` (45-111)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
`transformLLMSpans` (656-677)
packages/instrumentation-together/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
`SpanAttributes` (45-111)
🪛 GitHub Actions: CI
packages/instrumentation-bedrock/src/instrumentation.ts
[warning] 1-1: Prettier formatting issues detected in this file. Run 'pnpm prettier --write' to fix code style issues.
packages/instrumentation-together/test/instrumentation.test.ts
[warning] 1-1: Prettier formatting issues detected in this file. Run 'pnpm prettier --write' to fix code style issues.
packages/instrumentation-anthropic/src/instrumentation.ts
[warning] 1-1: Prettier formatting issues detected in this file. Run 'pnpm prettier --write' to fix code style issues.
🔇 Additional comments (38)
packages/instrumentation-together/test/instrumentation.test.ts (6)
126-186: LGTM – attribute key migration for function calling test. The attribute key updates (`ATTR_GEN_AI_PROMPT`, `ATTR_GEN_AI_TOOL_NAME`, `ATTR_GEN_AI_COMPLETION`, `ATTR_GEN_AI_USAGE_*`) align with the new OpenTelemetry GenAI semantic conventions. The assertions remain structurally correct.
206-224: LGTM – chat test attribute migration. Attribute references updated consistently to the new `ATTR_GEN_AI_*` keys.
256-280: LGTM – streaming chat test attribute migration. Correctly migrated to new semantic attribute keys for prompts, completions, and token usage.
302-331: LGTM – completion test attribute migration. Attribute keys properly updated throughout the completion assertions.
358-393: LGTM – streaming completion test attribute migration. Attribute key migration is consistent with other tests.
432-493: LGTM – tool calling test attribute migration. All attribute references updated to the new `ATTR_GEN_AI_*` naming convention.
packages/instrumentation-bedrock/src/instrumentation.ts (7)
201-207: Dependent on fixing the attribute key mismatch above. Once the `_startSpan` keys are corrected to use `ATTR_GEN_AI_*`, this conditional check and model reassignment will work as intended.
219-241: LGTM – streaming response token usage migration. Token attributes correctly use `ATTR_GEN_AI_USAGE_PROMPT_TOKENS` and `ATTR_GEN_AI_USAGE_COMPLETION_TOKENS`.
298-316: LGTM – ai21 vendor request attributes. Request and prompt attributes properly migrated to `ATTR_GEN_AI_*` keys.
318-336: LGTM – amazon vendor request attributes. Attribute key migration is consistent.
337-377: LGTM – anthropic vendor request attributes. Both messages API and legacy prompt format handling properly use `ATTR_GEN_AI_PROMPT` keys.
378-410: LGTM – cohere and meta vendor request attributes. Attribute migrations are consistent with the new naming scheme.
422-544: LGTM – response attribute migrations for all vendors. Completion and token usage attributes consistently use `ATTR_GEN_AI_COMPLETION` and `ATTR_GEN_AI_USAGE_*` keys across all vendor branches.
packages/instrumentation-anthropic/src/instrumentation.ts (3)
206-230: LGTM – system and request attribute migration. The attribute keys are correctly migrated to `ATTR_GEN_AI_SYSTEM`, `ATTR_GEN_AI_REQUEST_MODEL`, `ATTR_GEN_AI_REQUEST_TEMPERATURE`, `ATTR_GEN_AI_REQUEST_TOP_P`, and `ATTR_GEN_AI_REQUEST_MAX_TOKENS`.
241-272: LGTM – prompt attribute migration. System prompt and message handling correctly use `ATTR_GEN_AI_PROMPT` keys with proper indexing.
480-522: LGTM – response and completion attribute migration. Response model, token usage, finish reason, and completion content attributes are all correctly migrated to `ATTR_GEN_AI_*` keys.
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (15)
22-61: LGTM – response text transformation tests. Tests correctly verify the new `ATTR_GEN_AI_COMPLETION` attribute keys and cleanup of original attributes.
64-95: LGTM – response object transformation tests. Attribute key migrations are correctly tested.
97-183: LGTM – response tool calls transformation tests. Tests properly verify tool call attributes under `ATTR_GEN_AI_COMPLETION`.
185-424: LGTM – prompt messages transformation tests. Comprehensive coverage of prompt transformation including edge cases (object content, arrays, JSON strings, escape sequences).
426-561: LGTM – single prompt and messages array transformation tests. Tests cover both legacy prompt format and new messages array format with `ATTR_GEN_AI_INPUT_MESSAGES`.
563-851: LGTM – tools transformation tests. Tests properly verify tool attributes including edge cases (missing properties, string format, mixed formats).
853-952: LGTM – token transformation tests. Tests correctly verify that legacy tokens are cleaned up and `ATTR_GEN_AI_USAGE_INPUT_TOKENS`/`OUTPUT_TOKENS` are preserved.
954-1014: LGTM – total tokens calculation tests. Tests verify calculation from `ATTR_GEN_AI_USAGE_INPUT_TOKENS` and `OUTPUT_TOKENS`.
1016-1111: LGTM – vendor transformation tests. Tests cover various provider mappings and edge cases.
1329-1682: LGTM – gen_ai input/output messages tests. Comprehensive tests for the new OpenTelemetry standard message format.
1684-1963: LGTM – telemetry metadata and agent transformation tests. Tests properly verify association properties and agent attribute handling.
2008-2107: LGTM – operation name transformation tests. Tests cover all operation name mappings (chat, execute_tool).
2174-2262: LGTM – model id and finish reason transformation tests. Tests correctly verify `ATTR_GEN_AI_REQUEST_MODEL` and `ATTR_GEN_AI_RESPONSE_FINISH_REASONS`.
2264-2345: LGTM – tool call attributes transformation tests. Tests verify OpenTelemetry standard tool call attributes.
2347-2400: LGTM – conversation id transformation tests. Tests verify precedence of `conversationId` over `sessionId`.
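The precedence rule those tests pin down fits in a few lines. This is a hypothetical distillation of the transformer's logic, not the SDK's actual code:

```typescript
// Hypothetical reduction of the conversation-id precedence rule:
// an explicit conversationId wins; sessionId is only a fallback.
function resolveConversationId(meta: {
  conversationId?: string;
  sessionId?: string;
}): string | undefined {
  return meta.conversationId ?? meta.sessionId;
}

console.log(resolveConversationId({ conversationId: "conv-1", sessionId: "sess-1" })); // "conv-1"
console.log(resolveConversationId({ sessionId: "sess-1" })); // "sess-1"
```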
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (7)
74-96: LGTM – transformResponseText migration. Correctly uses `ATTR_GEN_AI_COMPLETION` and `ATTR_GEN_AI_OUTPUT_MESSAGES`.
98-120: LGTM – transformResponseObject migration. Attribute keys properly migrated.
122-167: LGTM – transformResponseToolCalls migration. Tool call attributes correctly use `ATTR_GEN_AI_COMPLETION` paths.
268-371: LGTM – transformPrompts migration. Prompt attributes correctly use `ATTR_GEN_AI_PROMPT` and `ATTR_GEN_AI_INPUT_MESSAGES`.
482-576: LGTM – new transformation functions. `transformOperationName`, `transformModelId`, `transformFinishReason`, `transformToolCallAttributes`, `transformConversationId`, and `transformResponseMetadata` are well-structured and handle their transformations correctly.
578-654: LGTM – transformTelemetryMetadata update. Correctly references `ATTR_GEN_AI_AGENT_NAME` and the new input/output messages attributes.
656-677: LGTM – transformLLMSpans orchestration. New transformers are properly integrated into the transformation pipeline.
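The orchestration pattern being approved here can be sketched as an ordered list of in-place transformers. The function and key names below are illustrative stand-ins, not the SDK's actual internals:

```typescript
// Illustrative sketch of a transformLLMSpans-style orchestrator: each
// transformer rewrites one concern on the attribute map, and the pipeline
// just runs them in order.
type Attrs = Record<string, string | number | undefined>;
type Transformer = (attrs: Attrs) => void;

const renameModelId: Transformer = (a) => {
  // Move a hypothetical legacy key onto the Gen AI request-model key.
  if (a["ai.model.id"] !== undefined) {
    a["gen_ai.request.model"] = a["ai.model.id"];
    delete a["ai.model.id"];
  }
};

const defaultOperationName: Transformer = (a) => {
  if (a["gen_ai.operation.name"] === undefined) {
    a["gen_ai.operation.name"] = "chat";
  }
};

function runTransformers(attrs: Attrs): Attrs {
  for (const t of [renameModelId, defaultOperationName]) t(attrs);
  return attrs;
}

const out = runTransformers({ "ai.model.id": "gpt-4o" });
console.log(out["gen_ai.request.model"]); // "gpt-4o"
```

Keeping each transformer single-purpose is what makes integrating new ones (as this PR does) a one-line change to the pipeline list.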
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/instrumentation-vertexai/tests/gemini.test.ts (1)
87-108: Add missing SpanAttributes import from @traceloop/ai-semantic-conventions. The test uses `SpanAttributes.ATTR_GEN_AI_SYSTEM`, `SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL`, `SpanAttributes.ATTR_GEN_AI_REQUEST_TOP_P`, `SpanAttributes.ATTR_GEN_AI_RESPONSE_MODEL`, and `SpanAttributes.ATTR_GEN_AI_REQUEST_MAX_TOKENS` (lines 87, 90, 94, 100, 149, 152, 156, 160, 166), but `SpanAttributes` is not imported. Add the following import at the top of the file:
import { SpanAttributes } from "@traceloop/ai-semantic-conventions";
🧹 Nitpick comments (16)
packages/instrumentation-cohere/tests/rerank.test.ts (1)
130-139: Minor: duplicate model assertion and completion attribute checks. The completion assertions using `${SpanAttributes.ATTR_GEN_AI_COMPLETION}.0.relevanceScore` and `.0.content` correctly reflect the response shape and the new Gen-AI completion schema; these look solid. You also assert `ATTR_GEN_AI_REQUEST_MODEL` twice (lines 111 and 130) with the same expectation. That duplication is harmless but could be removed to keep the test focused.
packages/instrumentation-cohere/tests/chat.test.ts (2)
138-141: Duplicate assertion for `ATTR_GEN_AI_REQUEST_MODEL`. This assertion is already performed on lines 104-107 with the same expected value. Consider removing this duplicate to reduce test noise.
- assert.strictEqual(
-   attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-   params?.model ?? "command",
- );
238-241: Same duplicate assertion as in the first test. This `ATTR_GEN_AI_REQUEST_MODEL` assertion duplicates lines 204-207. Consider removing for consistency with the suggested fix in the first test.
- assert.strictEqual(
-   attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-   params?.model ?? "command",
- );
packages/instrumentation-anthropic/test/instrumentation.test.ts (1)
106-153: Gen-AI attribute migration and token accounting look correct. The assertions on `ATTR_GEN_AI_REQUEST_MODEL`, `ATTR_GEN_AI_RESPONSE_MODEL`, `ATTR_GEN_AI_REQUEST_MAX_TOKENS`, prompt/completion paths, and prompt/completion token usage correctly target the new Gen-AI attributes while still validating `LLM_USAGE_TOTAL_TOKENS` as the sum. This is a good, backwards-compatible check that the new fields are populated consistently.
You can optionally simplify ``chatSpan.attributes[`${SpanAttributes.ATTR_GEN_AI_...}`]`` to `chatSpan.attributes[SpanAttributes.ATTR_GEN_AI_...]` since the constants are already strings; the same comment applies to similar uses below. Based on learnings, this also keeps tests aligned with the central semantic-attribute definitions.
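The suggested simplification is purely mechanical: bracket access with a string constant and with a template literal wrapping that same constant resolve to the same property. A tiny self-contained demonstration, with a local stand-in for the real `SpanAttributes` export:

```typescript
// Local stand-in for the @traceloop/ai-semantic-conventions export.
const SpanAttributes = {
  ATTR_GEN_AI_REQUEST_MODEL: "gen_ai.request.model",
} as const;

const attributes: Record<string, string> = {
  "gen_ai.request.model": "claude-3-opus",
};

// Template-literal indexing and direct indexing hit the same property:
const viaTemplate = attributes[`${SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL}`];
const direct = attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL];

console.log(viaTemplate === direct); // true
```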
184-290: Streaming Bedrock Cohere test validates Gen AI usage tokens and content with new keys. Streaming assertions now use `ATTR_GEN_AI_*` for system, request parameters, prompts, completions, and prompt/completion token usage (including the `amazon-bedrock-invocationMetrics` path), while keeping total tokens on `LLM_USAGE_TOTAL_TOKENS`. This gives good coverage of the new naming without changing behavior.
You might optionally drop one of the duplicate `ATTR_GEN_AI_REQUEST_MODEL` assertions to reduce noise, but it's not functionally necessary.
packages/instrumentation-openai/test/instrumentation.test.ts (1)
752-920: Skipped image edit/variation and token-coverage tests updated to new Gen AI usage/prompt/completion keys. All skipped image tests now reference `ATTR_GEN_AI_SYSTEM`, `ATTR_GEN_AI_PROMPT`, `ATTR_GEN_AI_COMPLETION`, and `ATTR_GEN_AI_USAGE_COMPLETION_TOKENS` (plus total tokens) instead of legacy LLM keys. This prepares them for future activation without further changes.
If you plan to unskip these later, consider centralizing the expected token counts (e.g., 1056, 4160) in constants to make future spec or pricing changes easier to adapt.
packages/instrumentation-bedrock/tests/anthropic.test.ts (2)
166-169: Remove duplicate assertion for `ATTR_GEN_AI_REQUEST_MODEL`. This assertion is identical to the one on lines 141-144. The duplicate check adds no value and clutters the test.
- assert.strictEqual(
-   attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-   model,
- );
249-252: Remove duplicate assertion for `ATTR_GEN_AI_REQUEST_MODEL`. Same issue as in the non-streaming test; this assertion duplicates lines 224-227.
- assert.strictEqual(
-   attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-   model,
- );
packages/instrumentation-bedrock/tests/amazon.test.ts (2)
167-170: Remove duplicate assertion for `ATTR_GEN_AI_REQUEST_MODEL`. This assertion is identical to the one on lines 143-146.
- assert.strictEqual(
-   attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-   model,
- );
261-264: Remove duplicate assertion for `ATTR_GEN_AI_REQUEST_MODEL`. Same duplicate pattern in the streaming test; this repeats lines 237-240.
- assert.strictEqual(
-   attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL],
-   model,
- );
packages/instrumentation-openai/src/image-wrappers.ts (3)
137-173: Centralize remaining `gen_ai.*` request attributes in SemanticAttributes. The new usages of `ATTR_GEN_AI_REQUEST_MODEL` and `ATTR_GEN_AI_PROMPT` look correct, but in this function (and similar ones below) you still hardcode AI attribute keys like:
- `gen_ai.request.image.size`
- `gen_ai.request.image.quality`
- `gen_ai.request.image.style`
- `gen_ai.request.image.count`
For consistency with the rest of the Gen AI semconv migration and to follow the repo convention of defining AI/LLM span attributes in `@traceloop/ai-semantic-conventions`, consider adding constants for these to `SpanAttributes` and using them here instead of string literals (same applies to later occurrences in this file, e.g., image count/size and `gen_ai.response.revised_prompt`/`gen_ai.request.type`).
Based on learnings, these AI/LLM attributes should live in SemanticAttributes.ts.
284-390: Token usage + completion attributes are correct; consider also populating input/output aliases. The migration to:
- `SpanAttributes.ATTR_GEN_AI_USAGE_COMPLETION_TOKENS`
- `SpanAttributes.ATTR_GEN_AI_USAGE_PROMPT_TOKENS`
- `SpanAttributes.ATTR_GEN_AI_COMPLETION` (with `.0.content`/`.0.role`)
looks consistent with the new Gen AI semconv and preserves `SpanAttributes.LLM_USAGE_TOTAL_TOKENS` for backward compatibility.
If you want full alignment with the newer token fields exposed in `SpanAttributes`, it might be worth also:
- Mapping prompt tokens to `SpanAttributes.ATTR_GEN_AI_USAGE_INPUT_TOKENS`
- Mapping completion tokens to `SpanAttributes.ATTR_GEN_AI_USAGE_OUTPUT_TOKENS`
so downstream consumers that key off `input_tokens`/`output_tokens` don't have to special-case images vs text. This can reuse the same `estimatedPromptTokens`/`completionTokens` already computed here.
Please double-check against the current OpenTelemetry Gen AI semantic conventions for whether `input_tokens`/`output_tokens` are expected aliases for prompt/completion in your version.
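The dual-write idea can be sketched like this, with attribute keys hardcoded as stand-ins for the `SpanAttributes` constants; this makes no claim about what any given semconv version actually mandates:

```typescript
// Write token counts under both the legacy prompt/completion keys and the
// newer input/output aliases, so consumers keyed on either naming agree.
function recordTokenUsage(
  attrs: Record<string, number>,
  promptTokens: number,
  completionTokens: number,
): void {
  // Legacy names, kept for backward compatibility:
  attrs["gen_ai.usage.prompt_tokens"] = promptTokens;
  attrs["gen_ai.usage.completion_tokens"] = completionTokens;
  // Newer aliases:
  attrs["gen_ai.usage.input_tokens"] = promptTokens;
  attrs["gen_ai.usage.output_tokens"] = completionTokens;
  attrs["llm.usage.total_tokens"] = promptTokens + completionTokens;
}

const usage: Record<string, number> = {};
recordTokenUsage(usage, 1056, 128);
console.log(usage["llm.usage.total_tokens"]); // 1184
```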
399-585: Span `gen_ai.system` attribute looks good; consider also setting provider/operation attributes. Using `SpanAttributes.ATTR_GEN_AI_SYSTEM` with `"OpenAI"` on the image spans (`wrapImageGeneration`, `wrapImageEdit`, `wrapImageVariation`) is aligned with the new semconv and the repo's Gen AI attribute strategy.
Two optional improvements:
Provider/operation attributes
Since `SpanAttributes` also exposes `ATTR_GEN_AI_PROVIDER_NAME` and `ATTR_GEN_AI_OPERATION_NAME`, consider setting them here with values like:
- provider: `"openai"`
- operation: `"images.generate"`, `"images.edit"`, `"images.createVariation"`
This would make these spans line up better with the rest of the SDK's Gen AI telemetry.
Consistent `gen_ai.request.type` handling
You still use hardcoded `gen_ai.request.type` strings here; if this field is part of your AI/LLM schema rather than purely local, it may be worth promoting them to `SpanAttributes` constants (same remark as for other `gen_ai.*` strings in this file).
Please verify against your existing instrumentations (e.g., text/chat wrappers) that the provider/operation values and casing (`"OpenAI"` vs `"openai"`) are consistent across the codebase.
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (3)
451-537: Prompt + input-messages tests look consistent with new Gen AI schema. The new `ai.prompt` tests with `messages` arrays correctly assert both `ATTR_GEN_AI_PROMPT` and `ATTR_GEN_AI_INPUT_MESSAGES`, and they keep using `SpanAttributes` constants instead of raw strings, which aligns with the ai-semantic-conventions contract. No issues here from a behavior or maintainability standpoint.
If you want to go further, you could add a small negative test where `promptData.messages` is malformed (e.g., not an array) to assert we don't accidentally emit a broken `gen_ai.input.messages`. Based on learnings, this keeps SDK behavior predictable around partially invalid AI SDK payloads.
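A sketch of the guard such a negative test would exercise. The definition of "malformed" here (not an array, or entries without a string `role`) is an assumption for illustration, not the transformer's actual validation:

```typescript
// Only emit a gen_ai.input.messages payload when promptData.messages is
// actually an array of role-bearing objects; otherwise skip the attribute
// rather than serialize garbage.
function safeInputMessages(promptData: { messages?: unknown }): string | undefined {
  const msgs = promptData.messages;
  if (!Array.isArray(msgs)) return undefined; // malformed: not an array
  const wellFormed = msgs.every(
    (m) =>
      typeof m === "object" &&
      m !== null &&
      typeof (m as { role?: unknown }).role === "string",
  );
  if (!wellFormed) return undefined; // malformed entries: skip
  return JSON.stringify(msgs);
}

console.log(safeInputMessages({ messages: "not-an-array" })); // undefined
console.log(safeInputMessages({ messages: [{ role: "user", content: "hi" }] }));
```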
853-897: Token-normalization tests are good; consider adding ai.usage-only coverage. These suites nicely enforce that `ATTR_GEN_AI_USAGE_INPUT_TOKENS`/`ATTR_GEN_AI_USAGE_OUTPUT_TOKENS` are the canonical fields and that legacy `ai.usage.promptTokens`/`ai.usage.completionTokens` are removed while `LLM_USAGE_TOTAL_TOKENS` is derived from the Gen AI attributes.
One gap: there's no explicit test for the case where only `ai.usage.promptTokens`/`ai.usage.completionTokens` are present and `gen_ai.usage.input_tokens`/`gen_ai.usage.output_tokens` are absent. If we still rely on the transformer to backfill Gen AI usage fields from the AI SDK ones (for older SDKs/instrumentations), adding such tests would guard that behavior from regressions.
Also applies to: 899-943, 945-1004
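If that backfill behavior exists, the missing test would pin down semantics roughly like the following sketch. The key names mirror the ones discussed above, and the precedence of already-present `gen_ai.*` values over legacy `ai.usage.*` ones is an assumption:

```typescript
// Backfill gen_ai.usage.* from legacy ai.usage.* counts when the Gen AI
// fields are absent; existing Gen AI values win. Legacy keys are removed
// either way, matching the cleanup the existing tests already assert.
function backfillUsage(attrs: Record<string, number | undefined>): void {
  if (
    attrs["gen_ai.usage.input_tokens"] === undefined &&
    attrs["ai.usage.promptTokens"] !== undefined
  ) {
    attrs["gen_ai.usage.input_tokens"] = attrs["ai.usage.promptTokens"];
  }
  if (
    attrs["gen_ai.usage.output_tokens"] === undefined &&
    attrs["ai.usage.completionTokens"] !== undefined
  ) {
    attrs["gen_ai.usage.output_tokens"] = attrs["ai.usage.completionTokens"];
  }
  delete attrs["ai.usage.promptTokens"];
  delete attrs["ai.usage.completionTokens"];
}

const legacyOnly: Record<string, number | undefined> = {
  "ai.usage.promptTokens": 10,
  "ai.usage.completionTokens": 5,
};
backfillUsage(legacyOnly);
console.log(legacyOnly["gen_ai.usage.input_tokens"]); // 10
```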
2488-2563: End-to-end "complete transformation" scenario is well-structured. This scenario test now validates a coherent end state for all the new Gen AI attributes, including operation name, request/response model, provider+system, finish reasons, response ID, conversation ID, and association properties, while ensuring all original `ai.*` fields are removed. It also resolves the previous conflict about provider name expectations. Looks good as an integration-style guardrail for future refactors.
If you later expand message normalization, you might extend this test to also assert `ATTR_GEN_AI_INPUT_MESSAGES`/`ATTR_GEN_AI_OUTPUT_MESSAGES` for a truly full-path check.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (32)
packages/instrumentation-anthropic/test/instrumentation.test.ts (6 hunks)
packages/instrumentation-bedrock/src/instrumentation.ts (16 hunks)
packages/instrumentation-bedrock/tests/ai21.test.ts (2 hunks)
packages/instrumentation-bedrock/tests/amazon.test.ts (4 hunks)
packages/instrumentation-bedrock/tests/anthropic.test.ts (2 hunks)
packages/instrumentation-bedrock/tests/cohere.test.ts (2 hunks)
packages/instrumentation-bedrock/tests/meta.test.ts (4 hunks)
packages/instrumentation-cohere/src/instrumentation.ts (11 hunks)
packages/instrumentation-cohere/tests/chat.test.ts (8 hunks)
packages/instrumentation-cohere/tests/generate.test.ts (8 hunks)
packages/instrumentation-cohere/tests/rerank.test.ts (2 hunks)
packages/instrumentation-langchain/src/callback_handler.ts (6 hunks)
packages/instrumentation-langchain/test/instrumentation.test.ts (2 hunks)
packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts (4 hunks)
packages/instrumentation-openai/src/image-wrappers.ts (14 hunks)
packages/instrumentation-openai/src/instrumentation.ts (7 hunks)
packages/instrumentation-openai/test/instrumentation.test.ts (20 hunks)
packages/instrumentation-together/src/instrumentation.ts (7 hunks)
packages/instrumentation-together/test/instrumentation.test.ts (12 hunks)
packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (7 hunks)
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (3 hunks)
packages/instrumentation-vertexai/tests/gemini.test.ts (2 hunks)
packages/instrumentation-vertexai/tests/palm2.test.ts (2 hunks)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (18 hunks)
packages/traceloop-sdk/src/lib/tracing/decorators.ts (1 hunk)
packages/traceloop-sdk/src/lib/tracing/manual.ts (4 hunks)
packages/traceloop-sdk/src/lib/tracing/span-processor.ts (3 hunks)
packages/traceloop-sdk/test/agent_decorator.test.ts (3 hunks)
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (6 hunks)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (6 hunks)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (41 hunks)
packages/traceloop-sdk/test/decorators.test.ts (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/instrumentation-together/test/instrumentation.test.ts
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧰 Additional context used
📓 Path-based instructions (4)
packages/instrumentation-*/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place each provider integration in its own package under packages/instrumentation-[provider]/
Files:
packages/instrumentation-vertexai/tests/palm2.test.ts
packages/instrumentation-together/src/instrumentation.ts
packages/instrumentation-anthropic/test/instrumentation.test.ts
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
packages/instrumentation-cohere/tests/rerank.test.ts
packages/instrumentation-bedrock/tests/amazon.test.ts
packages/instrumentation-bedrock/tests/meta.test.ts
packages/instrumentation-openai/test/instrumentation.test.ts
packages/instrumentation-langchain/test/instrumentation.test.ts
packages/instrumentation-openai/src/instrumentation.ts
packages/instrumentation-openai/src/image-wrappers.ts
packages/instrumentation-bedrock/tests/cohere.test.ts
packages/instrumentation-langchain/src/callback_handler.ts
packages/instrumentation-bedrock/src/instrumentation.ts
packages/instrumentation-cohere/src/instrumentation.ts
packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
packages/instrumentation-bedrock/tests/anthropic.test.ts
packages/instrumentation-cohere/tests/generate.test.ts
packages/instrumentation-vertexai/tests/gemini.test.ts
packages/instrumentation-bedrock/tests/ai21.test.ts
packages/instrumentation-cohere/tests/chat.test.ts
packages/instrumentation-*/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/instrumentation-*/**/*.{ts,tsx}: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Files:
packages/instrumentation-vertexai/tests/palm2.test.ts
packages/instrumentation-together/src/instrumentation.ts
packages/instrumentation-anthropic/test/instrumentation.test.ts
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
packages/instrumentation-cohere/tests/rerank.test.ts
packages/instrumentation-bedrock/tests/amazon.test.ts
packages/instrumentation-bedrock/tests/meta.test.ts
packages/instrumentation-openai/test/instrumentation.test.ts
packages/instrumentation-langchain/test/instrumentation.test.ts
packages/instrumentation-openai/src/instrumentation.ts
packages/instrumentation-openai/src/image-wrappers.ts
packages/instrumentation-bedrock/tests/cohere.test.ts
packages/instrumentation-langchain/src/callback_handler.ts
packages/instrumentation-bedrock/src/instrumentation.ts
packages/instrumentation-cohere/src/instrumentation.ts
packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
packages/instrumentation-bedrock/tests/anthropic.test.ts
packages/instrumentation-cohere/tests/generate.test.ts
packages/instrumentation-vertexai/tests/gemini.test.ts
packages/instrumentation-bedrock/tests/ai21.test.ts
packages/instrumentation-cohere/tests/chat.test.ts
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/instrumentation-vertexai/tests/palm2.test.ts
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
packages/traceloop-sdk/src/lib/tracing/span-processor.ts
packages/instrumentation-together/src/instrumentation.ts
packages/traceloop-sdk/src/lib/tracing/manual.ts
packages/instrumentation-anthropic/test/instrumentation.test.ts
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
packages/instrumentation-cohere/tests/rerank.test.ts
packages/traceloop-sdk/test/agent_decorator.test.ts
packages/instrumentation-bedrock/tests/amazon.test.ts
packages/instrumentation-bedrock/tests/meta.test.ts
packages/instrumentation-openai/test/instrumentation.test.ts
packages/instrumentation-langchain/test/instrumentation.test.ts
packages/instrumentation-openai/src/instrumentation.ts
packages/instrumentation-openai/src/image-wrappers.ts
packages/traceloop-sdk/src/lib/tracing/decorators.ts
packages/instrumentation-bedrock/tests/cohere.test.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/instrumentation-langchain/src/callback_handler.ts
packages/instrumentation-bedrock/src/instrumentation.ts
packages/instrumentation-cohere/src/instrumentation.ts
packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
packages/instrumentation-bedrock/tests/anthropic.test.ts
packages/instrumentation-cohere/tests/generate.test.ts
packages/instrumentation-vertexai/tests/gemini.test.ts
packages/instrumentation-bedrock/tests/ai21.test.ts
packages/traceloop-sdk/test/decorators.test.ts
packages/instrumentation-cohere/tests/chat.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/traceloop-sdk/src/lib/tracing/decorators.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/traceloop-sdk/test/decorators.test.ts
🧠 Learnings (13)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/instrumentation-vertexai/tests/palm2.test.ts, packages/traceloop-sdk/test/ai-sdk-integration.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-together/src/instrumentation.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/instrumentation-bedrock/tests/amazon.test.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/traceloop-sdk/src/lib/tracing/decorators.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/tests/anthropic.test.ts, packages/instrumentation-cohere/tests/generate.test.ts, packages/instrumentation-vertexai/tests/gemini.test.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/traceloop-sdk/test/decorators.test.ts, packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/instrumentation-vertexai/tests/palm2.test.ts, packages/traceloop-sdk/test/ai-sdk-integration.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-together/src/instrumentation.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/instrumentation-bedrock/tests/amazon.test.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/traceloop-sdk/src/lib/tracing/decorators.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/tests/anthropic.test.ts, packages/instrumentation-cohere/tests/generate.test.ts, packages/instrumentation-vertexai/tests/gemini.test.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/traceloop-sdk/test/decorators.test.ts, packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/instrumentation-vertexai/tests/palm2.test.ts, packages/traceloop-sdk/test/ai-sdk-integration.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-together/src/instrumentation.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/instrumentation-bedrock/tests/amazon.test.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/traceloop-sdk/src/lib/tracing/decorators.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/tests/anthropic.test.ts, packages/instrumentation-cohere/tests/generate.test.ts, packages/instrumentation-vertexai/tests/gemini.test.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/traceloop-sdk/test/decorators.test.ts, packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-together/src/instrumentation.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/traceloop-sdk/src/lib/tracing/decorators.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/traceloop-sdk/test/decorators.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-together/src/instrumentation.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/traceloop-sdk/src/lib/tracing/decorators.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/traceloop-sdk/test/decorators.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls
Applied to files:
packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts, packages/instrumentation-bedrock/tests/amazon.test.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-vertexai/tests/gemini.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Applied to files:
packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately
Applied to files:
packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-openai/test/instrumentation.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Applied to files:
packages/instrumentation-anthropic/test/instrumentation.test.ts
📚 Learning: 2025-08-12T13:58:25.583Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/client/dataset/column.ts:41-47
Timestamp: 2025-08-12T13:58:25.583Z
Learning: In packages/traceloop-sdk/src/lib/client/dataset/column.ts, when the user mentioned "its working there is transform func", they were confirming that the response transformer is functioning correctly and converting snake_case API responses to camelCase, which means the Column class should access camelCase properties (createdAt, updatedAt) from the transformed ColumnResponse data.
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (25)
packages/instrumentation-vertexai/tests/palm2.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/traceloop-sdk/src/lib/tracing/manual.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-anthropic/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-cohere/tests/rerank.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/traceloop-sdk/test/agent_decorator.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-bedrock/tests/amazon.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-bedrock/tests/meta.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-langchain/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-openai/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-openai/src/image-wrappers.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/traceloop-sdk/src/lib/tracing/decorators.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-bedrock/tests/cohere.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
transformLLMSpans(660-681)
packages/instrumentation-langchain/src/callback_handler.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-cohere/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-bedrock/tests/anthropic.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-cohere/tests/generate.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-vertexai/tests/gemini.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-bedrock/tests/ai21.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/traceloop-sdk/test/decorators.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
packages/instrumentation-cohere/tests/chat.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(45-111)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and test
Important
Looks good to me! 👍
Reviewed ac489cb in 10 minutes and 48 seconds. Click for details.
- Reviewed 4065 lines of code in 32 files
- Skipped 0 files when reviewing.
- Skipped posting 8 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:12
- Draft comment:
Very comprehensive tests covering transformation of response texts, objects, tool calls, tokens, vendor strings, and metadata. Consider adding additional edge-case tests for unexpected types (e.g. numeric tokens as strings, deeply nested JSON structures) for extra robustness. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:264
- Draft comment:
The tests for prompt message transformation (both from 'ai.prompt.messages' and 'ai.prompt') look good. Consider extracting helper functions if the assertions become repetitive in future tests. - Reason this comment was not posted:
Confidence changes required: 50% <= threshold 50%. None
3. packages/traceloop-sdk/test/decorators.test.ts:40
- Draft comment:
Decorator tests are extensive and clearly verify workflow, agent, and task span attribute propagation. For maintainability, consider creating small helpers to compare common attribute sets to reduce repetition. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/test/decorators.test.ts:180
- Draft comment:
The tests for agent decoration and propagation to child spans are very detailed. Ensure that, as more decorators are added, similar edge cases (e.g., missing metadata, empty inputs) are covered. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/traceloop-sdk/test/ai-sdk-integration.test.ts:90
- Draft comment:
Integration tests verify that spans from multiple providers (OpenAI, Google, Anthropic) are correctly transformed. Consider adding commentary on why certain token attributes are expected (input/output tokens), which may help maintainers. - Reason this comment was not posted:
Confidence changes required: 50% <= threshold 50%. None
6. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2420
- Draft comment:
Tests for vendor/provider transformations are well structured. Verify that the mapping logic (e.g. for 'azure-openai' and 'openai') is future-proofed against provider naming changes. Consider referencing the VENDOR_MAPPING in a comment for maintainability. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
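The vendor normalization that draft comment 6 refers to can be sketched as a simple lookup with a fallback. The mapping entries below are illustrative stand-ins; the real table is the VENDOR_MAPPING in ai-sdk-transformations.ts and its actual keys and values may differ.

```typescript
// Illustrative stand-in for the VENDOR_MAPPING table referenced in
// ai-sdk-transformations.ts; the actual entries may differ.
const VENDOR_MAPPING: Record<string, string> = {
  "openai": "OpenAI",
  "azure-openai": "Azure",
  "anthropic": "Anthropic",
};

// Normalize an AI SDK provider id to the vendor string recorded on spans,
// falling back to the raw provider id when no mapping exists. The fallback
// is what keeps the logic tolerant of future provider naming changes.
function mapVendor(providerId: string): string {
  return VENDOR_MAPPING[providerId] ?? providerId;
}
```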
7. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2510
- Draft comment:
The complete scenario test combines multiple transformations (model, vendor, tokens, metadata). This test is complex and valuable; consider splitting into sub-tests if it grows further or adding inline comments to document each transformation section. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
8. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2460
- Draft comment:
Typographical Suggestion: The string "gpt-4o" appears potentially mistyped. Did you mean "gpt-4"? - Reason this comment was not posted:
Comment was on unchanged code.
Workflow ID: wflow_pWzA5IBV6lzAdb8x
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/instrumentation-together/test/instrumentation.test.ts (1)
137-162: Instrumentation not fully migrated to Gen AI semantic conventions. The instrumentation source code uses the legacy LLM_REQUEST_FUNCTIONS for function/tool calls (lines 243-264 in instrumentation.ts) despite Gen AI equivalents being defined in SemanticAttributes.ts (ATTR_GEN_AI_TOOL_NAME, ATTR_GEN_AI_TOOL_CALL_ARGUMENTS, etc.). Additionally, token usage attributes show inconsistent migration:
- LLM_USAGE_TOTAL_TOKENS is still emitted (line 509)
- But ATTR_GEN_AI_USAGE_COMPLETION_TOKENS and ATTR_GEN_AI_USAGE_PROMPT_TOKENS are also emitted (lines 513, 517)

This directly contradicts the PR objective to "rename and unify Gen AI semantic attributes to the ATTR_GEN_AI_* naming scheme." Update the instrumentation to consistently use the Gen AI attribute names for function/tool calls and clarify the token usage strategy (whether to deprecate LLM_USAGE_TOTAL_TOKENS entirely).
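One way to resolve the token-usage inconsistency flagged above is to emit only the Gen AI token attributes and derive the total, keeping the legacy key behind an explicit opt-in during the deprecation window. This is a sketch, not the PR's implementation; the attribute key strings and the usageAttributes helper are assumptions standing in for the constants exported by packages/ai-semantic-conventions.

```typescript
// Assumed attribute keys, mirroring (not verified against) the constants
// in packages/ai-semantic-conventions/src/SemanticAttributes.ts.
const ATTR_GEN_AI_USAGE_PROMPT_TOKENS = "gen_ai.usage.prompt_tokens";
const ATTR_GEN_AI_USAGE_COMPLETION_TOKENS = "gen_ai.usage.completion_tokens";
const LLM_USAGE_TOTAL_TOKENS = "llm.usage.total_tokens"; // legacy key

interface Usage {
  prompt_tokens?: number;
  completion_tokens?: number;
}

// Build the token-usage span attributes from a provider usage object.
// Gen AI keys are always emitted; the legacy total is only written when
// explicitly requested, and is derived rather than duplicated.
function usageAttributes(
  usage: Usage,
  keepLegacyTotal = false,
): Record<string, number> {
  const attrs: Record<string, number> = {};
  if (usage.prompt_tokens !== undefined) {
    attrs[ATTR_GEN_AI_USAGE_PROMPT_TOKENS] = usage.prompt_tokens;
  }
  if (usage.completion_tokens !== undefined) {
    attrs[ATTR_GEN_AI_USAGE_COMPLETION_TOKENS] = usage.completion_tokens;
  }
  if (keepLegacyTotal) {
    attrs[LLM_USAGE_TOTAL_TOKENS] =
      (usage.prompt_tokens ?? 0) + (usage.completion_tokens ?? 0);
  }
  return attrs;
}
```

Centralizing the mapping this way makes the deprecation a one-line change: flip the default of keepLegacyTotal and every instrumentation follows.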
♻️ Duplicate comments (2)
packages/instrumentation-cohere/src/instrumentation.ts (2)
219-227: Remove duplicate request model assignment (or use it for the response model). attributes[SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL] = model; is executed twice back-to-back, which is redundant and likely an oversight. Consider either:
- Removing the second line, or
- Changing the second line to set ATTR_GEN_AI_RESPONSE_MODEL if you intend to track both request and response models here.
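The second option above can be sketched as follows; the attribute key strings and the modelAttributes helper are illustrative assumptions, not the Cohere instrumentation's actual code.

```typescript
// Assumed attribute keys mirroring the SpanAttributes constants.
const ATTR_GEN_AI_REQUEST_MODEL = "gen_ai.request.model";
const ATTR_GEN_AI_RESPONSE_MODEL = "gen_ai.response.model";

// Instead of assigning the request model twice back-to-back, use the
// second assignment for the model reported in the response, falling back
// to the requested model when the provider does not echo one back.
function modelAttributes(
  requestModel: string,
  responseModel?: string,
): Record<string, string> {
  const attributes: Record<string, string> = {};
  attributes[ATTR_GEN_AI_REQUEST_MODEL] = requestModel;
  attributes[ATTR_GEN_AI_RESPONSE_MODEL] = responseModel ?? requestModel;
  return attributes;
}
```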
244-275: Use .content instead of .user for prompt body fields. These prompt attributes use the suffix .user:
- Completion prompt: ATTR_GEN_AI_PROMPT}.0.user
- Chat last user message: ...${len}.user
- Rerank query: ATTR_GEN_AI_PROMPT}.0.user

Other instrumentations (Bedrock, LangChain, OpenAI) and the GEN_AI conventions use .content for the message body, so this is inconsistent and will surprise consumers. Recommend changing all three to .content, e.g.:
- attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.0.user`] = params.prompt;
+ attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.0.content`] = params.prompt;
- `${SpanAttributes.ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.user`
+ `${SpanAttributes.ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.content`
- attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.0.user`] = params.query;
+ attributes[`${SpanAttributes.ATTR_GEN_AI_PROMPT}.0.content`] = params.query;
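A small shared helper would prevent this kind of suffix drift from recurring: every call site writes messages through one function that only knows the .role/.content shape. This is a sketch under assumed attribute keys, not the repository's actual code.

```typescript
// Assumed key prefix mirroring SpanAttributes.ATTR_GEN_AI_PROMPT.
const ATTR_GEN_AI_PROMPT = "gen_ai.prompt";

// Write one indexed prompt message, always using the `.role` and
// `.content` suffixes so no call site can emit `.user` by accident.
function setPromptMessage(
  attributes: Record<string, string>,
  index: number,
  role: string,
  content: string,
): void {
  attributes[`${ATTR_GEN_AI_PROMPT}.${index}.role`] = role;
  attributes[`${ATTR_GEN_AI_PROMPT}.${index}.content`] = content;
}
```

The completion prompt, chat history tail, and rerank query branches would then all call setPromptMessage instead of building keys inline.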
🧹 Nitpick comments (2)
packages/instrumentation-openai/src/image-wrappers.ts (2)
137-174: Image generation request: switch to GenAI model/prompt attributes looks correct. Using SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL for params.model and ATTR_GEN_AI_PROMPT.0.[content/role] for the single text prompt matches the new GenAI naming and the indexed prompt schema; no functional issues here. As a follow-up, it would be good to promote the gen_ai.request.image.* keys (size/quality/style/count) used in this function into shared semantic-conventions constants so AI attributes aren't hardcoded in multiple places.
As per coding guidelines, these AI attributes should ideally be defined centrally in the semantic-conventions package.
344-347: Image response: completion messages migrated to ATTR_GEN_AI_COMPLETION consistently. All branches now populate the assistant response under SpanAttributes.ATTR_GEN_AI_COMPLETION.0.[content/role], regardless of whether the image is uploaded, fetched and re-uploaded, or referenced by the original URL, which keeps completion structure uniform across scenarios. If you want to reduce duplication later, the repeated JSON envelope + attribute writes could be pushed into a small helper, but that's optional.
Also applies to: 368-373, 376-381, 384-389
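The optional helper suggested above could look like this: the JSON envelope and the two completion attribute writes happen in one place, and the three image branches differ only in which URL they pass in. The key strings and envelope shape here are assumptions sketched from the review comment, not the actual image-wrappers.ts code.

```typescript
// Assumed key prefix mirroring SpanAttributes.ATTR_GEN_AI_COMPLETION.
const ATTR_GEN_AI_COMPLETION = "gen_ai.completion";

// Record the assistant image response once: wrap the image reference in
// the JSON content envelope and set both completion attributes. Callers
// (uploaded / re-uploaded / original-URL branches) only supply the URL.
function setImageCompletion(
  attributes: Record<string, string>,
  imageUrl: string,
): void {
  attributes[`${ATTR_GEN_AI_COMPLETION}.0.role`] = "assistant";
  attributes[`${ATTR_GEN_AI_COMPLETION}.0.content`] = JSON.stringify([
    { type: "image_url", image_url: { url: imageUrl } },
  ]);
}
```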
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (10)
packages/instrumentation-anthropic/src/instrumentation.ts (5 hunks), packages/instrumentation-bedrock/src/instrumentation.ts (16 hunks), packages/instrumentation-cohere/src/instrumentation.ts (11 hunks), packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts (4 hunks), packages/instrumentation-openai/src/image-wrappers.ts (14 hunks), packages/instrumentation-openai/src/instrumentation.ts (7 hunks), packages/instrumentation-together/src/instrumentation.ts (7 hunks), packages/instrumentation-together/test/instrumentation.test.ts (12 hunks), packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (7 hunks), packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/instrumentation-together/src/instrumentation.ts
- packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
🧰 Additional context used
📓 Path-based instructions (3)
packages/instrumentation-*/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place each provider integration in its own package under packages/instrumentation-[provider]/
Files:
packages/instrumentation-together/test/instrumentation.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-anthropic/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/src/instrumentation.ts
packages/instrumentation-*/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/instrumentation-*/**/*.{ts,tsx}: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Files:
packages/instrumentation-together/test/instrumentation.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-anthropic/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/src/instrumentation.ts
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/instrumentation-together/test/instrumentation.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-anthropic/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/src/instrumentation.ts
🧠 Learnings (9)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/instrumentation-together/test/instrumentation.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-anthropic/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
- packages/instrumentation-together/test/instrumentation.test.ts
- packages/instrumentation-cohere/src/instrumentation.ts
- packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
- packages/instrumentation-openai/src/image-wrappers.ts
- packages/instrumentation-anthropic/src/instrumentation.ts
- packages/instrumentation-openai/src/instrumentation.ts
- packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
- packages/instrumentation-bedrock/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
- packages/instrumentation-together/test/instrumentation.test.ts
- packages/instrumentation-cohere/src/instrumentation.ts
- packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
- packages/instrumentation-openai/src/image-wrappers.ts
- packages/instrumentation-anthropic/src/instrumentation.ts
- packages/instrumentation-openai/src/instrumentation.ts
- packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
- packages/instrumentation-bedrock/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls
Applied to files:
- packages/instrumentation-together/test/instrumentation.test.ts
- packages/instrumentation-cohere/src/instrumentation.ts
- packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
- packages/instrumentation-openai/src/image-wrappers.ts
- packages/instrumentation-anthropic/src/instrumentation.ts
- packages/instrumentation-openai/src/instrumentation.ts
- packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
- packages/instrumentation-bedrock/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Applied to files:
- packages/instrumentation-together/test/instrumentation.test.ts
- packages/instrumentation-cohere/src/instrumentation.ts
- packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
- packages/instrumentation-anthropic/src/instrumentation.ts
- packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
- packages/instrumentation-bedrock/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately
Applied to files:
- packages/instrumentation-together/test/instrumentation.test.ts
- packages/instrumentation-cohere/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
- packages/instrumentation-together/test/instrumentation.test.ts
- packages/instrumentation-cohere/src/instrumentation.ts
- packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
- packages/instrumentation-anthropic/src/instrumentation.ts
- packages/instrumentation-openai/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
- packages/instrumentation-cohere/src/instrumentation.ts
- packages/instrumentation-openai/src/image-wrappers.ts
- packages/instrumentation-openai/src/instrumentation.ts
🧬 Code graph analysis (7)
packages/instrumentation-together/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  - `SpanAttributes` (45-111)
packages/instrumentation-cohere/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  - `SpanAttributes` (45-111)
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  - `SpanAttributes` (45-111)
packages/instrumentation-openai/src/image-wrappers.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  - `SpanAttributes` (45-111)
packages/instrumentation-anthropic/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  - `SpanAttributes` (45-111)
packages/instrumentation-openai/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  - `SpanAttributes` (45-111)
packages/instrumentation-bedrock/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  - `SpanAttributes` (45-111)
🔇 Additional comments (13)
packages/instrumentation-together/test/instrumentation.test.ts (1)
34-34: LGTM! The import correctly follows the coding guidelines by importing semantic attribute constants from `@traceloop/ai-semantic-conventions` rather than hardcoding strings.

packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (2)
140-177: Gen AI attribute migration in `_startSpan` is consistent and uses shared constants.

The migration from `LLM_*` to `SpanAttributes.ATTR_GEN_AI_*` for system, model, generation config, and prompt attributes looks internally consistent, and you're correctly using the shared constants from `@traceloop/ai-semantic-conventions` instead of hard-coded strings. Prompt indexing (system at 0, then contents) and role/content handling are preserved and sound.

As per coding guidelines, this aligns instrumentation attributes with the central semantic-conventions module.
232-261: Usage and completion attributes now align with Gen AI semantics without breaking legacy usage.

Updating usage and completion attributes to `SpanAttributes.ATTR_GEN_AI_USAGE_COMPLETION_TOKENS`, `SpanAttributes.ATTR_GEN_AI_USAGE_PROMPT_TOKENS`, and `SpanAttributes.ATTR_GEN_AI_COMPLETION.*` while keeping `LLM_USAGE_TOTAL_TOKENS` for backward compatibility is a clean migration. Token counts and candidate finish reasons/contents are wired through correctly, guarded by `_shouldSendPrompts()`.

packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (1)
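The dual-emission pattern — new OTEL GenAI usage attributes alongside the legacy aggregate — can be sketched as follows. This is an illustrative helper, not the instrumentation's actual API; the attribute keys follow the GenAI semantic conventions.

```typescript
// Illustrative sketch only: `setUsageAttributes` is a hypothetical helper.
interface TokenUsage {
  promptTokens?: number;
  completionTokens?: number;
}

function setUsageAttributes(
  attributes: Record<string, number>,
  usage: TokenUsage,
): Record<string, number> {
  if (typeof usage.promptTokens === "number") {
    attributes["gen_ai.usage.prompt_tokens"] = usage.promptTokens;
  }
  if (typeof usage.completionTokens === "number") {
    attributes["gen_ai.usage.completion_tokens"] = usage.completionTokens;
  }
  // Keep the legacy aggregate attribute for backward compatibility.
  if (
    typeof usage.promptTokens === "number" &&
    typeof usage.completionTokens === "number"
  ) {
    attributes["llm.usage.total_tokens"] =
      usage.promptTokens + usage.completionTokens;
  }
  return attributes;
}
```

Emitting both keys lets dashboards keyed on the legacy total keep working while new consumers read the GenAI attributes.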
152-227: Vertex AI GEN_AI attribute migration looks correct and consistent.

The span start/end logic uses the new `ATTR_GEN_AI_*` attributes for system, model, request params, prompts, completions, and usage tokens while keeping `LLM_TOP_K` / `LLM_USAGE_TOTAL_TOKENS` for legacy fields. The prompt/completion indexing (`.0.role` / `.0.content`) matches the shared conventions and other instrumentations. Based on learnings, this keeps attribute definitions centralized in `@traceloop/ai-semantic-conventions` and preserves token extraction behavior.

Also applies to: 261-357
packages/instrumentation-cohere/src/instrumentation.ts (1)
356-401: Cohere response GEN_AI attributes are wired correctly.

The rerank/chat/generate response helpers now emit `ATTR_GEN_AI_COMPLETION.*` and `ATTR_GEN_AI_USAGE_*` attributes in a way that mirrors other providers (finish_reason, role, content, and token usage). The conditional prompt guarding with `_shouldSendPrompts()` is preserved, and total tokens continue to be reported via `LLM_USAGE_TOTAL_TOKENS`. Based on learnings, this keeps response extraction aligned with the shared semantic conventions.

Also applies to: 403-486, 488-556
packages/instrumentation-bedrock/src/instrumentation.ts (1)
146-183: Bedrock GEN_AI attributes and streaming aggregation look sound.

The Bedrock instrumentation now correctly uses `ATTR_GEN_AI_SYSTEM`, request/response model, request hyperparameters, prompt/completion attributes, and `ATTR_GEN_AI_USAGE_*` tokens across all vendors (ai21/amazon/anthropic/cohere/meta), while retaining `LLM_USAGE_TOTAL_TOKENS` and `LLM_TOP_K`. The streaming reassembly logic was updated to read/write `${SpanAttributes.ATTR_GEN_AI_COMPLETION}.0.content` and still accumulates full streamed content safely under the prompt-gating guard.

Also applies to: 185-291, 293-557
packages/instrumentation-anthropic/src/instrumentation.ts (1)
190-288: Anthropic GEN_AI migration is consistent and preserves behavior.

Anthropic spans now use `ATTR_GEN_AI_SYSTEM`, `ATTR_GEN_AI_REQUEST_*`, and `ATTR_GEN_AI_PROMPT` for inputs, and populate `ATTR_GEN_AI_RESPONSE_MODEL`, `ATTR_GEN_AI_USAGE_PROMPT_TOKENS` / `_COMPLETION_TOKENS`, plus `ATTR_GEN_AI_COMPLETION.*` for finish_reason/role/content. Streaming aggregation feeds the same `_endSpan` path, so semantics stay aligned across streaming and non-streaming calls. Based on learnings, this satisfies the requirement to extract request/response data and token usage using centralized semantic attributes.

Also applies to: 473-538
packages/instrumentation-openai/src/instrumentation.ts (1)
279-402: OpenAI GEN_AI attributes are wired correctly and consistent across chat/completion.

The OpenAI instrumentation now sets `ATTR_GEN_AI_SYSTEM` based on provider detection, uses `ATTR_GEN_AI_REQUEST_*` for model and hyperparameters, and emits prompts under `ATTR_GEN_AI_PROMPT.{idx}.role/content`. On completion, it records `ATTR_GEN_AI_RESPONSE_MODEL`, `ATTR_GEN_AI_USAGE_PROMPT_TOKENS` / `_COMPLETION_TOKENS` plus `LLM_USAGE_TOTAL_TOKENS`, and maps finish_reason/role/content, function calls, and tool calls into `ATTR_GEN_AI_COMPLETION.{idx}.*`. This matches the shared semantic conventions while preserving existing behavior and token enrichment.

Also applies to: 652-744
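The indexed prompt-attribute shape (`gen_ai.prompt.{idx}.role` / `.content`) can be sketched as below; the message type and helper name are illustrative, not the instrumentation's actual code.

```typescript
// Illustrative sketch of mapping chat messages to indexed GenAI prompt attributes.
interface ChatMessage {
  role: string;
  content: string;
}

function promptAttributes(messages: ChatMessage[]): Record<string, string> {
  const attributes: Record<string, string> = {};
  messages.forEach((message, index) => {
    // One attribute pair per message, keyed by its position in the conversation.
    attributes[`gen_ai.prompt.${index}.role`] = message.role;
    attributes[`gen_ai.prompt.${index}.content`] = message.content;
  });
  return attributes;
}
```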
packages/instrumentation-openai/src/image-wrappers.ts (5)
181-200: Image edit request: GenAI model and text prompt attributes are wired correctly.

Mapping the edit `model` to `SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL` and recording the text prompt as `ATTR_GEN_AI_PROMPT.0.[content/role="user"]` is consistent with the generation path and the new GenAI schema. No further changes needed here.
219-223: Image edit: second prompt entry for uploaded image is consistent.

Storing the uploaded image reference as `ATTR_GEN_AI_PROMPT.1.content` (with a multimodal JSON envelope) and `ATTR_GEN_AI_PROMPT.1.role = "user"` cleanly preserves the ordering of text (index 0) then image (index 1) and matches how other multimodal prompts are represented.
240-243: Image variation request: GenAI model and image prompt attributes look good.

Using `SpanAttributes.ATTR_GEN_AI_REQUEST_MODEL` for variations and representing the source image as `ATTR_GEN_AI_PROMPT.0.[content/role="user"]` keeps this operation aligned with the other image endpoints and the GenAI prompt convention. Implementation here looks correct.

Also applies to: 270-274
293-327: Image response: GenAI usage tokens plus total tokens are handled correctly.

Completion usage is now emitted via `SpanAttributes.ATTR_GEN_AI_USAGE_COMPLETION_TOKENS`, with `ATTR_GEN_AI_USAGE_PROMPT_TOKENS` set only when prompt-enrichment is enabled, while `SpanAttributes.LLM_USAGE_TOTAL_TOKENS` still carries the aggregate count. The control flow (including the try/catch fallback) ensures total tokens are always populated when images are returned, and the logic around `estimatedPromptTokens` and `completionTokens` is sound.
413-418: Image spans: system attribute migration to ATTR_GEN_AI_SYSTEM is consistent.

Setting `SpanAttributes.ATTR_GEN_AI_SYSTEM = "OpenAI"` on the generate, edit, and variation spans aligns these wrappers with the new GenAI system attribute while preserving the existing `"gen_ai.request.type"` classifier. This keeps span metadata consistent across all three image operations.

Also applies to: 472-478, 539-544
Important
Looks good to me! 👍
Reviewed 7f9e86c in 10 minutes and 1 second. Click for details.
- Reviewed 463 lines of code in 9 files
- Skipped 0 files when reviewing
- Skipped posting 5 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-cohere/src/instrumentation.ts:220
- Draft comment: Duplicate assignment of the `ATTR_GEN_AI_REQUEST_MODEL` attribute in `_startSpan`. Remove the redundant assignment.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/instrumentation-vertexai/src/vertexai-instrumentation.ts:190
- Draft comment: Consider replacing `new Promise((resolve) => resolve(result))` with `Promise.resolve(result)` for conciseness.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/instrumentation-vertexai/src/vertexai-instrumentation.ts:280
- Draft comment: The deep nested optional chaining used to extract token usage metadata is complex. Consider refactoring for improved readability.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/instrumentation-vertexai/src/vertexai-instrumentation.ts:285
- Draft comment: Type checking with `typeof ... === "number"` for token count fields is correct but could benefit from an explanatory comment.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/instrumentation-together/src/instrumentation.ts:145
- Draft comment: Consider using `Promise.resolve(result)` in the `_wrapPromise` implementation for consistency and clarity.
- Reason this comment was not posted: Confidence changes required: 50% <= threshold 50%. None.
Workflow ID: wflow_bl0TD3JoJMbfmExY
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Important
Looks good to me! 👍
Reviewed 0d82d9e in 3 minutes and 15 seconds. Click for details.
- Reviewed 12 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 draft comment. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-cohere/src/instrumentation.ts:223
- Draft comment: Duplicate assignment of the model attribute removed. Avoid redundant code to prevent confusion and potential side-effects.
- Reason this comment was not posted: Comment looked like it was already resolved.
Workflow ID: wflow_Seqs7aJFqGsEWXJs
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
```ts
// Find matching vendor prefix in mapping
let mappedVendor = null;
if (typeof vendor === "string" && vendor.length > 0) {
  const providerName = vendor.split(".")[0];
```
Actionable comments posted: 0
♻️ Duplicate comments (4)
packages/instrumentation-cohere/tests/generate.test.ts (2)
125-139: Trivial assertions on undefined optional parameters.

The test asserts that the `ATTR_GEN_AI_REQUEST_TOP_P`, `presencePenalty`, and `frequencyPenalty` span attributes equal the corresponding params values, but these params are undefined (not set in the params object at lines 94-99). These assertions pass trivially as `undefined === undefined`, without validating the instrumentation's optional parameter handling.

The test should conditionally assert these attributes only when the corresponding params are defined.
205-211: Same trivial assertions in streaming test.

This streaming test variant has the same issue with undefined optional parameters (`params.p`, `params.presencePenalty`, `params.frequencyPenalty`) being compared to span attributes without guard checks.

packages/instrumentation-cohere/tests/chat.test.ts (2)
133-147: Trivial assertions on undefined optional parameters.

Similar to the issue in `generate.test.ts`, this test asserts that the `ATTR_GEN_AI_REQUEST_TOP_P`, `presencePenalty`, and `frequencyPenalty` span attributes equal the params values, but these params are undefined (not set in the params object at lines 94-106). The assertions pass trivially without validating instrumentation behavior.

Add conditional checks to only assert these attributes when the corresponding params are defined.
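The guarded-assertion pattern suggested for these tests could look like the sketch below; the helper name is hypothetical and the attribute key is only an example.

```typescript
// Hypothetical test helper: only assert an attribute when the request
// actually set the corresponding parameter, so `undefined === undefined`
// can't make the test pass trivially.
function assertOptionalAttr(
  attributes: Record<string, unknown>,
  key: string,
  expected: unknown,
): void {
  if (expected !== undefined) {
    if (attributes[key] !== expected) {
      throw new Error(`expected ${key} to equal ${String(expected)}`);
    }
  } else if (key in attributes) {
    throw new Error(`${key} should not be set when the param is omitted`);
  }
}
```

With this shape, omitting `topP` from the request params makes the test assert the attribute is absent rather than silently comparing two undefineds.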
233-247: Same undefined parameter assertions in streaming test.

This streaming test variant has the same issue with undefined optional parameters being compared to span attributes without proper guards.
🧹 Nitpick comments (11)
packages/traceloop-sdk/src/lib/tracing/span-processor.ts (1)
18-20: Agent name handling via GenAI semantic attribute is correct; consider centralizing the constant.

The updated logic cleanly wires agent names through the new attribute:
- On span start, the active `AGENT_NAME_KEY` is written to the span via the GenAI agent attribute.
- On span end, that same attribute is used to:
  - Cache the agent name per trace ID, and
  - Propagate it to downstream spans on the same trace when they lack an explicit agent name.

This achieves the PR's goal of using the new semantic convention while preserving the existing propagation behavior.

To keep AI/LLM attribute definitions centralized (and allow future changes in one place), consider re-exporting and importing `ATTR_GEN_AI_AGENT_NAME` via `@traceloop/ai-semantic-conventions` instead of the OTEL package directly, in line with the existing pattern for `SpanAttributes`. Based on learnings, this keeps all AI-related attribute wiring going through `ai-semantic-conventions`.

Also applies to: 190-196, 280-291
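The cache-then-propagate behavior described for the span processor can be sketched as below, under assumed names; the real SDK does this inside its span processor against OTEL span objects.

```typescript
// Illustrative sketch: per-trace agent-name caching and propagation.
const AGENT_NAME_ATTR = "gen_ai.agent.name";

interface SpanLike {
  traceId: string;
  attributes: Record<string, string>;
}

const agentNameByTrace = new Map<string, string>();

function onSpanEnd(span: SpanLike): void {
  const name = span.attributes[AGENT_NAME_ATTR];
  if (name !== undefined) {
    // Cache the first agent name seen on this trace...
    if (!agentNameByTrace.has(span.traceId)) {
      agentNameByTrace.set(span.traceId, name);
    }
  } else {
    // ...and propagate it to later spans on the same trace that lack one.
    const cached = agentNameByTrace.get(span.traceId);
    if (cached !== undefined) {
      span.attributes[AGENT_NAME_ATTR] = cached;
    }
  }
}
```

A real implementation would also evict trace entries to avoid unbounded growth.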
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (6)
876-920: Consider asserting cleanup of the deprecated `gen_ai.usage.prompt_tokens` alias.

These tests validate keeping `gen_ai.usage.input_tokens` via `ATTR_GEN_AI_USAGE_INPUT_TOKENS` and deleting `ai.usage.promptTokens`, which matches the desired migration. If the intent is to fully converge on the new OTEL naming, it may be worth explicitly asserting that the legacy `gen_ai.usage.prompt_tokens` alias is also removed when present (similar to how you drop `ai.usage.promptTokens`). For example:

```diff
 it("should handle zero input tokens", () => {
   const attributes = {
     "ai.usage.promptTokens": 0,
     "gen_ai.usage.input_tokens": 0,
     "gen_ai.usage.prompt_tokens": 0,
   };
@@
   transformLLMSpans(attributes);
   assert.strictEqual(
     attributes[ATTR_GEN_AI_USAGE_INPUT_TOKENS],
     0,
   );
-  assert.strictEqual(attributes["ai.usage.promptTokens"], undefined);
+  assert.strictEqual(attributes["ai.usage.promptTokens"], undefined);
+  // Optional: ensure legacy alias is not kept
+  assert.strictEqual(attributes["gen_ai.usage.prompt_tokens"], undefined);
 });
```

This would lock in the cleanup behavior and prevent regressions if the transformer stops dropping the alias.
922-966: Mirror prompt-token alias cleanup for completion tokens.

Same idea as prompt tokens: these tests confirm that `ai.usage.completionTokens` is dropped and `gen_ai.usage.output_tokens` survives via `ATTR_GEN_AI_USAGE_OUTPUT_TOKENS`. If you also want to ensure any legacy `gen_ai.usage.completion_tokens` alias doesn't linger, you could add an assertion in the zero-token case:

```diff
 it("should handle zero output tokens", () => {
   const attributes = {
     "ai.usage.completionTokens": 0,
     "gen_ai.usage.output_tokens": 0,
     "gen_ai.usage.completion_tokens": 0,
   };
@@
   transformLLMSpans(attributes);
   assert.strictEqual(
     attributes[ATTR_GEN_AI_USAGE_OUTPUT_TOKENS],
     0,
   );
-  assert.strictEqual(attributes["ai.usage.completionTokens"], undefined);
+  assert.strictEqual(attributes["ai.usage.completionTokens"], undefined);
+  // Optional: ensure legacy alias is not kept
+  assert.strictEqual(attributes["gen_ai.usage.completion_tokens"], undefined);
 });
```

Not required, but it would more tightly specify the migration behavior.
2050-2150: Operation-name mapping tests are consistent with OTEL GenAI conventions.

Mapping:
- `ai.generateText` / `ai.streamText` / `ai.generateObject` / `ai.streamObject` → `ATTR_GEN_AI_OPERATION_NAME = "chat"`, and
- `ai.toolCall` / `*.tool` → `"execute_tool"`

matches the GenAI semconv's recommended `gen_ai.operation.name` values for chat and tool execution. (opentelemetry.io)

You might optionally add a future test if the AI SDK ever introduces explicit "embeddings" or separate "text_completion" operations, but for the current scope this looks correct and complete.
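The mapping these tests pin down could be sketched as follows; this is an assumed shape for illustration, not the SDK's actual implementation.

```typescript
// Illustrative mapping from Vercel AI SDK span names to gen_ai.operation.name values.
function toOperationName(aiSpanName: string): string | undefined {
  const chatOps = [
    "ai.generateText",
    "ai.streamText",
    "ai.generateObject",
    "ai.streamObject",
  ];
  if (chatOps.includes(aiSpanName)) {
    return "chat";
  }
  if (aiSpanName === "ai.toolCall" || aiSpanName.endsWith(".tool")) {
    return "execute_tool";
  }
  // Unknown span names are left unmapped rather than guessed.
  return undefined;
}
```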
2246-2304: Finish-reason tests look good; consider one more edge case.

You cover scalar string, string-array, multiple allowed values, and "not present" for `ai.response.finishReason` → `ATTR_GEN_AI_RESPONSE_FINISH_REASONS`. That's solid.

If you want to be extra-defensive, you could add a small case for an empty string or `null` value to pin down behavior there, but it's not strictly necessary given current usage.
2390-2442: Conversation-id tests don't assert metadata cleanup / association properties.

The new `transformLLMSpans - conversation id` tests verify `ATTR_GEN_AI_CONVERSATION_ID` derivation from `conversationId` and `sessionId`, including precedence, which is good.

If the intent is to treat `ai.telemetry.metadata.conversationId` consistently with the other metadata keys (i.e., also mirror it into `TRACELOOP_ASSOCIATION_PROPERTIES` and remove the original metadata attribute), the tests currently wouldn't catch regressions there. You could tighten the expectations, for example:

```diff
 it("should transform conversationId from metadata", () => {
   const attributes = {
     "ai.telemetry.metadata.conversationId": "conv_123",
   };
   transformLLMSpans(attributes);
   assert.strictEqual(
     attributes[ATTR_GEN_AI_CONVERSATION_ID],
     "conv_123",
   );
+
+  // Optional: if conversationId should also be an association property
+  // and the raw metadata key removed (like other telemetry metadata):
+  // assert.strictEqual(
+  //   attributes[
+  //     `${SpanAttributes.TRACELOOP_ASSOCIATION_PROPERTIES}.conversationId`
+  //   ],
+  //   "conv_123",
+  // );
+  // assert.strictEqual(
+  //   attributes["ai.telemetry.metadata.conversationId"],
+  //   undefined,
+  // );
 });
```

The same idea could be applied to the "sessionId fallback" and "prefer conversationId over sessionId" cases if that invariance is desired.
2511-2586: End-to-end "complete transformation" test is strong; minor opportunity around conversation metadata.

This scenario ties together operation name, request/response model, provider, finish reasons, response ID, conversation ID, and association properties, which is great for guarding the full `transformLLMSpans` pipeline.

If `ai.telemetry.metadata.conversationId` is also supposed to be turned into an association property (similar to `userId` / `sessionId` in the earlier metadata tests), you might extend the expectations here to cover that too; currently only the `userId` association is asserted:

```diff
 // Check metadata transformation
 assert.strictEqual(
   attributes[`${SpanAttributes.TRACELOOP_ASSOCIATION_PROPERTIES}.userId`],
   "user_789",
 );
+
+// Optional: if conversationId should also appear as an association property
+// assert.strictEqual(
+//   attributes[
+//     `${SpanAttributes.TRACELOOP_ASSOCIATION_PROPERTIES}.conversationId`
+//   ],
+//   "conv_456",
+// );
```

Otherwise, the coverage here looks very good.
packages/instrumentation-together/src/instrumentation.ts (1)
253-278: Duplicate iteration over `params.tools`.

The code iterates over `params.tools` twice (lines 253-263 and 264-278), setting the same attributes. This appears to be redundant code that sets identical values.

Remove the duplicate loop:

```diff
 params.tools?.forEach((func, index) => {
   attributes[
     `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.name`
   ] = func.function?.name;
   attributes[
     `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.description`
   ] = func.function?.description;
   attributes[
     `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.arguments`
   ] = JSON.stringify(func.function?.parameters);
 });
-params.tools?.forEach((tool, index) => {
-  if (!tool.function) {
-    return;
-  }
-
-  attributes[
-    `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.name`
-  ] = tool.function.name;
-  attributes[
-    `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.description`
-  ] = tool.function.description;
-  attributes[
-    `${SpanAttributes.LLM_REQUEST_FUNCTIONS}.${index}.arguments`
-  ] = JSON.stringify(tool.function.parameters);
-});
```

packages/instrumentation-anthropic/test/instrumentation.test.ts (1)
257-354: Thinking / beta messages coverage is well-shaped; consider centralizing keys later.

The beta "thinking" test correctly uses the new GenAI request/response attributes and verifies that completion content includes both `thinking` and `text` blocks, plus token usage. The literal `"llm.request.thinking.*"` keys are fine for now, but once these stabilize it may be worth adding them to `SpanAttributes` so both instrumentation and tests can reference shared constants.

packages/instrumentation-bedrock/tests/anthropic.test.ts (1)
191-287: Streamed Bedrock Anthropic test: attributes OK, but async pattern should await assertions.

The streaming test's assertions use the right `ATTR_GEN_AI_*` keys and map Bedrock invocation metrics to GenAI usage tokens plus `LLM_USAGE_TOTAL_TOKENS` correctly. However, the inner async IIFE isn't awaited, so failures may surface as late unhandled rejections instead of proper test failures. Inlining the `for await` loop or awaiting the IIFE would make this deterministic.

```diff
-      const response = await bedrockRuntimeClient.send(command);
-      if (response.body) {
-        (async function () {
-          let content = "";
-          let result;
-          for await (const value of response.body!) {
+      const response = await bedrockRuntimeClient.send(command);
+      if (response.body) {
+        let content = "";
+        let result;
+        for await (const value of response.body!) {
           const jsonString = new TextDecoder().decode(value.chunk?.bytes);
           const parsedResponse = JSON.parse(jsonString);
@@
-          if ("amazon-bedrock-invocationMetrics" in result) {
+          if ("amazon-bedrock-invocationMetrics" in result) {
             assert.strictEqual(
               attributes[ATTR_GEN_AI_USAGE_PROMPT_TOKENS],
               result["amazon-bedrock-invocationMetrics"]["inputTokenCount"],
             );
@@
             );
-          }
-        })();
+          }
+        }
       }
```
292-335: Optional: consider also setting a GenAI response-model attribute for images.

If the OTEL GenAI conventions treat image models the same as text models, you may want to also populate `ATTR_GEN_AI_RESPONSE_MODEL` (e.g., from `params.model`) in image spans, similar to the text/chat instrumentation. This would make querying by response model consistent across modalities, but it's not required for correctness.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (62)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
- packages/ai-semantic-conventions/tsconfig.json (1 hunks)
- packages/instrumentation-anthropic/package.json (1 hunks)
- packages/instrumentation-anthropic/src/instrumentation.ts (6 hunks)
- packages/instrumentation-anthropic/test/instrumentation.test.ts (7 hunks)
- packages/instrumentation-anthropic/tsconfig.json (1 hunks)
- packages/instrumentation-bedrock/package.json (1 hunks)
- packages/instrumentation-bedrock/src/instrumentation.ts (17 hunks)
- packages/instrumentation-bedrock/tests/ai21.test.ts (3 hunks)
- packages/instrumentation-bedrock/tests/amazon.test.ts (5 hunks)
- packages/instrumentation-bedrock/tests/anthropic.test.ts (3 hunks)
- packages/instrumentation-bedrock/tests/cohere.test.ts (3 hunks)
- packages/instrumentation-bedrock/tests/meta.test.ts (5 hunks)
- packages/instrumentation-bedrock/tsconfig.json (1 hunks)
- packages/instrumentation-chromadb/package.json (1 hunks)
- packages/instrumentation-chromadb/tsconfig.json (1 hunks)
- packages/instrumentation-cohere/package.json (1 hunks)
- packages/instrumentation-cohere/src/instrumentation.ts (12 hunks)
- packages/instrumentation-cohere/tests/chat.test.ts (9 hunks)
- packages/instrumentation-cohere/tests/generate.test.ts (9 hunks)
- packages/instrumentation-cohere/tests/rerank.test.ts (3 hunks)
- packages/instrumentation-cohere/tsconfig.json (1 hunks)
- packages/instrumentation-langchain/package.json (1 hunks)
- packages/instrumentation-langchain/src/callback_handler.ts (7 hunks)
- packages/instrumentation-langchain/test/instrumentation.test.ts (3 hunks)
- packages/instrumentation-langchain/tsconfig.json (1 hunks)
- packages/instrumentation-llamaindex/package.json (1 hunks)
- packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts (5 hunks)
- packages/instrumentation-llamaindex/tsconfig.json (1 hunks)
- packages/instrumentation-mcp/package.json (1 hunks)
- packages/instrumentation-mcp/tsconfig.json (1 hunks)
- packages/instrumentation-openai/package.json (1 hunks)
- packages/instrumentation-openai/src/image-wrappers.ts (15 hunks)
- packages/instrumentation-openai/src/instrumentation.ts (8 hunks)
- packages/instrumentation-openai/test/instrumentation.test.ts (21 hunks)
- packages/instrumentation-openai/tsconfig.json (1 hunks)
- packages/instrumentation-pinecone/package.json (1 hunks)
- packages/instrumentation-pinecone/tsconfig.json (1 hunks)
- packages/instrumentation-qdrant/tsconfig.json (1 hunks)
- packages/instrumentation-together/package.json (1 hunks)
- packages/instrumentation-together/src/instrumentation.ts (8 hunks)
- packages/instrumentation-together/test/instrumentation.test.ts (13 hunks)
- packages/instrumentation-together/tsconfig.json (1 hunks)
- packages/instrumentation-vertexai/package.json (1 hunks)
- packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (8 hunks)
- packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (4 hunks)
- packages/instrumentation-vertexai/tests/gemini.test.ts (2 hunks)
- packages/instrumentation-vertexai/tests/palm2.test.ts (2 hunks)
- packages/instrumentation-vertexai/tsconfig.json (1 hunks)
- packages/traceloop-sdk/package.json (1 hunks)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (19 hunks)
- packages/traceloop-sdk/src/lib/tracing/decorators.ts (2 hunks)
- packages/traceloop-sdk/src/lib/tracing/manual.ts (5 hunks)
- packages/traceloop-sdk/src/lib/tracing/span-processor.ts (4 hunks)
- packages/traceloop-sdk/src/lib/tracing/tracing.ts (1 hunks)
- packages/traceloop-sdk/test/agent_decorator.test.ts (4 hunks)
- packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (7 hunks)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts (7 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (42 hunks)
- packages/traceloop-sdk/test/decorators.test.ts (8 hunks)
- packages/traceloop-sdk/tsconfig.json (1 hunks)
- tsconfig.base.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (14)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts
- packages/traceloop-sdk/src/lib/tracing/decorators.ts
- packages/traceloop-sdk/src/lib/tracing/manual.ts
- packages/instrumentation-vertexai/tests/gemini.test.ts
- packages/instrumentation-openai/test/instrumentation.test.ts
- packages/instrumentation-vertexai/tests/palm2.test.ts
- packages/traceloop-sdk/test/decorators.test.ts
- packages/instrumentation-langchain/src/callback_handler.ts
- packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
- packages/instrumentation-bedrock/src/instrumentation.ts
- packages/instrumentation-together/test/instrumentation.test.ts
- packages/instrumentation-bedrock/tests/meta.test.ts
- packages/instrumentation-bedrock/tests/amazon.test.ts
- packages/instrumentation-cohere/src/instrumentation.ts
🧰 Additional context used
📓 Path-based instructions (7)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
- packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
- packages/instrumentation-anthropic/test/instrumentation.test.ts
- packages/instrumentation-anthropic/src/instrumentation.ts
- packages/traceloop-sdk/test/agent_decorator.test.ts
- packages/instrumentation-langchain/test/instrumentation.test.ts
- packages/instrumentation-cohere/tests/rerank.test.ts
- packages/traceloop-sdk/src/lib/tracing/span-processor.ts
- packages/instrumentation-together/src/instrumentation.ts
- packages/instrumentation-bedrock/tests/ai21.test.ts
- packages/instrumentation-vertexai/src/vertexai-instrumentation.ts
- packages/instrumentation-bedrock/tests/anthropic.test.ts
- packages/instrumentation-cohere/tests/generate.test.ts
- packages/traceloop-sdk/src/lib/tracing/tracing.ts
- packages/instrumentation-bedrock/tests/cohere.test.ts
- packages/instrumentation-openai/src/image-wrappers.ts
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/instrumentation-openai/src/instrumentation.ts
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
- packages/instrumentation-cohere/tests/chat.test.ts
- packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/traceloop-sdk/src/lib/tracing/tracing.ts, packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/instrumentation-*/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place each provider integration in its own package under packages/instrumentation-[provider]/
Files:
packages/instrumentation-vertexai/package.json, packages/instrumentation-llamaindex/package.json, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-anthropic/package.json, packages/instrumentation-cohere/tsconfig.json, packages/instrumentation-cohere/package.json, packages/instrumentation-chromadb/tsconfig.json, packages/instrumentation-qdrant/tsconfig.json, packages/instrumentation-pinecone/package.json, packages/instrumentation-bedrock/tsconfig.json, packages/instrumentation-chromadb/package.json, packages/instrumentation-together/tsconfig.json, packages/instrumentation-llamaindex/tsconfig.json, packages/instrumentation-openai/package.json, packages/instrumentation-anthropic/src/instrumentation.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-langchain/package.json, packages/instrumentation-openai/tsconfig.json, packages/instrumentation-vertexai/tsconfig.json, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-together/package.json, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-mcp/tsconfig.json, packages/instrumentation-bedrock/tests/anthropic.test.ts, packages/instrumentation-mcp/package.json, packages/instrumentation-anthropic/tsconfig.json, packages/instrumentation-cohere/tests/generate.test.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/instrumentation-bedrock/package.json, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/tsconfig.json, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-cohere/tests/chat.test.ts, packages/instrumentation-pinecone/tsconfig.json, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
packages/*/package.json
📄 CodeRabbit inference engine (CLAUDE.md)
Use workspace:* for intra-repo package dependencies in package.json
Files:
packages/instrumentation-vertexai/package.json, packages/instrumentation-llamaindex/package.json, packages/instrumentation-anthropic/package.json, packages/instrumentation-cohere/package.json, packages/instrumentation-pinecone/package.json, packages/instrumentation-chromadb/package.json, packages/instrumentation-openai/package.json, packages/instrumentation-langchain/package.json, packages/traceloop-sdk/package.json, packages/instrumentation-together/package.json, packages/instrumentation-mcp/package.json, packages/instrumentation-bedrock/package.json
packages/instrumentation-*/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/instrumentation-*/**/*.{ts,tsx}: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Files:
packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-anthropic/src/instrumentation.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-bedrock/tests/anthropic.test.ts, packages/instrumentation-cohere/tests/generate.test.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-cohere/tests/chat.test.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
packages/traceloop-sdk/package.json
📄 CodeRabbit inference engine (CLAUDE.md)
When adding a new instrumentation package, add it to the main SDK dependencies
Files:
packages/traceloop-sdk/package.json
packages/ai-semantic-conventions/src/SemanticAttributes.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
🧠 Learnings (17)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/instrumentation-vertexai/package.json, packages/instrumentation-llamaindex/package.json, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-anthropic/package.json, packages/instrumentation-cohere/tsconfig.json, packages/instrumentation-cohere/package.json, packages/instrumentation-chromadb/tsconfig.json, packages/instrumentation-qdrant/tsconfig.json, packages/instrumentation-pinecone/package.json, packages/traceloop-sdk/tsconfig.json, packages/instrumentation-bedrock/tsconfig.json, packages/instrumentation-chromadb/package.json, packages/instrumentation-together/tsconfig.json, packages/instrumentation-llamaindex/tsconfig.json, packages/instrumentation-openai/package.json, packages/instrumentation-anthropic/src/instrumentation.ts, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-langchain/package.json, packages/instrumentation-openai/tsconfig.json, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-vertexai/tsconfig.json, packages/traceloop-sdk/package.json, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-together/package.json, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-mcp/tsconfig.json, packages/instrumentation-bedrock/tests/anthropic.test.ts, packages/instrumentation-mcp/package.json, packages/ai-semantic-conventions/tsconfig.json, tsconfig.base.json, packages/instrumentation-anthropic/tsconfig.json, packages/instrumentation-cohere/tests/generate.test.ts, packages/traceloop-sdk/src/lib/tracing/tracing.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/instrumentation-bedrock/package.json, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/tsconfig.json, packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/instrumentation-cohere/tests/chat.test.ts, packages/ai-semantic-conventions/src/SemanticAttributes.ts, packages/instrumentation-pinecone/tsconfig.json, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/instrumentation-llamaindex/package.json, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-openai/package.json, packages/instrumentation-anthropic/src/instrumentation.ts, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-bedrock/tests/anthropic.test.ts, packages/ai-semantic-conventions/tsconfig.json, packages/instrumentation-cohere/tests/generate.test.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/instrumentation-cohere/tests/chat.test.ts, packages/ai-semantic-conventions/src/SemanticAttributes.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/instrumentation-vertexai/package.json, packages/instrumentation-llamaindex/package.json, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-anthropic/package.json, packages/instrumentation-cohere/package.json, packages/instrumentation-chromadb/tsconfig.json, packages/instrumentation-pinecone/package.json, packages/instrumentation-chromadb/package.json, packages/instrumentation-together/tsconfig.json, packages/instrumentation-llamaindex/tsconfig.json, packages/instrumentation-openai/package.json, packages/instrumentation-anthropic/src/instrumentation.ts, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-langchain/package.json, packages/instrumentation-openai/tsconfig.json, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-vertexai/tsconfig.json, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-together/package.json, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-bedrock/tests/anthropic.test.ts, packages/instrumentation-mcp/package.json, packages/instrumentation-anthropic/tsconfig.json, packages/instrumentation-cohere/tests/generate.test.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/instrumentation-bedrock/package.json, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/tsconfig.json, packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/instrumentation-cohere/tests/chat.test.ts, packages/ai-semantic-conventions/src/SemanticAttributes.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/instrumentation-vertexai/package.json, packages/instrumentation-llamaindex/package.json, packages/instrumentation-anthropic/package.json, packages/instrumentation-cohere/package.json, packages/instrumentation-pinecone/package.json, packages/traceloop-sdk/tsconfig.json, packages/instrumentation-chromadb/package.json, packages/instrumentation-openai/package.json, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-langchain/package.json, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/traceloop-sdk/package.json, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-together/package.json, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-mcp/package.json, packages/instrumentation-cohere/tests/generate.test.ts, packages/traceloop-sdk/src/lib/tracing/tracing.ts, packages/instrumentation-bedrock/package.json, packages/instrumentation-openai/src/image-wrappers.ts, packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/instrumentation-cohere/tests/chat.test.ts, packages/ai-semantic-conventions/src/SemanticAttributes.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts, packages/traceloop-sdk/tsconfig.json, packages/instrumentation-llamaindex/tsconfig.json, packages/traceloop-sdk/test/agent_decorator.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/traceloop-sdk/package.json, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-cohere/tests/generate.test.ts, packages/traceloop-sdk/src/lib/tracing/tracing.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Applied to files:
packages/instrumentation-vertexai/package.json, packages/instrumentation-llamaindex/package.json, packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-anthropic/package.json, packages/instrumentation-cohere/tsconfig.json, packages/instrumentation-cohere/package.json, packages/instrumentation-chromadb/tsconfig.json, packages/instrumentation-qdrant/tsconfig.json, packages/instrumentation-pinecone/package.json, packages/instrumentation-bedrock/tsconfig.json, packages/instrumentation-chromadb/package.json, packages/instrumentation-together/tsconfig.json, packages/instrumentation-llamaindex/tsconfig.json, packages/instrumentation-openai/package.json, packages/instrumentation-anthropic/src/instrumentation.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-langchain/package.json, packages/instrumentation-openai/tsconfig.json, packages/instrumentation-vertexai/tsconfig.json, packages/traceloop-sdk/package.json, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-together/package.json, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-mcp/tsconfig.json, packages/instrumentation-mcp/package.json, packages/instrumentation-anthropic/tsconfig.json, packages/instrumentation-bedrock/package.json, packages/instrumentation-langchain/tsconfig.json, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-pinecone/tsconfig.json, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/package.json : When adding a new instrumentation package, add it to the main SDK dependencies
Applied to files:
packages/instrumentation-vertexai/package.json, packages/instrumentation-llamaindex/package.json, packages/instrumentation-anthropic/package.json, packages/instrumentation-cohere/package.json, packages/instrumentation-pinecone/package.json, packages/traceloop-sdk/tsconfig.json, packages/instrumentation-chromadb/package.json, packages/instrumentation-openai/package.json, packages/instrumentation-langchain/package.json, packages/traceloop-sdk/package.json, packages/instrumentation-together/package.json, packages/instrumentation-mcp/package.json, packages/instrumentation-bedrock/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls
Applied to files:
packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-anthropic/package.json, packages/instrumentation-cohere/package.json, packages/instrumentation-pinecone/package.json, packages/instrumentation-chromadb/package.json, packages/instrumentation-together/tsconfig.json, packages/instrumentation-openai/package.json, packages/instrumentation-anthropic/src/instrumentation.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-langchain/package.json, packages/instrumentation-openai/tsconfig.json, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-together/package.json, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-bedrock/tests/anthropic.test.ts, packages/instrumentation-mcp/package.json, packages/instrumentation-anthropic/tsconfig.json, packages/instrumentation-cohere/tests/generate.test.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/instrumentation-bedrock/package.json, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/tsconfig.json, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately
Applied to files:
packages/instrumentation-anthropic/test/instrumentation.test.ts, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-openai/tsconfig.json, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-anthropic/tsconfig.json, packages/instrumentation-cohere/tests/generate.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Applied to files:
packages/instrumentation-cohere/tsconfig.json, packages/instrumentation-chromadb/tsconfig.json, packages/instrumentation-bedrock/tsconfig.json, packages/instrumentation-together/tsconfig.json, packages/instrumentation-llamaindex/tsconfig.json, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/instrumentation-openai/tsconfig.json, packages/instrumentation-mcp/tsconfig.json, packages/instrumentation-langchain/tsconfig.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/src/lib/node-server-sdk.ts : Register new instrumentation packages during SDK initialization
Applied to files:
packages/traceloop-sdk/tsconfig.json, packages/instrumentation-langchain/test/instrumentation.test.ts, packages/traceloop-sdk/package.json
📚 Learning: 2025-07-28T14:30:22.017Z
Learnt from: galkleinman
Repo: traceloop/openllmetry-js PR: 620
File: packages/instrumentation-llamaindex/package.json:43-45
Timestamp: 2025-07-28T14:30:22.017Z
Learning: Lerna automatically converts `workspace:*` dependencies to actual published versions during the `lerna publish` command, so `workspace:*` in package.json files is safe and expected in Lerna-managed monorepos.
Applied to files:
packages/traceloop-sdk/package.json
📚 Learning: 2025-08-10T06:42:06.412Z
Learnt from: nirga
Repo: traceloop/openllmetry-js PR: 623
File: packages/traceloop-sdk/src/lib/images/image-uploader.ts:71-78
Timestamp: 2025-08-10T06:42:06.412Z
Learning: In the Traceloop SDK's ImageUploader class (packages/traceloop-sdk/src/lib/images/image-uploader.ts), the Authorization header with bearer token IS required when uploading image data to the URL returned by the Traceloop API's `/v2/traces/{traceId}/spans/{spanId}/images` endpoint. This is not a pre-signed URL pattern but a Traceloop-specific API endpoint that maintains the same authentication requirements.
Applied to files:
packages/instrumentation-openai/src/image-wrappers.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
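The snake_case-to-camelCase conversion described in this learning can be sketched as a small recursive transformer. This is a hypothetical sketch only; the real `transformApiResponse` in `response-transformer.ts` may handle more cases (dates, nested arrays of primitives, etc.):

```typescript
// Hypothetical sketch of a snake_case -> camelCase response transformer.
type Json = string | number | boolean | null | Json[] | { [key: string]: Json };

// "dataset_id" -> "datasetId", "created_at" -> "createdAt"
function toCamelCase(key: string): string {
  return key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());
}

function transformApiResponseSketch(value: Json): Json {
  if (Array.isArray(value)) {
    return value.map(transformApiResponseSketch);
  }
  if (value !== null && typeof value === "object") {
    const out: { [key: string]: Json } = {};
    for (const [k, v] of Object.entries(value)) {
      out[toCamelCase(k)] = transformApiResponseSketch(v);
    }
    return out;
  }
  return value; // primitives pass through unchanged
}

console.log(
  JSON.stringify(
    transformApiResponseSketch({
      dataset_id: "ds1",
      created_at: "2025-01-01",
      columns: [{ dataset_slug: "s" }],
    }),
  ),
);
```

This is why SDK-facing interfaces and tests use `datasetId`, `createdAt`, and `updatedAt` even though the raw API responds in snake_case.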
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:58:25.583Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/client/dataset/column.ts:41-47
Timestamp: 2025-08-12T13:58:25.583Z
Learning: In packages/traceloop-sdk/src/lib/client/dataset/column.ts, when the user mentioned "its working there is transform func", they were confirming that the response transformer is functioning correctly and converting snake_case API responses to camelCase, which means the Column class should access camelCase properties (createdAt, updatedAt) from the transformed ColumnResponse data.
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (10)
packages/instrumentation-anthropic/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (18-59)
packages/traceloop-sdk/test/agent_decorator.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (18-59)
packages/instrumentation-langchain/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (18-59)
packages/instrumentation-together/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (18-59)
packages/instrumentation-bedrock/tests/ai21.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (18-59)
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (18-59)
packages/instrumentation-bedrock/tests/anthropic.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (18-59)
packages/instrumentation-cohere/tests/generate.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (18-59)
packages/instrumentation-openai/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (18-59)
packages/instrumentation-cohere/tests/chat.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (18-59)
Important
Looks good to me! 👍
Reviewed 93009b1 in 9 minutes and 20 seconds. Click for details.
- Reviewed 6432 lines of code in 48 files
- Skipped 15 files when reviewing
- Skipped posting 4 draft comments. View those below.
1. tsconfig.base.json:1
- Draft comment:
The tsconfig.base.json has a well-configured set of compiler options for strict TypeScript development. The strict settings (e.g., 'strict', 'strictNullChecks', 'noImplicitReturns') ensure good type safety, and 'resolveJsonModule' enables importing JSON modules. The inclusion of 'moduleResolution': 'node' and 'esModuleInterop': true are also best practices. No issues found.
- Reason this comment was not posted: Confidence changes required: 0% <= threshold 50%
2. packages/instrumentation-bedrock/src/instrumentation.ts:390
- Draft comment:
There appears to be a typographical error in the comment on line 390. The phrase "The format is removing when we are setting span attribute" might be intended to read "The format is removed when setting the span attribute" or similar. Please review and adjust for clarity.
- Reason this comment was not posted:
Comment was on unchanged code.
3. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:28
- Draft comment:
Typo: It seems `ASSOCATION_PROPERTIES_KEY` is misspelled. Consider renaming it to `ASSOCIATION_PROPERTIES_KEY`.
- Reason this comment was not posted:
Comment was on unchanged code.
4. packages/traceloop-sdk/test/decorators.test.ts:578
- Draft comment:
Typo suggestion: The test string "Tell me a joke about Typescript" might need to be "Tell me a joke about TypeScript" with a capital 'S' for consistency.
- Reason this comment was not posted:
Comment was on unchanged code.
Workflow ID: wflow_TvZWWutMajl8O8P5
Important
Looks good to me! 👍
Reviewed c331f0b in 5 minutes and 31 seconds. Click for details.
- Reviewed 12 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 0 draft comments.
Workflow ID: wflow_SJHzPlVB79x9yD4s
Important
Looks good to me! 👍
Reviewed d2921fc in 2 minutes and 39 seconds. Click for details.
- Reviewed 252 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 6 draft comments. View those below.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:11
- Draft comment:
Added import for ATTR_GEN_AI_PROVIDER_NAME; ensure this attribute exists in the dependency and matches the new semantic conventions.
- Reason this comment was not posted: Comment did not seem useful (useful = 0% <= threshold 50%). The comment is asking the author to ensure that an attribute exists in a dependency and matches new semantic conventions. This falls under asking the author to ensure something is correct, which is against the rules.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:97
- Draft comment:
Reformatted the role assignment in transformResponseText into a single line for better readability; no functional change observed.
- Reason this comment was not posted: Comment did not seem useful (useful = 0% <= threshold 50%). This comment is purely informative, as it only describes a change made for readability without suggesting any action or raising any concerns. It doesn't provide any actionable feedback or ask for confirmation on any specific aspect of the code.
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:300
- Draft comment:
Consolidated multi-line role assignment in transformPrompts; the change improves clarity without affecting functionality.
- Reason this comment was not posted: Comment did not seem useful (useful = 0% <= threshold 50%). This comment is purely informative, as it praises the change without providing any actionable feedback or suggestions. It doesn't align with the rules for useful comments, which should either provide a suggestion, ask for clarification on specific code changes, or point out potential issues.
4. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:480
- Draft comment:
Introduced providerName extraction and assignment to ATTR_GEN_AI_PROVIDER_NAME in transformVendor. Verify that vendor strings starting with a dot are handled as expected.
- Reason this comment was not posted: Comment did not seem useful (useful = 0% <= threshold 50%). The comment is asking the PR author to verify that vendor strings starting with a dot are handled as expected. This falls under the category of asking the author to ensure behavior is intended, which is against the rules. Therefore, this comment should not be approved.
5. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:525
- Draft comment:
  Reformatted the finish reason assignment using a ternary operator for improved clarity; this is purely a formatting change.
- Reason this comment was not posted:
  Comment did not seem useful. Confidence is `useful = 0%` <= threshold `50%`. This comment is purely informative, as it only describes a formatting change without suggesting any improvements or identifying potential issues. It doesn't provide any actionable feedback or specific suggestions for the PR author.
6. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:677
- Draft comment:
  Overall, the updates consolidate legacy mappings into normalized GenAI attributes per the new OpenTelemetry semantic conventions. The reformatting across transformation functions improves readability without impacting functionality.
- Reason this comment was not posted:
  Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_7baJY2DJjBMg6IcZ
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
328-356: Consider extracting common message processing logic. This block duplicates the message-processing logic from lines 282-323 (handling `AI_PROMPT_MESSAGES`). Both iterate over messages, call `processMessageContent`, set `ATTR_GEN_AI_PROMPT` attributes, and build `inputMessages`. Consider extracting a shared helper function to reduce duplication and improve maintainability.
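A shared helper along the lines suggested could look like this (hypothetical sketch; the signature and the attribute key format are assumptions based on the names mentioned in the review):

```typescript
// Hypothetical shared helper for the duplicated message-processing logic
// flagged above. The names (processMessageContent, inputMessages) follow
// the review text; everything else is an illustrative assumption.
type Message = { role: string; content: unknown };

function applyPromptMessages(
  attrs: Record<string, unknown>,
  messages: Message[],
  processMessageContent: (content: unknown) => string,
): Message[] {
  const inputMessages: Message[] = [];
  messages.forEach((msg, i) => {
    const content = processMessageContent(msg.content);
    // Set the indexed gen_ai.prompt.* attributes once, for both call sites.
    attrs[`gen_ai.prompt.${i}.role`] = msg.role;
    attrs[`gen_ai.prompt.${i}.content`] = content;
    inputMessages.push({ role: msg.role, content });
  });
  return inputMessages;
}
```

Both call sites could then delegate to this helper instead of repeating the loop.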
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (18 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and test
🔇 Additional comments (7)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (7)
6-26: LGTM! Proper use of OpenTelemetry GenAI semantic conventions. The imports correctly use the standardized `ATTR_GEN_AI_*` constants from `@opentelemetry/semantic-conventions/incubating`, aligning with the PR's goal to adopt OpenTelemetry semantic conventions.
380-402: LGTM! Token transforms correctly preserve canonical attributes. The fix from previous reviews is properly applied. Both functions now:
- Check if the canonical `GEN_AI_USAGE_*` attributes don't already exist
- Copy from the legacy `ai.usage.*` keys if present
- Delete only the legacy keys (not the canonical ones)

This ensures `calculateTotalTokens` can read the preserved canonical attributes.
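The preserve-canonical / delete-legacy pattern described here can be sketched as follows (a minimal sketch over flat attribute records; `transformTokensSketch` and the mapping table are hypothetical names, with the attribute keys taken from the review text):

```typescript
// Sketch of the token-transform pattern: copy legacy ai.usage.* values into
// the canonical gen_ai.usage.* keys only when the canonical key is absent,
// then delete the legacy key (never the canonical one).
type Attrs = Record<string, unknown>;

const LEGACY_TO_CANONICAL: Array<[string, string]> = [
  ["ai.usage.promptTokens", "gen_ai.usage.input_tokens"],
  ["ai.usage.completionTokens", "gen_ai.usage.output_tokens"],
];

function transformTokensSketch(attrs: Attrs): Attrs {
  for (const [legacy, canonical] of LEGACY_TO_CANONICAL) {
    if (legacy in attrs) {
      if (!(canonical in attrs)) {
        attrs[canonical] = attrs[legacy];
      }
      delete attrs[legacy]; // only the legacy key is removed
    }
  }
  return attrs;
}
```

The key property the review checks for is that an already-present canonical value is never overwritten.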
463-488: LGTM! Vendor transformation correctly handles provider name extraction. The function properly:
- Extracts the provider name prefix (before the first dot) for compound identifiers like `"openai.chat"`
- Falls back to the full vendor string if no dot exists
- Maps known prefixes to standardized vendor names via `VENDOR_MAPPING`
- Sets both `ATTR_GEN_AI_SYSTEM` (mapped vendor) and `ATTR_GEN_AI_PROVIDER_NAME` (extracted prefix)
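The extraction and mapping steps above can be sketched like this (the `VENDOR_MAPPING` entries are illustrative placeholders, not the SDK's actual table):

```typescript
// Illustrative mapping table; the real VENDOR_MAPPING lives in the SDK.
const VENDOR_MAPPING: Record<string, string> = {
  openai: "OpenAI",
  anthropic: "Anthropic",
};

function extractProviderName(vendor: string): string {
  // Take the prefix before the first dot ("openai.chat" -> "openai"),
  // falling back to the full string when no dot exists (or a leading dot).
  const dot = vendor.indexOf(".");
  return dot > 0 ? vendor.slice(0, dot) : vendor;
}

function mapVendor(vendor: string): string {
  const prefix = extractProviderName(vendor);
  return VENDOR_MAPPING[prefix] ?? prefix;
}
```

Note the `dot > 0` guard: a vendor string starting with a dot falls through to the full-string fallback, which is the edge case the earlier draft comment asked about.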
490-532: LGTM! New transformer functions are well-structured. The added transformers properly normalize AI SDK attributes to OpenTelemetry GenAI semantic conventions:
- `transformOperationName`: Correctly derives the operation type ("chat" vs "execute_tool") from span name patterns
- `transformModelId`: Clean rename of `ai.model.id` → `ATTR_GEN_AI_REQUEST_MODEL`
- `transformFinishReason`: Properly normalizes to array format
- `transformConversationId`: Appropriate fallback from `conversationId` to `sessionId`
- `transformResponseMetadata`: Straightforward response model/ID mapping

Also applies to: 557-581
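The array normalization behind `transformFinishReason` can be sketched as (assumed shape; the real transformer also writes the value onto `gen_ai.response.finish_reasons`):

```typescript
// gen_ai.response.finish_reasons is an array-valued attribute, so a single
// string finish reason is wrapped into a one-element array.
function normalizeFinishReasons(
  value: string | string[] | undefined,
): string[] | undefined {
  if (value === undefined) return undefined;
  return Array.isArray(value) ? value : [value];
}
```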
660-681: LGTM! Orchestrator properly integrates all transformations. The `transformLLMSpans` function correctly:
- Invokes the new transformers (operation name, model ID, finish reason, response metadata, conversation ID, tool call attributes) in the appropriate sequence
- Maintains existing transformations for responses, prompts, tools, tokens, and provider metadata
- Coordinates the complete migration to GenAI semantic conventions
235-279: Verify migration of `LLM_REQUEST_FUNCTIONS` to GenAI semantic conventions. This function still uses `SpanAttributes.LLM_REQUEST_FUNCTIONS` while the PR migrates to `GEN_AI_*` semantic conventions. Confirm whether a GenAI equivalent exists (e.g., `ATTR_GEN_AI_REQUEST_TOOLS` or similar) and should be used here, or if this legacy attribute is intentionally retained for backward compatibility.
453-461: Verify GenAI semantic convention for total tokens attribute. This function reads from `ATTR_GEN_AI_USAGE_INPUT_TOKENS` and `ATTR_GEN_AI_USAGE_OUTPUT_TOKENS` but writes to `SpanAttributes.LLM_USAGE_TOTAL_TOKENS`. Check if a GenAI equivalent (e.g., `ATTR_GEN_AI_USAGE_TOTAL_TOKENS`) exists in `@traceloop/ai-semantic-conventions` and should be used for consistency with the migration to GenAI semantic conventions.
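For reference, a total-token calculation that reads the canonical GenAI keys could look like this (a sketch, not the SDK's implementation; the function name and the tolerant `Number()` coercion are assumptions):

```typescript
// Read the canonical GenAI input/output token attributes and derive the
// total. Number() coercion also covers string-valued token attributes.
function calculateTotalTokensSketch(
  attrs: Record<string, unknown>,
): number | undefined {
  const input = Number(attrs["gen_ai.usage.input_tokens"]);
  const output = Number(attrs["gen_ai.usage.output_tokens"]);
  if (Number.isNaN(input) || Number.isNaN(output)) return undefined;
  return input + output;
}
```

Whether the sum lands on `llm.usage.total_tokens` or a `gen_ai.usage.total_tokens` equivalent is exactly the open question raised above.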
Important
Looks good to me! 👍
Reviewed 5890edb in 2 minutes and 45 seconds. Click for details.
- Reviewed `3131` lines of code in `33` files
- Skipped `0` files when reviewing.
- Skipped posting `8` draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/decorators.test.ts:50
- Draft comment:
The test suite for decorators is very comprehensive. Consider adding a comment near the shared initialization (line 76) to explain the purpose of using shared initialization with Polly – it helps readers understand the testing context. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:35
- Draft comment:
The transformation tests are very detailed. It might be useful to add a couple of edge-case tests that cover scenarios where input attributes are empty objects or unexpected types, to further enforce robustness of the transformation function. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/test/ai-sdk-integration.test.ts:64
- Draft comment:
Good use of environment variable setup for replay mode. Consider adding inline comments on why certain API keys are set to dummy values for clarity. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts:60
- Draft comment:
Tests for agent integration and association properties are thorough. A minor suggestion: document expected structure of association properties in a comment for future maintainers. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:904
- Draft comment:
The total tokens calculation tests handle both numeric and string token values. It could be beneficial to include a test for negative or non-numeric string values (if applicable), to ensure robustness in type conversion. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
6. packages/traceloop-sdk/test/decorators.test.ts:288
- Draft comment:
The decorator tests check for proper span attribute propagation. It might help to add a brief comment on the purpose of the ‘withWorkflow’ vs. ‘withAgent’ tests, so new contributors understand the distinction. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
7. packages/instrumentation-bedrock/src/instrumentation.ts:375
- Draft comment:
Typographical note: The comment 'The format is removing when we are setting span attribute' may be clearer if reworded. Consider changing it to 'The format is being removed when setting the span attribute'. - Reason this comment was not posted:
Comment was on unchanged code.
8. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:2271
- Draft comment:
Typo alert: The string "gpt-4o" may be a typo. Was it meant to be "gpt-4"? - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_s2mtQDIcV5NUexGx
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
packages/instrumentation-cohere/src/instrumentation.ts (1)
412-445: Namespace mixing for token attributes: this is intentional but inconsistent. Token usage attributes currently split across `gen_ai.usage.*` (prompt/completion) and `llm.usage.total_tokens`. Tests confirm this is expected behavior, but it conflicts with the intent to consolidate under GenAI semantic conventions. The proposed fix is valid: import `ATTR_GEN_AI_USAGE_TOTAL_TOKENS` from `@opentelemetry/semantic-conventions/incubating` (available in v1.38.0) instead of using `SpanAttributes.LLM_USAGE_TOTAL_TOKENS`.
This pattern affects all instrumentation packages (anthropic, openai, cohere, bedrock, etc.), so consider a systemic fix: add `GEN_AI_USAGE_TOTAL_TOKENS = "gen_ai.usage.total_tokens"` to `packages/ai-semantic-conventions/src/SemanticAttributes.ts` and update all instrumentations to import and use it consistently. Alternatively, standardize all token attributes to the `llm.usage.*` namespace if that is the intended stable convention.
Also applies to: lines 486–515 (and the same pattern in other instrumentations).
packages/instrumentation-openai/src/instrumentation.ts (1)
685-723: Record tool_calls with correct GenAI semconv structure. The OpenTelemetry GenAI semantic conventions specify that tool call attributes must use `gen_ai.completion.${index}.tool_calls.${toolIndex}.function.name` and `.function.arguments` (nested under `function`), not keys directly under `tool_calls.${toolIndex}`. Additionally, tool call `id` and `type` attributes should be recorded at `gen_ai.completion.${index}.tool_calls.${toolIndex}.id` and `.type`.
Update to match the spec:
- Nest `name` and `arguments` under `.function`
- Add `.id` and `.type` from `toolCall.id` and `toolCall.type`
packages/instrumentation-openai/src/image-wrappers.ts (4)
292-334: `calculateImageGenerationTokens()` docs mention `gpt-image-1`, but the code never handles it. Right now any `model === "gpt-image-1"` falls into the default fallback (1056), contradicting the comment table and producing wrong "completion tokens".

```diff
 function calculateImageGenerationTokens(
   params: any,
   imageCount: number,
 ): number {
   const size = params?.size || "1024x1024";
   const model = params?.model || "dall-e-2";
   const quality = params?.quality || "standard";
@@
-  } else {
-    // Default fallback for unknown models
-    tokensPerImage = 1056;
-  }
+  } else if (model === "gpt-image-1") {
+    // gpt-image-1 costs depend on quality and size (per doc table in the comment above)
+    const gptImage1Costs: Record<string, Record<string, number>> = {
+      low: { "1024x1024": 272, "1024x1536": 408, "1536x1024": 400 },
+      medium: { "1024x1024": 1056, "1024x1536": 1584, "1536x1024": 1568 },
+      high: { "1024x1024": 4160, "1024x1536": 6240, "1536x1024": 6208 },
+    };
+    tokensPerImage =
+      gptImage1Costs[quality]?.[size] || gptImage1Costs["medium"]["1024x1024"];
+  } else {
+    // Default fallback for unknown models
+    tokensPerImage = 1056;
+  }
```
539-587: `wrapImageVariation`: sync branch never sets request attributes (missing prompt/image metadata). Even if "probably always Promise", this is a real functional gap if the sync branch triggers.

```diff
 } else {
   try {
+    // If sync path is reachable, at least capture request attributes.
+    // Note: this is async today due to uploadCallback usage; if you keep a
+    // sync path, consider a sync-only variant.
+    void setImageVariationRequestAttributes(span, params, uploadCallback).catch(
+      (e) =>
+        console.error("Error setting image variation request attributes:", e),
+    );
     return response;
   } catch (error) {
     span.recordException(error as Error);
     throw error;
   } finally {
     span.end();
   }
 }
```
472-525: Remove the unreachable else branch in `wrapImageEdit`: the sync path creates a race condition that, while currently unreachable (the OpenAI SDK always returns a Promise), should not exist as dead code. OpenAI's image methods (`images.generate`, `images.edit`, `images.createVariation`) always return a Promise; they are never synchronous. The else branch at lines 516–525 is dead code that would end the span before `setRequestAttributesPromise` completes if somehow reached. The Promise path itself properly awaits the promise before ending (lines 494, 511), so the current code is safe in practice. However, the unreachable else branch should be removed to avoid confusion and eliminate the latent race condition.
151-174: Define and import image attribute constants from @traceloop/ai-semantic-conventions. The hardcoded strings `gen_ai.request.image.size`, `gen_ai.request.image.quality`, `gen_ai.request.image.style`, and `gen_ai.request.image.count` violate the repo guideline: "Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings."
Add these constants to `packages/ai-semantic-conventions/src/SemanticAttributes.ts`:
- `GEN_AI_REQUEST_IMAGE_SIZE`
- `GEN_AI_REQUEST_IMAGE_QUALITY`
- `GEN_AI_REQUEST_IMAGE_STYLE`
- `GEN_AI_REQUEST_IMAGE_COUNT`

Then import and use them in `image-wrappers.ts` (lines 156, 160, 164, 168) instead of hardcoded string keys.
packages/instrumentation-bedrock/src/instrumentation.ts (1)
229-251: Replace incorrect constant names: `ATTR_GEN_AI_USAGE_PROMPT_TOKENS` and `ATTR_GEN_AI_USAGE_COMPLETION_TOKENS` do not exist in @opentelemetry/semantic-conventions v1.38.0. These constants are not exported by `@opentelemetry/semantic-conventions/incubating` v1.38.0. The correct constant names are:
- `ATTR_GEN_AI_USAGE_INPUT_TOKENS` (for prompt/input tokens)
- `ATTR_GEN_AI_USAGE_OUTPUT_TOKENS` (for completion/output tokens)

Update the import statement (lines 46–47) and all usages at lines 231, 237, 445–446, 513–514, 528–529.
packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (1)
171-227: Avoid setting span attributes to `undefined` (invalid OTel attribute value). Line 179–182 (`model` can be `undefined`) and the numeric parameter reads (Line 185–201) can also resolve to `undefined` depending on payload shape. Guard before assigning.

```diff
-const model = params.endpoint.split("/").pop();
-attributes[ATTR_GEN_AI_REQUEST_MODEL] = model;
-attributes[ATTR_GEN_AI_RESPONSE_MODEL] = model;
+const model = params.endpoint.split("/").pop();
+if (model) {
+  attributes[ATTR_GEN_AI_REQUEST_MODEL] = model;
+  attributes[ATTR_GEN_AI_RESPONSE_MODEL] = model;
+}
-if (params?.parameters.structValue?.fields?.maxOutputTokens.numberValue) {
+const maxOutputTokens =
+  params?.parameters.structValue?.fields?.maxOutputTokens.numberValue;
+if (typeof maxOutputTokens === "number") {
   attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] =
-    params?.parameters.structValue?.fields?.maxOutputTokens.numberValue;
+    maxOutputTokens;
 }
```
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)
156-169: Guard `generationConfig` fields before setting attrs (avoid `undefined` `AttributeValue`). On Line 161–166, `max_output_tokens` / `temperature` / `top_p` can be absent. Prefer `typeof === "number"` checks (and only set when defined).
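The suggested guard can be captured in a tiny helper (a sketch, not the instrumentation's actual code; the attribute keys in the usage example are illustrative):

```typescript
// OTel attribute values must not be undefined, so only defined numbers
// are written onto the attribute record.
function setIfNumber(
  attrs: Record<string, unknown>,
  key: string,
  value: unknown,
): void {
  if (typeof value === "number") {
    attrs[key] = value;
  }
}
```

Each of `max_output_tokens`, `temperature`, and `top_p` would then go through one `setIfNumber` call instead of an unconditional assignment.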
♻️ Duplicate comments (2)
packages/instrumentation-cohere/src/instrumentation.ts (1)
252-278: Prompt message body key should be `.content`, not `.user` (consistency + likely consumer expectations). You already use `.content` for chat history entries, but the completion/chat final user message + rerank query still use `.user`. This matches prior review feedback on this file.
```diff
 if (type === "completion" && "prompt" in params) {
   attributes[`${ATTR_GEN_AI_PROMPT}.0.role`] = "user";
-  attributes[`${ATTR_GEN_AI_PROMPT}.0.user`] = params.prompt;
+  attributes[`${ATTR_GEN_AI_PROMPT}.0.content`] = params.prompt;
 } else if (type === "chat" && "message" in params) {
   params.chatHistory?.forEach((msg, index) => {
     attributes[`${ATTR_GEN_AI_PROMPT}.${index}.role`] = msg.role;
     if (msg.role !== "TOOL") {
       attributes[`${ATTR_GEN_AI_PROMPT}.${index}.content`] = msg.message;
     }
   });
   attributes[
     `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.role`
   ] = "user";
   attributes[
-    `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.user`
+    `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.content`
   ] = params.message;
 } else if (type === "rerank" && "query" in params) {
   attributes[`${ATTR_GEN_AI_PROMPT}.0.role`] = "user";
-  attributes[`${ATTR_GEN_AI_PROMPT}.0.user`] = params.query;
+  attributes[`${ATTR_GEN_AI_PROMPT}.0.content`] = params.query;
   params.documents.forEach((doc, index) => {
     attributes[`documents.${index}.index`] =
       typeof doc === "string" ? doc : doc.text;
   });
 }
```
packages/instrumentation-vertexai/tests/palm2.test.ts (1)
17-28: Fix missing imports for `ATTR_GEN_AI_*` and avoid hardcoded `llm.*` attribute keys. As written, `ATTR_GEN_AI_SYSTEM` / `ATTR_GEN_AI_REQUEST_MODEL` / etc. are undefined in this file, and `"llm.request.type"` / `"llm.top_k"` should use constants (per repo guidelines and the earlier review).

```diff
 import { context } from "@opentelemetry/api";
 import { AsyncHooksContextManager } from "@opentelemetry/context-async-hooks";
 import { AIPlatformInstrumentation } from "../src/aiplatform-instrumentation";
 import * as assert from "assert";
+import { SpanAttributes } from "@traceloop/ai-semantic-conventions";
+import {
+  ATTR_GEN_AI_REQUEST_MODEL,
+  ATTR_GEN_AI_REQUEST_TOP_P,
+  ATTR_GEN_AI_RESPONSE_MODEL,
+  ATTR_GEN_AI_SYSTEM,
+} from "@opentelemetry/semantic-conventions/incubating";
 import {
   BasicTracerProvider,
   InMemorySpanExporter,
   SimpleSpanProcessor,
 } from "@opentelemetry/sdk-trace-base";
```

```diff
-  assert.strictEqual(attributes["llm.request.type"], "completion");
+  assert.strictEqual(attributes[SpanAttributes.LLM_REQUEST_TYPE], "completion");
 ...
-  assert.strictEqual(attributes["llm.top_k"], parameter.topK);
+  assert.strictEqual(attributes[SpanAttributes.LLM_TOP_K], parameter.topK);
```

(Apply the same replacements in the second test block.)
Also applies to: 95-103, 168-178
🧹 Nitpick comments (20)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (5)
197-411: Prompt messages parsing/flattening tests are thorough (object content, arrays, mixed content, escaping, invalid JSON).
One small gotcha: several cases don't assert removal of `ai.prompt.messages` after transform (some do); if deletion is part of the contract, consider making that consistent across cases.
824-862: Token tests correctly assert "delete legacy AI SDK keys, keep normalized `input_tokens`". Consider adding one explicit fallback test for the case where only `ai.usage.promptTokens` exists (no `gen_ai.usage.input_tokens`) to document whether you want to (a) drop it, or (b) map it into `input_tokens`.
864-902: Completion tokens: same comment as prompt tokens; tests are consistent with "keep `output_tokens`".
1594-1869: Telemetry metadata → association properties tests are valuable, but one expectation is lossy ("[object Object]").
If feasible, prefer JSON-stringifying object metadata values in the transformer and update the test accordingly; current behavior makes debugging/analytics harder.
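A non-lossy serializer along those lines might look like this (a sketch; the function name is hypothetical):

```typescript
// JSON-stringify object metadata values instead of letting String()
// collapse them into "[object Object]"; primitives pass through as strings.
function serializeMetadataValue(value: unknown): string {
  return typeof value === "object" && value !== null
    ? JSON.stringify(value)
    : String(value);
}
```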
2292-2351: “Complete transformation” scenario is a good integration-style assertion set for the new attribute suite.
Minor: if tokens are intentionally only computed from `gen_ai.usage.{input,output}_tokens`, consider adding them here (or explicitly assert they remain unset) to make the scenario's token expectations unambiguous.
packages/instrumentation-openai/src/instrumentation.ts (2)
347-398: Function/tool metadata is still on `llm.request.functions.*`; decide if you want it migrated/normalized too. Right now prompts/completions move to `gen_ai.*`, but `params.functions` / `params.tools` still emit `SpanAttributes.LLM_REQUEST_FUNCTIONS.*`. If this is intentional (no GenAI equivalent), consider at least documenting it in-code; otherwise, map to the new convention and/or define new constants in `packages/ai-semantic-conventions/src/SemanticAttributes.ts`. Based on learnings/coding guidelines, prefer centralizing attribute keys in `@traceloop/ai-semantic-conventions` where possible.
667-681: Emit GenAI “total tokens” too (you currently only emit prompt/completion tokens under GenAI)
If `ATTR_GEN_AI_USAGE_TOTAL_TOKENS` exists for your semconv version, set it alongside the legacy `SpanAttributes.LLM_USAGE_TOTAL_TOKENS` for a smoother migration.

```diff
 import {
   ATTR_GEN_AI_COMPLETION,
   ATTR_GEN_AI_PROMPT,
@@
   ATTR_GEN_AI_USAGE_COMPLETION_TOKENS,
   ATTR_GEN_AI_USAGE_PROMPT_TOKENS,
+  ATTR_GEN_AI_USAGE_TOTAL_TOKENS,
 } from "@opentelemetry/semantic-conventions/incubating";
@@
 if (result.usage) {
   span.setAttribute(
     SpanAttributes.LLM_USAGE_TOTAL_TOKENS,
     result.usage?.total_tokens,
   );
+  span.setAttribute(
+    ATTR_GEN_AI_USAGE_TOTAL_TOKENS,
+    result.usage?.total_tokens,
+  );
```
packages/instrumentation-openai/src/image-wrappers.ts (2)
350-390: `fetch(firstImage.url)` has no `response.ok` check + no timeout; it can hang or upload garbage. At minimum, check `response.ok` before reading `arrayBuffer()`. Consider an `AbortController` timeout since this is instrumentation code on the request path.

```diff
-      const response = await fetch(firstImage.url);
+      const response = await fetch(firstImage.url);
+      if (!response.ok) {
+        throw new Error(`Failed to fetch image: ${response.status} ${response.statusText}`);
+      }
       const arrayBuffer = await response.arrayBuffer();
```
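Pulled together, the hardened fetch might look like this standalone sketch (the function name and 5-second default are assumptions, not the instrumentation's actual API):

```typescript
// Fetch image bytes with an ok-status check and an AbortController-based
// timeout, so instrumentation code on the request path cannot hang.
async function fetchImageBytes(
  url: string,
  timeoutMs = 5000,
): Promise<ArrayBuffer> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, { signal: controller.signal });
    if (!response.ok) {
      throw new Error(
        `Failed to fetch image: ${response.status} ${response.statusText}`,
      );
    }
    return await response.arrayBuffer();
  } finally {
    clearTimeout(timer);
  }
}
```

The `finally` block clears the timer on both success and failure, so the abort never fires after the request completes.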
190-231: The prompt structure violates OpenTelemetry GenAI semantic conventions for multi-modal input. Per the OpenTelemetry GenAI specification, multi-modal user input (text + image) must be represented as multiple parts within a single message, not as separate message entries. The current code uses `.0` for text and `.1` for image, each with their own role assignment, which treats them as two separate messages. This deviates from the semantic convention, which expects:
- A single message at index `.0` with `role="user"`
- Multiple parts within that message (one text part, one image part)

Consolidate the text prompt and image into a single `.0` entry with a multi-part content array that includes both the text and image components, following the parts-based structure defined in the OpenTelemetry GenAI input messages schema.
packages/instrumentation-bedrock/src/instrumentation.ts (1)
37-48: Guard `ATTR_GEN_AI_RESPONSE_MODEL` before parsing vendor/model in `_endSpan` (prevents setting an empty/incorrect response model when span attributes are incomplete).

```diff
-  if (ATTR_GEN_AI_SYSTEM in attributes) {
-    const modelId = attributes[ATTR_GEN_AI_RESPONSE_MODEL] as string;
+  if (ATTR_GEN_AI_SYSTEM in attributes) {
+    const modelId = attributes[ATTR_GEN_AI_RESPONSE_MODEL];
+    if (typeof modelId !== "string" || modelId.length === 0) {
+      // best-effort: still end span, but skip vendor/model normalization
+      span.setStatus({ code: SpanStatusCode.OK });
+      span.end();
+      return;
+    }
     const { modelVendor, model } = this._extractVendorAndModel(modelId);
```
Also applies to: 171-176, 213-218
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
17-32: Consider adding provenance for the custom `GEN_AI_USAGE_*` keys (and keep this list intentionally small).
packages/instrumentation-openai/test/instrumentation.test.ts (2)
38-45: Consider re-exporting GenAI constants from `@traceloop/ai-semantic-conventions` to avoid direct `incubating` coupling in instrumentation packages/tests. This repo's guideline prefers importing semantic attribute constants from `@traceloop/ai-semantic-conventions`; depending directly on `@opentelemetry/semantic-conventions/incubating` can introduce churn when the incubating entrypoint changes. Based on learnings/coding guidelines, consider re-exporting/pinning these keys centrally and importing from there.
672-686: Avoid hardcoded `"gen_ai.request.*"` strings when constants exist (or define missing ones in `SpanAttributes`). The image assertions still use `"gen_ai.request.type"`, `"gen_ai.request.image.size"`, and `"gen_ai.request.image.count"` as raw strings; consider using constants (or adding them to `packages/ai-semantic-conventions/src/SemanticAttributes.ts`) for consistency.
Also applies to: 682-686
packages/instrumentation-vertexai/tests/gemini.test.ts (1)
91-98: Optional: migrate the remaining `"gen_ai.prompt.*"` / `"gen_ai.completion.*"` string keys to constants for consistency. Not required, but it'd match the direction taken in the other instrumentation tests.
Also applies to: 145-155
packages/instrumentation-cohere/tests/rerank.test.ts (2)
28-33: Incubating constants import: ensure workspace pins a semantic-conventions version that exports these.
Given `@opentelemetry/semantic-conventions/incubating` is an external API surface, make sure the monorepo dependency and TS `moduleResolution` settings support this entrypoint.
116-120: Drop duplicate request model assertion (same check twice).
Minor cleanup: lines 116-120 and 133-135 assert the same `ATTR_GEN_AI_REQUEST_MODEL`.
packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (2)
35-46: Consider re-exporting `ATTR_GEN_AI_*` from `@traceloop/ai-semantic-conventions` to avoid OTel-incubating imports scattered across instrumentations. This repo's guideline prefers importing semantic attribute constants from `@traceloop/ai-semantic-conventions`; if the intent is to standardize on OTel's incubating constants, a small re-export layer would keep call sites consistent and reduce future churn. As per coding guidelines, ...

```diff
-} from "@opentelemetry/semantic-conventions/incubating";
+} from "@traceloop/ai-semantic-conventions";
```
284-367: Normalize request/response model consistently (short name vs full resource) or document why they differ. On Line 286–287 you set `ATTR_GEN_AI_RESPONSE_MODEL` from `result[0].model`, which may not match the request-side `model` derived from the endpoint (Line 179–182). Consider parsing the response model similarly (or storing both in different attrs if that's the intent).
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)
35-46: Same constants-surface concern as the other instrumentations (prefer a single import surface). If the repo is standardizing on OTel incubating attrs, consider re-exporting these from `@traceloop/ai-semantic-conventions` and importing from there for consistency. As per coding guidelines, ...
packages/instrumentation-bedrock/tests/meta.test.ts (1)
145-165: Drop duplicate ATTR_GEN_AI_REQUEST_MODEL assertions to reduce noise.
You assert attributes[ATTR_GEN_AI_REQUEST_MODEL] twice (Line 151 and Line 163; similarly Line 224 and Line 239). Consider removing the duplicates.
Also applies to: 219-244
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (33)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
- packages/instrumentation-anthropic/src/instrumentation.ts (6 hunks)
- packages/instrumentation-anthropic/test/instrumentation.test.ts (7 hunks)
- packages/instrumentation-bedrock/src/instrumentation.ts (16 hunks)
- packages/instrumentation-bedrock/tests/ai21.test.ts (3 hunks)
- packages/instrumentation-bedrock/tests/amazon.test.ts (5 hunks)
- packages/instrumentation-bedrock/tests/anthropic.test.ts (3 hunks)
- packages/instrumentation-bedrock/tests/cohere.test.ts (3 hunks)
- packages/instrumentation-bedrock/tests/meta.test.ts (5 hunks)
- packages/instrumentation-cohere/src/instrumentation.ts (12 hunks)
- packages/instrumentation-cohere/tests/chat.test.ts (9 hunks)
- packages/instrumentation-cohere/tests/generate.test.ts (9 hunks)
- packages/instrumentation-cohere/tests/rerank.test.ts (3 hunks)
- packages/instrumentation-langchain/src/callback_handler.ts (7 hunks)
- packages/instrumentation-langchain/test/instrumentation.test.ts (3 hunks)
- packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts (5 hunks)
- packages/instrumentation-openai/src/image-wrappers.ts (15 hunks)
- packages/instrumentation-openai/src/instrumentation.ts (8 hunks)
- packages/instrumentation-openai/test/instrumentation.test.ts (21 hunks)
- packages/instrumentation-together/src/instrumentation.ts (8 hunks)
- packages/instrumentation-together/test/instrumentation.test.ts (13 hunks)
- packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts (8 hunks)
- packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (4 hunks)
- packages/instrumentation-vertexai/tests/gemini.test.ts (2 hunks)
- packages/instrumentation-vertexai/tests/palm2.test.ts (2 hunks)
- packages/traceloop-sdk/src/lib/tracing/decorators.ts (2 hunks)
- packages/traceloop-sdk/src/lib/tracing/manual.ts (5 hunks)
- packages/traceloop-sdk/src/lib/tracing/span-processor.ts (4 hunks)
- packages/traceloop-sdk/test/agent_decorator.test.ts (4 hunks)
- packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts (7 hunks)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts (7 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (42 hunks)
- packages/traceloop-sdk/test/decorators.test.ts (8 hunks)
🚧 Files skipped from review as they are similar to previous changes (11)
- packages/instrumentation-anthropic/test/instrumentation.test.ts
- packages/instrumentation-langchain/test/instrumentation.test.ts
- packages/instrumentation-llamaindex/src/custom-llm-instrumentation.ts
- packages/traceloop-sdk/test/decorators.test.ts
- packages/traceloop-sdk/src/lib/tracing/decorators.ts
- packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts
- packages/instrumentation-anthropic/src/instrumentation.ts
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts
- packages/instrumentation-together/test/instrumentation.test.ts
- packages/instrumentation-bedrock/tests/anthropic.test.ts
- packages/instrumentation-cohere/tests/generate.test.ts
🧰 Additional context used
📓 Path-based instructions (5)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/agent_decorator.test.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/instrumentation-vertexai/tests/palm2.test.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/tests/amazon.test.ts, packages/instrumentation-vertexai/tests/gemini.test.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/instrumentation-cohere/tests/chat.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/agent_decorator.test.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts
packages/instrumentation-*/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place each provider integration in its own package under packages/instrumentation-[provider]/
Files:
packages/instrumentation-vertexai/tests/palm2.test.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/tests/amazon.test.ts, packages/instrumentation-vertexai/tests/gemini.test.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/instrumentation-cohere/tests/chat.test.ts
packages/instrumentation-*/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/instrumentation-*/**/*.{ts,tsx}: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Files:
packages/instrumentation-vertexai/tests/palm2.test.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/tests/amazon.test.ts, packages/instrumentation-vertexai/tests/gemini.test.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/instrumentation-cohere/tests/chat.test.ts
packages/ai-semantic-conventions/src/SemanticAttributes.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
🧠 Learnings (15)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/test/agent_decorator.test.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/instrumentation-vertexai/tests/palm2.test.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/tests/amazon.test.ts, packages/instrumentation-vertexai/tests/gemini.test.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/ai-semantic-conventions/src/SemanticAttributes.ts, packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/traceloop-sdk/test/agent_decorator.test.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/ai-semantic-conventions/src/SemanticAttributes.ts, packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/agent_decorator.test.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/instrumentation-vertexai/tests/palm2.test.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/tests/amazon.test.ts, packages/instrumentation-vertexai/tests/gemini.test.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/ai-semantic-conventions/src/SemanticAttributes.ts, packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/agent_decorator.test.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/instrumentation-vertexai/tests/palm2.test.ts, packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/tests/amazon.test.ts, packages/instrumentation-vertexai/tests/gemini.test.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-bedrock/tests/cohere.test.ts, packages/ai-semantic-conventions/src/SemanticAttributes.ts, packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/agent_decorator.test.ts, packages/traceloop-sdk/src/lib/tracing/manual.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-cohere/tests/rerank.test.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-langchain/src/callback_handler.ts, packages/traceloop-sdk/test/ai-sdk-transformations.test.ts, packages/traceloop-sdk/src/lib/tracing/span-processor.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-cohere/tests/chat.test.ts
📚 Learning: 2025-08-13T08:00:42.668Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/interfaces/dataset.interface.ts:27-27
Timestamp: 2025-08-13T08:00:42.668Z
Learning: In packages/traceloop-sdk/src/lib/interfaces/dataset.interface.ts, when the user said "there is no date" in response to a comment about removing "date" from column type unions, they may have been referring to local changes not yet reflected in the PR branch, as the code still contained "date" in the union types at lines 27 and 43.
Applied to files:
packages/traceloop-sdk/src/lib/tracing/manual.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls
Applied to files:
packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-bedrock/tests/amazon.test.ts, packages/instrumentation-vertexai/tests/gemini.test.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-openai/src/image-wrappers.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-cohere/src/instrumentation.ts, packages/instrumentation-bedrock/tests/cohere.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Applied to files:
packages/instrumentation-vertexai/src/aiplatform-instrumentation.ts, packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-vertexai/src/vertexai-instrumentation.ts, packages/instrumentation-openai/test/instrumentation.test.ts, packages/instrumentation-bedrock/src/instrumentation.ts, packages/instrumentation-openai/src/instrumentation.ts, packages/instrumentation-bedrock/tests/meta.test.ts, packages/instrumentation-cohere/src/instrumentation.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately
Applied to files:
packages/instrumentation-together/src/instrumentation.ts, packages/instrumentation-bedrock/tests/ai21.test.ts, packages/instrumentation-cohere/src/instrumentation.ts
📚 Learning: 2025-08-10T06:42:06.412Z
Learnt from: nirga
Repo: traceloop/openllmetry-js PR: 623
File: packages/traceloop-sdk/src/lib/images/image-uploader.ts:71-78
Timestamp: 2025-08-10T06:42:06.412Z
Learning: In the Traceloop SDK's ImageUploader class (packages/traceloop-sdk/src/lib/images/image-uploader.ts), the Authorization header with bearer token IS required when uploading image data to the URL returned by the Traceloop API's `/v2/traces/{traceId}/spans/{spanId}/images` endpoint. This is not a pre-signed URL pattern but a Traceloop-specific API endpoint that maintains the same authentication requirements.
Applied to files:
packages/instrumentation-openai/src/image-wrappers.ts
📚 Learning: 2025-08-12T13:58:25.583Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/src/lib/client/dataset/column.ts:41-47
Timestamp: 2025-08-12T13:58:25.583Z
Learning: In packages/traceloop-sdk/src/lib/client/dataset/column.ts, when the user mentioned "its working there is transform func", they were confirming that the response transformer is functioning correctly and converting snake_case API responses to camelCase, which means the Column class should access camelCase properties (createdAt, updatedAt) from the transformed ColumnResponse data.
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:05.901Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:97-105
Timestamp: 2025-08-12T13:57:05.901Z
Learning: The traceloop-sdk uses a response transformer (`transformApiResponse` in `packages/traceloop-sdk/src/lib/utils/response-transformer.ts`) that converts snake_case API responses to camelCase for SDK interfaces. Raw API responses use snake_case but SDK consumers see camelCase fields.
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-12T13:57:42.538Z
Learnt from: galzilber
Repo: traceloop/openllmetry-js PR: 643
File: packages/traceloop-sdk/test/datasets-final.test.ts:129-143
Timestamp: 2025-08-12T13:57:42.538Z
Learning: The Traceloop SDK includes a response transformer utility (transformApiResponse) that automatically converts snake_case API responses to camelCase for SDK consumers. This means interface definitions and tests should use camelCase field names (like datasetId, datasetSlug, createdAt, updatedAt) rather than the raw API's snake_case format.
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Applied to files:
packages/instrumentation-cohere/src/instrumentation.ts
🧬 Code graph analysis (12)
packages/traceloop-sdk/test/agent_decorator.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
packages/traceloop-sdk/src/lib/tracing/manual.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
packages/instrumentation-cohere/tests/rerank.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
packages/instrumentation-langchain/src/callback_handler.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
transformLLMSpans(660-681)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
packages/instrumentation-openai/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
packages/instrumentation-bedrock/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
packages/instrumentation-openai/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
packages/instrumentation-cohere/src/instrumentation.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
packages/instrumentation-bedrock/tests/cohere.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
packages/instrumentation-cohere/tests/chat.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-58)
🔇 Additional comments (39)
packages/instrumentation-langchain/src/callback_handler.ts (4)
24-32: LGTM - Proper use of OpenTelemetry semantic conventions.
The imports correctly source the new ATTR_GEN_AI_* constants from @opentelemetry/semantic-conventions/incubating while retaining custom attributes from @traceloop/ai-semantic-conventions. This follows the coding guidelines for importing AI/LLM semantic attribute constants.
73-90: LGTM - Chat model attributes correctly migrated.
The migration from LLM_SYSTEM and LLM_PROMPTS to ATTR_GEN_AI_SYSTEM and ATTR_GEN_AI_PROMPT is correct. The indexed attribute pattern for multi-message prompts is appropriate for LangChain's message array structure.
114-126: LGTM - Completion model attributes correctly migrated.
Consistent migration with handleChatModelStart. The "user" role for completion prompts is appropriate since these are direct text prompts rather than structured chat messages.
148-207: LGTM - Response and usage attributes correctly migrated.
The completion content, model names, and token usage attributes are all properly migrated to the new ATTR_GEN_AI_* conventions. Correctly retains SpanAttributes.LLM_USAGE_TOTAL_TOKENS since there's no standard GenAI equivalent for total tokens in the OpenTelemetry semantic conventions. Both usage and tokenUsage compatibility paths are consistently updated.
packages/traceloop-sdk/src/lib/tracing/span-processor.ts (2)
193-193: LGTM: Agent name propagation logic is correct.
The migration from the legacy constant to ATTR_GEN_AI_AGENT_NAME is implemented consistently across all usage points. The agent name propagation logic correctly:
- Sets the attribute on span start from context or AI SDK metadata
- Propagates the agent name from parent spans when not directly available
- Maintains a TTL-based cache for efficient propagation
Also applies to: 275-293
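The TTL-based cache used for agent-name propagation can be illustrated with a minimal sketch (hypothetical names, not the SDK's actual implementation):

```typescript
// Minimal TTL map: entries expire ttlMs after insertion and are
// evicted lazily on read.
class TtlCache<K, V> {
  private entries = new Map<K, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  set(key: K, value: V): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key: K): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }
}

// Propagation idea: key by trace/span id, value is the agent name,
// so child spans can inherit it without walking the parent chain.
const agentNames = new TtlCache<string, string>(60_000);
agentNames.set("trace-1", "research-agent");
```

The TTL bounds memory growth when spans are dropped before their children finish.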
18-18: No changes needed. The import at line 18 correctly uses @opentelemetry/semantic-conventions/incubating for standard OpenTelemetry semantic attributes. The coding guideline applies only to custom Traceloop-specific attributes defined in SpanAttributes; ATTR_GEN_AI_AGENT_NAME is a standard OpenTelemetry convention and is not available in the Traceloop package. This import pattern is consistent across the codebase.
Likely an incorrect or invalid review comment.
packages/instrumentation-together/src/instrumentation.ts (3)
28-39: LGTM! New semantic convention imports are correct.
The imports from @opentelemetry/semantic-conventions/incubating correctly bring in the new ATTR_GEN_AI_* constants that align with official OpenTelemetry semantic conventions for GenAI telemetry.
201-284: No changes needed - these are intentionally defined Traceloop semantic conventions, not legacy attributes.
The attributes using SpanAttributes.LLM_* (LLM_REQUEST_TYPE, LLM_FREQUENCY_PENALTY, LLM_PRESENCE_PENALTY, LLM_REQUEST_FUNCTIONS) are defined in @traceloop/ai-semantic-conventions specifically for LLM attributes not yet included in OpenTelemetry's incubating semantic conventions. They are used consistently across all instrumentations in this repo and are not deprecated. The code correctly uses both OpenTelemetry's ATTR_GEN_AI_* constants and Traceloop's custom SpanAttributes.LLM_* constants as intended.
509-524: The code is correct as-is.
SpanAttributes.LLM_USAGE_TOTAL_TOKENS is a custom attribute defined in @traceloop/ai-semantic-conventions and does not have an ATTR_GEN_AI_* equivalent in OpenTelemetry semantic conventions. The codebase intentionally uses a mixed approach: OpenTelemetry's ATTR_GEN_AI_* constants for standard generative AI attributes (completion/prompt tokens) and Traceloop's custom SpanAttributes.LLM_* constants for additional attributes like total tokens. This pattern is consistent across all instrumentation packages.
Likely an incorrect or invalid review comment.
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (16)
36-82: Response text → gen_ai.completion.* assertions look consistent and preserve unrelated attrs.
84-115: Response object → gen_ai.completion.* assertions look consistent.
117-195: Tool-calls → gen_ai.completion.0.tool_calls.* coverage is solid (incl. invalid JSON no-op).
413-508: ai.prompt (single + messages-array) coverage is good, including gen_ai.input.messages emission.
Double-check intent around escaping: the “single prompt” case expects the escaped string to remain escaped, while the messages test expects \n to be unescaped; if the transformer is meant to normalize both, align expectations accordingly.
904-963: Total token calculation tests are clear, including string-number inputs.
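The string-number handling these tests cover can be sketched with a small helper (an illustration under assumed names; the actual transformer logic may differ):

```typescript
// Compute a total-token count from prompt/completion values that may
// arrive as numbers or numeric strings; returns undefined when either
// side is missing or non-numeric.
function totalTokens(
  prompt: number | string | undefined,
  completion: number | string | undefined,
): number | undefined {
  const p = typeof prompt === "string" ? Number(prompt) : prompt;
  const c = typeof completion === "string" ? Number(completion) : completion;
  if (p === undefined || c === undefined || Number.isNaN(p) || Number.isNaN(c)) {
    return undefined;
  }
  return p + c;
}
```

Returning undefined rather than 0 keeps the attribute absent when inputs are unusable, which is easier to assert in tests than a silent zero.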
965-1053: Provider/system mapping expectations are now consistent (provider_name as normalized id, system as display name).
1055-1241: End-to-end “apply all transformations” assertions are coherent (new gen_ai keys + legacy deletions).
1243-1592: gen_ai.input.messages/gen_ai.output.messages serialization tests provide strong regression coverage (text, tool calls, object responses, mixed content).
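As a rough sketch of the serialized shape such tests assert (an illustration only; the structured-message format here is an assumption and the real transformer may emit additional fields):

```typescript
interface ChatMessage {
  role: string;
  content: string;
}

// Serialize prompt messages into a JSON string suitable for a
// message-array attribute such as gen_ai.input.messages; span
// attribute values must be primitives, hence the stringification.
function toInputMessages(messages: ChatMessage[]): string {
  return JSON.stringify(
    messages.map((m) => ({
      role: m.role,
      parts: [{ type: "text", content: m.content }],
    })),
  );
}
```

Keeping serialization in one helper makes the text, tool-call, and mixed-content cases testable against a single code path.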
1786-1868: Agent detection tests clearly distinguish root-span vs child-span behavior.
1913-1995: Operation-name mapping coverage is straightforward and matches expected span-name patterns.
1997-2038: Provider-name tests match the “normalized provider id + mapped system” contract.
2040-2062: Model id → gen_ai.request.model test is clear and checks deletion of the legacy key.
2064-2121: Finish-reason normalization to array is well-covered (scalar, array, missing).
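The scalar/array/missing normalization covered here can be sketched as (hypothetical helper name):

```typescript
// gen_ai.response.finish_reasons is array-valued; accept a scalar,
// an array, or nothing and always return string[] | undefined.
function normalizeFinishReasons(
  value: string | string[] | undefined,
): string[] | undefined {
  if (value === undefined) return undefined;
  return Array.isArray(value) ? value : [value];
}
```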
2123-2186: Tool-call attribute mapping to OTel keys is well asserted (and the “keep args/result for Traceloop” nuance is documented).
2188-2232: Conversation-id precedence (conversationId over sessionId) is correctly tested.
2234-2290: Response metadata (ai.response.model/id) → gen_ai.response.* coverage is consistent, including combined case.
packages/traceloop-sdk/test/agent_decorator.test.ts (4)
144-155: Test assertions correctly updated to use OpenTelemetry semantic conventions.
The test assertions have been properly updated to use the new ATTR_GEN_AI_AGENT_NAME and ATTR_GEN_AI_PROMPT constants from the official OpenTelemetry semantic conventions. The template literal syntax for nested attributes (e.g., ${ATTR_GEN_AI_PROMPT}.0.role) is correct.
205-216: Test assertions correctly updated to use OpenTelemetry semantic conventions.
The decoration syntax test has been properly updated to use the new attribute constants, maintaining consistency with the withAgent syntax test.
267-286: Manual LLM instrumentation test correctly updated.
The test for manual LLM instrumentation using withLLMCall has been properly updated to verify the new OpenTelemetry semantic convention attributes, including ATTR_GEN_AI_AGENT_NAME, ATTR_GEN_AI_REQUEST_MODEL, and ATTR_GEN_AI_PROMPT.
271-271: Line 271 correctly uses the constant SpanAttributes.LLM_REQUEST_TYPE.
The code follows the established pattern for AI/LLM semantic attributes. LLM_REQUEST_TYPE ("llm.request.type") is intentionally defined as a custom attribute in the Traceloop ai-semantic-conventions package and is not equivalent to OpenTelemetry's gen_ai.* attributes. No migration is needed.
packages/instrumentation-openai/src/instrumentation.ts (1)
28-39: Use standard GenAI semantic conventions attributes; ATTR_GEN_AI_PROMPT and ATTR_GEN_AI_SYSTEM are deprecated in v1.38.0
The imports from @opentelemetry/semantic-conventions/incubating v1.38.0 are available. However, two attributes are deprecated:
- ATTR_GEN_AI_PROMPT ("gen_ai.prompt") is deprecated; use gen_ai.input.messages instead
- ATTR_GEN_AI_SYSTEM ("gen_ai.system") is deprecated; use gen_ai.provider.name instead
Additionally, ATTR_GEN_AI_USAGE_TOTAL_TOKENS is available in v1.38.0 but not used in the code; consider adding it for complete token usage tracking alongside prompt and completion tokens.
packages/instrumentation-openai/test/instrumentation.test.ts (1)
158-175: Assertion migrations look consistent with the new GenAI attribute namespace.

The updated key paths (`${ATTR_GEN_AI_PROMPT}.*`, `${ATTR_GEN_AI_COMPLETION}.*`, and usage tokens) match the intended convention shape and keep the test intent intact.

Also applies to: 197-218, 240-266, 281-289, 310-318, 398-458, 496-557, 626-654, 672-705
packages/instrumentation-bedrock/tests/cohere.test.ts (2)
29-39: The `ATTR_GEN_AI_*` import looks fine; just verify the incubating entrypoint is supported by your pinned version.
147-178: GenAI attribute assertions look consistent across non-stream and streaming paths.

Also applies to: 209-267
packages/instrumentation-bedrock/tests/amazon.test.ts (2)
29-39: The `ATTR_GEN_AI_*` import looks fine; ensure `@opentelemetry/semantic-conventions/incubating` is available in the pinned version.
149-195: GenAI attribute migrations in assertions look correct and consistent with the PR direction.

Also applies to: 228-301
packages/instrumentation-vertexai/src/vertexai-instrumentation.ts (1)
243-275: LGTM: completion and usage attributes mapped cleanly to `${ATTR_GEN_AI_COMPLETION}.*` and `ATTR_GEN_AI_USAGE_*`.

The mapping is consistent with the new convention and keeps total tokens on `SpanAttributes.LLM_USAGE_TOTAL_TOKENS`.

packages/instrumentation-bedrock/tests/meta.test.ts (2)
29-39: Pin/verify incubating semantic-convention constants' compatibility with the repo's OTel versions.

These tests now rely on `@opentelemetry/semantic-conventions/incubating` exports; please confirm the project's resolved `@opentelemetry/semantic-conventions` version provides these constants in CI.
191-290: Streaming test: verify the span is finished before reading `getFinishedSpans()` inside the chunk loop.

If the Bedrock instrumentation ends the span only when the stream completes, `memoryExporter.getFinishedSpans()` may be empty or stale mid-stream. If you've seen flakes, prefer asserting after the stream finishes (or wait/poll until the span count is non-zero).

packages/instrumentation-cohere/tests/chat.test.ts (1)
28-37: Verify incubating convention imports are stable (tests + TS resolution).

Same concern as other packages: ensure `@opentelemetry/semantic-conventions/incubating` is available under the repo's TS config and dependency graph.
import {
  ATTR_GEN_AI_COMPLETION,
  ATTR_GEN_AI_PROMPT,
  ATTR_GEN_AI_REQUEST_MAX_TOKENS,
  ATTR_GEN_AI_REQUEST_MODEL,
  ATTR_GEN_AI_REQUEST_TEMPERATURE,
  ATTR_GEN_AI_REQUEST_TOP_P,
  ATTR_GEN_AI_SYSTEM,
} from "@opentelemetry/semantic-conventions/incubating";
🧩 Analysis chain
🏁 Script executed:
cat -n packages/instrumentation-bedrock/tests/ai21.test.ts | sed -n '25,180p'

Repository: traceloop/openllmetry-js
Length of output: 6230
🏁 Script executed:
# Check what's being imported and used
rg "ATTR_GEN_AI" packages/instrumentation-bedrock/tests/ai21.test.ts -A 2 -B 2
Length of output: 1694
🌐 Web query:
@opentelemetry/semantic-conventions v1.38.0 ATTR_GEN_AI_RESPONSE_MODEL incubating export
💡 Result:
Yes — the GenAI attribute gen_ai.response.model is defined in the GenAI semantic conventions and is available from the incubating entry of @opentelemetry/semantic-conventions (the incubating entry re-exports unstable conventions such as GenAI attributes). See the semantic-conventions docs and the GenAI pages for the attribute name/description. [1][2][3]
Sources:
- @opentelemetry/semantic-conventions package docs (incubating entry explanation). [1]
- OpenTelemetry GenAI semantic conventions (gen_ai.response.model). [2]
- opentelemetry-js semconv releases (v1.38.0 series). [3]
🏁 Script executed:
# Check what's exported from @traceloop/ai-semantic-conventions
rg "ATTR_GEN_AI_RESPONSE_MODEL|export.*RESPONSE_MODEL" packages/ -t ts -t tsx
Length of output: 95
🏁 Script executed:
# Look at the actual @traceloop/ai-semantic-conventions package
find packages -name "*semantic-conventions*" -type d
Length of output: 100
🏁 Script executed:
# Check the traceloop ai-semantic-conventions exports
cat -n packages/traceloop-sdk/node_modules/@traceloop/ai-semantic-conventions/index.d.ts 2>/dev/null | head -100 || \
find packages -path "*/ai-semantic-conventions*" -name "*.d.ts" -o -name "index.ts" | head -5 | xargs cat -n
Length of output: 50
🏁 Script executed:
# Check the ai-semantic-conventions package structure
ls -la packages/ai-semantic-conventions/
Length of output: 616
🏁 Script executed:
# Read the main export file
cat -n packages/ai-semantic-conventions/src/index.ts | head -150
Length of output: 999
🏁 Script executed:
# Read the SemanticAttributes file
cat -n packages/ai-semantic-conventions/src/SemanticAttributes.ts | head -200
Length of output: 5339
🏁 Script executed:
# Search for RESPONSE_MODEL usage in the codebase to see if it's used elsewhere
rg "ATTR_GEN_AI_RESPONSE_MODEL" packages/
Length of output: 4593
🏁 Script executed:
# Check the OpenTelemetry semantic conventions package to verify ATTR_GEN_AI_RESPONSE_MODEL exists
find packages -path "*node_modules/@opentelemetry/semantic-conventions*" -name "*.d.ts" | head -3 | xargs grep "RESPONSE_MODEL" 2>/dev/null | head -10
Length of output: 50
Add missing import and fix duplicate assertion at line 171
Line 171 incorrectly asserts ATTR_GEN_AI_REQUEST_MODEL (same as line 151); the response model should use ATTR_GEN_AI_RESPONSE_MODEL instead. Add the missing import and correct the assertion:
import {
ATTR_GEN_AI_COMPLETION,
ATTR_GEN_AI_PROMPT,
ATTR_GEN_AI_REQUEST_MAX_TOKENS,
ATTR_GEN_AI_REQUEST_MODEL,
ATTR_GEN_AI_REQUEST_TEMPERATURE,
ATTR_GEN_AI_REQUEST_TOP_P,
+ ATTR_GEN_AI_RESPONSE_MODEL,
ATTR_GEN_AI_SYSTEM,
} from "@opentelemetry/semantic-conventions/incubating";

- assert.strictEqual(attributes[ATTR_GEN_AI_REQUEST_MODEL], model);
+ assert.strictEqual(attributes[ATTR_GEN_AI_RESPONSE_MODEL], model);

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/instrumentation-bedrock/tests/ai21.test.ts around lines 29 to 37 and
line 171, the test is missing the ATTR_GEN_AI_RESPONSE_MODEL import and
currently asserts ATTR_GEN_AI_REQUEST_MODEL again at line 171 (duplicating line
151). Add an import for ATTR_GEN_AI_RESPONSE_MODEL from
"@opentelemetry/semantic-conventions/incubating" alongside the existing imports,
then change the assertion at line 171 to assert ATTR_GEN_AI_RESPONSE_MODEL (the
response model) instead of ATTR_GEN_AI_REQUEST_MODEL.
import {
  ATTR_GEN_AI_COMPLETION,
  ATTR_GEN_AI_PROMPT,
  ATTR_GEN_AI_REQUEST_MAX_TOKENS,
  ATTR_GEN_AI_REQUEST_MODEL,
  ATTR_GEN_AI_REQUEST_TEMPERATURE,
  ATTR_GEN_AI_REQUEST_TOP_P,
  ATTR_GEN_AI_SYSTEM,
  ATTR_GEN_AI_USAGE_COMPLETION_TOKENS,
  ATTR_GEN_AI_USAGE_PROMPT_TOKENS,
} from "@opentelemetry/semantic-conventions/incubating";
🧩 Analysis chain
🏁 Script executed:
# First, find the traceloop/ai-semantic-conventions package and check what it exports
find . -path "*/ai-semantic-conventions/src/SemanticAttributes.ts" -o -path "*/ai-semantic-conventions/package.json" | head -20
Length of output: 176
🏁 Script executed:
fd -t f "SemanticAttributes.ts" --max-one-result
Length of output: 301
🏁 Script executed:
find . -type f -name "package.json" | xargs grep -l "ai-semantic-conventions" | head -5
Length of output: 309
🏁 Script executed:
# Check the root package.json for @opentelemetry/semantic-conventions version
cat package.json | grep -A2 "semantic-conventions"
Length of output: 50
🏁 Script executed:
cat packages/ai-semantic-conventions/src/SemanticAttributes.ts
Length of output: 4520
🏁 Script executed:
cat packages/ai-semantic-conventions/package.json
Length of output: 1105
🏁 Script executed:
cat packages/instrumentation-cohere/package.json | grep -A5 -B5 "dependencies"
Length of output: 394
🏁 Script executed:
# Check what the cohere instrumentation currently imports and uses
head -60 packages/instrumentation-cohere/src/instrumentation.ts | tail -30
Length of output: 1039
🏁 Script executed:
cat packages/ai-semantic-conventions/src/index.ts
Length of output: 845
🏁 Script executed:
# Check another instrumentation to see if it also imports from incubating
head -50 packages/instrumentation-anthropic/src/instrumentation.ts
Length of output: 1620
🏁 Script executed:
# Search for where total tokens is used in cohere instrumentation
grep -n "total_tokens\|TOTAL_TOKENS" packages/instrumentation-cohere/src/instrumentation.ts | head -20
Length of output: 402
🏁 Script executed:
# Check if ATTR_GEN_AI_USAGE_TOTAL_TOKENS is used anywhere in the cohere instrumentation
grep -n "ATTR_GEN_AI_USAGE_TOTAL_TOKENS\|ATTR_GEN_AI.*TOKENS" packages/instrumentation-cohere/src/instrumentation.ts
Length of output: 463
🏁 Script executed:
# Check if other instrumentations have the same import pattern
grep -r "from \"@opentelemetry/semantic-conventions/incubating\"" packages/instrumentation-*/src/ | wc -l
Length of output: 70
🏁 Script executed:
# Look at what's actually exported from @opentelemetry/semantic-conventions in node_modules or check semantic-conventions docs
# Let's check if there's any documentation or README in the repo about this choice
find packages/instrumentation-cohere -name "README*" -o -name "CONTRIBUTING*"
Length of output: 109
🏁 Script executed:
cat packages/instrumentation-cohere/README.md
Length of output: 2207
🏁 Script executed:
# Check if there are any open issues or comments in SemanticAttributes about re-exporting GenAI constants
grep -n "GenAI\|ATTR_GEN_AI\|re-export" packages/ai-semantic-conventions/src/SemanticAttributes.ts
Length of output: 50
Import ATTR_GEN_AI_* constants from @traceloop/ai-semantic-conventions to align with repo conventions.
The instrumentation currently imports semantic attribute constants like ATTR_GEN_AI_PROMPT, ATTR_GEN_AI_COMPLETION, etc. from @opentelemetry/semantic-conventions/incubating. Per coding guidelines and learnings, these should be imported from @traceloop/ai-semantic-conventions to maintain a stable, centralized contract across instrumentations. This requires adding these re-exports to the @traceloop/ai-semantic-conventions package's SemanticAttributes.ts (if not already present) and updating the import statement.
🤖 Prompt for AI Agents
In packages/instrumentation-cohere/src/instrumentation.ts around lines 37 to 47,
the ATTR_GEN_AI_* constants are imported from
@opentelemetry/semantic-conventions/incubating but should come from
@traceloop/ai-semantic-conventions; update the import to pull
ATTR_GEN_AI_PROMPT, ATTR_GEN_AI_COMPLETION, ATTR_GEN_AI_REQUEST_*,
ATTR_GEN_AI_USAGE_* and any other used ATTR_GEN_AI_* symbols from
@traceloop/ai-semantic-conventions, and if those symbols are not yet exported
from that package add/re-export them in @traceloop/ai-semantic-conventions'
SemanticAttributes.ts (or index) so the instrumentation imports compile and
follow repo conventions.
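The centralization the guidance describes can be sketched as follows. This is a minimal illustration, not the actual `@traceloop/ai-semantic-conventions` source; the key strings are assumptions that mirror the OTel incubating names.

```typescript
// Hypothetical shared-conventions module: pin each GenAI attribute key
// string once, so a convention change becomes a one-file edit.
const ATTR_GEN_AI_PROMPT = "gen_ai.prompt"; // assumed key string
const ATTR_GEN_AI_REQUEST_MODEL = "gen_ai.request.model"; // assumed key string

// An instrumentation builds its span attributes from the shared constants
// instead of hard-coding strings (or third-party exports) in every package.
const attributes: Record<string, string> = {
  [ATTR_GEN_AI_REQUEST_MODEL]: "command",
  [`${ATTR_GEN_AI_PROMPT}.0.role`]: "user",
};
```

Each instrumentation then imports from the shared module, and a bump of the upstream convention version touches only that one file.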
const attributes: Attributes = {
  [SpanAttributes.LLM_SYSTEM]: "Cohere",
  [ATTR_GEN_AI_SYSTEM]: "Cohere",
  [SpanAttributes.LLM_REQUEST_TYPE]: this._getLlmRequestTypeByMethod(type),
};

try {
  const model = params.model ?? "command";
  attributes[SpanAttributes.LLM_REQUEST_MODEL] = model;
  attributes[ATTR_GEN_AI_REQUEST_MODEL] = model;

  if (!("query" in params)) {
    attributes[SpanAttributes.LLM_REQUEST_TOP_P] = params.p;
    attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.p;
    attributes[SpanAttributes.LLM_TOP_K] = params.k;
    attributes[SpanAttributes.LLM_REQUEST_TEMPERATURE] = params.temperature;
    attributes[ATTR_GEN_AI_REQUEST_TEMPERATURE] = params.temperature;
    attributes[SpanAttributes.LLM_FREQUENCY_PENALTY] = params.frequencyPenalty;
    attributes[SpanAttributes.LLM_PRESENCE_PENALTY] = params.presencePenalty;
    attributes[SpanAttributes.LLM_REQUEST_MAX_TOKENS] = params.maxTokens;
    attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] = params.maxTokens;
  } else {
    attributes["topN"] = params["topN"];
    attributes["maxChunksPerDoc"] = params["maxChunksPerDoc"];
  }
Avoid setting possibly-undefined attribute values on span start.
Assignments like attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.p (and similar) can end up as undefined if the SDK request didn’t include a value, which is not a valid OpenTelemetry attribute value.
if (!("query" in params)) {
- attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.p;
+ if (params.p !== undefined) attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.p;
attributes[SpanAttributes.LLM_TOP_K] = params.k;
- attributes[ATTR_GEN_AI_REQUEST_TEMPERATURE] = params.temperature;
+ if (params.temperature !== undefined)
+ attributes[ATTR_GEN_AI_REQUEST_TEMPERATURE] = params.temperature;
attributes[SpanAttributes.LLM_FREQUENCY_PENALTY] =
params.frequencyPenalty;
attributes[SpanAttributes.LLM_PRESENCE_PENALTY] =
params.presencePenalty;
- attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] = params.maxTokens;
+ if (params.maxTokens !== undefined)
+ attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] = params.maxTokens;
} else {
attributes["topN"] = params["topN"];
attributes["maxChunksPerDoc"] = params["maxChunksPerDoc"];
}

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/instrumentation-cohere/src/instrumentation.ts around lines 229 to
250, the code unconditionally assigns request fields (e.g., params.p, params.k,
params.temperature, params.frequencyPenalty, params.presencePenalty,
params.maxTokens, and in the else branch params.topN and params.maxChunksPerDoc)
into the attributes object which can result in undefined values being set on
span start; update the code to only set each attribute when its corresponding
params field is not undefined (e.g., check params.p !== undefined before
assigning ATTR_GEN_AI_REQUEST_TOP_P) so only valid OpenTelemetry attribute
values are added.
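The guarded-assignment pattern the prompt describes can be sketched with a small helper. This is an illustrative sketch, not code from the PR; the attribute key strings are assumptions.

```typescript
type AttributeValue = string | number | boolean | Array<string | number | boolean>;
type Attributes = Record<string, AttributeValue>;

// Only assign an attribute when the value is present, so spans never carry
// `undefined` values (not a valid OpenTelemetry attribute value). Note that
// 0 and "" are kept; only undefined/null are skipped.
function setIfDefined(
  attrs: Attributes,
  key: string,
  value: AttributeValue | null | undefined,
): void {
  if (value !== undefined && value !== null) {
    attrs[key] = value;
  }
}

const attrs: Attributes = {};
setIfDefined(attrs, "gen_ai.request.top_p", 0.9);
setIfDefined(attrs, "gen_ai.request.temperature", undefined); // skipped
setIfDefined(attrs, "gen_ai.request.max_tokens", 0); // kept: 0 is a valid value
```

Routing every optional request parameter through one helper keeps the guard logic in a single place instead of repeating `!== undefined` checks per field.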
assert.strictEqual(
  attributes[
    `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.role`
  ],
  "user",
);
assert.strictEqual(
  attributes[
    `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.user`
  ],
  params.message,
);
Bug: ${ATTR_GEN_AI_PROMPT}.*.user should almost certainly be .*.content.
Line 126–130 and Line 222–227 use .user, while the rest of the codebase uses .content for prompt message bodies.
- `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.user`
+ `${ATTR_GEN_AI_PROMPT}.${params.chatHistory?.length ?? 0}.content`

Also applies to: 216-227
🤖 Prompt for AI Agents
In packages/instrumentation-cohere/tests/chat.test.ts around lines 119–130 and
216–227, the test asserts use the attribute key suffix `.user` for prompt
message bodies but the codebase standard (and other tests) use `.content`;
change the assertions to use `.content` instead of `.user` and assert the value
equals params.message (or the expected message content) so the attribute key
matches the rest of the codebase.
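For context, the indexed-flattening convention these assertions target can be sketched as follows: message `i` lands under `gen_ai.prompt.<i>.role` and `gen_ai.prompt.<i>.content`. This is an illustrative sketch, not the actual transformer; the base key string is an assumption mirroring `ATTR_GEN_AI_PROMPT`.

```typescript
const ATTR_GEN_AI_PROMPT = "gen_ai.prompt"; // assumed key string

// Flatten an ordered message list into indexed span attributes, using
// `.content` (not `.user`) for the message body, per the convention.
function flattenPrompts(
  messages: Array<{ role: string; content: string }>,
): Record<string, string> {
  const attrs: Record<string, string> = {};
  messages.forEach((m, i) => {
    attrs[`${ATTR_GEN_AI_PROMPT}.${i}.role`] = m.role;
    attrs[`${ATTR_GEN_AI_PROMPT}.${i}.content`] = m.content;
  });
  return attrs;
}

const flat = flattenPrompts([
  { role: "system", content: "You are helpful." },
  { role: "user", content: "Hi" },
]);
```

Under this shape, a test asserting `*.user` would never match the emitted keys, which is why the suffix matters.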
assert.strictEqual(attributes[`${ATTR_GEN_AI_PROMPT}.0.role`], "user");
assert.strictEqual(
  attributes[`${ATTR_GEN_AI_PROMPT}.0.user`],
  params.query,
);
Likely wrong attribute key: ${ATTR_GEN_AI_PROMPT}.0.user should probably be ${ATTR_GEN_AI_PROMPT}.0.content.
Right now the test asserts params.query under a non-standard key; this likely won’t match what the transformer/instrumentation emits (and is inconsistent with other tests).
- assert.strictEqual(
- attributes[`${ATTR_GEN_AI_PROMPT}.0.user`],
- params.query,
- );
+ assert.strictEqual(
+ attributes[`${ATTR_GEN_AI_PROMPT}.0.content`],
+ params.query,
+ );📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
assert.strictEqual(attributes[`${ATTR_GEN_AI_PROMPT}.0.role`], "user");
assert.strictEqual(
  attributes[`${ATTR_GEN_AI_PROMPT}.0.content`],
  params.query,
);
🤖 Prompt for AI Agents
In packages/instrumentation-cohere/tests/rerank.test.ts around lines 121-125,
the test asserts the attribute key `${ATTR_GEN_AI_PROMPT}.0.user` which is
incorrect; it should assert `${ATTR_GEN_AI_PROMPT}.0.content`. Update the
assertion to check attributes[`${ATTR_GEN_AI_PROMPT}.0.content`] equals
params.query (keeping the role assertion as-is) so the test matches the
transformer/instrumentation output and other tests.
import { SpanAttributes } from "@traceloop/ai-semantic-conventions";
import {
  ATTR_GEN_AI_COMPLETION,
  ATTR_GEN_AI_PROMPT,
  ATTR_GEN_AI_REQUEST_MODEL,
  ATTR_GEN_AI_SYSTEM,
  ATTR_GEN_AI_USAGE_COMPLETION_TOKENS,
  ATTR_GEN_AI_USAGE_PROMPT_TOKENS,
} from "@opentelemetry/semantic-conventions/incubating";
🧩 Analysis chain
🏁 Script executed:
# First, check the package.json to see which version of @opentelemetry/semantic-conventions is installed
cat packages/instrumentation-openai/package.json | grep -A 2 "@opentelemetry/semantic-conventions"
Length of output: 208
🏁 Script executed:
# Search for available constants in the codebase that reference semantic-conventions
find . -name "*.json" -path "*/node_modules/@opentelemetry/semantic-conventions*" -type f 2>/dev/null | head -5
Length of output: 50
🏁 Script executed:
# Check the actual image-wrappers.ts file to see the current state
cat -n packages/instrumentation-openai/src/image-wrappers.ts | head -100
Length of output: 4005
🌐 Web query:
@opentelemetry/semantic-conventions incubating GenAI usage attributes ATTR_GEN_AI_USAGE_TOTAL_TOKENS
💡 Result:
Yes — the incubating GenAI semantic conventions define a total-tokens attribute named gen_ai.usage.total_tokens (the JS constant is typically exposed by @opentelemetry/semantic-conventions as ATTR_GEN_AI_USAGE_TOTAL_TOKENS). See the GenAI attribute registry and spans/metrics docs for the GenAI conventions. [1][2]
Sources:
- OpenTelemetry GenAI attribute registry / GenAI docs (lists gen_ai.usage.* including gen_ai.usage.total_tokens). [1]
- OpenTelemetry GenAI spans/metrics semantic conventions pages (usage attributes for input/output/total tokens). [2]
🏁 Script executed:
# Find the setImageGenerationResponseAttributes function to see where total tokens are set
grep -n "setImageGenerationResponseAttributes\|LLM_USAGE_TOTAL_TOKENS" packages/instrumentation-openai/src/image-wrappers.ts
Length of output: 543
🏁 Script executed:
# Get more context around where total tokens are being set
sed -n '200,350p' packages/instrumentation-openai/src/image-wrappers.ts
Length of output: 4035
🏁 Script executed:
# Also check if there are any hardcoded image request attributes
grep -n "gen_ai.request.image\|image.size\|image.quality" packages/instrumentation-openai/src/image-wrappers.ts
Length of output: 573
🌐 Web query:
@opentelemetry/semantic-conventions 1.38.0 gen_ai.request.image constants ATTR_GEN_AI_REQUEST_IMAGE
💡 Result:
The constant ATTR_GEN_AI_REQUEST_IMAGE corresponds to the attribute name "gen_ai.request.image". [1][2]
Sources:
- OpenTelemetry GenAI attribute list (gen_ai.request.* includes image-related attributes). [1]
- @opentelemetry/semantic-conventions JS module docs (package defines semantic-convention attribute constants). [2]
References:
[1] https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-events/
[2] https://open-telemetry.github.io/opentelemetry-js/modules/_opentelemetry_semantic-conventions.html
🏁 Script executed:
# Check if the traceloop ai-semantic-conventions package has constants for these
cat packages/ai-semantic-conventions/src/SemanticAttributes.ts | head -100
Length of output: 4224
🏁 Script executed:
# Look for any exports related to image attributes
grep -r "image.size\|image.quality\|image.style\|image.count" packages/ai-semantic-conventions/
Length of output: 50
🏁 Script executed:
# Search for ATTR_GEN_AI_USAGE_TOTAL_TOKENS in the OpenTelemetry package files
find node_modules/@opentelemetry/semantic-conventions -name "*.d.ts" -o -name "*.js" 2>/dev/null | xargs grep -l "ATTR_GEN_AI_USAGE_TOTAL_TOKENS" 2>/dev/null | head -5
Length of output: 50
🏁 Script executed:
# If that doesn't work, check the index file or incubating module
cat node_modules/@opentelemetry/semantic-conventions/build/src/metrics/incubating.d.ts 2>/dev/null | grep -A 2 -B 2 "ATTR_GEN_AI_USAGE" | head -50
Length of output: 50
🏁 Script executed:
# Also check the attributes incubating module
cat node_modules/@opentelemetry/semantic-conventions/build/src/attribute-keys/incubating.d.ts 2>/dev/null | grep "GEN_AI_USAGE" | head -20
Length of output: 50
🏁 Script executed:
# List the structure of the semantic-conventions package
ls -la node_modules/@opentelemetry/semantic-conventions/build/src/ 2>/dev/null | head -20
Length of output: 50
🏁 Script executed:
# Check what's actually exported in the incubating index
cat node_modules/@opentelemetry/semantic-conventions/build/src/incubating.d.ts 2>/dev/null | grep "USAGE" | head -30
Length of output: 50
🏁 Script executed:
# Try a broader search for any TOTAL_TOKENS constant
grep -r "TOTAL_TOKENS" node_modules/@opentelemetry/semantic-conventions/build/ 2>/dev/null | head -20
Length of output: 50
Use ATTR_GEN_AI_USAGE_TOTAL_TOKENS consistently with other GenAI attributes.
You're mixing legacy SpanAttributes.LLM_USAGE_TOTAL_TOKENS with new ATTR_GEN_AI_USAGE_COMPLETION_TOKENS and ATTR_GEN_AI_USAGE_PROMPT_TOKENS in setImageGenerationResponseAttributes() (lines 325, 328, 331). This creates inconsistent attribute namespaces in spans: GenAI prompt/completion tokens under the new key, but total tokens under the old LLM key—breaking downstream queries/dashboards that expect all usage attributes in one namespace.
Import and use ATTR_GEN_AI_USAGE_TOTAL_TOKENS (if available in your @opentelemetry/semantic-conventions version) to replace SpanAttributes.LLM_USAGE_TOTAL_TOKENS, or define the constant in @traceloop/ai-semantic-conventions if the OTel package doesn't expose it yet.
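The single-namespace shape the comment asks for can be sketched as follows. This is an illustrative sketch, not the image-wrappers code; the key strings are assumptions mirroring the OTel incubating names, and total tokens is derived so it always agrees with the two parts.

```typescript
// Assumed gen_ai.* key strings; keep all three usage counts in one namespace
// instead of mixing in the legacy llm.usage.total_tokens key.
const ATTR_GEN_AI_USAGE_PROMPT_TOKENS = "gen_ai.usage.prompt_tokens";
const ATTR_GEN_AI_USAGE_COMPLETION_TOKENS = "gen_ai.usage.completion_tokens";
const ATTR_GEN_AI_USAGE_TOTAL_TOKENS = "gen_ai.usage.total_tokens";

function usageAttributes(usage: { promptTokens: number; completionTokens: number }) {
  return {
    [ATTR_GEN_AI_USAGE_PROMPT_TOKENS]: usage.promptTokens,
    [ATTR_GEN_AI_USAGE_COMPLETION_TOKENS]: usage.completionTokens,
    // Derived, so the total can never disagree with the parts.
    [ATTR_GEN_AI_USAGE_TOTAL_TOKENS]: usage.promptTokens + usage.completionTokens,
  };
}

const usage = usageAttributes({ promptTokens: 10, completionTokens: 5 });
```

With all three counts under `gen_ai.usage.*`, a single prefix query covers token accounting for every span.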
const attributes: Attributes = {
  [SpanAttributes.LLM_SYSTEM]: provider,
  [ATTR_GEN_AI_SYSTEM]: provider,
  [SpanAttributes.LLM_REQUEST_TYPE]: type,
};

try {
  attributes[SpanAttributes.LLM_REQUEST_MODEL] = params.model;
  attributes[ATTR_GEN_AI_REQUEST_MODEL] = params.model;
  if (params.max_tokens) {
    attributes[SpanAttributes.LLM_REQUEST_MAX_TOKENS] = params.max_tokens;
    attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] = params.max_tokens;
  }
  if (params.temperature) {
    attributes[SpanAttributes.LLM_REQUEST_TEMPERATURE] = params.temperature;
    attributes[ATTR_GEN_AI_REQUEST_TEMPERATURE] = params.temperature;
  }
  if (params.top_p) {
    attributes[SpanAttributes.LLM_REQUEST_TOP_P] = params.top_p;
    attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.top_p;
  }
Don’t drop valid “0” request params (max_tokens / temperature / top_p)
Current truthy checks will skip setting attributes when values are 0. Prefer nullish checks.
- if (params.max_tokens) {
+ if (params.max_tokens != null) {
attributes[ATTR_GEN_AI_REQUEST_MAX_TOKENS] = params.max_tokens;
}
- if (params.temperature) {
+ if (params.temperature != null) {
attributes[ATTR_GEN_AI_REQUEST_TEMPERATURE] = params.temperature;
}
- if (params.top_p) {
+ if (params.top_p != null) {
attributes[ATTR_GEN_AI_REQUEST_TOP_P] = params.top_p;
}

🤖 Prompt for AI Agents
In packages/instrumentation-openai/src/instrumentation.ts around lines 312 to
327, the current truthy checks (if (params.max_tokens), if (params.temperature),
if (params.top_p)) skip valid zero values; change these to nullish/explicit
undefined checks so 0 is preserved — e.g., test params.max_tokens !== undefined
&& params.max_tokens !== null (or use params.max_tokens ??) before assigning
ATTR_GEN_AI_REQUEST_MAX_TOKENS, and do the same for temperature and top_p so
zero values are recorded.
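The difference between the two guard styles can be shown directly. This is a standalone demonstration under assumed parameter shapes, not the instrumentation code.

```typescript
// `if (params.temperature)` is false for 0, so a truthy guard silently drops
// a valid zero; a nullish guard (`!= null`) keeps it and skips only
// undefined/null.
const params = { max_tokens: 0, temperature: 0, top_p: 0.5 };

const truthy: Record<string, number> = {};
if (params.temperature) truthy["gen_ai.request.temperature"] = params.temperature;

const nullish: Record<string, number> = {};
if (params.temperature != null) nullish["gen_ai.request.temperature"] = params.temperature;

// truthy records nothing; nullish records temperature = 0.
```

Temperature 0 is a meaningful request setting (deterministic sampling), so dropping it changes what the span reports about the call.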
assert.strictEqual(attributes[ATTR_GEN_AI_SYSTEM], "Google");
assert.strictEqual(attributes["llm.request.type"], "completion");
assert.strictEqual(attributes[ATTR_GEN_AI_REQUEST_MODEL], model);
assert.strictEqual(attributes[ATTR_GEN_AI_REQUEST_TOP_P], 0.9);
assert.strictEqual(attributes["gen_ai.prompt.0.content"], prompt);
assert.strictEqual(attributes["gen_ai.prompt.0.role"], "user");
assert.strictEqual(attributes[ATTR_GEN_AI_RESPONSE_MODEL], model);
assert.strictEqual(attributes["gen_ai.completion.0.role"], "model");
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the test file to see current state of imports and attribute usage
cat -n packages/instrumentation-vertexai/tests/gemini.test.ts | head -100
Length of output: 4209
🏁 Script executed:
# Also check the semantic attributes file
cat packages/ai-semantic-conventions/src/SemanticAttributes.ts
Length of output: 4520
🏁 Script executed:
# Check if there's an index file that re-exports these constants
cat packages/ai-semantic-conventions/src/index.ts
Length of output: 845
🏁 Script executed:
# Search for ATTR_GEN_AI_SYSTEM or similar constant definitions in the codebase
rg "ATTR_GEN_AI_SYSTEM|ATTR_GEN_AI_REQUEST_MODEL" --type ts --type tsx
Length of output: 95
🏁 Script executed:
# Check if these constants might be imported from OpenTelemetry in other files
rg "@opentelemetry/semantic-conventions" --type ts --type tsx -A 3
Length of output: 95
🏁 Script executed:
# Search for gen_ai. pattern usage in the instrumentation source code
rg "gen_ai\." packages/instrumentation-vertexai/src --type ts -B 2 -A 2
Length of output: 50
🏁 Script executed:
# Search more broadly for ATTR_GEN_AI constants
rg "ATTR_GEN_AI"
Length of output: 50381
🏁 Script executed:
# Check the vertexai instrumentation source code
cat packages/instrumentation-vertexai/src/vertexai-instrumentation.ts | head -150
Length of output: 4427
🏁 Script executed:
# Look for how attributes are set in the instrumentation
rg "gen_ai\." packages/instrumentation-vertexai/src -B 2 -A 2 | head -100
Length of output: 50
Add missing imports for ATTR_GEN_AI_* constants—test code won't compile without them.
Import from @opentelemetry/semantic-conventions/incubating:
import type * as vertexAiImport from "@google-cloud/vertexai";
+import {
+ ATTR_GEN_AI_REQUEST_MAX_TOKENS,
+ ATTR_GEN_AI_REQUEST_MODEL,
+ ATTR_GEN_AI_REQUEST_TOP_P,
+ ATTR_GEN_AI_RESPONSE_MODEL,
+ ATTR_GEN_AI_SYSTEM,
+} from "@opentelemetry/semantic-conventions/incubating";

Also applies to: 140-148
🤖 Prompt for AI Agents
In packages/instrumentation-vertexai/tests/gemini.test.ts around lines 87-94
(and also apply the same change at 140-148), the test references ATTR_GEN_AI_*
constants but they are not imported; add an import from
"@opentelemetry/semantic-conventions/incubating" bringing in the required
ATTR_GEN_AI_SYSTEM, ATTR_GEN_AI_REQUEST_MODEL, ATTR_GEN_AI_REQUEST_TOP_P,
ATTR_GEN_AI_RESPONSE_MODEL (and any other ATTR_GEN_AI_* constants used in the
file) and update the top-of-file imports accordingly so the test compiles.
import {
  ATTR_GEN_AI_AGENT_NAME,
  ATTR_GEN_AI_COMPLETION,
  ATTR_GEN_AI_PROMPT,
  ATTR_GEN_AI_REQUEST_MODEL,
  ATTR_GEN_AI_RESPONSE_MODEL,
  ATTR_GEN_AI_USAGE_INPUT_TOKENS,
  ATTR_GEN_AI_USAGE_OUTPUT_TOKENS,
} from "@opentelemetry/semantic-conventions/incubating";
🧩 Analysis chain
🏁 Script executed:
cat -n packages/traceloop-sdk/src/lib/tracing/manual.ts | head -160
Length of output: 5439
🏁 Script executed:
# Check what's actually exported from @opentelemetry/semantic-conventions
rg "ATTR_GEN_AI_AGENT_NAME|ATTR_GEN_AI_USAGE_INPUT_TOKENS|ATTR_GEN_AI_USAGE_OUTPUT_TOKENS" packages/ --type ts --type tsx
Length of output: 95
🏁 Script executed:
# Find references to the constants mentioned in the imports
rg "ATTR_GEN_AI_" packages/traceloop-sdk/src/lib/tracing/manual.ts -A 2 -B 2
Length of output: 1815
🌐 Web query:
@opentelemetry/api startActiveSpan signature and options parameter format for setting span attributes
💡 Result:
Signature (JS/TS)
- tracer.startActiveSpan(name: string, fn: (span: Span) => T): T
- tracer.startActiveSpan(name: string, options: SpanOptions, fn: (span: Span) => T): T
(there is also tracer.startSpan(name: string, options?: SpanOptions, context?: Context) which uses the same SpanOptions shape). [1][2]
SpanOptions (for setting attributes)
- The options object accepts an attributes field: attributes?: Attributes. Attributes is a map of key -> AttributeValue. In the JS implementation Attributes is typed as a plain object where keys are strings and values are AttributeValue. AttributeValue is a scalar or array of scalars (string | number | boolean | Array<string|number|boolean>). You can also set kind?: SpanKind and startTime?: TimeInput in the same options object. [2][3]
Example
- TypeScript example setting attributes when starting an active span:

      const result = tracer.startActiveSpan(
        'opName',
        { attributes: { 'http.method': 'GET', 'app.user_id': 123 } },
        (span) => {
          // span is active here
          span.end();
          return something;
        },
      ); // [2][3]
Sources
- Tracer API / startActiveSpan (opentelemetry-js docs). [1]
- SDK Span / SpanOptions & Attributes types (opentelemetry-js sources / typings). [2]
- OpenTelemetry Tracing spec (attributes rules & sampling note). [3]
🌐 Web query:
@opentelemetry/semantic-conventions incubating v1.38.0 ATTR_GEN_AI constants AGENT_NAME and USAGE token attributes
💡 Result:
Short answer: Yes — the GenAI semantic conventions in the incubating entry (semconv v1.38.0) define the agent-name attribute and the token-usage metric/attributes.
Key names you’ll see in the incubating semconv docs and v1.38.0 release:
- gen_ai.agent.name (agent name attribute). [1][2]
- gen_ai.client.token.usage (token-usage histogram metric). Required/associated attributes include gen_ai.token.type and provider/model attributes; token-count attributes used by some provider conventions include gen_ai.usage.input_tokens and gen_ai.usage.output_tokens. [1][3]
Sources:
- OpenTelemetry GenAI spans (agent name + agent spans). [opentelemetry.io]. [2]
- OpenTelemetry GenAI metrics (gen_ai.client.token.usage and gen_ai.token.type). [opentelemetry.io]. [1]
- OpenTelemetry semantic-conventions v1.38.0 release notes (v1.38.0). [github.com]. [4]
References:
[1] https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-metrics/
[2] https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-agent-spans/
[3] https://opentelemetry.io/docs/specs/semconv/gen-ai/openai/
[4] https://github.com/open-telemetry/semantic-conventions/releases/tag/v1.38.0
🏁 Script executed:
cat -n packages/traceloop-sdk/src/lib/tracing/manual.ts | sed -n '160,200p'

Repository: traceloop/openllmetry-js
Length of output: 1453
Fix startActiveSpan options shape and import semantic constants from correct source
At line 153 in withVectorDBCall, the options object is missing the attributes wrapper. Per the OpenTelemetry API, attributes must be nested under an attributes key:
      return getTracer().startActiveSpan(
        `${vendor}.${type}`,
    -   { [SpanAttributes.LLM_REQUEST_TYPE]: type },
    +   { attributes: { [SpanAttributes.LLM_REQUEST_TYPE]: type } },
        entityContext,
        (span: Span) => {

Additionally, import ATTR_GEN_AI_AGENT_NAME, ATTR_GEN_AI_USAGE_INPUT_TOKENS, and ATTR_GEN_AI_USAGE_OUTPUT_TOKENS from `@traceloop/ai-semantic-conventions` (lines 8-16) instead of `@opentelemetry/semantic-conventions/incubating`.
Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 0
🧹 Nitpick comments (1)
packages/instrumentation-langchain/tsconfig.test.json (1)
1-9: Consider `noEmit` (or a separate `outDir`) for the test tsconfig; `references: []` may be redundant.

If this config is only used to typecheck/run tests (e.g., via ts-mocha/ts-node), setting `"noEmit": true` can avoid generating `dist/test/**` artifacts (or alternatively use a dedicated `outDir` like `dist-test`). Also, an explicit empty `"references": []` usually isn't needed unless tooling expects the key. Based on learnings, use ts-mocha as the TypeScript test runner for package tests.
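As a sketch of what that could look like (assuming the package's test config extends a shared `tsconfig.base.json`; the paths and options are illustrative, not this repo's actual file):

```json
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "rootDir": ".",
    "noEmit": true
  },
  "include": ["src/**/*.ts", "test/**/*.ts"]
}
```

With `noEmit`, running `tsc -p tsconfig.test.json` typechecks the sources and tests without producing `dist/test/**` output.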
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (24)
- packages/instrumentation-anthropic/package.json (2 hunks)
- packages/instrumentation-anthropic/tsconfig.test.json (1 hunks)
- packages/instrumentation-bedrock/package.json (2 hunks)
- packages/instrumentation-bedrock/tsconfig.test.json (1 hunks)
- packages/instrumentation-chromadb/package.json (2 hunks)
- packages/instrumentation-chromadb/tsconfig.test.json (1 hunks)
- packages/instrumentation-cohere/package.json (2 hunks)
- packages/instrumentation-cohere/tsconfig.test.json (1 hunks)
- packages/instrumentation-langchain/package.json (2 hunks)
- packages/instrumentation-langchain/tsconfig.test.json (1 hunks)
- packages/instrumentation-llamaindex/package.json (2 hunks)
- packages/instrumentation-llamaindex/tsconfig.test.json (1 hunks)
- packages/instrumentation-mcp/package.json (2 hunks)
- packages/instrumentation-mcp/tsconfig.test.json (1 hunks)
- packages/instrumentation-openai/package.json (2 hunks)
- packages/instrumentation-openai/tsconfig.test.json (1 hunks)
- packages/instrumentation-pinecone/package.json (2 hunks)
- packages/instrumentation-pinecone/tsconfig.test.json (1 hunks)
- packages/instrumentation-qdrant/package.json (1 hunks)
- packages/instrumentation-qdrant/tsconfig.test.json (1 hunks)
- packages/instrumentation-together/package.json (2 hunks)
- packages/instrumentation-together/tsconfig.test.json (1 hunks)
- packages/instrumentation-vertexai/package.json (2 hunks)
- packages/instrumentation-vertexai/tsconfig.test.json (1 hunks)
✅ Files skipped from review due to trivial changes (5)
- packages/instrumentation-llamaindex/tsconfig.test.json
- packages/instrumentation-openai/tsconfig.test.json
- packages/instrumentation-cohere/tsconfig.test.json
- packages/instrumentation-mcp/tsconfig.test.json
- packages/instrumentation-bedrock/tsconfig.test.json
🚧 Files skipped from review as they are similar to previous changes (5)
- packages/instrumentation-vertexai/package.json
- packages/instrumentation-chromadb/package.json
- packages/instrumentation-together/package.json
- packages/instrumentation-openai/package.json
- packages/instrumentation-anthropic/package.json
🧰 Additional context used
📓 Path-based instructions (2)
packages/instrumentation-*/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place each provider integration in its own package under packages/instrumentation-[provider]/
Files:
- packages/instrumentation-anthropic/tsconfig.test.json
- packages/instrumentation-qdrant/tsconfig.test.json
- packages/instrumentation-pinecone/tsconfig.test.json
- packages/instrumentation-vertexai/tsconfig.test.json
- packages/instrumentation-langchain/package.json
- packages/instrumentation-qdrant/package.json
- packages/instrumentation-bedrock/package.json
- packages/instrumentation-langchain/tsconfig.test.json
- packages/instrumentation-llamaindex/package.json
- packages/instrumentation-cohere/package.json
- packages/instrumentation-pinecone/package.json
- packages/instrumentation-together/tsconfig.test.json
- packages/instrumentation-mcp/package.json
- packages/instrumentation-chromadb/tsconfig.test.json
packages/*/package.json
📄 CodeRabbit inference engine (CLAUDE.md)
Use workspace:* for intra-repo package dependencies in package.json
Files:
- packages/instrumentation-langchain/package.json
- packages/instrumentation-qdrant/package.json
- packages/instrumentation-bedrock/package.json
- packages/instrumentation-llamaindex/package.json
- packages/instrumentation-cohere/package.json
- packages/instrumentation-pinecone/package.json
- packages/instrumentation-mcp/package.json
🧠 Learnings (12)
📓 Common learnings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
- packages/instrumentation-anthropic/tsconfig.test.json
- packages/instrumentation-qdrant/tsconfig.test.json
- packages/instrumentation-pinecone/tsconfig.test.json
- packages/instrumentation-vertexai/tsconfig.test.json
- packages/instrumentation-langchain/package.json
- packages/instrumentation-qdrant/package.json
- packages/instrumentation-bedrock/package.json
- packages/instrumentation-langchain/tsconfig.test.json
- packages/instrumentation-llamaindex/package.json
- packages/instrumentation-cohere/package.json
- packages/instrumentation-pinecone/package.json
- packages/instrumentation-together/tsconfig.test.json
- packages/instrumentation-mcp/package.json
- packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Applied to files:
- packages/instrumentation-anthropic/tsconfig.test.json
- packages/instrumentation-qdrant/tsconfig.test.json
- packages/instrumentation-pinecone/tsconfig.test.json
- packages/instrumentation-vertexai/tsconfig.test.json
- packages/instrumentation-langchain/package.json
- packages/instrumentation-bedrock/package.json
- packages/instrumentation-langchain/tsconfig.test.json
- packages/instrumentation-llamaindex/package.json
- packages/instrumentation-cohere/package.json
- packages/instrumentation-pinecone/package.json
- packages/instrumentation-together/tsconfig.test.json
- packages/instrumentation-mcp/package.json
- packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Use ts-mocha as the TypeScript test runner for package tests
Applied to files:
- packages/instrumentation-anthropic/tsconfig.test.json
- packages/instrumentation-qdrant/tsconfig.test.json
- packages/instrumentation-vertexai/tsconfig.test.json
- packages/instrumentation-langchain/package.json
- packages/instrumentation-qdrant/package.json
- packages/instrumentation-bedrock/package.json
- packages/instrumentation-langchain/tsconfig.test.json
- packages/instrumentation-llamaindex/package.json
- packages/instrumentation-cohere/package.json
- packages/instrumentation-pinecone/package.json
- packages/instrumentation-together/tsconfig.test.json
- packages/instrumentation-mcp/package.json
- packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
- packages/instrumentation-anthropic/tsconfig.test.json
- packages/instrumentation-qdrant/tsconfig.test.json
- packages/instrumentation-pinecone/tsconfig.test.json
- packages/instrumentation-vertexai/tsconfig.test.json
- packages/instrumentation-langchain/package.json
- packages/instrumentation-bedrock/package.json
- packages/instrumentation-langchain/tsconfig.test.json
- packages/instrumentation-llamaindex/package.json
- packages/instrumentation-cohere/package.json
- packages/instrumentation-pinecone/package.json
- packages/instrumentation-together/tsconfig.test.json
- packages/instrumentation-mcp/package.json
- packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must extract request/response data and token usage from wrapped calls
Applied to files:
- packages/instrumentation-anthropic/tsconfig.test.json
- packages/instrumentation-qdrant/tsconfig.test.json
- packages/instrumentation-pinecone/tsconfig.test.json
- packages/instrumentation-vertexai/tsconfig.test.json
- packages/instrumentation-langchain/package.json
- packages/instrumentation-qdrant/package.json
- packages/instrumentation-bedrock/package.json
- packages/instrumentation-langchain/tsconfig.test.json
- packages/instrumentation-llamaindex/package.json
- packages/instrumentation-cohere/package.json
- packages/instrumentation-pinecone/package.json
- packages/instrumentation-together/tsconfig.test.json
- packages/instrumentation-mcp/package.json
- packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Applied to files:
- packages/instrumentation-qdrant/tsconfig.test.json
- packages/instrumentation-pinecone/tsconfig.test.json
- packages/instrumentation-vertexai/tsconfig.test.json
- packages/instrumentation-langchain/tsconfig.test.json
- packages/instrumentation-together/tsconfig.test.json
- packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must capture and record errors appropriately
Applied to files:
- packages/instrumentation-pinecone/tsconfig.test.json
- packages/instrumentation-bedrock/package.json
- packages/instrumentation-langchain/tsconfig.test.json
- packages/instrumentation-chromadb/tsconfig.test.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
- packages/instrumentation-langchain/package.json
- packages/instrumentation-bedrock/package.json
- packages/instrumentation-llamaindex/package.json
- packages/instrumentation-cohere/package.json
- packages/instrumentation-pinecone/package.json
- packages/instrumentation-mcp/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/package.json : When adding a new instrumentation package, add it to the main SDK dependencies
Applied to files:
- packages/instrumentation-langchain/package.json
- packages/instrumentation-bedrock/package.json
- packages/instrumentation-llamaindex/package.json
- packages/instrumentation-cohere/package.json
- packages/instrumentation-pinecone/package.json
- packages/instrumentation-mcp/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/instrumentation-llamaindex/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
Repo: traceloop/openllmetry-js PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/instrumentation-llamaindex/package.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and test
🔇 Additional comments (18)
packages/instrumentation-vertexai/tsconfig.test.json (1)
1-9: Configuration looks good. This TypeScript test configuration file is appropriate and correctly structured. It properly extends the base configuration and includes both source and test files for compilation.

packages/instrumentation-anthropic/tsconfig.test.json (1)

1-9: Configuration looks good. This test TypeScript configuration properly extends the base config (which includes the updated `moduleResolution: node` per the PR objectives) and correctly includes both source and test files. The setup aligns with ts-mocha test runner expectations and follows the repo's established patterns.

packages/instrumentation-llamaindex/package.json (1)

13-13: Changes align with PR objectives for the OpenTelemetry semantic conventions migration. The test script update to `tsconfig.test.json` and the version bump of `@opentelemetry/semantic-conventions` to `^1.38.0` are consistent with the broader repository alignment to newer OpenTelemetry semantic conventions, and the `workspace:*` usage for `@traceloop/ai-semantic-conventions` follows the coding guidelines correctly.

Verify that `tsconfig.test.json` exists in this package and is properly configured for TypeScript test compilation. If it doesn't exist or has not been added, it should be created with test-specific configurations.

Also applies to: 43-43
packages/instrumentation-together/tsconfig.test.json (1)
1-9: Configuration looks good. This test TypeScript configuration follows the established pattern for per-package test configurations and correctly extends the base config. The includes cover both source and test files, and the compiler options are reasonable.
packages/instrumentation-qdrant/tsconfig.test.json (1)
1-9: Configuration looks good. The test configuration correctly extends the base tsconfig and includes both src and test files with appropriate compiler options for test builds.
packages/instrumentation-qdrant/package.json (2)
40-46: LGTM on dependency configuration. The package correctly uses `workspace:*` for the internal `@traceloop/ai-semantic-conventions` dependency and maintains proper OpenTelemetry and instrumentation dependencies. This aligns with the coding guidelines and PR objectives for GenAI semantic convention alignment.

13-13: No issues found. The `tsconfig.test.json` file already exists in `packages/instrumentation-qdrant/`, and this change aligns with the consistent pattern established across all other instrumentation packages in the monorepo.

packages/instrumentation-chromadb/tsconfig.test.json (1)
1-9: Configuration looks good. The test TypeScript configuration is correctly structured to compile both source and test files with appropriate output and root directory settings. The extension of the base config and explicit file inclusion patterns are sound.
packages/instrumentation-mcp/package.json (3)
13-13: Test configuration updated to use a separate tsconfig. The test script now references `tsconfig.test.json` instead of `tsconfig.json`, aligning with the PR's TypeScript configuration standardization. Ensure that `tsconfig.test.json` exists in the package root and is properly configured for the test environment.

44-44: Semantic conventions version updated to support new GenAI attributes. The bump from `^1.36.0` to `^1.38.0` aligns with the PR's goal to adopt OpenTelemetry's new `ATTR_GEN_AI_*` semantic attributes. Verify that the MCP instrumentation implementation uses the new semantic convention constants from `@opentelemetry/semantic-conventions` rather than hardcoding attribute strings.

48-59: Inconsistency between summary and provided package.json. The AI summary claims that `@opentelemetry/semantic-conventions` was updated in `devDependencies` from `^1.36.0` to `^1.38.0`, but the provided code does not show `@opentelemetry/semantic-conventions` in the `devDependencies` section, nor is there a change marker (`~`) on any devDependencies line. Additionally, `@opentelemetry/semantic-conventions` should only be a runtime dependency, not a dev dependency. Please clarify whether this was an unintended summary error or if the code snapshot is incomplete.

packages/instrumentation-cohere/package.json (1)
44-44: Semantic conventions version bump is necessary for new GenAI attributes. The dependency bump from `^1.36.0` to `^1.38.0` enables the new `ATTR_GEN_AI_*` constants required by this PR. This aligns with the migration to OpenTelemetry semantic conventions for GenAI telemetry.

Please confirm that the instrumentation code in this package has been updated to use the new `ATTR_GEN_AI_*` attributes from the bumped version of `@opentelemetry/semantic-conventions`.

packages/instrumentation-pinecone/tsconfig.test.json (1)

1-9: Test configuration is correct and consistent with established patterns. The tsconfig.test.json file properly extends tsconfig.base.json (which includes `moduleResolution: node`), uses the correct test directory reference, and matches the structure used across all other instrumentation packages in the repository.

packages/instrumentation-pinecone/package.json (1)

40-47: The `@opentelemetry/semantic-conventions` v1.38.0 bump is safe. This package uses `@traceloop/ai-semantic-conventions` for semantic attributes and does not depend on the k8s metrics that changed in v1.38.0. No compatibility issues with the current OpenTelemetry package versions.

packages/instrumentation-bedrock/package.json (2)
13-13: The test script change to use `tsconfig.test.json` is properly supported. The config file exists and is correctly configured to include both src and test directories, with appropriate compiler options. This change is consistent across all instrumentation packages in the repository.

44-44: Semantic conventions version bump is appropriate and consistently applied. The `@opentelemetry/semantic-conventions` dependency was updated to `^1.38.0` across all instrumentation packages, aligning with the PR's goal of adopting newer OpenTelemetry semantic conventions for GenAI telemetry. The bedrock instrumentation correctly uses semantic convention constants (`ATTR_GEN_AI_*`, `SpanAttributes`) throughout its request and response handling for all supported vendors (ai21, amazon, anthropic, cohere, meta), with no hardcoded attribute strings.
packages/instrumentation-langchain/package.json (2)
13-13: No action needed: `tsconfig.test.json` is properly configured and test files exist. The test script change is correctly set up. The `tsconfig.test.json` properly extends the base config, includes both source and test files, and the test glob pattern matches the existing test file location.

45-45: `@opentelemetry/semantic-conventions@^1.38.0` is compatible with your OTel stack. Version 1.38.0 has no peer dependencies and requires only Node >=14, which matches your engines constraint. The dependency is safely compatible with the `@opentelemetry/api` (^1.9.0), `@opentelemetry/core` (^2.0.1), and `@opentelemetry/instrumentation` (^0.203.0) versions used in this package. The codebase correctly uses both `@opentelemetry/semantic-conventions/incubating` for standard GenAI attributes and `@traceloop/ai-semantic-conventions` for custom semantic definitions.
Important
Looks good to me! 👍
Reviewed d7df478 in 6 minutes and 17 seconds. Click for details.
- Reviewed 37 lines of code in 1 file
- Skipped 0 files when reviewing.
- Skipped posting 1 draft comment. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-agent-integration.test.ts:503
- Draft comment:
Good update: Replacing SpanAttributes.GEN_AI_AGENT_NAME with ATTR_GEN_AI_AGENT_NAME across outer, inner, tool, and child spans aligns the test with the new opentelemetry semantic conventions. - Reason this comment was not posted:
Comment did not seem useful. Confidence is useful =0%<= threshold50%This comment is purely informative, explaining a change that aligns with new conventions. It doesn't provide a suggestion, ask for confirmation, or highlight a potential issue.
Workflow ID: wflow_Pqscx7LFOwM34CIU
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Fixes TLP-1235
Important
Consolidates tracing attributes under GenAI namespace, updates tests, and upgrades dependencies to standardize AI SDK tracing with OpenTelemetry conventions.
- `SemanticAttributes.ts`.
- `agent_decorator.test.ts`, `ai-sdk-agent-integration.test.ts`, `ai-sdk-integration.test.ts`, `ai-sdk-transformations.test.ts`, and `decorators.test.ts` to reflect new GenAI attributes.
- `@opentelemetry/semantic-conventions` to `^1.38.0` in `package.json` files.
- `package.json` files.

This description was created by
for d7df478. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit
New Features
Behavior
Tests
Chores
✏️ Tip: You can customize this high-level summary in your review settings.