Conversation
- Add ImageUploader class for uploading base64 images to backend
- Add image handling utilities for base64 encoding and file processing
- Add image generation instrumentation wrappers for Images API
- Add uploadBase64Image callback to OpenAI instrumentation config
- Add vision support infrastructure for chat completions
- Update Traceloop SDK to provide image upload callback to instrumentations

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Remove unused imports and fix type annotations in vision-utils
- Remove incomplete vision processing from startSpan method
- Successfully build both OpenAI instrumentation and traceloop SDK packages
- Add simple_image_generation.mjs - minimal example matching user request
- Add openai_image_generation_sample.mjs - comprehensive sample with error handling
- Add test scripts for ImageUploader and instrumentation verification
- Successfully tested image generation with real OpenAI API
- Instrumentation working correctly with proper tracing
… samples

- Add dotenv dependency to sample-app package using pnpm workspaces
- Update both sample apps to load environment variables from .env file
- Create .env.example template with clear instructions
- Add npm scripts for easy sample execution (run:image_generation, run:image_generation_full)
- Add comprehensive IMAGE_GENERATION_README.md with setup instructions
- Include API key setup, troubleshooting, and cost information

Now users can easily test image generation with proper environment variable management.
Implements comprehensive image generation instrumentation matching the Python implementation, with automatic image upload to the Traceloop backend.

Features:
- Full support for generate(), edit(), createVariation() with span creation
- Two-step image upload process with callback mechanism
- Accurate token usage tracking based on OpenAI pricing
- TypeScript support using proper OpenAI SDK types
- Robust error handling and async/await patterns

Technical changes:
- Enhanced image-wrappers.ts with safe buffer processing
- Added ImageUploader for backend integration
- Fixed OTLP export timing (increased timeout 3s→10s)
- Updated sample apps to TypeScript with dotenv support
- Added comprehensive test files for isolated testing

All spans now export successfully with complete image data and usage metrics.
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Caution: Review failed. The pull request is closed.

Walkthrough: Adds OpenAI image API instrumentation: new image wrappers for generate/edit/variation, token-estimation logic, optional base64 image upload support and an ImageUploader, wiring of the upload callback into tracing init, sample-app scripts/tests for image flows, and updated/added HAR recordings.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App
    participant Wrapper as Image Wrapper
    participant Tracer as OTel Tracer
    participant OpenAI as OpenAI.Images API
    participant Uploader as ImageUploader (optional)
    App->>Wrapper: images.generate(params)
    activate Wrapper
    Wrapper->>Tracer: startSpan("openai.images.generate")
    Wrapper->>OpenAI: call generate(params)
    OpenAI-->>Wrapper: response (urls or base64)
    alt response contains base64
        Wrapper->>Uploader: uploadBase64Image(traceId, spanId, name, base64)
        Uploader-->>Wrapper: uploaded URL
    end
    Wrapper->>Tracer: set response attrs (usage, tokens, urls)
    Wrapper->>Tracer: endSpan()
    Wrapper-->>App: response
    deactivate Wrapper
```
```mermaid
sequenceDiagram
    participant App
    participant Tracing as startTracing()
    participant Images as ImageUploader
    participant OpenAIInst as OpenAIInstrumentation
    App->>Tracing: initialize({ apiKey, baseUrl, ... })
    Tracing->>Images: new ImageUploader(baseUrl, apiKey)
    Tracing->>OpenAIInst: configure({ uploadBase64Image: Images.uploadBase64Image })
    OpenAIInst-->>Tracing: instrument OpenAI (incl. Images)
    Tracing-->>App: tracing started
```
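For orientation, the callback contract in the second diagram can be sketched as below. The type matches the `ImageUploadCallback` signature discussed later in this review; the endpoint path and the injected `post` transport are purely illustrative stand-ins for the real backend call:

```typescript
// Sketch of the upload-callback wiring shown in the diagram above.
// Signature mirrors the reviewed ImageUploadCallback:
// (traceId, spanId, filename, base64Data) => Promise<string>.
type ImageUploadCallback = (
  traceId: string,
  spanId: string,
  filename: string,
  base64Data: string,
) => Promise<string>;

class ImageUploader {
  constructor(
    private baseUrl: string,
    private apiKey: string,
    // Injected transport so the sketch runs without a network; the real
    // implementation would POST the payload with fetch() and auth headers.
    private post: (url: string, body: string) => Promise<string> = async (url) => url,
  ) {}

  // Arrow property so the method can be handed over as a bare callback,
  // as in `configure({ uploadBase64Image: images.uploadBase64Image })`.
  uploadBase64Image: ImageUploadCallback = async (traceId, spanId, filename, base64Data) => {
    // Hypothetical endpoint layout, for illustration only.
    const target = `${this.baseUrl}/v1/traces/${traceId}/spans/${spanId}/images/${filename}`;
    return this.post(target, base64Data);
  };
}
```

Passing the bound arrow property (rather than `uploader.uploadBase64Image.bind(uploader)`) keeps `this` intact when the instrumentation calls the callback later.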
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
- Remove IMAGE_GENERATION_README.md (redundant documentation)
- Remove dev-run.js (not needed for the implementation)
Caution: Changes requested ❌
Reviewed everything up to a6bd480 in 2 minutes and 8 seconds.
- Reviewed 1633 lines of code in 16 files
- Skipped 3 files when reviewing
- Skipped posting 3 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/src/instrumentation.ts:118
   - Draft comment: Consider replacing direct console.log debug statements with a proper logging mechanism or removing them in production.
   - Reason this comment was not posted: Confidence changes required: 50% <= threshold 50%
2. packages/traceloop-sdk/src/lib/images/image-uploader.ts:39
   - Draft comment: Ensure 'fetch' is polyfilled for Node versions <18 since it's not available by default in Node 14.
   - Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 20% vs. threshold = 50%). While technically correct about fetch compatibility, this falls into the category of dependency/environment concerns. We don't know what Node versions this needs to support, and handling Node compatibility should be a project-wide concern, not a file-specific one. The comment also starts with "Ensure that...", which is a red flag per the rules. The comment identifies a real technical concern about Node.js compatibility that could cause runtime errors. Maybe this is important enough to keep despite the rules? No: per the rules, we should not make speculative comments or comments about dependencies/environments. This is better handled at the project configuration level. Delete the comment. While technically accurate, it violates multiple rules about not commenting on dependencies and not making "ensure that..."-style comments.
3. dev-run.js:27
   - Draft comment: Typo: 'OpenLLMetry' on this line might be a misspelling of 'OpenTelemetry'. Please verify the intended product name.
   - Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 20% vs. threshold = 50%). The comment assumes the product name should be OpenTelemetry, but the package names suggest this might be a different product. The "LL" in OpenLLMetry could be intentional, possibly referring to Large Language models given the OpenAI instrumentation. Without more context about the product, we can't be sure this is actually a typo. It could genuinely be a typo of OpenTelemetry; the presence of tracing/instrumentation does suggest telemetry. Even so, we don't have enough context to be certain, and making assumptions about product names without full context could lead to incorrect changes. Delete the comment, since product naming is likely intentional.
Workflow ID: wflow_USXbK1zY0ipWSBz9
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Important: Looks good to me! 👍
Reviewed 595501f in 32 seconds.
- Reviewed 149 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. dev-run.js:1
   - Draft comment: Removed dev-run.js. Confirm this script is no longer needed and update build processes or documentation accordingly.
   - Reason this comment was not posted: Confidence changes required: 33% <= threshold 50%
2. packages/sample-app/IMAGE_GENERATION_README.md:1
   - Draft comment: Removed IMAGE_GENERATION_README.md. Ensure the updated image generation documentation is provided elsewhere if still necessary.
   - Reason this comment was not posted: Confidence changes required: 33% <= threshold 50%
Workflow ID: wflow_C4aowG5QAX3ezBcu
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
The image-utils.ts file contained duplicate functionality that was already implemented more efficiently in image-wrappers.ts. All image processing is now handled directly in the instrumentation wrappers.

- Remove unused image-utils.ts with redundant image processing functions
- Clean up images/index.ts exports
- Reduces code duplication and maintains single responsibility
Important: Looks good to me! 👍
Reviewed 05acd52 in 1 minute and 4 seconds.
- Reviewed 198 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 1 draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/images/index.ts:1
   - Draft comment: Removed image-utils exports could break consumers relying on them. Confirm intentional removal or add alternative exports for backwards compatibility.
   - Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 20% vs. threshold = 50%). While removing exports is a breaking change, this appears to be intentional refactoring. The comment violates rules by asking for confirmation ("Confirm intentional removal") and being speculative. We don't have evidence that these exports were part of the public API or that consumers actually use them; without that context, we can't be certain this is an issue. The removed exports could genuinely break downstream dependencies, and major refactoring without migration paths can cause real problems, but the comment still violates multiple rules by being speculative and asking for confirmation. We need strong evidence of an issue to keep comments, so delete it.
Workflow ID: wflow_ROPNDwwiZIrP3gNQ
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
- Remove debug console.log statements from instrumentation and tracing
- Remove unused vision-utils.ts (119 lines of unused vision API code)
- Clean up production code by removing development debugging
Actionable comments posted: 22
🔭 Outside diff range comments (3)
packages/traceloop-sdk/src/lib/tracing/index.ts (3)
92-127: Avoid duplicating instrumentations on repeated init calls

Most instrumentations here are (re)initialized unconditionally, pushing duplicates into the array if `initInstrumentations` is called again. This can lead to double patching and confusing behavior. Apply the same "create-or-reuse" pattern you used for OpenAI/Anthropic across all instrumentations:

```diff
- cohereInstrumentation = new CohereInstrumentation({ exceptionLogger });
- instrumentations.push(cohereInstrumentation);
+ if (!cohereInstrumentation) {
+   cohereInstrumentation = new CohereInstrumentation({ exceptionLogger });
+   instrumentations.push(cohereInstrumentation);
+ }
  ...
- vertexaiInstrumentation = new VertexAIInstrumentation({ exceptionLogger });
- instrumentations.push(vertexaiInstrumentation);
+ if (!vertexaiInstrumentation) {
+   vertexaiInstrumentation = new VertexAIInstrumentation({ exceptionLogger });
+   instrumentations.push(vertexaiInstrumentation);
+ }
  ...
- aiplatformInstrumentation = new AIPlatformInstrumentation({ exceptionLogger });
- instrumentations.push(aiplatformInstrumentation);
+ if (!aiplatformInstrumentation) {
+   aiplatformInstrumentation = new AIPlatformInstrumentation({ exceptionLogger });
+   instrumentations.push(aiplatformInstrumentation);
+ }
  ...
- bedrockInstrumentation = new BedrockInstrumentation({ exceptionLogger });
- instrumentations.push(bedrockInstrumentation);
+ if (!bedrockInstrumentation) {
+   bedrockInstrumentation = new BedrockInstrumentation({ exceptionLogger });
+   instrumentations.push(bedrockInstrumentation);
+ }
  ...
- pineconeInstrumentation = new PineconeInstrumentation({ exceptionLogger });
- instrumentations.push(pineconeInstrumentation);
+ if (!pineconeInstrumentation) {
+   pineconeInstrumentation = new PineconeInstrumentation({ exceptionLogger });
+   instrumentations.push(pineconeInstrumentation);
+ }
  ...
- langchainInstrumentation = new LangChainInstrumentation({ exceptionLogger });
- instrumentations.push(langchainInstrumentation);
+ if (!langchainInstrumentation) {
+   langchainInstrumentation = new LangChainInstrumentation({ exceptionLogger });
+   instrumentations.push(langchainInstrumentation);
+ }
  ...
- llamaIndexInstrumentation = new LlamaIndexInstrumentation({ exceptionLogger });
- instrumentations.push(llamaIndexInstrumentation);
+ if (!llamaIndexInstrumentation) {
+   llamaIndexInstrumentation = new LlamaIndexInstrumentation({ exceptionLogger });
+   instrumentations.push(llamaIndexInstrumentation);
+ }
  ...
- chromadbInstrumentation = new ChromaDBInstrumentation({ exceptionLogger });
- instrumentations.push(chromadbInstrumentation);
+ if (!chromadbInstrumentation) {
+   chromadbInstrumentation = new ChromaDBInstrumentation({ exceptionLogger });
+   instrumentations.push(chromadbInstrumentation);
+ }
  ...
- qdrantInstrumentation = new QdrantInstrumentation({ exceptionLogger });
- instrumentations.push(qdrantInstrumentation);
+ if (!qdrantInstrumentation) {
+   qdrantInstrumentation = new QdrantInstrumentation({ exceptionLogger });
+   instrumentations.push(qdrantInstrumentation);
+ }
  ...
- togetherInstrumentation = new TogetherInstrumentation({ exceptionLogger });
- instrumentations.push(togetherInstrumentation);
+ if (!togetherInstrumentation) {
+   togetherInstrumentation = new TogetherInstrumentation({ exceptionLogger });
+   instrumentations.push(togetherInstrumentation);
+ }
```
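The create-or-reuse guard recommended above reduces to a minimal, self-contained sketch. The `Instrumentation` interface and the object literal are stand-ins for the real instrumentation classes, which are not shown in this review:

```typescript
// Create-or-reuse guard: each instrumentation is constructed once and
// pushed into the shared array once, no matter how many times
// initInstrumentations() runs.
interface Instrumentation {
  name: string;
}

const instrumentations: Instrumentation[] = [];
let cohereInstrumentation: Instrumentation | undefined;

function initInstrumentations(): void {
  if (!cohereInstrumentation) {
    // Stands in for `new CohereInstrumentation({ exceptionLogger })`.
    cohereInstrumentation = { name: "cohere" };
    instrumentations.push(cohereInstrumentation);
  }
}
```

Calling `initInstrumentations()` repeatedly leaves exactly one entry in the array, which is the behavior the diff above aims for across all instrumentations.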
283-289: Default headers should use the resolved apiKey

If `options.headers` and `TRACELOOP_HEADERS` aren't provided, you're using `options.apiKey`, which may be undefined even when a valid `apiKey` was resolved from env. Use the computed `apiKey` variable.

```diff
- const headers =
+ const headers =
    options.headers ||
    (process.env.TRACELOOP_HEADERS
      ? parseKeyPairsIntoRecord(process.env.TRACELOOP_HEADERS)
-     : { Authorization: `Bearer ${options.apiKey}` });
+     : (apiKey ? { Authorization: `Bearer ${apiKey}` } : {}));
```
290-299: Use the resolved baseUrl/apiKey for exporter and span processor

`traceExporter` and `createSpanProcessor` reference `options.baseUrl`/`options.apiKey`, which might be undefined when those come from env vars. Use the previously computed `baseUrl`/`apiKey`.

```diff
- const traceExporter =
+ const traceExporter =
    options.exporter ??
    new OTLPTraceExporter({
-     url: `${options.baseUrl}/v1/traces`,
+     url: `${baseUrl}/v1/traces`,
      headers,
    });
  ...
  _spanProcessor = createSpanProcessor({
-   apiKey: options.apiKey,
-   baseUrl: options.baseUrl,
+   apiKey,
+   baseUrl,
    disableBatch: options.disableBatch,
    exporter: traceExporter,
    headers,
    allowedInstrumentationLibraries: ALL_INSTRUMENTATION_LIBRARIES,
  });
```
🧹 Nitpick comments (26)
.vscode/settings.json (1)

1-22: Remove workspace-specific theming from VCS

Color customizations are personal and can distract in shared repos. Recommend removing this file or limiting .vscode to shared, neutral settings (e.g., recommended extensions). Do you want a PR-ready commit to drop this and add a .gitignore entry for .vscode/settings.json?
dev-run.js (1)

32-32: Ensure dependency-aware builds

Include `--with-deps` so Nx builds any upstream libs in the right order.

```diff
- run('pnpm nx run-many --targets=build --projects=@traceloop/instrumentation-openai,@traceloop/node-server-sdk');
+ run('pnpm nx run-many --targets=build --with-deps --projects=@traceloop/instrumentation-openai,@traceloop/node-server-sdk');
```

packages/instrumentation-openai/src/types.ts (2)
3-8: Avoid type drift: co-locate or clearly mark sync with SDK's ImageUploadCallback

This type duplicates the SDK definition at packages/traceloop-sdk/src/lib/images/image-utils.ts. To prevent divergence, either import from a shared types package or add a prominent note to keep these in sync. Consider at minimum documenting the coupling:

```diff
  export type ImageUploadCallback = (
    traceId: string,
    spanId: string,
    filename: string,
    base64Data: string
  ) => Promise<string>;
+
+/**
+ * NOTE: Keep in sync with packages/traceloop-sdk/src/lib/images/image-utils.ts::ImageUploadCallback.
+ * If changing the signature here, update the SDK type as well.
+ */
```
28-33: Document privacy/safety constraints and invocation gating

Good addition. Please clarify in the JSDoc when this callback is invoked and call out privacy implications: raw pixels can contain sensitive data. Recommend documenting that:
- It's only used when traceContent is true.
- Implementers should apply redaction/minification and size limits before upload.

```diff
 /**
  * Callback function for uploading base64-encoded images.
  * Used for image generation and vision capabilities.
+ *
+ * Security: This receives raw image data which may contain sensitive content.
+ * Only enable if you have explicit consent and a secure storage destination.
+ * Recommended: redact/minify, enforce size limits, and avoid PII where possible.
+ * Invocation: Only called when `traceContent === true`.
  * @default undefined
  */
 uploadBase64Image?: ImageUploadCallback;
```

packages/sample-app/IMAGE_GENERATION_README.md (3)
5-6: Add Node.js version prerequisite

State Node.js >= 18 (for global fetch and ESM). This avoids confusion when running samples.

```diff
 ## 🚀 Quick Start
+
+### Prerequisites
+- Node.js 18+ (required for global `fetch` and ESM)
+- pnpm installed
```
15-23: Document TRACELOOP_BASE_URL used by scripts

Scripts read TRACELOOP_BASE_URL; include it in the env section to aid self-hosted/testing.

```diff
 # Optional: Traceloop API Key for tracing (recommended)
 TRACELOOP_API_KEY=your_traceloop_key_here
+
+# Optional: Traceloop Base URL (for self-hosted or staging)
+TRACELOOP_BASE_URL=https://app.traceloop.com
```
25-31: Include single-operation test commands (generate/edit) for faster iteration

The repo includes test scripts for generate-only and edit-only flows. Add them here.

````diff
 **Image Generation Sample** (with variations and error handling):
 ```bash
 cd packages/sample-app
 pnpm run:image_generation
 ```
+
+Quick tests:
+```bash
+# Generate only
+pnpm test:generate
+# Edit only (requires test_edit_image.png present)
+pnpm test:edit
+```
````

packages/sample-app/src/test_generate_only.ts (3)

5-9: Ensure fetch availability and TypeScript typing

This file uses fetch but doesn't import it. On Node 18+, fetch is available at runtime, but TypeScript may still complain unless the DOM lib is enabled. Two options:
- Add "lib": ["ES2022", "DOM"] to tsconfig for the sample-app.
- Or import fetch explicitly (works well in Node 18+ with Undici).

```diff
 import * as dotenv from "dotenv";
 import { initialize } from "@traceloop/node-server-sdk";
 import * as fs from "fs";
+// Optional: if TS complains about fetch or you run on Node < 18, use undici:
+// import { fetch } from "undici";
```
9-21: Fail fast on missing API keys

Provide a clear error if required env vars are missing before making API calls.

```diff
 // Load environment variables from .env file
 dotenv.config();

+// Validate required environment variables early
+const requireEnv = (name: string) => {
+  const v = process.env[name];
+  if (!v) {
+    console.error(`Missing required env var: ${name}`);
+    process.exit(1);
+  }
+  return v;
+};
+requireEnv("OPENAI_API_KEY");
+// Optional but recommended:
+if (!process.env.TRACELOOP_API_KEY) {
+  console.warn("TRACELOOP_API_KEY not set — tracing will be disabled.");
+}
+
 // Initialize Traceloop SDK with image support FIRST
```
86-90: Minor: Log before waiting to export traces

The "Allowing time for trace export…" message prints after the wait. Log before scheduling the timeout for better UX.

```diff
-// Give time for spans to be exported - increase timeout significantly
-setTimeout(() => {
-  console.log("\n⏱️ Allowing time for trace export...");
-  process.exit(0);
-}, 10000);
+// Give time for spans to be exported - increase timeout significantly
+console.log("\n⏱️ Allowing time for trace export (10s)...");
+setTimeout(() => process.exit(0), 10000);
```

packages/sample-app/src/test_edit_only.ts (3)
9-21: Fail fast on missing API keys

Mirror the generate-only test: validate env vars up front for clearer errors.

```diff
 // Load environment variables from .env file
 dotenv.config();

 // Initialize Traceloop SDK with image support FIRST
 console.log("🚀 Initializing Traceloop SDK for edit test...");
 initialize({
```

Add before initialization:

```typescript
const requireEnv = (name: string) => {
  const v = process.env[name];
  if (!v) {
    console.error(`Missing required env var: ${name}`);
    process.exit(1);
  }
  return v;
};
requireEnv("OPENAI_API_KEY");
if (!process.env.TRACELOOP_API_KEY) {
  console.warn("TRACELOOP_API_KEY not set — tracing will be disabled.");
}
```
37-46: Optional: Friendly existence check for the test image

You already handle stream failures; a preliminary check gives a clearer message.

```diff
 let imageFile;
 try {
-  imageFile = await toFile(fs.createReadStream("test_edit_image.png"), "test_edit_image.png", {
+  if (!fs.existsSync("test_edit_image.png")) {
+    throw new Error("test_edit_image.png not found. Place it next to this script.");
+  }
+  imageFile = await toFile(fs.createReadStream("test_edit_image.png"), "test_edit_image.png", {
     type: "image/png",
   });
```
87-91: Minor: Log before waiting to export traces

Same nit as the generate-only script: print the wait message before the timeout.

```diff
-// Give time for spans to be exported
-setTimeout(() => {
-  console.log("\n⏱️ Allowing time for trace export...");
-  process.exit(0);
-}, 10000);
+// Give time for spans to be exported
+console.log("\n⏱️ Allowing time for trace export (10s)...");
+setTimeout(() => process.exit(0), 10000);
```

packages/sample-app/package.json (1)
31-34: pnpm usage in npm script may fail outside repo context

`dev:image_generation` relies on pnpm being installed and a workspace context. If someone runs this package standalone with npm/yarn, it will fail. Consider gating or documenting the requirement, or invoke pnpm via npx to reduce friction. Example:

```diff
- "dev:image_generation": "pnpm --filter @traceloop/instrumentation-openai build && pnpm --filter @traceloop/node-server-sdk build && npm run build && node dist/src/sample_openai_image_generation.js",
+ "dev:image_generation": "npx pnpm --filter @traceloop/instrumentation-openai build && pnpm --filter @traceloop/node-server-sdk build && npm run build && node dist/src/sample_openai_image_generation.js",
```

packages/traceloop-sdk/src/lib/images/image-uploader.ts (1)
39-51: Add timeouts/AbortSignal and ensure fetch availability

Node <18 lacks global fetch; network calls can also hang indefinitely.
- Import fetch from undici for consistent environments, or document the Node 18+ requirement.
- Add an AbortController with a timeout to both requests.

Example:

```typescript
import { fetch } from "undici";

private async withTimeout<T>(p: Promise<T>, ms = 10000): Promise<T> {
  const ac = new AbortController();
  const t = setTimeout(() => ac.abort(), ms);
  try {
    return await p;
  } finally {
    clearTimeout(t);
  }
}
// usage: await this.withTimeout(fetch(..., { signal: ac.signal }), 10000);
```

Also applies to: 71-78
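A self-contained variant of the reviewer's timeout snippet, with the AbortController folded into one helper and a signal-aware stub in place of `fetch` so it runs without a network (names are illustrative):

```typescript
// Timeout wrapper: aborts the signal after `ms` milliseconds so a
// signal-aware task (like fetch with { signal }) can cancel itself.
async function withTimeout<T>(
  work: (signal: AbortSignal) => Promise<T>,
  ms = 10_000,
): Promise<T> {
  const ac = new AbortController();
  const timer = setTimeout(() => ac.abort(), ms);
  try {
    return await work(ac.signal);
  } finally {
    clearTimeout(timer);
  }
}

// Stand-in for a network call: resolves after 50ms unless aborted first.
async function slowTask(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const t = setTimeout(() => resolve("done"), 50);
    signal.addEventListener("abort", () => {
      clearTimeout(t);
      reject(new Error("aborted"));
    });
  });
}
```

Note the caveat: if the wrapped task ignores the signal, the abort only fires the signal and the task itself keeps running, so passing `signal` through to the underlying request is essential.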
packages/instrumentation-openai/src/instrumentation.ts (2)

118-118: Remove noisy console.log; use the instrumentation logger

Console logs in libraries are noisy. Use `_diag.debug` for opt-in debug output.

```diff
- console.log("🚨 PATCH CALLED: OpenAI instrumentation patch() method executing");
+ this._diag.debug("OpenAI instrumentation patch() executing");
```

169-186: Consider v3 image endpoints (optional)

If v3 support matters, you may also wrap `OpenAIApi.prototype.createImage` (and related edit/variation endpoints) in the v3 branch.

packages/instrumentation-openai/src/vision-utils.ts (2)
26-28: Comment vs implementation mismatch

The comment says "data URLs or local file references," but only `data:image/` is handled. Align the comment or implement file:// handling.

```diff
- // Only process data URLs or local file references
+ // Only process data URLs
```

78-89: Process message contents in parallel to reduce latency

The current implementation awaits each item sequentially. Parallelize with `Promise.all`.

```diff
-    const processedContent: MessageContent[] = [];
-
-    for (let contentIndex = 0; contentIndex < message.content.length; contentIndex++) {
-      const content = message.content[contentIndex];
-      const processedContentItem = await processImageInContent(
-        content,
-        traceId,
-        spanId,
-        uploadCallback,
-        messageIndex,
-        contentIndex
-      );
-      processedContent.push(processedContentItem);
-    }
+    const processedContent: MessageContent[] = await Promise.all(
+      message.content.map((content: MessageContent, contentIndex: number) =>
+        processImageInContent(
+          content,
+          traceId,
+          spanId,
+          uploadCallback,
+          messageIndex,
+          contentIndex
+        )
+      )
+    );
```

packages/traceloop-sdk/src/lib/tracing/index.ts (1)
66-85: Replace console logs with SDK logger

Noisy console logs in initialization paths can clutter user apps.

```diff
- console.log("🔧 Updating existing OpenAI instrumentation with callback:", !!uploadBase64ImageCallback);
+ diag.debug(`Updating existing OpenAI instrumentation with callback: ${!!uploadBase64ImageCallback}`);
  ...
- console.log("🚨 Creating NEW OpenAI instrumentation with callback:", !!uploadBase64ImageCallback);
+ diag.debug(`Creating NEW OpenAI instrumentation with callback: ${!!uploadBase64ImageCallback}`);
  ...
- console.log("📝 Added OpenAI instrumentation to array. Total instrumentations:", instrumentations.length);
+ diag.debug(`Added OpenAI instrumentation. Total instrumentations: ${instrumentations.length}`);
```

packages/traceloop-sdk/src/lib/images/image-utils.ts (3)
36-36: Remove trivial type annotation (ESLint no-inferrable-types)

`index: number = 0` can be simplified to rely on inference.

```diff
 export async function processImageFile(
   image: any,
-  index: number = 0
+  index = 0
 ): Promise<ProcessedImage | null> {
```
59-75: Readable stream: filename fallback and consistent extension inference

Not all readable streams have a `.path`. Fall back to a generated name, and derive the extension safely.

```diff
- const name = image.path || `image_${index}`;
- const ext = extname(name).toLowerCase();
+ const name = (typeof image.path === "string" && image.path) || `image_${index}`;
+ const ext = extname(name).toLowerCase();
```
133-157: Optional: parallelize uploads for multiple images

The current for-loop processes images sequentially. Consider Promise.all to speed up multi-image scenarios; order is maintained by mapping over the index.

```diff
 export async function processImageArray(
   images: any[] | any,
   traceId: string,
   spanId: string,
   uploadCallback: ImageUploadCallback
 ): Promise<ImageContent[]> {
   const imageArray = Array.isArray(images) ? images : [images];
-  const results: ImageContent[] = [];
-
-  for (let i = 0; i < imageArray.length; i++) {
-    const imageContent = await processAndUploadImage(
-      imageArray[i],
-      i,
-      traceId,
-      spanId,
-      uploadCallback
-    );
-
-    if (imageContent) {
-      results.push(imageContent);
-    }
-  }
-
-  return results;
+  const results = await Promise.all(
+    imageArray.map((img, i) =>
+      processAndUploadImage(img, i, traceId, spanId, uploadCallback)
+    )
+  );
+  return results.filter((x): x is ImageContent => Boolean(x));
 }
```

Also applies to: 142-154
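The parallelization suggestions in this review rely on `Promise.all` preserving input order regardless of completion order. A small self-contained check, where the `processItem` stub stands in for the real per-image work:

```typescript
// Promise.all resolves to results in the same order as the input array,
// even when later items finish first. Each stub finishes at a different
// time to make that visible: index 0 takes 30ms, index 2 only 10ms.
async function processItem(item: string, index: number): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 30 - index * 10));
  return `${item}@${index}`;
}

async function processAll(items: string[]): Promise<string[]> {
  // Sequential awaits in a loop would give the same result, just slower;
  // the map + Promise.all form runs all items concurrently.
  return Promise.all(items.map((item, i) => processItem(item, i)));
}
```

This is why swapping the sequential loops for `Promise.all` is safe for order-sensitive results such as message contents or image arrays.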
packages/instrumentation-openai/src/image-wrappers.ts (3)
59-59: Remove trivial type annotation (ESLint no-inferrable-types)

`index: number = 0` can be simplified.

```diff
- index: number = 0
+ index = 0
```
20-52: Type params for calculateImageGenerationTokens instead of any

Use a narrower type; the function only needs `size` and `quality`.

```diff
-function calculateImageGenerationTokens(params: any, imageCount: number): number {
+function calculateImageGenerationTokens(
+  params: Partial<Pick<ImageGenerateParams, "size" | "quality">> | undefined,
+  imageCount: number
+): number {
```
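For illustration, an estimator with the narrowed signature suggested above might look as follows. The per-image token costs and the HD multiplier are made-up placeholders, not OpenAI's actual pricing or the values in image-wrappers.ts (which this review does not show):

```typescript
// Hypothetical token estimator keyed by size and quality. All numeric
// constants below are illustrative placeholders, not real pricing.
type ImageGenParams = { size?: string; quality?: string };

function calculateImageGenerationTokens(
  params: ImageGenParams | undefined,
  imageCount: number,
): number {
  const size = params?.size ?? "1024x1024";
  const quality = params?.quality ?? "standard";
  // Placeholder base cost per image for each supported size.
  const baseBySize: Record<string, number> = {
    "256x256": 100,
    "512x512": 250,
    "1024x1024": 1000,
  };
  const base = baseBySize[size] ?? 1000; // unknown sizes fall back to the largest
  const multiplier = quality === "hd" ? 2 : 1; // assumed HD surcharge
  return base * multiplier * imageCount;
}
```

The narrowed parameter type makes misuse (passing an unrelated object as `params`) a compile-time error instead of a silent fallthrough to the default size.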
171-191: Optional: fall back to original URL for input images when upload is unavailable

If `uploadCallback` is absent, or processing fails for an http URL, consider setting attributes with the original URL for better observability.

```diff
- if (params.image && uploadCallback && span.spanContext().traceId && span.spanContext().spanId) {
+ if (params.image && span.spanContext().traceId && span.spanContext().spanId) {
    const traceId = span.spanContext().traceId;
    const spanId = span.spanContext().spanId;
-   const imageUrl = await processImageInRequest(
-     params.image,
-     traceId,
-     spanId,
-     uploadCallback,
-     0
-   );
+   let imageUrl: string | null = null;
+   if (uploadCallback) {
+     imageUrl = await processImageInRequest(params.image, traceId, spanId, uploadCallback, 0);
+   } else if (typeof params.image === "string" && params.image.startsWith("http")) {
+     imageUrl = params.image;
+   }
```

Also applies to: 218-238
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- packages/sample-app/test_edit_image.png is excluded by !**/*.png
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml

📒 Files selected for processing (17)
- .vscode/settings.json (1 hunks)
- dev-run.js (1 hunks)
- package.json (1 hunks)
- packages/instrumentation-openai/src/image-wrappers.ts (1 hunks)
- packages/instrumentation-openai/src/instrumentation.ts (6 hunks)
- packages/instrumentation-openai/src/types.ts (2 hunks)
- packages/instrumentation-openai/src/vision-utils.ts (1 hunks)
- packages/sample-app/IMAGE_GENERATION_README.md (1 hunks)
- packages/sample-app/package.json (2 hunks)
- packages/sample-app/src/test_edit_only.ts (1 hunks)
- packages/sample-app/src/test_generate_only.ts (1 hunks)
- packages/traceloop-sdk/src/index.ts (1 hunks)
- packages/traceloop-sdk/src/lib/images/image-uploader.ts (1 hunks)
- packages/traceloop-sdk/src/lib/images/image-utils.ts (1 hunks)
- packages/traceloop-sdk/src/lib/images/index.ts (1 hunks)
- packages/traceloop-sdk/src/lib/node-server-sdk.ts (1 hunks)
- packages/traceloop-sdk/src/lib/tracing/index.ts (4 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (4)
packages/instrumentation-openai/src/types.ts (1)
- packages/traceloop-sdk/src/lib/images/image-utils.ts (1)
  - ImageUploadCallback (17-22)

packages/instrumentation-openai/src/vision-utils.ts (3)
- packages/instrumentation-openai/src/types.ts (1)
  - ImageUploadCallback (3-8)
- packages/traceloop-sdk/src/lib/images/index.ts (1)
  - ImageUploadCallback (10-10)
- packages/traceloop-sdk/src/lib/images/image-utils.ts (1)
  - ImageUploadCallback (17-22)

packages/instrumentation-openai/src/instrumentation.ts (1)
- packages/instrumentation-openai/src/image-wrappers.ts (3)
  - wrapImageGeneration (380-432)
  - wrapImageEdit (434-493)
  - wrapImageVariation (495-546)

packages/traceloop-sdk/src/lib/images/image-utils.ts (2)
- packages/traceloop-sdk/src/lib/images/index.ts (9)
  - ImageContent (10-10)
  - ProcessedImage (10-10)
  - ImageUploadCallback (10-10)
  - processImageFile (3-3)
  - processAndUploadImage (4-4)
  - processImageArray (5-5)
  - isBase64String (6-6)
  - isDataUrl (7-7)
  - extractBase64FromDataUrl (8-8)
- packages/instrumentation-openai/src/types.ts (1)
  - ImageUploadCallback (3-8)
🪛 ESLint
packages/traceloop-sdk/src/lib/images/image-utils.ts
[error] 36-36: Type number trivially inferred from a number literal, remove type annotation.
(@typescript-eslint/no-inferrable-types)
packages/instrumentation-openai/src/image-wrappers.ts
[error] 59-59: Type number trivially inferred from a number literal, remove type annotation.
(@typescript-eslint/no-inferrable-types)
[error] 385-385: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
[error] 439-439: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
[error] 500-500: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and test
🔇 Additional comments (9)
packages/traceloop-sdk/src/lib/node-server-sdk.ts (1)
17-17: Explicit init only: call out breaking change and docs update. Instrumentations no longer auto-init at import. Ensure:
- initialize() reliably triggers all instrumentations (including new image upload callbacks).
- README/migration notes instruct users to call initialize() early enough.
Would you like a short MIGRATION.md entry drafted?
dev-run.js (2)
1-1: Script executable permission confirmed

The dev-run.js file already has executable permissions (mode 755), so no further action is needed.
36-36: Missing "image_generation" script in sample-app

The dev-run.js step at line 36 invokes:

```js
run('pnpm --filter sample-app run:image_generation');
```

but packages/sample-app/package.json does not declare a script named "image_generation". This will cause a hard failure when you run your dev script. Please address one of the following:

• Add the missing script to packages/sample-app/package.json:

```diff
 "scripts": {
+  "image_generation": "<your image generation command>",
   // …other scripts
 }
```

• Or guard the call in dev-run.js before invoking run(...), for example:

```js
// dev-run.js – before line 36
const pkg = require('../packages/sample-app/package.json');
if (!pkg.scripts?.image_generation) {
  console.error(
    'Error: "image_generation" script not found in sample-app. ' +
      'Please add it to packages/sample-app/package.json.'
  );
  process.exit(1);
}

// existing call
run('pnpm --filter sample-app run:image_generation');
```

fix_required
packages/traceloop-sdk/src/lib/images/index.ts (2)
1-10: Barrel exports look good and consistent with intended API surface. Exports are cohesive and match expected usage patterns.
1-10: All exports verified – no changes needed

A quick check confirms that every symbol exported in packages/traceloop-sdk/src/lib/images/index.ts is defined in its source file:

- ImageUploader is exported as a class in image-uploader.ts.
- Utility functions (processImageFile, processAndUploadImage, processImageArray, isBase64String, isDataUrl, extractBase64FromDataUrl) are all exported in image-utils.ts.
- ImageContent and ProcessedImage are declared and exported as interfaces.
- ImageUploadCallback is exported as a type alias.

No broken builds expected. Great work!
packages/sample-app/src/test_generate_only.ts (1)
40-45: Confirm 'n' param support for gpt-image-1

Some OpenAI SDK versions/models ignore or disallow the n parameter for gpt-image-1. If unsupported, remove it to avoid validation errors.
Would you like me to adjust the call to omit n if the SDK version in package.json doesn’t support it?
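One low-risk way to make that adjustment, sketched below with a hypothetical helper (not part of the PR), is to spread `n` into the request object only when the caller supplies it, so SDK/model combinations that reject the field never receive it:

```typescript
// Hypothetical helper: include `n` in the request payload only when it is
// explicitly set; otherwise the key is absent entirely.
function buildImageRequest(prompt: string, n?: number) {
  return {
    model: "gpt-image-1",
    prompt,
    size: "1024x1024",
    ...(n !== undefined ? { n } : {}),
  };
}

console.log("n" in buildImageRequest("a watercolor fox", 1)); // true
console.log("n" in buildImageRequest("a watercolor fox"));    // false
```

The spread of a conditional object literal keeps the request type-safe while omitting the key, which is different from passing `n: undefined` (some serializers still emit the field in that case).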
packages/sample-app/src/test_edit_only.ts (1)
22-56: Verify correct OpenAI method name for image edits

Depending on the SDK version, the method might be images.edits (or images.edits.create) rather than images.edit. A mismatch will throw at runtime.
If your installed openai version expects edits, adjust like:
```js
const editResult = await openai.images.edits({
  image: imageFile,
  model: "gpt-image-1",
  prompt: "Add a red hat to the person",
  n: 1,
  size: "1024x1024",
});
```

I can update the sample once you confirm the exact SDK version.
packages/sample-app/package.json (1)
67-67: Runtime compatibility check for Jimp

Jimp v1.x is ESM-first and may require Node 18+ in some setups. Your engines field is >=14. Please confirm runtime compatibility across scripts.

If Node 18+ is required by any dependency (e.g., ai, OpenAI SDK, Jimp), update engines accordingly:

```diff
- "engines": {
-   "node": ">=14"
- },
+ "engines": {
+   "node": ">=18"
+ },
```

packages/traceloop-sdk/src/lib/tracing/index.ts (1)
246-255: startTracing: ensure consistent options vs. resolved values

You resolve apiKey/baseUrl but still pass options.instrumentModules to manual init, not the fully resolved options used elsewhere. The current calls look correct, but ensure future code uses the resolved values consistently (as suggested above) to avoid subtle bugs.
Important
Looks good to me! 👍
Reviewed e999c45 in 1 minute and 11 seconds.

- Reviewed 163 lines of code in 3 files
- Skipped 0 files when reviewing.
- Skipped posting 3 draft comments. View those below.
1. packages/instrumentation-openai/src/instrumentation.ts:117
   - Draft comment: Removed console.log debug statement. Use the existing _diag.debug call for consistent logging.
   - Reason this comment was not posted: Comment looked like it was already resolved.
2. packages/instrumentation-openai/src/vision-utils.ts:1
   - Draft comment: Entire vision-utils.ts removed. Ensure that its image processing functionality is fully migrated and no external references remain.
   - Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/src/lib/tracing/index.ts:67
   - Draft comment: Removed extraneous console.log debug logs when updating/creating OpenAI instrumentation. This improves production readiness; ensure that logging is handled via proper diagnostic methods.
   - Reason this comment was not posted: Comment did not seem useful (confidence useful = 0% <= threshold 50%). The comment is purely informative, explaining what was done and why it is beneficial. It does not provide a specific suggestion or ask for confirmation on a potentially unintended change. It also does not ask for tests or code improvements. Therefore, it violates the rule against purely informative comments.
- Add back support for low/medium/high quality values for gpt-image-1 model - Update token cost documentation to reflect both gpt-image-1 and DALL-E 3 - Fix default quality logic based on model (gpt-image-1 defaults to 'high') - Support all valid quality parameters: low, medium, high, standard, hd The low quality option is valid for gpt-image-1 and provides faster, lower-cost image generation. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Important
Looks good to me! 👍
Reviewed 0429b4d in 47 seconds.

- Reviewed 72 lines of code in 1 file
- Skipped 0 files when reviewing.
- Skipped posting 3 draft comments. View those below.
1. packages/instrumentation-openai/src/image-wrappers.ts:26
   - Draft comment: Default quality now depends on model ('gpt-image-1' defaults to 'high'). Document this behavior more explicitly for future maintainers.
   - Reason this comment was not posted: Confidence changes required: 50% <= threshold 50%. None
2. packages/instrumentation-openai/src/image-wrappers.ts:29
   - Draft comment: The inline tokenCosts mapping is now more comprehensive. Consider moving it outside the function to avoid re-creation on each call if performance is a concern.
   - Reason this comment was not posted: Confidence changes required: 33% <= threshold 50%. None
3. packages/instrumentation-openai/src/image-wrappers.ts:58
   - Draft comment: The removed special block for 'low' quality token calculation has been consolidated into the tokenCosts mapping; double-check that this change preserves backward compatibility.
   - Reason this comment was not posted: Comment did not seem useful (confidence useful = 0% <= threshold 50%). The comment is asking the PR author to double-check backward compatibility, which violates the rule against asking for confirmation or verification. It doesn't provide a specific suggestion or point out a specific issue with the code.
VS Code settings file is not needed in the repository. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Skipped PR review on 1623b0b because no changed files had a supported extension. If you think this was in error, please contact us and we'll fix it right away.
Replace browser-specific code with Node.js equivalents:
- Use Buffer.toString('base64') instead of btoa()
- Handle Node.js Buffer objects and ReadStreams
- Remove browser File API dependencies
- Use Buffer.from(arrayBuffer) for cleaner conversion
Since this code only runs in Node.js backend environments,
browser-specific APIs are unnecessary and can cause issues.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Caution
Changes requested ❌
Reviewed 8ee7537 in 1 minute and 45 seconds.

- Reviewed 67 lines of code in 1 file
- Skipped 0 files when reviewing.
- Skipped posting 2 draft comments. View those below.
1. packages/instrumentation-openai/src/image-wrappers.ts:94
   - Draft comment: Using Buffer.toString('base64') is more efficient and avoids manual loops. Note that this branch now only supports Node.js Buffer objects; if browser File objects are needed, consider a fallback.
   - Reason this comment was not posted: Comment looked like it was already resolved.
2. packages/instrumentation-openai/src/image-wrappers.ts:355
   - Draft comment: Simplified base64 conversion by using Buffer.from(arrayBuffer) and toString('base64') improves efficiency over the manual loop approach.
   - Reason this comment was not posted: Confidence changes required: 0% <= threshold 50%. None
Actionable comments posted: 0
♻️ Duplicate comments (11)
packages/instrumentation-openai/src/image-wrappers.ts (11)
11-23: Link official docs in the token table comment and use “High/HD” terminology

Add a direct link to OpenAI’s image-generation pricing/docs and align the “High” label to “High/HD” for consistency.

```diff
 /**
  * Calculate completion tokens for image generation based on OpenAI's actual token costs
- *
- * Token costs based on OpenAI documentation:
+ *
+ * See OpenAI docs for the full pricing table:
+ * https://platform.openai.com/docs/guides/image-generation
+ *
+ * Token costs per OpenAI:
  * For gpt-image-1: Square (1024×1024) Portrait (1024×1536) Landscape (1536×1024)
  * Low 272 tokens 408 tokens 400 tokens
  * Medium 1056 tokens 1584 tokens 1568 tokens
- * High 4160 tokens 6240 tokens 6208 tokens
+ * High/HD 4160 tokens 6240 tokens 6208 tokens
  *
  * For DALL-E 3:
  * Standard 1056 tokens 1584 tokens 1568 tokens
- * HD 4160 tokens 6240 tokens 6208 tokens
+ * HD 4160 tokens 6240 tokens 6208 tokens
  */
```
77-91: Do not treat arbitrary strings as base64; validate shape before uploading

Currently any non-URL string is treated as base64, which may misinterpret file paths or random text. Guard with a base64 check and skip otherwise.

```diff
-      } else {
-        // Assume it's base64 or file path
-        base64Data = image;
-        filename = `input_image_${index}.png`;
+      } else {
+        // Only treat as base64 if it looks like base64; otherwise skip
+        const maybe = image.replace(/\s/g, "");
+        const looksBase64 =
+          /^[A-Za-z0-9+/]+={0,2}$/.test(maybe) && maybe.length % 4 === 0;
+        if (looksBase64) {
+          base64Data = maybe;
+          filename = `input_image_${index}.png`;
+        } else {
+          return null;
+        }
       }
```
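Factored out of that suggestion, the shape check is easy to exercise in isolation. This standalone sketch assumes the same regex-plus-length heuristic; the function name is illustrative:

```typescript
// Heuristic check: separates "looks like raw base64" from file paths and
// free text. It cannot prove a string IS base64, only reject obvious non-base64.
function looksLikeBase64(value: string): boolean {
  const compact = value.replace(/\s/g, "");
  return (
    compact.length > 0 &&
    compact.length % 4 === 0 &&
    /^[A-Za-z0-9+/]+={0,2}$/.test(compact)
  );
}

console.log(looksLikeBase64("aGVsbG8gd29ybGQ=")); // true ("hello world" encoded)
console.log(looksLikeBase64("./images/input.png")); // false (a file path)
```

Note the trade-off: short alphanumeric strings of length divisible by 4 (e.g. `"test"`) also pass, so this is a guard against accidents, not a validator.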
98-104: Replace btoa and O(n²) string concat with Buffer-based base64 (Node) and a browser fallback

btoa is not available in Node and the char-by-char loop is inefficient. Prefer Buffer when present.

```diff
-      const buffer = new Uint8Array(arrayBuffer);
-      // Convert buffer to base64 safely without stack overflow
-      let binary = '';
-      for (let i = 0; i < buffer.byteLength; i++) {
-        binary += String.fromCharCode(buffer[i]);
-      }
-      base64Data = btoa(binary);
+      // Prefer Node path when available
+      // eslint-disable-next-line @typescript-eslint/ban-ts-comment
+      // @ts-ignore
+      if (typeof Buffer !== "undefined") {
+        base64Data = Buffer.from(arrayBuffer as ArrayBuffer).toString("base64");
+      } else {
+        // Browser fallback
+        const buf = new Uint8Array(arrayBuffer);
+        let binary = "";
+        for (let i = 0; i < buf.byteLength; i++) {
+          binary += String.fromCharCode(buf[i]);
+        }
+        // eslint-disable-next-line @typescript-eslint/ban-ts-comment
+        // @ts-ignore
+        base64Data = btoa(binary);
+      }
```
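For reference, the Buffer path collapses the whole per-byte loop to a single call; a tiny runnable check:

```typescript
// In Node, Buffer.from accepts a Uint8Array (or ArrayBuffer) directly and
// encodes to base64 in one call -- no manual String.fromCharCode loop, no btoa.
const viaBuffer = Buffer.from(new TextEncoder().encode("hi")).toString("base64");
console.log(viaBuffer); // "aGk="
```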
298-306: Fix total-tokens fallback on estimation failure

Falling back to response.data.length is incorrect. Use completionTokens to align with the non-enriched path.

```diff
-  } catch (error) {
-    // If token calculation fails, continue without it
-    attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] = response.data.length;
-  }
+  } catch (error) {
+    // Fall back to completion-only tokens if estimation fails
+    attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] = completionTokens;
+  }
```
340-350: Avoid shadowing ‘response’ and use Buffer for base64 conversion (Node-compatible)

Rename the fetched response variable to avoid shadowing and replace btoa with Buffer plus a browser fallback.

```diff
-      const response = await fetch(firstImage.url);
-      const arrayBuffer = await response.arrayBuffer();
-      const buffer = new Uint8Array(arrayBuffer);
-
-      // Convert buffer to base64 safely
-      let binary = '';
-      for (let i = 0; i < buffer.byteLength; i++) {
-        binary += String.fromCharCode(buffer[i]);
-      }
-      const base64Data = btoa(binary);
+      const res = await fetch(firstImage.url);
+      const arrayBuffer = await res.arrayBuffer();
+      // Prefer Node path
+      // eslint-disable-next-line @typescript-eslint/ban-ts-comment
+      // @ts-ignore
+      const base64Data =
+        typeof Buffer !== "undefined"
+          ? Buffer.from(arrayBuffer as ArrayBuffer).toString("base64")
+          : (() => {
+              const buf = new Uint8Array(arrayBuffer);
+              let binary = "";
+              for (let i = 0; i < buf.byteLength; i++) {
+                binary += String.fromCharCode(buf[i]);
+              }
+              // eslint-disable-next-line @typescript-eslint/ban-ts-comment
+              // @ts-ignore
+              return btoa(binary);
+            })();
```
397-399: Avoid unsafe Function type in wrapper signature

Use a typed function signature to satisfy ESLint and improve type safety.

```diff
-  return function (original: Function) {
+  return function (original: (...args: any[]) => Promise<any> | any) {
```
409-441: Capture synchronous throws from original.apply and end the span

If original throws before returning a Promise, the span never ends and the error isn’t recorded. Wrap the call.

```diff
-      const response = original.apply(this, args);
+      let response: any;
+      try {
+        response = original.apply(this, args);
+      } catch (error) {
+        span.recordException(error as Error);
+        span.end();
+        throw error;
+      }
```

Optional: set request attributes before invoking original so they’re captured even on sync throws.
451-453: Avoid unsafe Function type in edit wrapper signature

Same as the generate wrapper: replace Function with a typed signature.

```diff
-  return function (original: Function) {
+  return function (original: (...args: any[]) => Promise<any> | any) {
```
469-492: Edit wrapper: handle synchronous throws from original.apply

Mirror the try/catch to ensure span end and exception recording on sync throws.

```diff
-      const response = original.apply(this, args);
+      let response: any;
+      try {
+        response = original.apply(this, args);
+      } catch (error) {
+        // Ensure the request attributes promise is awaited so we don't leave work dangling
+        try { await setRequestAttributesPromise; } catch {}
+        span.recordException(error as Error);
+        span.end();
+        throw error;
+      }
```
512-514: Avoid unsafe Function type in variation wrapper signature

Same as the wrappers above: replace Function with a typed signature.

```diff
-  return function (original: Function) {
+  return function (original: (...args: any[]) => Promise<any> | any) {
```
525-556: Variation wrapper: handle synchronous throws from original.apply

Add a try/catch like in the other wrappers.

```diff
-      const response = original.apply(this, args);
+      let response: any;
+      try {
+        response = original.apply(this, args);
+      } catch (error) {
+        span.recordException(error as Error);
+        span.end();
+        throw error;
+      }
```
🧹 Nitpick comments (4)
packages/instrumentation-openai/src/image-wrappers.ts (4)
66-73: Remove redundant type annotation flagged by ESLint

index has a default value, so the explicit type is unnecessary under no-inferrable-types.

```diff
-  index: number = 0
+  index = 0
```
351-356: Preserve file extension based on Content-Type when uploading fetched image

Use the Content-Type header to derive the extension and filename instead of hardcoding .png.

```diff
-      const uploadedUrl = await uploadCallback(
+      const contentType = res.headers.get("content-type") || "image/png";
+      const ext = contentType.split("/")[1]?.split(";")[0] || "png";
+      const uploadedUrl = await uploadCallback(
         traceId,
         spanId,
-        "generated_image.png",
+        `generated_image.${ext}`,
         base64Data
       );
```
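The extension derivation in that diff can be pulled into a small helper so the fallback behavior is explicit. A sketch, with an illustrative function name:

```typescript
// Derive a file extension from a Content-Type header value, falling back to
// "png" when the header is missing or malformed. Strips any ";parameter" part.
function extFromContentType(contentType: string | null): string {
  if (!contentType) return "png";
  return contentType.split("/")[1]?.split(";")[0] || "png";
}

console.log(extFromContentType("image/webp"));                // "webp"
console.log(extFromContentType("image/png; charset=binary")); // "png"
console.log(extFromContentType(null));                        // "png" (fallback)
```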
309-383: Consider recording all returned images and image count, not just the first

You only attach the first image to LLM_COMPLETIONS. Either loop over response.data to record multiple completions or set an image count attribute for visibility.
Example:
- attributes["llm.response.image.count"] = response.data.length
- Iterate response.data and set LLM_COMPLETIONS.i.content for a small bounded i (e.g., up to 3) to avoid oversized spans.
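A minimal sketch of that bounded-recording idea, using illustrative attribute keys (the real instrumentation would use its SpanAttributes constants):

```typescript
// Record the total image count, but cap how many image contents are attached
// to the span so a large `n` cannot blow up span size.
const MAX_RECORDED_IMAGES = 3;

function imageCompletionAttributes(urls: string[]): Record<string, string | number> {
  const attrs: Record<string, string | number> = {
    "llm.response.image.count": urls.length, // illustrative key
  };
  urls.slice(0, MAX_RECORDED_IMAGES).forEach((url, i) => {
    attrs[`llm.completions.${i}.content`] = url; // illustrative key
  });
  return attrs;
}
```

With five returned images this yields a count of 5 but only three content attributes, keeping span payloads bounded.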
340-342: Ensure fetch is available in the runtime (Node 18+ or polyfill)

If Node <18 is supported, import a fetch polyfill (e.g., undici or node-fetch) or guard accordingly.
📜 Review details
📒 Files selected for processing (1)
packages/instrumentation-openai/src/image-wrappers.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: nirga
PR: traceloop/openllmetry-js#623
File: packages/traceloop-sdk/src/lib/images/image-uploader.ts:71-78
Timestamp: 2025-08-10T06:42:06.404Z
Learning: In the Traceloop SDK's ImageUploader class (packages/traceloop-sdk/src/lib/images/image-uploader.ts), the Authorization header with bearer token IS required when uploading image data to the URL returned by the Traceloop API's `/v2/traces/{traceId}/spans/{spanId}/images` endpoint. This is not a pre-signed URL pattern but a Traceloop-specific API endpoint that maintains the same authentication requirements.
📚 Learning: 2025-08-10T06:42:06.404Z
Learnt from: nirga
PR: traceloop/openllmetry-js#623
File: packages/traceloop-sdk/src/lib/images/image-uploader.ts:71-78
Timestamp: 2025-08-10T06:42:06.404Z
Learning: In the Traceloop SDK's ImageUploader class (packages/traceloop-sdk/src/lib/images/image-uploader.ts), the Authorization header with bearer token IS required when uploading image data to the URL returned by the Traceloop API's `/v2/traces/{traceId}/spans/{spanId}/images` endpoint. This is not a pre-signed URL pattern but a Traceloop-specific API endpoint that maintains the same authentication requirements.
Applied to files:
packages/instrumentation-openai/src/image-wrappers.ts
🪛 ESLint
packages/instrumentation-openai/src/image-wrappers.ts
[error] 71-71: Type number trivially inferred from a number literal, remove type annotation.
(@typescript-eslint/no-inferrable-types)
[error] 397-397: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
[error] 451-451: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
[error] 512-512: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
🔇 Additional comments (1)
packages/instrumentation-openai/src/image-wrappers.ts (1)
111-116: Confirm upload callback sends Authorization header (non-presigned endpoint)

Per retrieved learnings, the Traceloop images upload endpoint requires the bearer token on upload. Please verify that the provided uploadCallback includes Authorization.
Reference: ImageUploader must keep Authorization header for /v2/traces/{traceId}/spans/{spanId}/images uploads (not a presigned URL).
Actionable comments posted: 1
♻️ Duplicate comments (7)
packages/instrumentation-openai/src/image-wrappers.ts (7)
11-23: Add official docs link and align terminology (“High/HD”)

Link to OpenAI’s image generation pricing/docs and change “High” to “High/HD” to match the docs.

```diff
 /**
  * Calculate completion tokens for image generation based on OpenAI's actual token costs
- *
- * Token costs based on OpenAI documentation:
+ *
+ * See OpenAI docs for full pricing table:
+ * https://platform.openai.com/docs/guides/image-generation
+ *
+ * Token costs per OpenAI:
  * For gpt-image-1: Square (1024×1024) Portrait (1024×1536) Landscape (1536×1024)
  * Low 272 tokens 408 tokens 400 tokens
  * Medium 1056 tokens 1584 tokens 1568 tokens
- * High 4160 tokens 6240 tokens 6208 tokens
+ * High/HD 4160 tokens 6240 tokens 6208 tokens
  *
  * For DALL-E 3:
  * Standard 1056 tokens 1584 tokens 1568 tokens
  * HD 4160 tokens 6240 tokens 6208 tokens
  */
```
115-115: Log stream error details for diagnostics

Include the error in the handler so failures aren’t silent.

```diff
- image.on("error", () => resolve(null));
+ image.on("error", (err: unknown) => { console.error(err); resolve(null); });
```
352-357: Avoid shadowing the response parameter; rename local variable

Rename the fetch result to prevent shadowing the function parameter “response”.

```diff
-      const response = await fetch(firstImage.url);
-      const arrayBuffer = await response.arrayBuffer();
-      const buffer = Buffer.from(arrayBuffer);
-      const base64Data = buffer.toString("base64");
+      const res = await fetch(firstImage.url);
+      const arrayBuffer = await res.arrayBuffer();
+      const base64Data = Buffer.from(arrayBuffer).toString("base64");
```
77-91: Do not assume arbitrary strings are base64; guard with a shape check

Prevent misinterpreting file paths or arbitrary strings as base64.

```diff
-      } else {
-        // Assume it's base64 or file path
-        base64Data = image;
-        filename = `input_image_${index}.png`;
+      } else {
+        // Only treat as base64 if it looks like base64; otherwise skip
+        const maybe = image.replace(/\s/g, "");
+        const looksBase64 =
+          /^[A-Za-z0-9+/]+={0,2}$/.test(maybe) && maybe.length % 4 === 0;
+        if (looksBase64) {
+          base64Data = maybe;
+          filename = `input_image_${index}.png`;
+        } else {
+          return null;
+        }
       }
```
311-315: Fix token fallback: use completionTokens when enrichment fails

Falling back to response.data.length is incorrect. Use completionTokens to keep behavior consistent with enrichTokens disabled.

```diff
-  } catch (error) {
-    // If token calculation fails, continue without it
-    attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] = response.data.length;
-  }
+  } catch (error) {
+    // Fall back to completion-only tokens if estimation fails
+    attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] = completionTokens;
+  }
```
404-404: Avoid the unsafe Function type; use a precise signature

Satisfy @typescript-eslint/no-unsafe-function-type and improve type safety.

```diff
-  return function (original: Function) {
+  return function (original: (...args: any[]) => Promise<any> | any) {
```

Apply the same change in wrapImageEdit and wrapImageVariation (above ranges).
Also applies to: 458-458, 519-519
416-418: Capture synchronous exceptions from original.apply and end the span

If original throws synchronously, the span never ends and the exception isn’t recorded.

```diff
-      const response = original.apply(this, args);
+      let response: any;
+      try {
+        response = original.apply(this, args);
+      } catch (error) {
+        span.recordException(error as Error);
+        span.end();
+        throw error;
+      }
```

Apply similarly at the other locations indicated above.
Also applies to: 476-476, 532-532
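The control flow of that guard can be modeled without OpenTelemetry at all; in this sketch `span` is a stand-in object rather than a real span, but the try/catch shape is the same:

```typescript
// Minimal model of the sync-throw guard: record the exception and end the
// span before re-throwing, so no span is left open when `original` fails fast.
interface MiniSpan {
  ended: boolean;
  exceptions: unknown[];
}

function callGuarded<T>(span: MiniSpan, original: () => T): T {
  try {
    return original();
  } catch (error) {
    span.exceptions.push(error); // stands in for span.recordException(error)
    span.ended = true;           // stands in for span.end()
    throw error;
  }
}
```

The key property: the error still propagates to the caller, but only after the span has been closed, so tracing backends never see a dangling span.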
🧹 Nitpick comments (2)
packages/instrumentation-openai/src/image-wrappers.ts (2)
71-71: ESLint: remove trivially inferred type annotation

Fixes @typescript-eslint/no-inferrable-types.

```diff
-  index: number = 0
+  index = 0
```
124-126: Ensure upload callback includes Authorization header (Traceloop API requires it)

Per org guidance, the Traceloop images upload endpoint is not pre-signed; the Authorization bearer token must be sent on upload.
Please confirm the provided uploadCallback attaches the Authorization header as required by the ImageUploader. I can add a brief doc comment here referencing that requirement if helpful.
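For reference, the endpoint shape quoted in the learnings above can be captured in a tiny URL builder. Everything here except the path template (`/v2/traces/{traceId}/spans/{spanId}/images`, quoted from the review) is an assumption, including the function name and baseUrl parameter:

```typescript
// Build the image-upload URL from the path template quoted in the review
// learnings. The callback that POSTs to this URL must also attach the
// Authorization bearer header, since the endpoint is not pre-signed.
function imageUploadUrl(baseUrl: string, traceId: string, spanId: string): string {
  return `${baseUrl}/v2/traces/${traceId}/spans/${spanId}/images`;
}

console.log(imageUploadUrl("https://api.example.com", "trace-1", "span-1"));
```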
📜 Review details
📒 Files selected for processing (1)
packages/instrumentation-openai/src/image-wrappers.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: nirga
PR: traceloop/openllmetry-js#623
File: packages/traceloop-sdk/src/lib/images/image-uploader.ts:71-78
Timestamp: 2025-08-10T06:42:06.404Z
Learning: In the Traceloop SDK's ImageUploader class (packages/traceloop-sdk/src/lib/images/image-uploader.ts), the Authorization header with bearer token IS required when uploading image data to the URL returned by the Traceloop API's `/v2/traces/{traceId}/spans/{spanId}/images` endpoint. This is not a pre-signed URL pattern but a Traceloop-specific API endpoint that maintains the same authentication requirements.
📚 Learning: 2025-08-10T06:42:06.404Z
Learnt from: nirga
PR: traceloop/openllmetry-js#623
File: packages/traceloop-sdk/src/lib/images/image-uploader.ts:71-78
Timestamp: 2025-08-10T06:42:06.404Z
Learning: In the Traceloop SDK's ImageUploader class (packages/traceloop-sdk/src/lib/images/image-uploader.ts), the Authorization header with bearer token IS required when uploading image data to the URL returned by the Traceloop API's `/v2/traces/{traceId}/spans/{spanId}/images` endpoint. This is not a pre-signed URL pattern but a Traceloop-specific API endpoint that maintains the same authentication requirements.
Applied to files:
packages/instrumentation-openai/src/image-wrappers.ts
🪛 ESLint
packages/instrumentation-openai/src/image-wrappers.ts
[error] 71-71: Type number trivially inferred from a number literal, remove type annotation.
(@typescript-eslint/no-inferrable-types)
[error] 404-404: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
[error] 458-458: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
[error] 519-519: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
🔇 Additional comments (1)
packages/instrumentation-openai/src/image-wrappers.ts (1)
352-355: Verify fetch availability in target runtime

Global fetch requires Node 18+. If earlier Node versions are supported, add a polyfill/import.
What Node versions does this package target? If <18 is in scope, I can add a conditional import of node-fetch.
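A guard along those lines could look like the following sketch; the node-fetch import is shown only as a commented-out placeholder, since whether a polyfill is needed depends on the answer above:

```typescript
// Return global fetch when present (Node 18+); otherwise fail with a clear
// message instead of crashing mid-instrumentation with "fetch is not defined".
function resolveFetch(): typeof globalThis.fetch {
  if (typeof globalThis.fetch === "function") {
    return globalThis.fetch;
  }
  // On older Nodes a polyfill would be returned here instead, e.g.:
  //   const { default: nodeFetch } = require("node-fetch"); // hypothetical dep
  throw new Error("global fetch unavailable: use Node 18+ or install a polyfill");
}

console.log(typeof resolveFetch()); // "function" on Node 18+
```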
Change test:generate and test:edit to run:sample_generate and run:sample_edit to follow the established pattern. These are sample apps, not unit tests. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Add complete test coverage for image generation, editing, and variation - Fix Buffer/File format issues in tests using proper File objects - Remove problematic postinstall script that caused CI failures - Tests now verify span creation, attributes, and token calculation - Support for different quality levels (low/medium/high) validation 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Important
Looks good to me! 👍
Reviewed 41721e6 in 1 minute and 10 seconds.

- Reviewed 2421 lines of code in 13 files
- Skipped 0 files when reviewing.
- Skipped posting 3 draft comments. View those below.
1. packages/instrumentation-openai/test/instrumentation.test.ts:120
   - Draft comment: Avoid hard-coding token count values (e.g. '15', '82') in assertions. Consider defining constants or computing expected values dynamically to ease maintenance if tokenization changes.
   - Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/instrumentation-openai/recordings/:1
   - Draft comment: HAR recording files have been updated with new runtime versions (v20.11.1) and cookie expiry dates (year 2025). Verify that these changes reflect intentional API updates and remain consistent across tests to avoid brittleness.
   - Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/instrumentation-openai/test/instrumentation.test.ts:86
   - Draft comment: The beforeEach hook filters out authorization headers from recordings. Ensure that this filtering covers all sensitive headers to prevent accidental persistence of secrets.
   - Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
Actionable comments posted: 1
🧹 Nitpick comments (4)
packages/instrumentation-openai/test/instrumentation.test.ts (4)
636-637: Prefer constants over string literals for attribute keys

Use SpanAttributes.LLM_REQUEST_TYPE instead of the raw "llm.request.type" string to prevent typos and ease refactors.

```diff
- assert.strictEqual(imageSpan.attributes["llm.request.type"], "image_generation");
+ assert.strictEqual(imageSpan.attributes[SpanAttributes.LLM_REQUEST_TYPE], "image_generation");
@@
- assert.strictEqual(editSpan.attributes["llm.request.type"], "image_edit");
+ assert.strictEqual(editSpan.attributes[SpanAttributes.LLM_REQUEST_TYPE], "image_edit");
@@
- assert.strictEqual(variationSpan.attributes["llm.request.type"], "image_variation");
+ assert.strictEqual(variationSpan.attributes[SpanAttributes.LLM_REQUEST_TYPE], "image_variation");
```

Also applies to: 670-671, 703-704
635-642: Assert image quality attribute as wellSince the request specifies quality: "high", assert the instrumentation captured it.
assert.strictEqual(imageSpan.attributes[SpanAttributes.LLM_REQUEST_MODEL], "gpt-image-1"); assert.strictEqual(imageSpan.attributes["llm.request.image.size"], "1024x1024"); + assert.strictEqual(imageSpan.attributes["llm.request.image.quality"], "high"); assert.strictEqual(imageSpan.attributes["llm.request.image.count"], 1);
623-630: Remove unused variables to avoid potential noUnusedLocals failuresResponses aren’t used; drop the bindings.
- const response = await openai.images.generate({ + await openai.images.generate({ model: "gpt-image-1", prompt: "A test image", n: 1, size: "1024x1024", quality: "high", });- const response = await openai.images.edit({ + await openai.images.edit({ image: mockImageFile, prompt: "Add a red hat", model: "gpt-image-1", n: 1, size: "1024x1024", });- const response = await openai.images.createVariation({ + await openai.images.createVariation({ image: mockImageFile, model: "gpt-image-1", n: 1, size: "1024x1024", });- const lowResponse = await openai.images.generate({ + await openai.images.generate({ model: "gpt-image-1", prompt: "Test low quality", quality: "low", size: "1024x1024", });- const mediumResponse = await openai.images.generate({ + await openai.images.generate({ model: "gpt-image-1", prompt: "Test medium quality", quality: "medium", size: "1024x1024", });Also applies to: 657-663, 691-696, 719-724, 727-733
644-645: Avoid brittle magic numbers for token assertions (consider centralizing the estimation)Hardcoding 4160/272/1056 ties tests to internal heuristics. Prefer verifying relationships (e.g., high > medium > low, or expected proportionality), or expose a shared token estimation utility used by the instrumentation and tests.
I can extract/centralize the estimation logic to a shared util and update tests accordingly.
Also applies to: 748-752
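One relationship-based alternative can be sketched as follows; estimateImageTokens is a hypothetical helper (not an actual export of this PR), and the values mirror the gpt-image-1 token table quoted later in this thread:

```typescript
// Hypothetical estimation helper mirroring the gpt-image-1 token table from the
// PR diff; the function name and shape are illustrative, not the real API.
const tokenCosts: Record<string, Record<string, number>> = {
  low: { "1024x1024": 272, "1024x1536": 408, "1536x1024": 400 },
  medium: { "1024x1024": 1056, "1024x1536": 1584, "1536x1024": 1568 },
  high: { "1024x1024": 4160, "1024x1536": 6240, "1536x1024": 6208 },
};

function estimateImageTokens(quality: string, size: string): number {
  return tokenCosts[quality]?.[size] ?? 0;
}

// Assert the ordering rather than exact values, so tests survive pricing updates.
for (const size of Object.keys(tokenCosts.low)) {
  const lo = estimateImageTokens("low", size);
  const mid = estimateImageTokens("medium", size);
  const hi = estimateImageTokens("high", size);
  if (!(lo < mid && mid < hi)) {
    throw new Error(`token ordering violated for ${size}`);
  }
}
console.log("token ordering holds");
```

If both the instrumentation and the tests import the same table, exact-value assertions become a single-source-of-truth check instead of a duplicated constant.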
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- package.json (1 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-emit-logprobs-span-event-for-chat-completion_1028826680/recording.har (7 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-emit-logprobs-span-event-for-stream-chat-completion_2713905208/recording.har (6 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-chat_1427850401/recording.har (7 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-completion_322013537/recording.har (8 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-function-calling_1703714575/recording.har (7 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-image-editing_4216676336/recording.har (1 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-image-variation_2237177737/recording.har (1 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-chat_72904453/recording.har (6 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-completion_1484001317/recording.har (7 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-tool-calling_1747151373/recording.har (7 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-function_call-attributes-in-span-for-stream-completion-when-multiple-tools-cal_2278551756/recording.har (9 hunks)
- packages/instrumentation-openai/test/instrumentation.test.ts (1 hunks)
- packages/sample-app/package.json (2 hunks)
✅ Files skipped from review due to trivial changes (11)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-image-variation_2237177737/recording.har
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-image-editing_4216676336/recording.har
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-tool-calling_1747151373/recording.har
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-function_call-attributes-in-span-for-stream-completion-when-multiple-tools-cal_2278551756/recording.har
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-chat_1427850401/recording.har
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-completion_322013537/recording.har
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-function-calling_1703714575/recording.har
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-emit-logprobs-span-event-for-chat-completion_1028826680/recording.har
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-chat_72904453/recording.har
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-completion_1484001317/recording.har
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-emit-logprobs-span-event-for-stream-chat-completion_2713905208/recording.har
🚧 Files skipped from review as they are similar to previous changes (2)
- package.json
- packages/sample-app/package.json
🧰 Additional context used
🧬 Code Graph Analysis (1)
packages/instrumentation-openai/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-59)
🔇 Additional comments (1)
packages/instrumentation-openai/test/instrumentation.test.ts (1)
644-645: Numeric token attributes for image spans are correctly numbers
The image-wrappers.ts implementation assigns numeric values (completionTokens, estimatedPromptTokens, response.data.length) to all LLM_USAGE_* attributes, and the test at lines 644–645 correctly asserts 4160 as a number. No changes are needed here.
galkleinman
left a comment
General comments:
- Cost estimation should take place at the backend imo
- Remove the meaningless claude code comments (there are a bunch of them)
- CodeRabbit gave some good comments
const tokenCosts: Record<string, Record<string, number>> = {
  // gpt-image-1 quality values
  "low": {
    "1024x1024": 272,
    "1024x1536": 408,
    "1536x1024": 400,
  },
  "medium": {
    "1024x1024": 1056,
    "1024x1536": 1584,
    "1536x1024": 1568,
  },
  "high": {
    "1024x1024": 4160,
    "1024x1536": 6240,
    "1536x1024": 6208,
  },
  // DALL-E 3 quality values (same costs as medium/high above)
  "standard": {
    "1024x1024": 1056,
    "1024x1536": 1584,
    "1536x1024": 1568,
  },
  "hd": {
    "1024x1024": 4160,
    "1024x1536": 6240,
    "1536x1024": 6208,
  }
};
why is it here and not in our backend? then you should just report the size and model and avoid changing it every time pricing changes
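A minimal sketch of that backend-pricing alternative: the instrumentation reports only raw request facts under gen_ai.* attributes and does no token math client-side. imageRequestAttributes is illustrative, not part of this PR:

```typescript
// Illustrative only: report model/size/quality and let the backend own pricing.
interface ImageRequestParams {
  model: string;
  size: string;
  quality?: string;
}

function imageRequestAttributes(
  params: ImageRequestParams,
): Record<string, string> {
  const attrs: Record<string, string> = {
    "gen_ai.request.model": params.model,
    "gen_ai.request.image.size": params.size,
  };
  if (params.quality) {
    attrs["gen_ai.request.image.quality"] = params.quality;
  }
  // The backend joins these against its own, always-current pricing table,
  // so no client release is needed when OpenAI changes prices.
  return attrs;
}

const attrs = imageRequestAttributes({
  model: "gpt-image-1",
  size: "1024x1024",
  quality: "high",
});
console.log(Object.keys(attrs).length); // 3
```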
  "cheerio": "^1.1.0",
  "chromadb": "^3.0.9",
  "cohere-ai": "^7.17.1",
  "dotenv": "^17.2.1",
- Change llm.request.image.* to gen_ai.request.image.* attributes
- Update llm.request.type to gen_ai.request.type for consistency
- Change llm.response.revised_prompt to gen_ai.response.revised_prompt
- Update all tests to check for correct gen_ai attribute names
- Aligns with OpenAI gen_ai semantic conventions for image generation

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Remove the build:deps script as it's no longer needed and can cause CI dependency issues. Build dependencies are handled by nx automatically.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Important
Looks good to me! 👍
Reviewed 1a2799b in 1 minute and 14 seconds. Click for details.
- Reviewed 138 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/src/image-wrappers.ts:142
- Draft comment:
Consider extracting the repeated 'gen_ai' prefix into a constant to reduce magic strings and improve maintainability. - Reason this comment was not posted:
Confidence changes required: 33% <= threshold 50%. None
2. packages/instrumentation-openai/test/instrumentation.test.ts:635
- Draft comment:
For test assertions comparing new attribute keys (e.g. 'gen_ai.request.type'), consider using shared constants to avoid duplication of string literals. - Reason this comment was not posted:
Confidence changes required: 33% <= threshold 50%. None
Workflow ID: wflow_phcE6TgRQmYqsjeD
Important
Looks good to me! 👍
Reviewed e423a4e in 1 minute and 11 seconds. Click for details.
- Reviewed 14 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. package.json:6
- Draft comment:
Removal of 'build:deps' script is noted. Ensure that no other processes or documentation rely on it. - Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. The comment is asking the PR author to ensure that no other processes or documentation rely on the removed script. This falls under the rule of not asking the author to double-check things or ensure behavior is intended. Therefore, this comment should be removed.
Workflow ID: wflow_HtyFHYEUgrGKgdKS
Actionable comments posted: 1
♻️ Duplicate comments (6)
packages/instrumentation-openai/src/image-wrappers.ts (6)
115-115: Log stream errors for better debuggability
The ReadStream error handler swallows the error. Log it to aid debugging.

- image.on("error", () => resolve(null));
+ image.on("error", (err: unknown) => {
+   console.error(err);
+   resolve(null);
+ });
352-356: Avoid shadowing the response parameter; rename fetched response var
Shadowing response obscures the ImagesResponse param and harms readability.

- const response = await fetch(firstImage.url);
- const arrayBuffer = await response.arrayBuffer();
- const buffer = Buffer.from(arrayBuffer);
+ const res = await fetch(firstImage.url);
+ const arrayBuffer = await res.arrayBuffer();
+ const buffer = Buffer.from(arrayBuffer);
77-91: Do not assume arbitrary strings are base64; gate with a base64 shape check
Treating any non-URL string as base64 is unsafe; add a format check and return null otherwise.

  } else {
-   // Assume it's base64 or file path
-   base64Data = image;
-   filename = `input_image_${index}.png`;
+   // Only treat as base64 if it looks like base64; otherwise skip
+   const maybe = image.replace(/\s/g, "");
+   const looksBase64 =
+     /^[A-Za-z0-9+/]+={0,2}$/.test(maybe) && maybe.length % 4 === 0;
+   if (looksBase64) {
+     base64Data = maybe;
+     filename = `input_image_${index}.png`;
+   } else {
+     return null;
+   }
  }
311-315: Fix total token fallback when enrichment fails
Falling back to response.data.length is incorrect. Use completionTokens for consistency with the disabled-enrichment path.

  } catch (error) {
-   // If token calculation fails, continue without it
-   attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] = response.data.length;
+   // Fall back to completion-only tokens if estimation fails
+   attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS] = completionTokens;
  }
404-406: Avoid unsafe Function type (ESLint) by tightening the signature
Replace Function with a safer callable type.

- return function (original: Function) {
+ return function (original: (...args: any[]) => Promise<any> | any) {

Apply the same change in wrapImageEdit and wrapImageVariation at the indicated lines.
Also applies to: 458-458, 519-519
417-417: Capture synchronous throws from original.apply to end the span and record the error
If the SDK throws synchronously (e.g., param validation), spans leak and exceptions aren't recorded. Wrap the call.

- const response = original.apply(this, args);
+ let response: any;
+ try {
+   response = original.apply(this, args);
+ } catch (error) {
+   span.recordException(error as Error);
+   span.end();
+   throw error;
+ }

Apply similarly at wrapImageEdit and wrapImageVariation where original.apply is called.
Also applies to: 476-477, 532-532
🧹 Nitpick comments (5)
packages/instrumentation-openai/src/image-wrappers.ts (5)
11-23: Link to official docs and align terminology ("High/HD")
Add a direct reference to OpenAI's image pricing docs and use "High/HD" to match the spec.

 /**
  * Calculate completion tokens for image generation based on OpenAI's actual token costs
- *
- * Token costs based on OpenAI documentation:
+ *
+ * See OpenAI docs for the latest pricing/token tables:
+ * https://platform.openai.com/docs/guides/image-generation
+ *
+ * Token costs based on OpenAI documentation:
  * For gpt-image-1: Square (1024×1024) Portrait (1024×1536) Landscape (1536×1024)
  * Low 272 tokens 408 tokens 400 tokens
  * Medium 1056 tokens 1584 tokens 1568 tokens
- * High 4160 tokens 6240 tokens 6208 tokens
+ * High/HD 4160 tokens 6240 tokens 6208 tokens
  *
  * For DALL-E 3:
  * Standard 1056 tokens 1584 tokens 1568 tokens
  * HD 4160 tokens 6240 tokens 6208 tokens
  */
71-72: Remove redundant type annotation (ESLint: no-inferrable-types)
index's type is inferred from the default. Clean up per ESLint.

- index: number = 0
+ index = 0
406-414: Set request attributes before invoking the SDK (generation)
For generate, set request attributes before calling original so they're captured even if the call throws. Remove the duplicate setting in the success path.

 return function (this: any, ...args: any[]) {
   const params = args[0] as ImageGenerateParams;
   const span = tracer.startSpan("openai.images.generate", {
     kind: SpanKind.CLIENT,
     attributes: {
       [SpanAttributes.LLM_SYSTEM]: "OpenAI",
       "gen_ai.request.type": "image_generation",
     },
   });
-
-  let response: any;
+  // Capture request attributes up front
+  try {
+    setImageGenerationRequestAttributes(span, params);
+  } catch (err) {
+    // Non-fatal: keep going to still capture the SDK error/response
+    console.error("Failed setting image generation request attributes:", err);
+  }
+
+  let response: any;
   try {
     response = original.apply(this, args);
   } catch (error) {
     span.recordException(error as Error);
     span.end();
     throw error;
   }

   // Handle both promise and direct response
   if (response && typeof response.then === 'function') {
     return response.then(async (result: any) => {
       try {
-        setImageGenerationRequestAttributes(span, params);
         await setImageGenerationResponseAttributes(span, result, uploadCallback, instrumentationConfig, params);
         return result;

Also applies to: 421-425
24-57: Consider moving/centralizing token tables in backend to avoid churn
Embedding price/token tables in the client means frequent releases when pricing changes and risk of drift. Prefer sending model/size/quality and letting the backend compute or enrich tokens.

244-262: Optional: Preserve input URL in request attributes when upload is skipped
When the input image is a URL and upload is not performed, request attributes omit the image context. Consider attaching the original URL to prompts to retain traceability.
Also applies to: 256-262
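A minimal sketch of that suggestion, assuming a plain span-attributes map; the gen_ai.request.image.N.url key is illustrative, not an established convention:

```typescript
// Keep traceability for URL inputs that are never uploaded: record the URL itself.
function recordInputImage(
  attrs: Record<string, string>,
  image: string,
  index = 0,
): void {
  if (image.startsWith("http")) {
    // Upload is skipped for URLs, so preserve the reference on the span instead.
    attrs[`gen_ai.request.image.${index}.url`] = image;
  }
}

const attrs: Record<string, string> = {};
recordInputImage(attrs, "https://example.com/input.png");
console.log(attrs["gen_ai.request.image.0.url"]); // https://example.com/input.png
```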
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
packages/instrumentation-openai/src/image-wrappers.ts(1 hunks)packages/instrumentation-openai/test/instrumentation.test.ts(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/instrumentation-openai/test/instrumentation.test.ts
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-10T06:42:06.404Z
Learnt from: nirga
PR: traceloop/openllmetry-js#623
File: packages/traceloop-sdk/src/lib/images/image-uploader.ts:71-78
Timestamp: 2025-08-10T06:42:06.404Z
Learning: In the Traceloop SDK's ImageUploader class (packages/traceloop-sdk/src/lib/images/image-uploader.ts), the Authorization header with bearer token IS required when uploading image data to the URL returned by the Traceloop API's `/v2/traces/{traceId}/spans/{spanId}/images` endpoint. This is not a pre-signed URL pattern but a Traceloop-specific API endpoint that maintains the same authentication requirements.
Applied to files:
packages/instrumentation-openai/src/image-wrappers.ts
🪛 ESLint
packages/instrumentation-openai/src/image-wrappers.ts
[error] 71-71: Type number trivially inferred from a number literal, remove type annotation.
(@typescript-eslint/no-inferrable-types)
[error] 404-404: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
[error] 458-458: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
[error] 519-519: The Function type accepts any function-like value.
Prefer explicitly defining any function parameters and return type.
(@typescript-eslint/no-unsafe-function-type)
🔇 Additional comments (2)
packages/instrumentation-openai/src/image-wrappers.ts (2)
132-169: Request attribute mapping looks solid
Model, size, quality, style, count, and prompt are captured under gen_ai.request.* and LLM_PROMPTS; consistent with conventions.
124-126: No action required: Authorization header is already included
Both getImageUrl and uploadImageData in packages/traceloop-sdk/src/lib/images/image-uploader.ts explicitly set:

Authorization: Bearer ${this.apiKey}

Therefore, the provided uploadCallback enforces bearer authentication on the Traceloop API's image-upload endpoints. No further changes needed.
- Add complete OpenAI Images API instrumentation (generate, edit, createVariation) - Implement callback-based image upload architecture to SDK backend - Add accurate token usage calculation based on OpenAI pricing model - Support gen_ai semantic attributes for image operations - Add guard conditions for OpenAI SDK version compatibility - Fix all lint errors and ensure TypeScript compilation - Add comprehensive test coverage with real image files - Fix function/tool calling test expectations to match API responses - Optimize image processing for Node.js backend environment 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Important
Looks good to me! 👍
Reviewed 5bf1651 in 1 minute and 55 seconds. Click for details.
- Reviewed 1997 lines of code in 15 files
- Skipped 0 files when reviewing
- Skipped posting 4 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/src/image-wrappers.ts:27
- Draft comment:
The default quality for gpt-image-1 has been changed from 'high' to 'auto'. Please confirm that this change is intentional and update the documentation as needed, since the token cost mapping for 'auto' mimics that of 'standard' quality. - Reason this comment was not posted:
Comment looked like it was already resolved.
2. packages/instrumentation-openai/src/image-wrappers.ts:112
- Draft comment:
Good improvement: The error handler for image streams now logs the error with a descriptive message, aiding in debugging stream failures. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
3. packages/instrumentation-openai/src/instrumentation.ts:87
- Draft comment:
Nice update: Conditional checks for module.Images ensure instrumentation is applied only if the Images module exists, supporting multiple OpenAI API versions. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
4. packages/instrumentation-openai/test/instrumentation.test.ts:101
- Draft comment:
Ensure that differences in prompt token counts between streaming and non-streaming cases are well-documented to avoid confusion. - Reason this comment was not posted:
Confidence changes required: 50% <= threshold 50%. None
Workflow ID: wflow_uoPMD5YWBXUMe83T
Actionable comments posted: 6
🔭 Outside diff range comments (5)
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-chat_72904453/recording.har (1)
99-116: Redact cookies and Set-Cookie values from recordings (security + flakiness).Committed cookie tokens (__cf_bm, _cfuvid) and Set-Cookie headers are sensitive/ephemeral. Redact them in the recording and sanitize on persist.
Apply this minimal redaction in the fixture:
- "expires": "2025-08-11T07:56:59.000Z",
+ "expires": "2025-08-11T07:56:59.000Z",
...
- "value": "3LCjXOI1Xo7nz5jIa9siESWWwUWNMTaAlOtO5s9kBAs-1754897219-1.0.1.1-bwxQGXBK9pnSIVCXVRMPQCR2NFdmsf9HO.d7P7dn7rV3cLN3.LFLmNUBJL0Rncoc5p7bXchsc.04MKUcCvJJ655WS7PiXyg9wv0N2wfcDsw"
+ "value": "<REDACTED>"
...
- "value": "Od7To08BWDnKzSUrTCpu4UgYJ63Homi4A2pGMiNo9L0-1754897219905-0.0.1.1-604800000"
+ "value": "<REDACTED>"
...
- "value": "__cf_bm=3LCjXOI1Xo7nz5jIa9siESWWwUWNMTaAlOtO5s9kBAs-1754897219-1.0.1.1-bwxQGXBK9pnSIVCXVRMPQCR2NFdmsf9HO.d7P7dn7rV3cLN3.LFLmNUBJL0Rncoc5p7bXchsc.04MKUcCvJJ655WS7PiXyg9wv0N2wfcDsw; path=/; expires=Mon, 11-Aug-25 07:56:59 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None"
+ "value": "<REDACTED>"
...
- "value": "_cfuvid=Od7To08BWDnKzSUrTCpu4UgYJ63Homi4A2pGMiNo9L0-1754897219905-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None"
+ "value": "<REDACTED>"
// Example Polly sanitizer (in test setup)
polly.server.any().on('beforePersist', (_req, recording) => {
  const redactHeader = (headers: any[]) => {
    const redacted = new Set([
      'set-cookie', 'cookie', 'authorization',
      'x-request-id', 'cf-ray', 'x-envoy-upstream-service-time'
    ]);
    recording.response.headers = recording.response.headers?.filter(Boolean).map(h => {
      if (redacted.has(h.name.toLowerCase())) return { ...h, value: '<REDACTED>' };
      return h;
    });
  };
  // Cookies
  recording.response.cookies = recording.response.cookies?.map(c => ({ ...c, value: '<REDACTED>' }));
  redactHeader(recording.response.headers || []);
});

Also applies to: 193-199
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-function_call-attributes-in-span-for-stream-completion-when-multiple-tools-cal_2278551756/recording.har (1)
96-116: Sanitize All HAR Recordings — Redact Cookie and Set-Cookie ValuesVerification shows real
__cf_bm,_cfuvidandset-cookieheaders leaked in 40+.harfiles underpackages/instrumentation-openai/recordings. These values are unnecessary for tests, pose a security risk, and trigger cassette churn.• File under review
packages/instrumentation-openai/recordings/…/should-set-function_call-attributes-in-span-for-stream-completion-when-multiple-tools-cal_2278551756/recording.har (lines 96–116)
• Other affected recordings
All.harfiles beneathpackages/instrumentation-openai/recordingscontain similar exposures.Apply this targeted diff in the file above (and mirror for others):
--- a/.../recording.har
+++ b/.../recording.har
@@ -96,12 +96,12 @@
   "cookies": [
     {
       "domain": ".api.openai.com",
-      "expires": "2025-08-11T07:57:08.000Z",
+      "expires": "<redacted>",
       "httpOnly": true,
       "name": "__cf_bm",
       "path": "/",
       "sameSite": "None",
       "secure": true,
-      "value": "uBFY7R8vjCUXNxpaN0sCEHfdf13XQn1SVYBZ_YjwDOw-1754897228-…"
+      "value": "<redacted>"
     },
     {
       "domain": ".api.openai.com",

To bulk-sanitize every recording, add a post-process step or Polly hook. For example:
#!/usr/bin/env bash
set -e
find packages/instrumentation-openai/recordings -type f -name '*.har' | while IFS= read -r file; do
  jq 'walk(
    if (type=="object" and (has("expires") or has("value"))) then
      .expires? = "<redacted>" | .value? = "<redacted>"
    else . end
  )' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
done

This will replace all cookie-related expires and value fields with <redacted> across your entire suite.

packages/instrumentation-openai/package.json (1)
23-25: Bump engines.node to Node.js ≥18
The OpenAI Node SDK v4.x requires Node.js 18 LTS or newer (early v4 builds briefly mentioned 16 but later docs standardized on 18+). Please update your package.json accordingly.
• File: packages/instrumentation-openai/package.json
• Lines: 23–25Apply this diff:
- "engines": {
-   "node": ">=14"
- },
+ "engines": {
+   "node": ">=18"
+ },

packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-completion_1484001317/recording.har (1)
98-116: Remove sensitive Cloudflare cookies from HAR fixtures
The HAR recording fixtures under packages/instrumentation-openai/recordings/ currently include raw __cf_bm and _cfuvid cookie values and their corresponding set-cookie headers. These fixtures should be sanitized to avoid committing transient identifiers.

• Affected files:
– All *.har files in packages/instrumentation-openai/recordings/**/
• Required changes:
– Clear out the "cookies" array
– Redact all Set-Cookie header values

Example diff for each recording HAR:
- "cookies": [
-   { "domain": ".api.openai.com", "httpOnly": true, "name": "__cf_bm", "path": "/", "sameSite": "None", "secure": true, "value": "weXYN…trk" },
-   { "domain": ".api.openai.com", "httpOnly": true, "name": "_cfuvid", "path": "/", "sameSite": "None", "secure": true, "value": ".bAE0…000" }
- ],
+ "cookies": [],
@@
- { "name": "set-cookie", "value": "__cf_bm=…; HttpOnly; Secure; SameSite=None" },
- { "name": "set-cookie", "value": "_cfuvid=…; HttpOnly; Secure; SameSite=None" },
+ { "name": "set-cookie", "value": "<redacted>" },
+ { "name": "set-cookie", "value": "<redacted>" },

Please update all HAR fixtures accordingly.
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-tool-calling_1747151373/recording.har (1)
97-116: Sanitize cookies/token-like values in recordings to reduce PII and churn
These Cloudflare cookies aren't needed for replay and create noisy diffs. Prefer redaction.
Proposed minimal in-place redaction:
- "cookies": [
-   {
-     "domain": ".api.openai.com",
-     "expires": "2025-08-11T07:58:41.000Z",
-     "httpOnly": true,
-     "name": "__cf_bm",
-     "path": "/",
-     "sameSite": "None",
-     "secure": true,
-     "value": "xuimC18NpNaFulDH3.pqu.2pQ8HqGD5c08Ajl07F7fk-1754897321-1.0.1.1-7ncsQ6uxMtc5eDoRF2vVMb2pZ7jNR6GQknKSHV2QNAhi7dpfIC_wB3LgcIKG432.qAzpi0XcX_FeeG2o4A3qU.zvc.D14ZGTGH8dCVMyxXs"
-   },
-   {
-     "domain": ".api.openai.com",
-     "httpOnly": true,
-     "name": "_cfuvid",
-     "path": "/",
-     "sameSite": "None",
-     "secure": true,
-     "value": "Q2MryEeyNXjCLvmYYOot.QbfilVxmikzuSkoimQ_ek4-1754897321681-0.0.1.1-604800000"
-   }
- ],
+ "cookies": [],
@@
- "value": "__cf_bm=xuimC18NpNaFulDH3.pqu.2pQ8HqGD5c08Ajl07F7fk-1754897321-1.0.1.1-7ncsQ6uxMtc5eDoRF2vVMb2pZ7jNR6GQknKSHV2QNAhi7dpfIC_wB3LgcIKG432.qAzpi0XcX_FeeG2o4A3qU.zvc.D14ZGTGH8dCVMyxXs; path=/; expires=Mon, 11-Aug-25 07:58:41 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None"
+ "value": "REDACTED; path=/; expires=REDACTED; domain=.api.openai.com; HttpOnly; Secure; SameSite=None"
@@
- "value": "_cfuvid=Q2MryEeyNXjCLvmYYOot.QbfilVxmikzuSkoimQ_ek4-1754897321681-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None"
+ "value": "REDACTED; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None"

If desired, we can also strip volatile headers entirely during recording via Polly configuration.
Also applies to: 192-200
♻️ Duplicate comments (5)
packages/instrumentation-openai/src/image-wrappers.ts (5)
81-92: Validate base64 strings before processing
The code assumes any non-URL string is base64 (lines 90-91), which could misinterpret file paths. Add validation to ensure the string is actually base64-encoded.

  } else if (image.startsWith("http")) {
    return null;
  } else {
-   base64Data = image;
-   filename = `input_image_${index}.png`;
+   // Validate if string looks like base64
+   const cleanedImage = image.replace(/\s/g, "");
+   const isBase64 = /^[A-Za-z0-9+/]+={0,2}$/.test(cleanedImage) && cleanedImage.length % 4 === 0;
+   if (isBase64) {
+     base64Data = cleanedImage;
+     filename = `input_image_${index}.png`;
+   } else {
+     // Not a valid base64 string, could be a file path
+     return null;
+   }
  }
333-361: Fix variable shadowing in fetch response
Line 338 shadows the function parameter response with the fetch response, which can cause confusion and potential bugs.

  try {
    const traceId = span.spanContext().traceId;
    const spanId = span.spanContext().spanId;
-   const response = await fetch(firstImage.url);
-   const arrayBuffer = await response.arrayBuffer();
+   const fetchResponse = await fetch(firstImage.url);
+   const arrayBuffer = await fetchResponse.arrayBuffer();
    const buffer = Buffer.from(arrayBuffer);
    const base64Data = buffer.toString("base64");
381-430: Handle synchronous exceptions from original function
If the wrapped function throws synchronously (e.g., during parameter validation), the span will never be closed, causing memory leaks and incomplete traces.

 export function wrapImageGeneration(
   tracer: ReturnType<typeof trace.getTracer>,
   uploadCallback?: ImageUploadCallback,
   instrumentationConfig?: { enrichTokens?: boolean }
 ) {
-  return function (original: (...args: any[]) => any) {
+  return function (original: (...args: any[]) => Promise<any> | any) {
     return function (this: any, ...args: any[]) {
       const params = args[0] as ImageGenerateParams;
       const span = tracer.startSpan("openai.images.generate", {
         kind: SpanKind.CLIENT,
         attributes: {
           [SpanAttributes.LLM_SYSTEM]: "OpenAI",
           "gen_ai.request.type": "image_generation",
         },
       });

-      const response = original.apply(this, args);
+      let response: any;
+      try {
+        response = original.apply(this, args);
+      } catch (error) {
+        span.recordException(error as Error);
+        span.end();
+        throw error;
+      }

       if (response && typeof response.then === 'function') {
432-486: Apply same synchronous error handling to edit wrapper
The edit wrapper has the same issue with synchronous exceptions.
488-536: Apply same synchronous error handling to variation wrapper
The variation wrapper also needs synchronous exception handling.
🧹 Nitpick comments (9)
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-chat_72904453/recording.har (3)
144-145: Sanitize volatile ratelimit/perf headers to reduce fixture churn.
openai-processing-ms, x-envoy-upstream-service-time, and x-ratelimit-* are highly variable and not relevant to instrumentation assertions. Sanitize or strip them when persisting.
Extend your sanitizer:
const drop = new Set([
  'openai-processing-ms',
  'x-envoy-upstream-service-time',
  'x-ratelimit-limit-requests',
  'x-ratelimit-limit-tokens',
  'x-ratelimit-remaining-requests',
  'x-ratelimit-remaining-tokens',
  'x-ratelimit-reset-requests',
  'x-ratelimit-reset-tokens',
  'date',
]);
recording.response.headers = recording.response.headers?.filter(
  (h: any) => !drop.has(h.name.toLowerCase()),
);

Also applies to: 155-157, 160-173, 175-181
94-94: Consider normalizing the SSE payload text to reduce fixture size and churn.

The SSE body is large and includes volatile ids/obfuscation. If tests only assert span attributes and not the exact delta content, normalize the SSE stream to a minimal stable sequence (e.g., a few small chunks + [DONE]) or sanitize ids/obfuscation strings to placeholders in beforePersist.
If you want, I can generate a sanitizer that replaces any "id", "obfuscation", timestamps, and "system_fingerprint" in the SSE text with deterministic placeholders while preserving the chunking pattern.
120-120: Ensure volatile Date/Set-Cookie dates don’t break tests.

"date" and Set-Cookie "expires" will keep changing across re-records. Strip or replace with placeholders in the sanitizer to prevent flakiness.
I can open a follow-up PR to add the sanitizer to the test setup; let me know.
Also applies to: 193-193, 199-199
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-function_call-attributes-in-span-for-stream-completion-when-multiple-tools-cal_2278551756/recording.har (1)
90-95: Canonicalize event-stream text to reduce dynamic noise (IDs, timestamps, obfuscation)

The SSE payload includes dynamic ids, created timestamps, fingerprints, obfuscation tokens, and tool_call ids. Tests usually only need the structural parts (tool names, finish_reason). Consider post-processing the “text” to replace dynamic fields with placeholders.
Example sanitization approach (outside this file) to run on HARs:
// sanitize-har.js
import fs from 'node:fs';
const sanitizeSSE = s => s
  .replace(/("id":")([^"]+)(")/g, '$1<redacted>$3')
  .replace(/("created":)(\d+)/g, '$1 0')
  .replace(/("system_fingerprint":")([^"]+)(")/g, '$1<redacted>$3')
  .replace(/("obfuscation":")([^"]+)(")/g, '$1<redacted>$3')
  .replace(/("id":")call_[^"]+(")/g, '$1<redacted>$2');
const file = process.argv[2];
const har = JSON.parse(fs.readFileSync(file, 'utf8'));
for (const e of har.log.entries) {
  if (
    e.response?.content?.mimeType?.startsWith('text/event-stream') &&
    typeof e.response.content.text === 'string'
  ) {
    e.response.content.text = sanitizeSSE(e.response.content.text);
  }
}
fs.writeFileSync(file, JSON.stringify(har, null, 2));

Happy to wire this into your test recording workflow or convert it to a Polly hook.
packages/instrumentation-openai/package.json (1)
4-4: Typo in description

“Instrumentaion” → “Instrumentation”.
- "description": "OpenAI Instrumentaion",
+ "description": "OpenAI Instrumentation",

packages/instrumentation-openai/src/image-wrappers.ts (1)
12-68: Consider moving token cost calculations to backend for maintainability

Per @galkleinman's earlier comment, hardcoding token costs here requires code updates whenever OpenAI changes pricing. Consider moving this logic to your backend service where you can dynamically update pricing without requiring client-side changes. The client should just report the model, size, and quality parameters.
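A sketch of what the thinner client-side report could look like — the attribute keys here are illustrative assumptions, not the package's actual semantic conventions:

```typescript
// Sketch: report only the pricing inputs (model, size, quality, count)
// and let the backend own the price/token table. Attribute keys are
// illustrative assumptions, not this package's actual conventions.
interface ImageRequestParams {
  model?: string;
  size?: string;
  quality?: string;
  n?: number;
}

function imageRequestAttributes(
  params: ImageRequestParams,
): Record<string, string | number> {
  const attrs: Record<string, string | number> = {
    "gen_ai.request.image.count": params.n ?? 1,
  };
  if (params.model) attrs["gen_ai.request.model"] = params.model;
  if (params.size) attrs["gen_ai.request.image.size"] = params.size;
  if (params.quality) attrs["gen_ai.request.image.quality"] = params.quality;
  return attrs;
}

const attrs = imageRequestAttributes({
  model: "dall-e-3",
  size: "1024x1024",
  quality: "hd",
});
console.log(JSON.stringify(attrs));
```

With this split, a pricing change only requires a backend table update, and recorded spans stay valid retroactively.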
packages/instrumentation-openai/test/instrumentation.test.ts (1)
29-29: Remove unused import

The toFile import from openai appears to be unused in the test file.

-import { toFile } from "openai";

packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-tool-calling_1747151373/recording.har (2)
148-150: Consider capturing openai-project header as span attribute

Useful for multi-project orgs; capture and mask if ever logged at non-debug level.
232-234: Normalize volatile timing fields to reduce fixture churn

Timestamps and wait times are inherently volatile. If not required for tests, consider post-processing to fix them to stable placeholders.
Also applies to: 241-242
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
packages/instrumentation-openai/test/test_edit_image.png is excluded by !**/*.png
📒 Files selected for processing (15)
- packages/instrumentation-openai/package.json (1 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-emit-logprobs-span-event-for-chat-completion_1028826680/recording.har (7 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-emit-logprobs-span-event-for-stream-chat-completion_2713905208/recording.har (6 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-chat_1427850401/recording.har (7 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-completion_322013537/recording.har (8 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-function-calling_1703714575/recording.har (7 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-image-editing_4216676336/recording.har (1 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-image-variation_2237177737/recording.har (1 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-chat_72904453/recording.har (6 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-completion_1484001317/recording.har (7 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-tool-calling_1747151373/recording.har (7 hunks)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-function_call-attributes-in-span-for-stream-completion-when-multiple-tools-cal_2278551756/recording.har (9 hunks)
- packages/instrumentation-openai/src/image-wrappers.ts (1 hunks)
- packages/instrumentation-openai/src/instrumentation.ts (6 hunks)
- packages/instrumentation-openai/test/instrumentation.test.ts (5 hunks)
✅ Files skipped from review due to trivial changes (2)
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-image-variation_2237177737/recording.har
- packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-completion_322013537/recording.har
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-10T06:42:06.404Z
Learnt from: nirga
PR: traceloop/openllmetry-js#623
File: packages/traceloop-sdk/src/lib/images/image-uploader.ts:71-78
Timestamp: 2025-08-10T06:42:06.404Z
Learning: In the Traceloop SDK's ImageUploader class (packages/traceloop-sdk/src/lib/images/image-uploader.ts), the Authorization header with bearer token IS required when uploading image data to the URL returned by the Traceloop API's `/v2/traces/{traceId}/spans/{spanId}/images` endpoint. This is not a pre-signed URL pattern but a Traceloop-specific API endpoint that maintains the same authentication requirements.
Applied to files:
packages/instrumentation-openai/src/image-wrappers.ts
🧬 Code Graph Analysis (3)
packages/instrumentation-openai/src/image-wrappers.ts (2)
packages/instrumentation-openai/src/types.ts (1)
ImageUploadCallback (3-8)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-59)
packages/instrumentation-openai/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes(17-59)
packages/instrumentation-openai/src/instrumentation.ts (1)
packages/instrumentation-openai/src/image-wrappers.ts (3)
wrapImageGeneration (381-430), wrapImageEdit (432-486), wrapImageVariation (488-536)
🪛 GitHub Actions: CI
packages/instrumentation-openai/src/image-wrappers.ts
[warning] 1-1: Prettier: Code style issues detected in this file. Run 'pnpm prettier --write' to fix.
packages/instrumentation-openai/test/instrumentation.test.ts
[warning] 1-1: Prettier: Code style issues detected in this file. Run 'pnpm prettier --write' to fix.
packages/instrumentation-openai/src/instrumentation.ts
[warning] 1-1: Prettier: Code style issues detected in this file. Run 'pnpm prettier --write' to fix.
🔇 Additional comments (23)
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-chat_72904453/recording.har (2)
66-66: Avoid committing environment-dependent values (sizes, timings, node version).

x-stainless-runtime-version, headersSize/bodySize/size, headersSize, startedDateTime/time/wait are environment-dependent and cause noisy diffs. Ensure tests don’t assert these and sanitize on persist as above.
Consider adding a Polly matcher that ignores these fields for request/response matching and strips them in persisted HARs.
Also applies to: 78-78, 90-90, 93-93, 221-221, 227-229, 236-236
92-92: No strict equality checks against “text/event-stream” found
I searched for ==, ===, or .equals('text/event-stream') across the repo and found no instances of brittle header comparisons. All content-type handling is already robust, and no instrumentation code relies on exact string equality.

packages/instrumentation-openai/package.json (1)
13-13: Test timeout increase to 40s makes sense for image flows

Looks good given longer image tests and first-time recording overhead.
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-image-editing_4216676336/recording.har (1)
88-97: Good error-path recording for image editing

Captures 400 with invalid mimetype, which is valuable for asserting error attributes.
Also applies to: 195-197
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-emit-logprobs-span-event-for-chat-completion_1028826680/recording.har (2)
90-101: Recording updates look consistent with newer runtime and headers

Body/content/headers align with environment updates; OK for replay.
Also applies to: 226-234
99-116: Redact cookies and Set-Cookie values in HAR
To avoid committing transient identifiers, sanitize all HAR recordings under packages/instrumentation-openai/recordings by:

- Clearing any "cookies" arrays.
- Replacing all set-cookie header values with <redacted>.

For example, update each .har file as follows:

- "cookies": [
-   { "domain": ".api.openai.com", "name": "__cf_bm", … "value": "uvgo5Ym7IcQGdvvF_1A4JXFlgMy…" },
-   { "domain": ".api.openai.com", "name": "_cfuvid", … "value": "gc.HFObe1_8NU3ZXFHFJYEd54BzX…" }
- ],
+ "cookies": [],
@@
- { "name": "set-cookie", "value": "__cf_bm=…; HttpOnly; Secure; SameSite=None" },
- { "name": "set-cookie", "value": "_cfuvid=…; HttpOnly; Secure; SameSite=None" },
+ { "name": "set-cookie", "value": "<redacted>" },
+ { "name": "set-cookie", "value": "<redacted>" },

Please verify there are no remaining unredacted entries by searching for "cookies": and "set-cookie" in packages/instrumentation-openai/recordings.

packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-completion_1484001317/recording.har (1)
90-101: Streaming recording refresh LGTM

New snapshot content/headers are consistent; OK for test replay.
Also applies to: 236-242
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-emit-logprobs-span-event-for-stream-chat-completion_2713905208/recording.har (2)
90-96: Updated streaming chat recording looks consistent

Richer payload with logprobs and updated headers/runtime is fine.
Also applies to: 221-226
98-116: HAR recordings sanitized: response cookies removed and all Set-Cookie header values redacted across recordings.

No further changes needed; transient cookie data has been stripped from all *.har files.
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-function-calling_1703714575/recording.har (1)
1-249: Skip review - Test recording artifact

This is a HAR recording file for test replay purposes, not source code requiring review.
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-chat_1427850401/recording.har (1)
1-249: Skip review - Test recording artifact

This is a HAR recording file for test replay purposes, not source code requiring review.
packages/instrumentation-openai/src/instrumentation.ts (4)
43-47: Good addition of image wrapper imports

The imports are properly structured and follow the existing pattern.
86-104: Excellent defensive programming with Images API check

Good job adding the existence check for module.Images before attempting to wrap. This prevents runtime errors on OpenAI versions that don't include the Images API.
143-160: Consistent implementation of Images API wrapping in patch method

The patch method correctly mirrors the manual instrumentation with proper existence checks.
182-186: Proper cleanup in unpatch method

Good implementation of the unwrap logic with existence check, ensuring clean removal of instrumentation.
packages/instrumentation-openai/test/instrumentation.test.ts (4)
624-652: Comprehensive image generation test coverage

Good test implementation covering:
- Span attributes validation
- Token usage calculations
- Request/response attribute checks
654-687: Well-structured image editing test

The test properly uses a file stream and validates all expected attributes including the fixed token count (4160 for high quality).
689-719: Thorough image variation test

Good coverage of the variation API with proper attribute validation.
721-756: Excellent token calculation validation

Great test for verifying different quality levels produce the expected token counts (272 for low, 1056 for medium).
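The fixed counts these tests assert (272 low, 1056 medium, 4160 high) suggest a simple per-quality lookup. The sketch below is inferred from the test expectations only and is not the package's actual implementation:

```typescript
// Hypothetical per-quality output-token table inferred from the counts
// asserted in instrumentation.test.ts; the authoritative values live in
// image-wrappers.ts and would need updating if OpenAI changes pricing.
const IMAGE_TOKENS_BY_QUALITY: Record<string, number> = {
  low: 272,
  medium: 1056,
  high: 4160,
};

// Total output tokens for n images at the given quality, falling back to
// the "low" tier for unknown quality strings.
function imageOutputTokens(quality: string, n = 1): number {
  const perImage =
    IMAGE_TOKENS_BY_QUALITY[quality] ?? IMAGE_TOKENS_BY_QUALITY.low;
  return perImage * n;
}

console.log(imageOutputTokens("medium"), imageOutputTokens("high", 2));
```

Keeping the table in one place like this (or, per the earlier comment, on the backend) makes the test expectations easy to trace back to a single source.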
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-tool-calling_1747151373/recording.har (4)
201-204: AI summary inconsistency: strict-transport-security is present, not removed

The current HAR includes strict-transport-security; adjust summary expectations.
136-138: x-request-id exposure — ensure tests read header case-insensitively

Header names’ case may differ across environments; normalize to lowercase during extraction.
Also applies to: 184-186
65-67: No action needed for Node runtime bump

All packages declare engines.node: ">=14" (including packages/instrumentation-openai), and your .nvmrc pins development to Node 20, which covers v20.11.1. CI configurations (even if not using actions/setup-node) will respect .nvmrc or default to a compatible version. You can safely ignore the bumped x-stainless-runtime-version header or normalize it in tests for determinism.
94-96: ✔ Compressed response decodes to valid Chat Completions JSON
- HAR file packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-tool-calling_1747151373/recording.har was inspected.
- Combined Base64 chunks inflate cleanly to JSON with top-level keys including id, model, choices, etc.
- Model: gpt-4-0613
- Choices count: 1
No further changes required.
"cookies": [
  {
    "domain": ".api.openai.com",
    "expires": "2025-08-11T07:36:04.000Z",
    "httpOnly": true,
    "name": "__cf_bm",
    "path": "/",
    "sameSite": "None",
    "secure": true,
    "value": "sNJZbuwXoQIiE7FtfyDHePe66tJzCT_5Tu2aR.5cFmY-1754895964-1.0.1.1-3wjJq7.8.8w7AcHRY.oqVVWsaOe3.gWbKUHaVi.azUtnYChnP0MY4M7ALuqEn3ykAQbIlxlnWE_7lvDOfaSEHx3ossKJiIRRNzdxTu3fGyE"
  },
  {
    "domain": ".api.openai.com",
    "httpOnly": true,
    "name": "_cfuvid",
    "path": "/",
    "sameSite": "None",
    "secure": true,
    "value": "dHNKcYdAPrzQEoPcSpb6vczkZeO1ABHtIOF58LVAwJU-1754895964051-0.0.1.1-604800000"
  }
],
💡 Verification agent
🧩 Analysis chain
Sanitize cookies and Set-Cookie headers in committed HARs
Recording includes Cloudflare cookies and Set-Cookie header values. These are unnecessary for test replay and are sensitive artifacts to keep in VCS.
Please redact them. Minimal inline fix:
- "cookies": [
- { "domain": ".api.openai.com", ... "name": "__cf_bm", ... "value": "sNJZb..." },
- { "domain": ".api.openai.com", ... "name": "_cfuvid", ... "value": "dHNKc..." }
- ],
+ "cookies": [],
@@
- { "name": "set-cookie", "value": "__cf_bm=...; HttpOnly; Secure; SameSite=None" },
- { "name": "set-cookie", "value": "_cfuvid=...; HttpOnly; Secure; SameSite=None" },
+ { "name": "set-cookie", "value": "<redacted>" },
+ { "name": "set-cookie", "value": "<redacted>" },

Optionally consider redacting other highly dynamic headers (x-request-id, cf-ray) to reduce recording churn.
Also applies to: 166-174
🏁 Script executed:
#!/bin/bash
# List HAR files containing cookies or Set-Cookie values to sanitize
fd -t f -e har | xargs rg -n '("cookies"\s*:\s*\[\s*\{|\bset-cookie\b)'
# Confirm that no Authorization headers were recorded
rg -n '"authorization"|Authorization' -S

Length of output: 4778
Let’s pinpoint exactly which HAR files include real cookie arrays or Set-Cookie header entries:
🏁 Script executed:
#!/bin/bash
# Find HARs with a "cookies": [ { … } ] section
fd -t f -e har | xargs rg -n '"cookies"\s*:\s*\[' -n
# Find HARs that record Set-Cookie headers
fd -t f -e har | xargs rg -n '"name"\s*:\s*"set-cookie"'

Length of output: 53011
Redact cookies and Set-Cookie headers in HAR recordings
Test HARs under packages/instrumentation-*/recordings/**/*.har (and packages/traceloop-sdk/recordings/**/*.har) include real Cloudflare cookies and Set-Cookie header values. These aren’t needed for replay and expose sensitive, highly dynamic data. Please sanitize all recordings by:
• Replacing any non-empty cookies array with an empty list:
"cookies": [ … ] → "cookies": []
• Redacting Set-Cookie header values:
{ "name": "set-cookie", "value": "…actual cookie…" } → { "name": "set-cookie", "value": "<redacted>" }
Optionally remove other dynamic fields (e.g. x-request-id, cf-ray) to reduce churn.
Example diff for one recording:
- "cookies": [
- { "domain": ".api.openai.com", "name": "__cf_bm", "value": "sNJZb…", … },
- { "domain": ".api.openai.com", "name": "_cfuvid", "value": "dHNKc…", … }
- ],
+ "cookies": [],
@@
- { "name": "set-cookie", "value": "__cf_bm=…; HttpOnly; Secure; SameSite=None" },
- { "name": "set-cookie", "value": "_cfuvid=…; HttpOnly; Secure; SameSite=None" },
+ { "name": "set-cookie", "value": "<redacted>" },
+ { "name": "set-cookie", "value": "<redacted>" },Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-image-editing_4216676336/recording.har
around lines 95 to 115, the HAR contains real Cloudflare cookies; replace any
non-empty "cookies" arrays with an empty array ("cookies": []) and redact any
Set-Cookie header values by replacing their "value" with "<redacted>" (i.e. {
"name": "set-cookie", "value": "<redacted>" }); optionally remove or normalize
other dynamic headers/fields such as "x-request-id" and "cf-ray" to reduce
churn.
"name": "openai-project",
"value": "proj_tzz1TbPPOXaf6j9tEkVUBIAa"
},
🛠️ Refactor suggestion
Redact project id and per-request identifiers from the HAR.
openai-project exposes a project id; x-request-id and cf-ray are unique identifiers. These shouldn't live in git.
Apply redactions:
- "name": "openai-project",
- "value": "proj_tzz1TbPPOXaf6j9tEkVUBIAa"
+ "name": "openai-project",
+ "value": "<REDACTED>"
...
- "name": "x-request-id",
- "value": "req_d4cbf6f13bc6488ca540735f279f6fa9"
+ "name": "x-request-id",
+ "value": "<REDACTED>"
...
- "name": "cf-ray",
- "value": "96d60206489d7d94-TLV"
+ "name": "cf-ray",
+ "value": "<REDACTED>"

Also applies to: 183-185, 214-214
🤖 Prompt for AI Agents
In
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-attributes-in-span-for-streaming-chat_72904453/recording.har
around lines 147-149 (also apply to 183-185 and 214), the HAR contains sensitive
identifiers (openai-project project id, x-request-id, cf-ray); remove or redact
those values by replacing the project id and any per-request identifiers with
stable placeholders (e.g. "<REDACTED_PROJECT_ID>", "<REDACTED_REQUEST_ID>",
"<REDACTED_CF_RAY>") or delete the header/value pairs entirely so no unique IDs
remain in the committed HAR. Ensure the same redaction is applied consistently
at the other referenced line ranges.
  "name": "x-ratelimit-limit-requests",
- "value": "5000"
+ "value": "10000"
},
{
  "name": "x-ratelimit-limit-tokens",
- "value": "80000"
+ "value": "1000000"
},
{
  "name": "x-ratelimit-remaining-requests",
- "value": "4999"
+ "value": "9999"
},
{
  "name": "x-ratelimit-remaining-tokens",
- "value": "79974"
+ "value": "999989"
},
{
  "name": "x-ratelimit-reset-requests",
- "value": "12ms"
+ "value": "6ms"
},
{
  "name": "x-ratelimit-reset-tokens",
- "value": "19ms"
+ "value": "0s"
💡 Verification agent
🧩 Analysis chain
Rate-limit headers changed — verify extraction remains case-insensitive and tested
Ensure instrumentation still captures these headers and tests assert presence (not exact values).
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Searching instrumentation for rate-limit handling:"
rg -n -S 'x-ratelimit-|ratelimit' packages || true
echo
echo "Searching tests for assertions on rate-limit headers:"
rg -n -S 'ratelimit|x-ratelimit' -- tests packages || true

Length of output: 126204
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Listing files and directories under packages/instrumentation-openai:"
fd --max-depth 1 --type f --type d packages/instrumentation-openai

Length of output: 665
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Finding all files under packages/instrumentation-openai (including source and recordings):"
find packages/instrumentation-openai -maxdepth 3 -type f
echo
echo "Searching source for header extraction patterns:"
rg -n 'headers\.' packages/instrumentation-openai || true
rg -n 'ratelimit' -t js -t ts packages/instrumentation-openai || true

Length of output: 1279
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Searching instrumentation.ts for response header handling:"
rg -n 'response\.headers' -C3 packages/instrumentation-openai/src/instrumentation.ts || true
echo
echo "Displaying surrounding code in instrumentation.ts:"
sed -n '1,200p' packages/instrumentation-openai/src/instrumentation.ts

Length of output: 6716
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Inspecting instrumentation tests (first 200 lines):"
sed -n '1,200p' packages/instrumentation-openai/test/instrumentation.test.ts
echo
echo "Searching tests for any rate-limit header assertions:"
rg -n 'ratelimit' -C3 packages/instrumentation-openai/test/instrumentation.test.ts || true

Length of output: 6691
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Searching for APIPromise usage in instrumentation.ts:"
rg -n 'APIPromise' -C5 packages/instrumentation-openai/src/instrumentation.ts || true

Length of output: 1488
Add case-insensitive capture and tests for x-ratelimit headers
We currently unwrap the OpenAI API promise but never pull any of the x-ratelimit-* response headers into the span, nor do our tests assert their presence. Please:
• In packages/instrumentation-openai/src/instrumentation.ts
– Inside _wrapPromise (after promise._thenUnwrap), access the raw response headers (e.g. via the underlying HTTP client or APIPromise internals), filter header names by /^x-ratelimit-/i, normalize them to lower-case, and call
span.setAttribute(headerName, headerValue)
• In packages/instrumentation-openai/test/instrumentation.test.ts
– In the “should set attributes in span for tool calling” test (and any other tool-calling tests), add assertions such as:
assert.ok(completionSpan.attributes['x-ratelimit-limit-requests']);
assert.ok(completionSpan.attributes['x-ratelimit-remaining-tokens']);
// …etc., without checking exact values, just that they’re defined
This will ensure we continue to capture all new rate-limit headers in a case-insensitive way and have test coverage for their presence.
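A self-contained sketch of the case-insensitive extraction step — the plain-object header shape and the helper name are assumptions; the real code would read headers off the unwrapped APIPromise response:

```typescript
// Sketch: collect x-ratelimit-* response headers case-insensitively and
// normalize their names to lower case, so tests can assert presence
// without depending on header casing or exact values.
function rateLimitAttributes(
  headers: Record<string, string>,
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    if (/^x-ratelimit-/i.test(name)) {
      out[name.toLowerCase()] = value;
    }
  }
  return out;
}

// Mixed-case input, as seen across different environments/recordings.
const attrs = rateLimitAttributes({
  "X-RateLimit-Limit-Requests": "10000",
  "x-ratelimit-remaining-tokens": "999989",
  "content-type": "application/json",
});
console.log(Object.keys(attrs).sort().join(","));
```

The resulting map could then be passed to span.setAttribute per entry; because the keys are lowercased up front, the test assertions quoted above stay stable across re-records.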
"name": "openai-project",
"value": "proj_tzz1TbPPOXaf6j9tEkVUBIAa"
},
💡 Verification agent
🧩 Analysis chain
Mask internal/correlation identifiers (openai-project, x-request-id, cf-ray)
These are environment/account-specific and not required by assertions. Redact to reduce leakage and churn.
Apply this diff:
- "value": "proj_tzz1TbPPOXaf6j9tEkVUBIAa"
+ "value": "<redacted>"
...
- "value": "req_a3d313c317fe45698f4c943bc31e0e7e"
+ "value": "<redacted>"
...
- "value": "96d60234595a7d94-TLV"
+ "value": "<redacted>"Scan for additional exposures across recordings:
Also applies to: 183-185, 213-215
🏁 Script executed:
#!/bin/bash
rg -n '"openai-project"|"\bx-request-id\b"|"cf-ray"' packages/instrumentation-openai/recordings -A1

Length of output: 18356
Redact environment-specific identifiers in HAR test recordings
Multiple .har files under packages/instrumentation-openai/recordings expose hard-coded values for openai-project, x-request-id and cf-ray. These should be replaced with <redacted> to prevent leakage and snapshot churn.
Affected paths:
- packages/instrumentation-openai/recordings/**/*.har
Suggested one-liner to sanitize all recordings:
find packages/instrumentation-openai/recordings -type f -name '*.har' -print0 | \
xargs -0 sed -i -E \
    -e 's/("name": "(openai-project|x-request-id|cf-ray)",\s*"value": ")[^"]+"/\1<redacted>"/g'

Example diff snippet:
- "name": "openai-project",
- "value": "proj_tzz1TbPPOXaf6j9tEkVUBIAa"
+ "name": "openai-project",
+ "value": "<redacted>"
...
- "name": "x-request-id",
- "value": "req_a3d313c317fe45698f4c943bc31e0e7e"
+ "name": "x-request-id",
+ "value": "<redacted>"
...
- "name": "cf-ray",
- "value": "96d60234595a7d94-TLV"
+ "name": "cf-ray",
+ "value": "<redacted>"🤖 Prompt for AI Agents
In
packages/instrumentation-openai/recordings/Test-OpenAI-instrumentation_1770406427/should-set-function_call-attributes-in-span-for-stream-completion-when-multiple-tools-cal_2278551756/recording.har
around lines 147-149, the HAR recording contains environment-specific
identifiers (openai-project, x-request-id, cf-ray) that must be redacted; update
all .har files under packages/instrumentation-openai/recordings to replace the
values for these keys with "<redacted>" (e.g., use a bulk find/replace or sed
script to match the "name": "<key>", "value": "<any>" pattern and set the value
to "<redacted>") so recordings no longer leak identifiers and snapshots stay
stable.
- Implement image generation, editing, and variation instrumentation - Add proper OpenTelemetry span creation and attribute tracking - Support token usage calculation for DALL-E 2/3 models - Include image upload callback mechanism for base64 processing - Add comprehensive test coverage with HTTP recordings - Follow gen_ai semantic attribute conventions - Install @types/node dependency for TypeScript compilation 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Important
Looks good to me! 👍
Reviewed d0d6e1b in 2 minutes and 37 seconds. Click for details.
- Reviewed 2826 lines of code in 21 files
- Skipped 1 file when reviewing.
- Skipped posting 6 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/src/image-wrappers.ts:12
- Draft comment:
The JSDoc comment (lines 12–24) mentions token costs for 'gpt-image-1' while the implementation supports 'dall-e-2' and 'dall-e-3'. Please update the documentation for clarity. - Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 0% vs. threshold = 50% The comment points out a real issue - the documentation mentions a model that isn't actually supported in the code. This could be confusing for developers. However, the review rules say to only keep comments that require code changes. Documentation-only changes are not mentioned as being in scope. The rules emphasize focusing on logic issues rather than documentation. The documentation is technically incorrect which could cause confusion. But am I being too strict in interpreting the rules about "code changes only"? The rules are quite clear that we should focus on code logic issues, not documentation or explanatory comments. While documentation accuracy is important, it's out of scope for this review. The comment should be deleted as it only suggests a documentation change rather than a code change, which is outside the scope of what we're looking for according to the review rules.
2. packages/instrumentation-openai/src/image-wrappers.ts:80
- Draft comment:
In processImageInRequest, when the image string starts with 'http', the function simply returns null. If URL-based images should be supported, consider handling them explicitly or documenting this behavior clearly. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/instrumentation-openai/src/image-wrappers.ts:217
- Draft comment:
There is an inconsistency in prompt attribute indexing between setImageEditRequestAttributes and setImageVariationRequestAttributes (index 1 vs. index 0). Please confirm if this is intentional and add a clarifying comment if needed. - Reason this comment was not posted:
Comment was on unchanged code.
4. packages/instrumentation-openai/src/image-wrappers.ts:304
- Draft comment:
In setImageGenerationResponseAttributes, a constant value of 272 tokens is added when an image is provided. It would be helpful to document the rationale for this magic number or make it configurable. - Reason this comment was not posted:
Comment was on unchanged code.
5. packages/instrumentation-openai/test/instrumentation.test.ts:124
- Draft comment:
Some assertions compare prompt token counts as strings (e.g., '15' or '8'). Ensure that the token values are consistently typed (number vs string) across instrumentation and tests. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
6. packages/instrumentation-openai/test/instrumentation.test.ts:180
- Draft comment:
Several tests for image editing and variation are marked as skipped. Please ensure this is intentional and consider adding a comment explaining why these tests are skipped. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_dPDFBsXB3Uzv0Xz4
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
Important
Looks good to me! 👍
Reviewed fb35e1b in 1 minute and 0 seconds. Click for details.
- Reviewed 14 lines of code in 1 file
- Skipped 0 files when reviewing.
- Skipped posting 1 draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/src/image-wrappers.ts:59
- Draft comment:
Minor formatting change: the tokensPerImage assignment is reflowed without altering logic. Ensure this reflow aligns with your project's code style guidelines. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None
Workflow ID: wflow_eoTCRdPaD9JLQLU7
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Important
Adds image support to OpenAI in Traceloop SDK, including tracing for image operations and new utilities for handling image uploads.
- Added image-wrappers.ts with:
  - calculateImageGenerationTokens() for estimating token usage in image operations.
  - processImageInRequest() for handling image uploads.
  - wrapImageGeneration(), wrapImageEdit(), and wrapImageVariation() for wrapping image-related functions.
- Updated OpenAIInstrumentation in instrumentation.ts to include image support.
- Added ImageUploader class in image-uploader.ts for uploading images.
- Extended OpenAIInstrumentationConfig in types.ts to include an uploadBase64Image callback.
- Updated initInstrumentations() in tracing/index.ts to handle image upload callbacks.
- Extended instrumentation.test.ts.
- Added test_edit_only.ts and test_generate_only.ts for testing image editing and generation.
- Updated package.json to include new scripts for running image tests.

This description was created by
for fb35e1b. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit
New Features
Chores
Tests