diff --git a/CHANGELOG.md b/CHANGELOG.md index b507034a..e994a797 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -148,6 +148,35 @@ response = MyAgent.embed(inputs: ["Text 1", "Text 2"]).embed_now vectors = response.data.map { |d| d[:embedding] } ``` +**Normalized Usage Statistics** +```ruby +response = MyAgent.prompt("Hello").generate_now + +# Works across all providers +response.usage.input_tokens +response.usage.output_tokens +response.usage.total_tokens + +# Provider-specific fields when available +response.usage.cached_tokens # OpenAI, Anthropic +response.usage.reasoning_tokens # OpenAI o1 models +response.usage.service_tier # Anthropic +``` + +**Enhanced Instrumentation for APM Integration** +- Unified event structure: `prompt.active_agent` and `embed.active_agent` (top-level) plus `prompt.provider.active_agent` and `embed.provider.active_agent` (per-API-call) +- Event payloads include comprehensive data for monitoring tools (New Relic, DataDog, etc.): + - Request parameters: `model`, `temperature`, `max_tokens`, `top_p`, `stream`, `message_count`, `has_tools` + - Usage data: `input_tokens`, `output_tokens`, `total_tokens`, `cached_tokens`, `reasoning_tokens`, `audio_tokens`, `cache_creation_tokens` (critical for cost tracking) + - Response metadata: `finish_reason`, `response_model`, `response_id`, `embedding_count` +- Top-level events report cumulative usage across all API calls in multi-turn conversations +- Provider-level events report per-call usage for granular tracking + +**Multi-Turn Usage Tracking** +- `response.usage` now returns cumulative token counts across all API calls during tool calling +- New `response.usages` array contains individual usage objects from each API call +- `Usage` objects support addition: `usage1 + usage2` for combining statistics + **Provider Enhancements** - OpenAI Responses API: `api: :responses` or `api: :chat` - Anthropic JSON object mode with automatic extraction @@ -195,6 +224,7 @@ vectors = response.data.map { |d| d[:embedding] } - Template rendering without blocks - Schema generator key symbolization - Rails 8.0 and 8.1 compatibility +- Usage extraction across OpenAI/Anthropic response formats ### Removed diff --git a/docs/.vitepress/config.mts b/docs/.vitepress/config.mts index bc35ebfd..145d11f6 100644 --- a/docs/.vitepress/config.mts +++ b/docs/.vitepress/config.mts @@ -100,6 +100,7 @@ export default defineConfig({ { text: 'Embeddings', link: '/actions/embeddings' }, { text: 'Tools', link: '/actions/tools' }, { text: 'Structured Output', link: '/actions/structured_output' }, + { text: 'Usage', link: '/actions/usage' }, ] }, { diff --git a/docs/actions.md b/docs/actions.md index 1a36308c..22e6bc5d 100644 --- a/docs/actions.md +++ b/docs/actions.md @@ -43,6 +43,15 @@ Generate vectors for semantic search: <<< @/../test/docs/actions_examples_test.rb#embeddings_vectorize{ruby:line-numbers} +### [Usage Statistics](/actions/usage) + +Track token consumption and costs: + +```ruby +response = agent.summarize.generate_now +response.usage.total_tokens #=> 125 +``` + ## Common Patterns ### Multi-Capability Actions diff --git a/docs/actions/usage.md b/docs/actions/usage.md new file mode 100644 index 00000000..5ae341e8 --- /dev/null +++ b/docs/actions/usage.md @@ -0,0 +1,71 @@ +--- +title: Usage Statistics +description: Track token usage and performance metrics across all AI providers with normalized usage objects. +--- +# {{ $frontmatter.title }} + +Track token consumption and performance metrics from AI provider responses. 
All providers return normalized usage statistics for consistent cost tracking and monitoring. + +::: tip Monitor Usage in Production +See [Instrumentation](/framework/instrumentation) to monitor usage statistics in real-time using ActiveSupport::Notifications. +::: + +## Accessing Usage + +Get usage statistics from any response: + +<<< @/../test/docs/actions/usage_examples_test.rb#accessing_usage{ruby:line-numbers} + +## Common Fields + +These fields work across all providers: + +<<< @/../test/docs/actions/usage_examples_test.rb#common_fields{ruby:line-numbers} + +## Provider-Specific Fields + +Access advanced metrics when available: + +::: code-group +<<< @/../test/docs/actions/usage_examples_test.rb#provider_specific_openai{ruby:line-numbers} [OpenAI] +<<< @/../test/docs/actions/usage_examples_test.rb#provider_specific_anthropic{ruby:line-numbers} [Anthropic] +<<< @/../test/docs/actions/usage_examples_test.rb#provider_specific_ollama{ruby:line-numbers} [Ollama] +::: + +## Provider Details + +Raw provider data preserved in `provider_details`: + +::: code-group +<<< @/../test/docs/actions/usage_examples_test.rb#provider_details_openai{ruby:line-numbers} [OpenAI] +<<< @/../test/docs/actions/usage_examples_test.rb#provider_details_ollama{ruby:line-numbers} [Ollama] +::: + +## Cost Tracking + +Calculate costs using token counts: + +<<< @/../test/docs/actions/usage_examples_test.rb#cost_tracking{ruby:line-numbers} + +**Monitor costs in production:** Use [Instrumentation](/framework/instrumentation#cost-tracking) to automatically track costs across all requests. + +## Embeddings Usage + +Embedding responses have zero output tokens: + +<<< @/../test/docs/actions/usage_examples_test.rb#embeddings_usage{ruby:line-numbers} + +## Field Mapping + +How provider fields map to normalized names: + +| Provider | input_tokens | output_tokens | total_tokens | +|----------|--------------|---------------|--------------| +| OpenAI Chat | prompt_tokens | completion_tokens | total_tokens | +| OpenAI Embed | prompt_tokens | 0 | total_tokens | +| OpenAI Responses | input_tokens | output_tokens | total_tokens | +| Anthropic | input_tokens | output_tokens | calculated | +| Ollama | prompt_eval_count | eval_count | calculated | +| OpenRouter | prompt_tokens | completion_tokens | total_tokens | + +**Note:** `total_tokens` is automatically calculated as `input_tokens + output_tokens` when not provided by the provider. 
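+
+## Combining Usage Objects
+
+For illustration, here is how the normalized object behaves when a provider omits `total_tokens`, and how per-call usages combine during multi-turn tool calling. You would normally read these objects from `response.usage` / `response.usages` rather than constructing them by hand; the literal values below are just a sketch:
+
+```ruby
+usage = ActiveAgent::Providers::Common::Usage.new(input_tokens: 120, output_tokens: 45)
+usage.total_tokens #=> 165 (calculated as input_tokens + output_tokens)
+
+# Usage objects support addition, which is how multi-turn totals accumulate
+second_call = ActiveAgent::Providers::Common::Usage.new(input_tokens: 30, output_tokens: 10)
+(usage + second_call).total_tokens #=> 205
+```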
diff --git a/docs/agents/generation.md b/docs/agents/generation.md index 3c575754..fe7accf7 100644 --- a/docs/agents/generation.md +++ b/docs/agents/generation.md @@ -93,10 +93,11 @@ response.raw_request # The most recent request in provider format response.raw_response # The most recent response in provider format response.context # The original context that was sent -# Usage statistics (when available from provider) -response.prompt_tokens # Input tokens used -response.completion_tokens # Output tokens used -response.total_tokens # Total tokens used +# Usage statistics (see /actions/usage for details) +response.usage # Normalized usage object across all providers +response.usage.input_tokens +response.usage.output_tokens +response.usage.total_tokens ``` For embeddings: @@ -110,14 +111,16 @@ response.raw_request # The most recent request in provider format response.raw_response # The most recent response in provider format response.context # The original context that was sent -# Usage statistics (when available from provider) -response.prompt_tokens +# Usage statistics +response.usage # Normalized usage object +response.usage.input_tokens ``` ## Next Steps - [Agents](/agents) - Understanding the full agent lifecycle - [Actions](/actions) - Define what your agents can do +- [Usage Statistics](/actions/usage) - Track token consumption and costs - [Messages](/actions/messages) - Work with multimodal content - [Tools](/actions/tools) - Enable function calling capabilities - [Streaming](/agents/streaming) - Stream responses in real-time diff --git a/docs/framework.md b/docs/framework.md index 2c5105c5..465472ca 100644 --- a/docs/framework.md +++ b/docs/framework.md @@ -60,7 +60,7 @@ When you define an agent, you create a specialized participant that interacts wi - **Agent** (Controller) - Manages lifecycle, defines actions, configures providers - **Generation** (Request Proxy) - Coordinates execution, holds configuration, provides synchronous/async methods. Created by invocation, it's lazy—execution doesn't start until you call `.prompt_now`, `.embed_now`, or `.prompt_later`. -- **Response** (Result) - Contains messages, metadata, token usage, and parsed output. Returned after Generation executes. +- **Response** (Result) - Contains messages, metadata, and normalized usage statistics (see **[Usage Statistics](/actions/usage)**). Returned after Generation executes. **Request-Response Lifecycle:** diff --git a/docs/framework/instrumentation.md b/docs/framework/instrumentation.md index fc10801b..3c71084f 100644 --- a/docs/framework/instrumentation.md +++ b/docs/framework/instrumentation.md @@ -4,7 +4,7 @@ description: Monitor provider operations using ActiveSupport::Notifications. Tra --- # {{ $frontmatter.title }} -ActiveAgent instruments all provider operations using `ActiveSupport::Notifications`, enabling detailed monitoring, logging, and custom event handling. Track performance metrics, debug generation flows, and integrate with external monitoring services. +ActiveAgent instruments all provider operations using `ActiveSupport::Notifications`. ::: warning Beta Feature This instrumentation API is in beta and may change with Rails 8.1. Event names, payload structures, and subscriber interfaces could be updated as Rails evolves its instrumentation and events patterns. @@ -12,94 +12,75 @@ This instrumentation API is in beta and may change with Rails 8.1. Event names, ## Available Events -ActiveAgent publishes instrumentation events throughout the generation lifecycle. 
Subscribe to these events to monitor operations, track performance, and handle errors: +**Event namespaces:** +- **`.active_agent`** - Overall request/response lifecycle +- **`.provider.active_agent`** - Individual API calls in multi-turn conversations -### Provider Events +### Core Events -| Event | When Triggered | Description | -|-------|----------------|-------------| -| `prompt_start.provider.active_agent` | Before prompt request | Prompt generation initiated | -| `embed_start.provider.active_agent` | Before embedding request | Embedding generation initiated | -| `request_prepared.provider.active_agent` | After request built | Request prepared with formatted messages | -| `api_call.provider.active_agent` | After API response | Provider API call completed | -| `embed_call.provider.active_agent` | After embedding API response | Embedding API call completed | -| `prompt_complete.provider.active_agent` | After full generation | Entire generation cycle finished | +| Event | When Triggered | Key Payload Data | +|-------|----------------|------------------| +| `prompt.active_agent` | After prompt completion | `model`, `message_count`, `stream`, `usage`, `finish_reason`, `response_model`, `response_id` | +| `prompt.provider.active_agent` | After individual API call | Same as above (per-call usage in multi-turn) | +| `embed.active_agent` | After embedding completion | `model`, `input_size`, `embedding_count`, `usage`, `response_model`, `response_id` | +| `embed.provider.active_agent` | After individual embed call | Same as above | ### Streaming Events -| Event | When Triggered | Description | -|-------|----------------|-------------| -| `stream_open.provider.active_agent` | Stream connection starts | Streaming connection opened | -| `stream_close.provider.active_agent` | Stream connection ends | Streaming connection closed | +| Event | When Triggered | Key Payload Data | +|-------|----------------|------------------| +| `stream_open.active_agent` | Stream connection opens | Basic metadata | +| `stream_close.active_agent` | Stream connection closes | Basic metadata | +| `stream_chunk.active_agent` | Processing stream chunk | `chunk_type` (when available) | -### Processing Events +### Tool and Processing Events -| Event | When Triggered | Description | -|-------|----------------|-------------| -| `messages_extracted.provider.active_agent` | After parsing response | Messages extracted from API response | -| `tool_calls_processing.provider.active_agent` | Before executing tools | Tool/function calls detected and processing | -| `multi_turn_continue.provider.active_agent` | After tool execution | Continuing conversation after tool use | -| `tool_execute.provider.active_agent` | During tool execution | Individual tool being executed | +| Event | When Triggered | Key Payload Data | +|-------|----------------|------------------| +| `tool_call.active_agent` | Individual tool execution | `tool_name` | -### Error Events - -| Event | When Triggered | Description | -|-------|----------------|-------------| -| `retry_attempt.provider.active_agent` | After failed request | Retry attempt after error | -| `retry_exhausted.provider.active_agent` | After max retries | All retry attempts exhausted | - -### Agent Events - -| Event | When Triggered | Description | -|-------|----------------|-------------| -| `process.active_agent` | During agent action | Agent action processing | +### Infrastructure Events +| Event | When Triggered | Key Payload Data | +|-------|----------------|------------------| +| 
`process.active_agent` | Agent action processing | `agent`, `action`, `args`, `kwargs` | ## Built-in Log Subscriber -ActiveAgent includes a `LogSubscriber` that automatically logs all provider operations at the `debug` level when Rails loads: +ActiveAgent automatically logs all provider operations at the `debug` level: <<< @/../lib/active_agent/providers/log_subscriber.rb#log_subscriber_attach {ruby:line-numbers} -Logs include trace IDs for tracking related operations, provider names, timing information, and operation details. - -**Example log output:** +**Example output:** ``` -[trace-123] [ActiveAgent] [OpenAI::Responses] Starting prompt request -[trace-123] [ActiveAgent] [OpenAI::Responses] Prepared request with 2 message(s) -[trace-123] [ActiveAgent] [OpenAI::Responses] API call completed in 543.2ms (streaming: false) -[trace-123] [ActiveAgent] [OpenAI::Responses] Prompt completed with 3 message(s) in stack (total: 567.1ms) +[trace-123] [ActiveAgent] [OpenAI] Prompt completed: model=gpt-4o messages=2 stream=false tokens=150/75 finish=stop 543.2ms +[trace-456] [ActiveAgent] [OpenAI] Embed completed: model=text-embedding-ada-002 inputs=5 embeddings=5 tokens=150 89.1ms ``` ### Controlling Log Verbosity -By default, ActiveAgent automatically inherits the logger and log level settings from your Rails application via the Railtie. This means instrumentation logging respects your Rails environment configuration without additional setup. +ActiveAgent inherits your Rails logger configuration automatically. Non-Rails apps: see [Configuration](/framework/configuration). -If you're not using Rails, see the [Configuration](/framework/configuration) documentation for details on configuring logging behavior. - -**Log Level Guidance:** - -- **`DEBUG`** - All events logged with full detail (development default) -- **`INFO`** - Important operations like API calls and completions (production default) -- **`WARN`** - Only errors and retries (quiet production) -- **`ERROR`** - Only failures (minimal logging) -- **`FATAL`** - Disable instrumentation logging entirely +| Level | What's Logged | +|-------|---------------| +| `DEBUG` | All events with full detail | +| `INFO` | API calls and completions | +| `WARN` | Errors and retries only | +| `ERROR` | Failures only | +| `FATAL` | Nothing | ## Custom Event Subscribers -Subscribe to specific events or all ActiveAgent events for monitoring, metrics collection, debugging, and integration with external services. - ### Basic Subscription ```ruby -# Subscribe to a specific event -ActiveSupport::Notifications.subscribe("api_call.provider.active_agent") do |event| +# Subscribe to prompt completions +ActiveSupport::Notifications.subscribe("prompt.active_agent") do |event| duration = event.duration provider = event.payload[:provider_module] - trace_id = event.payload[:trace_id] + model = event.payload[:model] - # Your custom handling - Rails.logger.info "AI API call: #{provider} completed in #{duration}ms (trace: #{trace_id})" + Rails.logger.info "AI prompt: #{provider}/#{model} completed in #{duration}ms" end # Subscribe to all ActiveAgent events @@ -111,21 +92,61 @@ end ### Event Payload Data -Each event includes contextual data in the payload hash. 
Common fields across events: +**Common fields (all events):** +- `provider` - Provider name (`"OpenAI"`, `"Anthropic"`, `"Ollama"`) +- `provider_module` - Provider class +- `trace_id` - Unique identifier for tracking +- `event.duration` - Duration in milliseconds + +**Prompt events:** +```ruby +{ + model: "gpt-4o", + message_count: 2, + stream: false, + temperature: 0.7, # when set + max_tokens: 1000, # when set + has_tools: true, + tool_count: 3, + has_instructions: true, + usage: { + input_tokens: 100, + output_tokens: 50, + total_tokens: 150, + cached_tokens: 25, # when available + reasoning_tokens: 10 # when available + }, + finish_reason: "stop", # "stop", "length", "tool_calls" + response_model: "gpt-4o", + response_id: "chatcmpl-123" +} +``` + +::: tip Usage Object Details +See [Usage Statistics](/actions/usage) for field definitions and provider-specific metrics. +::: -| Field | Type | Description | -|-------|------|-------------| -| `trace_id` | String | Unique identifier for tracking related operations across the request lifecycle (optionally set when prompting) | -| `provider_module` | String | Provider class handling the request (e.g., `"OpenAI::Responses"`) | -| `message_count` | Integer | Number of messages in the context (varies by event) | -| `streaming` | Boolean | Whether streaming is enabled for this request | -| `tool_count` | Integer | Number of tool calls being processed (tool events only) | -| `usage` | Hash | Token usage information from provider response | -| `attempt` | Integer | Current retry attempt number (retry events only) | -| `max_retries` | Integer | Maximum retry attempts configured (retry events only) | -| `exception` | String | Exception class name (error events only) | +**Embed events:** +```ruby +{ + model: "text-embedding-ada-002", + input_size: 5, + embedding_count: 5, + encoding_format: "float", # when set + dimensions: 1536, # when set + usage: { + input_tokens: 150, + total_tokens: 150 + }, + response_model: "text-embedding-ada-002", + response_id: "emb-123" +} +``` -Access duration via `event.duration` (in milliseconds). +**Other events:** +- `tool_name` - Tool being executed +- `chunk_type` - Stream chunk type (when available) +- `uri_base`, `exception`, `message` - Connection error details ### Common Use Cases @@ -134,11 +155,12 @@ Access duration via `event.duration` (in milliseconds). 
Track slow API calls and alert when thresholds are exceeded: ```ruby -ActiveSupport::Notifications.subscribe("api_call.provider.active_agent") do |event| +ActiveSupport::Notifications.subscribe("prompt.active_agent") do |event| if event.duration > 5000 SlackNotifier.alert( - "Slow AI API call: #{event.duration}ms", + "Slow AI prompt: #{event.duration}ms", provider: event.payload[:provider_module], + model: event.payload[:model], trace_id: event.payload[:trace_id] ) end @@ -147,53 +169,43 @@ end **Cost Tracking:** -Monitor token usage and calculate costs by provider: - ```ruby -ActiveSupport::Notifications.subscribe("prompt_complete.provider.active_agent") do |event| +ActiveSupport::Notifications.subscribe("prompt.active_agent") do |event| usage = event.payload[:usage] next unless usage CostTracker.record( - provider: event.payload[:provider_module], - prompt_tokens: usage[:prompt_tokens], - completion_tokens: usage[:completion_tokens], - total_tokens: usage[:total_tokens], - trace_id: event.payload[:trace_id] + provider: event.payload[:provider], + model: event.payload[:response_model], + input_tokens: usage[:input_tokens], + output_tokens: usage[:output_tokens], + cached_tokens: usage[:cached_tokens], + reasoning_tokens: usage[:reasoning_tokens] ) end ``` -**Error Tracking:** - -Capture failures and send to error monitoring service: +**Analytics:** ```ruby -ActiveSupport::Notifications.subscribe("retry_exhausted.provider.active_agent") do |event| - Sentry.capture_message( - "AI request failed after #{event.payload[:max_retries]} retries", - level: :error, - extra: { - trace_id: event.payload[:trace_id], - provider: event.payload[:provider_module], - exception: event.payload[:exception] - } +ActiveSupport::Notifications.subscribe("prompt.active_agent") do |event| + Analytics.track( + "ai.prompt", + model: event.payload[:model], + tokens: event.payload[:usage]&.fetch(:total_tokens), + duration: event.duration ) end ``` -**Tool Usage Analytics:** - -Track which tools are being called and how often: +**Tool Tracking:** ```ruby -ActiveSupport::Notifications.subscribe("tool_execute.provider.active_agent") do |event| - Analytics.increment( - "agent.tool_usage", - tags: { - tool_name: event.payload[:tool_name], - agent_class: event.payload[:agent_class] - } +ActiveSupport::Notifications.subscribe("tool_call.active_agent") do |event| + Analytics.track( + "tool.call", + name: event.payload[:tool_name], + duration: event.duration ) end ``` @@ -204,65 +216,42 @@ Create a custom log subscriber to control formatting, verbosity, and output dest ```ruby # config/initializers/active_agent_logging.rb -class CustomAgentLogger < ActiveAgent::LogSubscriber - def api_call(event) - return unless logger.info? # Only log at info level or higher - - duration = event.duration.round(1) - provider = event.payload[:provider_module] - - info "🤖 #{provider} API call: #{duration}ms" - end - - def prompt_complete(event) +class CustomAgentLogger < ActiveAgent::Providers::LogSubscriber + def prompt(event) return unless logger.info? - message_count = event.payload[:message_count] + provider = event.payload[:provider_module] + model = event.payload[:model] duration = event.duration.round(1) - info "✅ Prompt completed: #{message_count} messages in #{duration}ms" + info "🤖 #{provider}/#{model}: #{duration}ms" end - def tool_execute(event) + def tool_call(event) return unless logger.debug? 
tool_name = event.payload[:tool_name] - debug "🔧 Tool executed: #{tool_name}" + duration = event.duration.round(1) + debug "🔧 Tool: #{tool_name} (#{duration}ms)" end - def retry_attempt(event) - attempt = event.payload[:attempt] - max_retries = event.payload[:max_retries] - exception = event.payload[:exception] - - warn "⚠️ Retry attempt #{attempt}/#{max_retries} (#{exception})" + def connection_error(event) + provider = event.payload[:provider_module] + uri = event.payload[:uri_base] + error "❌ #{provider} connection failed: #{uri}" end end # Replace the default subscriber -ActiveAgent::LogSubscriber.detach_from :active_agent +ActiveAgent::Providers::LogSubscriber.detach_from :active_agent +ActiveAgent::Providers::LogSubscriber.detach_from :"provider.active_agent" CustomAgentLogger.attach_to :active_agent +CustomAgentLogger.attach_to :"provider.active_agent" ``` -## Common Debugging Scenarios - -**Slow generation:** -1. Check `api_call` event duration -2. Look for multiple `tool_execute` events (multi-turn overhead) -3. Check `message_count` in `request_prepared` (large context) - -**Tool execution issues:** -1. Enable debug logging to see `tool_execute` events -2. Check `tool_calls_processing` for tool count -3. Look for `multi_turn_continue` to verify conversation flow - -**Retry behavior:** -1. Watch for `retry_attempt` events with backoff times -2. Check `retry_exhausted` for ultimate failures -3. Review exception types in retry payloads - ## Related Documentation +- **[Usage Statistics](/actions/usage)** - Understand usage fields and provider-specific metrics - **[Agents](/agents)** - Learn about agent lifecycle, callbacks, and the generation cycle - **[Callbacks](/agents/callbacks)** - Understand callback hooks like `before_generation` and `after_generation` - **[Providers](/providers)** - Explore provider-specific behavior and configuration diff --git a/docs/providers.md b/docs/providers.md index f6117c51..9cfe45b7 100644 --- a/docs/providers.md +++ b/docs/providers.md @@ -92,7 +92,7 @@ All providers return standardized response objects: **Common attributes:** - `message` / `messages` - Response content and conversation history -- `prompt_tokens` / `completion_tokens` - Token usage for cost tracking +- `usage` - Normalized token usage statistics (see **[Usage Statistics](/actions/usage)**) - `raw_request` / `raw_response` - Provider-specific data for debugging - `context` - Original request sent to provider diff --git a/lib/active_agent/providers/_base_provider.rb b/lib/active_agent/providers/_base_provider.rb index fa1fa1fc..db254f3a 100644 --- a/lib/active_agent/providers/_base_provider.rb +++ b/lib/active_agent/providers/_base_provider.rb @@ -2,21 +2,21 @@ require_relative "common/response" require_relative "concerns/exception_handler" +require_relative "concerns/instrumentation" require_relative "concerns/previewable" -# Maps provider types to their gem dependencies. # @private GEM_LOADERS = { anthropic: [ "anthropic", "~> 1.12", "anthropic" ], openai: [ "openai", "~> 0.34", "openai" ] } -# Loads and requires a provider's gem dependency. +# Requires a provider's gem dependency. 
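+#
+# A usage sketch (the provider symbol and file argument are illustrative):
+#
+#   require_gem!(:openai, __FILE__)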
# # @param type [Symbol] provider type (:anthropic, :openai) -# @param file_name [String] provider file path for error context +# @param file_name [String] for error context # @return [void] -# @raise [LoadError] when the required gem is not available +# @raise [LoadError] when required gem is not installed def require_gem!(type, file_name) gem_name, requirement, package_name = GEM_LOADERS.fetch(type) provider_name = file_name.split("/").last.delete_suffix(".rb").camelize @@ -31,9 +31,8 @@ def require_gem!(type, file_name) module ActiveAgent module Providers - # Base class for LLM provider integrations. + # Orchestrates LLM provider API requests, streaming, and multi-turn tool calling. # - # Orchestrates API requests, streaming responses, and multi-turn tool calling. # Each provider (OpenAI, Anthropic, etc.) subclasses this to implement # provider-specific API interactions. # @@ -44,6 +43,7 @@ class BaseProvider extend ActiveSupport::Delegation include ExceptionHandler + include Instrumentation include Previewable class ProvidersError < StandardError; end @@ -51,34 +51,35 @@ class ProvidersError < StandardError; end attr_internal :options, :context, :trace_id, # Setup :request, :message_stack, # Runtime :stream_broadcaster, :streaming, # Callback (Streams) - :tools_function # Callback (Tools) + :tools_function, # Callback (Tools) + :usage_stack # Usage Tracking - # @return [String] provider name extracted from class name (e.g., "Anthropic", "OpenAI") + # @return [String] e.g., "Anthropic", "OpenAI" def self.service_name name.split("::").last.delete_suffix("Provider") end - # @return [String] module-qualified provider name (e.g., "Anthropic", "OpenAI::Chat") + # @return [String] e.g., "Anthropic", "OpenAI::Chat" def self.tag_name name.delete_prefix("ActiveAgent::Providers::").delete_suffix("Provider") end - # @return [Module] provider's namespace module (e.g., ActiveAgent::Providers::OpenAI) + # @return [Module] e.g., ActiveAgent::Providers::OpenAI def self.namespace "#{name.deconstantize}::#{service_name}".safe_constantize end - # @return [Class] provider's options class + # @return [Class] def self.options_klass namespace::Options end - # @return [ActiveModel::Type::Value] provider-specific request type for prompt casting/serialization + # @return [ActiveModel::Type::Value] for prompt casting/serialization def self.prompt_request_type namespace::RequestType.new end - # @return [ActiveModel::Type::Value] provider-specific request type for embedding casting/serialization + # @return [ActiveModel::Type::Value] for embedding casting/serialization # @raise [NotImplementedError] when provider doesn't support embeddings def self.embed_request_type fail(NotImplementedError) @@ -86,12 +87,10 @@ def self.embed_request_type delegate :service_name, :tag_name, :namespace, :options_klass, :prompt_request_type, :embed_request_type, to: :class - # Initializes a provider instance. 
- # # @param kwargs [Hash] configuration and callbacks # @option kwargs [Symbol] :service validates against provider's service name - # @option kwargs [Proc] :stream_broadcaster callback for streaming events (:open, :update, :close) - # @option kwargs [Proc] :tools_function callback to execute tool/function calls + # @option kwargs [Proc] :stream_broadcaster for streaming events (:open, :update, :close) + # @option kwargs [Proc] :tools_function to execute tool/function calls # @raise [RuntimeError] when service name doesn't match provider def initialize(kwargs = {}) assert_service!(kwargs.delete(:service)) @@ -107,21 +106,10 @@ def initialize(kwargs = {}) self.options = options_klass.new(kwargs.extract!(*options_klass.keys)) self.context = kwargs self.message_stack = [] + self.usage_stack = [] end - # Executes a prompt request with error handling and instrumentation. - # - # @return [ActiveAgent::Providers::Common::PromptResponse] - def prompt - instrument("prompt_start.provider.active_agent") do - self.request = prompt_request_type.cast(context.except(:trace_id)) - resolve_prompt - end - end - - # Generates a preview of the prompt without executing the API call. - # - # Casts context into a request object and renders it as markdown for inspection. + # Generates prompt preview without executing the API call. # # @return [String] markdown-formatted preview def preview @@ -129,15 +117,31 @@ def preview preview_prompt end - # Executes an embedding request with error handling and instrumentation. + # Executes prompt request with error handling and instrumentation. # - # Converts text into vector representations for semantic search and similarity operations. + # @return [ActiveAgent::Providers::Common::PromptResponse] + def prompt + self.request = prompt_request_type.cast(context.except(:trace_id)) + + instrument("prompt.active_agent") do |payload| + response = resolve_prompt + instrumentation_prompt_payload(payload, request, response) + + response + end + end + + # Executes embedding request with error handling and instrumentation. # # @return [ActiveAgent::Providers::Common::EmbedResponse] def embed - instrument("embed_start.provider.active_agent") do - self.request = embed_request_type.cast(context.except(:trace_id)) - resolve_embed + self.request = embed_request_type.cast(context.except(:trace_id)) + + instrument("embed.active_agent") do |payload| + response = resolve_embed + instrumentation_embed_payload(payload, request, response) + + response end end @@ -149,8 +153,6 @@ def assert_service!(name) fail "Unexpected Service Name: #{name} != #{service_name}" if name && name != service_name end - # Instruments an event for logging and metrics. - # # @param name [String] # @param payload [Hash] # @yield block to instrument @@ -160,34 +162,42 @@ def instrument(name, payload = {}, &block) ActiveSupport::Notifications.instrument(name, full_payload, &block) end - # Orchestrates the complete prompt request lifecycle. + # Orchestrates complete prompt request lifecycle. # - # Prepares request, executes API call, processes response, and handles - # recursive tool/function calling until completion. + # Handles recursive tool/function calling until completion. 
# # @return [ActiveAgent::Providers::Common::PromptResponse] def resolve_prompt - request = prepare_prompt_request - - instrument("request_prepared.provider.active_agent", message_count: request.messages.size) - - # @todo Validate Request - api_parameters = api_request_build(request, prompt_request_type) - api_response = instrument("api_call.provider.active_agent", streaming: api_parameters[:stream].present?) do - with_exception_handling { api_prompt_execute(api_parameters) } + api_parameters = api_request_build(prepare_prompt_request, prompt_request_type) + api_response = instrument("prompt.provider.active_agent") do |payload| + raw_response = with_exception_handling { api_prompt_execute(api_parameters) } + + # Instrumentation Context Building + # Normalize response for instrumentation (providers may return gem objects) + normalized_response = api_response_normalize(raw_response) + common_response = Common::PromptResponse.new(raw_response: normalized_response) + instrumentation_prompt_payload(payload, self.request, common_response) + usage_stack.push(common_response.usage) if common_response&.usage + + raw_response end process_prompt_finished(api_response) end - # Orchestrates the complete embedding request lifecycle. + # Orchestrates complete embedding request lifecycle. # # @return [ActiveAgent::Providers::Common::EmbedResponse] def resolve_embed - # @todo Validate Request - api_parameters = api_request_build(request, embed_request_type) - api_response = instrument("embed_call.provider.active_agent") do - with_exception_handling { api_embed_execute(api_parameters) } + api_parameters = api_request_build(self.request, embed_request_type) + api_response = instrument("embed.provider.active_agent") do |payload| + raw_response = with_exception_handling { api_embed_execute(api_parameters) } + + # Instrumentation Context Building + common_response = Common::EmbedResponse.new(raw_response:) + instrumentation_embed_payload(payload, self.request, common_response) + + raw_response end process_embed_finished(api_response) @@ -195,7 +205,7 @@ def resolve_embed # Prepares request for next iteration in multi-turn conversation. # - # Appends accumulated messages from message stack and resets buffer for next cycle. + # Appends accumulated messages and resets buffer for next cycle. # # @return [Request] def prepare_prompt_request @@ -205,11 +215,9 @@ def prepare_prompt_request self.request end - # Builds API request parameters from request object. - # # @param request [Request] - # @param request_type [ActiveModel::Type::Value] type for serialization - # @return [Hash] + # @param request_type [ActiveModel::Type::Value] for serialization + # @return [Hash] API request parameters def api_request_build(request, request_type) parameters = request_type.serialize(request) parameters[:stream] = process_stream if request.try(:stream) @@ -221,7 +229,7 @@ def api_request_build(request, request_type) parameters end - # @return [Proc] invoked for each response chunk + # @return [Proc] for each response chunk def process_stream proc do |api_response_chunk| process_stream_chunk(api_response_chunk) @@ -231,12 +239,10 @@ def process_stream # Executes prompt request against provider's API. 
# # @abstract - # @param request_parameters [Hash] + # @param parameters [Hash] # @return [Object] provider-specific API response # @raise [NotImplementedError] def api_prompt_execute(parameters) - instrument("api_request.provider.active_agent", model: parameters[:model], streaming: !!parameters[:stream]) - unless parameters[:stream] api_prompt_executer.create(**parameters) else @@ -257,6 +263,18 @@ def api_prompt_executer fail NotImplementedError, "Subclass expected to implement" end + # Normalizes API response for instrumentation. + # + # Providers that return gem objects (like Anthropic::Models::Message) should + # override this to convert to a hash so usage data can be extracted. + # By default, returns the response as-is (for providers returning hashes). + # + # @param api_response [Object] provider-specific API response + # @return [Hash, Object] normalized response (preferably hash) + def api_response_normalize(api_response) + api_response + end + # Executes embedding request against provider's API. # # @abstract @@ -278,21 +296,21 @@ def process_stream_chunk(api_response_chunk) # Broadcasts stream open event. # - # Fires once per request cycle even during multi-turn tool calling. + # Fires once per request cycle, even during multi-turn tool calling. # # @return [void] def broadcast_stream_open return if streaming self.streaming = true - instrument("stream_open.provider.active_agent") + instrument("stream_open.active_agent") stream_broadcaster.call(nil, nil, :open) end # Broadcasts stream update with message content delta. # # @param message [Hash, Object] - # @param delta [String, nil] incremental content chunk + # @param delta [String, nil] # @return [void] def broadcast_stream_update(message, delta = nil) stream_broadcaster.call(message, delta, :update) @@ -300,35 +318,31 @@ def broadcast_stream_update(message, delta = nil) # Broadcasts stream close event. # - # Fires once per request cycle even during multi-turn tool calling. + # Fires once per request cycle, even during multi-turn tool calling. # # @return [void] def broadcast_stream_close return unless streaming self.streaming = false - instrument("stream_close.provider.active_agent") + instrument("stream_close.active_agent") stream_broadcaster.call(message_stack.last, nil, :close) end # Processes completed API response and handles tool calling recursion. # - # Extracts messages and function calls from the response. If tools were invoked, - # executes them and recursively continues the prompt until completion. + # Extracts messages and function calls. If tools were invoked, + # executes them and recursively continues until completion. # # @param api_response [Object, nil] provider-specific response # @return [Common::PromptResponse, nil] def process_prompt_finished(api_response = nil) if (api_messages = process_prompt_finished_extract_messages(api_response)) - instrument("messages_extracted.provider.active_agent", message_count: api_messages.size) message_stack.push(*api_messages) end if (tool_calls = process_prompt_finished_extract_function_calls)&.any? - instrument("tool_calls_processing.provider.active_agent", tool_count: tool_calls.size) process_function_calls(tool_calls) - - instrument("multi_turn_continue.provider.active_agent") resolve_prompt else @@ -337,27 +351,25 @@ def process_prompt_finished(api_response = nil) # as they continue to work. 
broadcast_stream_close - instrument("prompt_complete.provider.active_agent", message_count: message_stack.size) - # To convert the messages into common format we first need to merge the current # stack and then cast them to the provider type, so we can cast them out to common. messages = prompt_request_type.cast( messages: [ *request.messages, *message_stack ] ).messages + # Create response object with usage_stack array for multi-turn cumulative tracking. # This will returned as it closes up the recursive stack Common::PromptResponse.new( context:, + format: request.response_format, + messages:, raw_request: prompt_request_type.serialize(request), raw_response: api_response, - messages:, - format: request.response_format + usages: usage_stack ) end end - # Extracts messages from API response. - # # @abstract # @param api_response [Object] # @return [Array, nil] @@ -366,8 +378,6 @@ def process_prompt_finished_extract_messages(api_response) fail NotImplementedError, "Subclass expected to implement" end - # Extracts tool/function calls from API response. - # # @abstract # @return [Array, nil] # @raise [NotImplementedError] diff --git a/lib/active_agent/providers/anthropic_provider.rb b/lib/active_agent/providers/anthropic_provider.rb index 10684ed7..f243355d 100644 --- a/lib/active_agent/providers/anthropic_provider.rb +++ b/lib/active_agent/providers/anthropic_provider.rb @@ -53,7 +53,6 @@ def prepare_prompt_request_tools if (tool_choice_type == :any && functions_used.any?) || (tool_choice_type == :tool && tool_choice_name && functions_used.include?(tool_choice_name)) - instrument("tool_choice_removed.provider.active_agent") request.tool_choice = nil end end @@ -74,6 +73,15 @@ def api_prompt_executer client.messages end + # @see BaseProvider#api_response_normalize + # @param api_response [Anthropic::Models::Message] + # @return [Hash] normalized response hash + def api_response_normalize(api_response) + return api_response unless api_response + + Anthropic::Transforms.gem_to_hash(api_response) + end + # Processes streaming chunks and builds message incrementally in message_stack. 
# # Handles chunk types: message_start, content_block_start, content_block_delta, @@ -84,9 +92,9 @@ def api_prompt_executer # @param api_response_chunk [Anthropic::StreamEvent] # @return [void] def process_stream_chunk(api_response_chunk) - chunk_type = api_response_chunk.type.to_sym + chunk_type = api_response_chunk[:type]&.to_sym - instrument("stream_chunk_processing.provider.active_agent", chunk_type:) + instrument("stream_chunk.active_agent", chunk_type:) broadcast_stream_open @@ -178,26 +186,37 @@ def process_function_calls(api_function_calls) # @param api_function_call [Hash] with :name, :input, and :id keys # @return [Anthropic::Models::ToolResultBlockParam] def process_tool_call_function(api_function_call) - instrument("tool_execution.provider.active_agent", tool_name: api_function_call[:name]) - - results = tools_function.call( - api_function_call[:name], **api_function_call[:input] - ) - - ::Anthropic::Models::ToolResultBlockParam.new( - type: "tool_result", - tool_use_id: api_function_call[:id], - content: results.to_json, - ) + instrument("tool_call.active_agent", tool_name: api_function_call[:name]) do + results = tools_function.call( + api_function_call[:name], **api_function_call[:input] + ) + + ::Anthropic::Models::ToolResultBlockParam.new( + type: "tool_result", + tool_use_id: api_function_call[:id], + content: results.to_json, + is_error: false + ) + end end # Converts API response message to hash for message_stack. + # Converts Anthropic gem response object to hash for storage. + # + # @param api_response [Anthropic::Models::Message] + # @return [Common::PromptResponse, nil] + def process_prompt_finished(api_response = nil) + # Convert gem object to hash so that raw_response[:usage] works + api_response_hash = api_response ? Anthropic::Transforms.gem_to_hash(api_response) : nil + super(api_response_hash) + end + # # Handles JSON response format simulation by prepending `{` to the response # content after removing the assistant lead-in message. # # @see BaseProvider#process_prompt_finished_extract_messages - # @param api_response [Anthropic::Models::Message] + # @param api_response [Hash] converted response hash # @return [Array, nil] def process_prompt_finished_extract_messages(api_response) return unless api_response @@ -205,12 +224,10 @@ def process_prompt_finished_extract_messages(api_response) # Handle JSON response format simulation if request.response_format&.dig(:type) == "json_object" request.pop_message! - api_response.content[0].text = "{#{api_response.content[0].text}" + api_response[:content][0][:text] = "{#{api_response[:content][0][:text]}" end - message = Anthropic::Transforms.gem_to_hash(api_response) - - [ message ] + [ api_response ] end # Extracts tool_use blocks from message_stack and parses JSON inputs. diff --git a/lib/active_agent/providers/common/responses/base.rb b/lib/active_agent/providers/common/responses/base.rb index d9940675..74b28eb3 100644 --- a/lib/active_agent/providers/common/responses/base.rb +++ b/lib/active_agent/providers/common/responses/base.rb @@ -1,27 +1,24 @@ # frozen_string_literal: true require "active_agent/providers/common/model" +require "active_agent/providers/common/usage" module ActiveAgent module Providers module Common module Responses - # Base response model for provider responses. - # - # This class represents the standard response structure from AI providers - # across different services (OpenAI, Anthropic, etc.). 
It provides a unified - # interface for accessing response data, usage statistics, and request context. + # Provides unified interface for AI provider responses across OpenAI, Anthropic, etc. # # @abstract Subclass and override {#usage} if provider uses non-standard format # - # @note This is a base class. Use specialized subclasses for specific response types: - # - {Prompt} for conversational/completion responses with messages - # - {Embed} for embedding responses with vector data + # @note Use specialized subclasses for specific response types: + # - {Prompt} for conversational/completion responses + # - {Embed} for embedding responses # # @example Accessing response data # response = agent.prompt.generate_now # response.success? #=> true - # response.usage #=> { "prompt_tokens" => 10, "completion_tokens" => 20 } + # response.usage #=> Usage object with normalized fields # response.total_tokens #=> 30 # # @example Inspecting raw provider data @@ -33,117 +30,168 @@ module Responses # @see BaseModel class Base < BaseModel # @!attribute [r] context - # The original context that was sent to the provider. + # Original request context sent to the provider. # - # Contains structured information about the request including instructions, - # messages, tools, and other configuration passed to the LLM. + # Includes instructions, messages, tools, and configuration. # - # @return [Hash] the request context + # @return [Hash] attribute :context, writable: false # @!attribute [r] raw_request - # The most recent request in provider-specific format. + # Most recent request in provider-specific format. # - # Contains the actual API request payload sent to the provider, - # useful for debugging and logging. + # Useful for debugging and logging. # - # @return [Hash] the provider-formatted request + # @return [Hash] attribute :raw_request, writable: false # @!attribute [r] raw_response - # The most recent response in provider-specific format. + # Most recent response in provider-specific format. # - # Contains the raw API response from the provider, including all - # metadata, usage stats, and provider-specific fields. + # Includes metadata, usage stats, and provider-specific fields. + # Hash keys are deep symbolized for consistent access. # - # @return [Hash] the provider-formatted response + # @return [Hash] attribute :raw_response, writable: false - # Initializes a new response object with deep-duplicated attributes. + # @!attribute [r] usages + # Usage objects from each API call in multi-turn conversations. + # + # Each call (e.g., for tool calling) tracks usage separately. These are + # summed to provide cumulative statistics via {#usage}. # - # Deep duplication ensures that the response object maintains its own - # independent copy of the data, preventing external modifications from - # affecting the response's internal state. + # @return [Array] + attribute :usages, default: -> { [] }, writable: false + + # Initializes response with deep-duplicated attributes. # - # @param kwargs [Hash] response attributes - # @option kwargs [Hash] :context the original request context - # @option kwargs [Hash] :raw_request the provider-formatted request - # @option kwargs [Hash] :raw_response the provider-formatted response + # Deep duplication prevents external modifications from affecting internal state. + # The raw_response is deep symbolized for consistent key access across providers. 
# - # @return [Base] the initialized response object + # @param kwargs [Hash] + # @option kwargs [Hash] :context + # @option kwargs [Hash] :raw_request + # @option kwargs [Hash] :raw_response def initialize(kwargs = {}) - super(kwargs.deep_dup) # Ensure that userland can't fuck with our memory space + kwargs = kwargs.deep_dup # Ensure that userland can't fuck with our memory space + + # Deep symbolize raw_response for consistent access across all extraction methods + if kwargs[:raw_response].is_a?(Hash) + kwargs[:raw_response] = kwargs[:raw_response].deep_symbolize_keys + end + + super(kwargs) end - # Extracts instructions from the context. - # - # @return [String, Array, nil] the instructions that were sent to the provider + # @return [String, Array, nil] def instructions context[:instructions] end - # Indicates whether the generation request was successful. - # # @todo Better handling of failure flows - # - # @return [Boolean] true if successful, false otherwise + # @return [Boolean] def success? true end - # Extracts usage statistics from the raw response. + # Normalized usage statistics across all providers. + # + # For multi-turn conversations with tool calling, returns cumulative + # usage across all API calls (sum of {#usages}). # - # Most providers (OpenAI, Anthropic, etc.) return usage data in a - # standardized format within the response. This method extracts that - # information for token counting and billing purposes. + # @return [Usage, nil] # - # @return [Hash, nil] usage statistics hash with keys like "prompt_tokens", - # "completion_tokens", and "total_tokens", or nil if not available + # @example Single-turn usage + # response.usage.input_tokens #=> 100 + # response.usage.output_tokens #=> 25 + # response.usage.total_tokens #=> 125 # - # @example Usage data structure - # { - # "prompt_tokens" => 10, - # "completion_tokens" => 20, - # "total_tokens" => 30 - # } + # @example Multi-turn usage (cumulative) + # # After 3 API calls due to tool usage: + # response.usage.input_tokens #=> 350 (sum of all calls) + # response.usage.output_tokens #=> 120 (sum of all calls) + # + # @see Usage def usage - return nil unless raw_response - - # Most providers store usage in the same format - if raw_response.is_a?(Hash) && raw_response["usage"] - raw_response["usage"] + @usage ||= begin + if usages.any? + usages.reduce(:+) + elsif raw_response + Usage.from_provider_usage( + raw_response.is_a?(Hash) ? raw_response[:usage] : raw_response.usage + ) + end end end - # Extracts the number of tokens used in the prompt/input. + # Response ID from provider, useful for tracking and debugging. # - # @return [Integer, nil] number of prompt tokens used, or nil if unavailable + # @return [String, nil] # # @example - # response.prompt_tokens #=> 10 - def prompt_tokens - usage&.dig("prompt_tokens") + # response.id #=> "chatcmpl-CbDx1nXoNSBrNIMhiuy5fk7jXQjmT" (OpenAI) + # response.id #=> "msg_01RotDmSnYpKQjrTpaHUaEBz" (Anthropic) + # response.id #=> "gen-1761505659-yxgaVsqVABMQqw6oA7QF" (OpenRouter) + def id + @id ||= begin + return nil unless raw_response + + if raw_response.is_a?(Hash) + raw_response[:id] + elsif raw_response.respond_to?(:id) + raw_response.id + end + end end - # Extracts the number of tokens used in the completion/output. + # Model name from provider response. + # + # Useful for confirming which model was actually used, as providers may + # use different versions than requested. 
# - # @return [Integer, nil] number of completion tokens used, or nil if unavailable + # @return [String, nil] # # @example - # response.completion_tokens #=> 20 - def completion_tokens - usage&.dig("completion_tokens") + # response.model #=> "gpt-4o-mini-2024-07-18" + # response.model #=> "claude-3-5-haiku-20241022" + def model + @model ||= begin + return nil unless raw_response + + if raw_response.is_a?(Hash) + raw_response[:model] + elsif raw_response.respond_to?(:model) + raw_response.model + end + end end - # Extracts the total number of tokens used (prompt + completion). + # Finish reason from provider response. + # + # Indicates why generation stopped (e.g., "stop", "length", "tool_calls"). + # Normalizes access across providers that use different field names. # - # @return [Integer, nil] total number of tokens used, or nil if unavailable + # @return [String, nil] # # @example - # response.total_tokens #=> 30 - def total_tokens - usage&.dig("total_tokens") + # response.finish_reason #=> "stop" + # response.finish_reason #=> "length" + # response.finish_reason #=> "tool_calls" + # response.stop_reason #=> "stop" (alias) + def finish_reason + @finish_reason ||= begin + return nil unless raw_response + + if raw_response.is_a?(Hash) + # OpenAI format: choices[0].finish_reason or choices[0].message.finish_reason + raw_response.dig(:choices, 0, :finish_reason) || + raw_response.dig(:choices, 0, :message, :finish_reason) || + # Anthropic format: stop_reason + raw_response[:stop_reason] + end + end end + alias_method :stop_reason, :finish_reason end end end diff --git a/lib/active_agent/providers/common/usage.rb b/lib/active_agent/providers/common/usage.rb new file mode 100644 index 00000000..dd5082a6 --- /dev/null +++ b/lib/active_agent/providers/common/usage.rb @@ -0,0 +1,385 @@ +# frozen_string_literal: true + +require "active_agent/providers/common/model" + +module ActiveAgent + module Providers + module Common + # Normalizes token usage statistics across AI providers. + # + # Providers return usage data in different formats with different field names. + # This model normalizes them into a consistent structure, automatically calculating + # +total_tokens+ if not provided. + # + # @example Accessing normalized usage data + # usage = response.normalized_usage + # usage.input_tokens #=> 100 + # usage.output_tokens #=> 25 + # usage.total_tokens #=> 125 + # usage.cached_tokens #=> 20 (if available) + # + # @example Provider-specific details + # usage.provider_details #=> { "completion_tokens_details" => {...}, ... 
} + # usage.duration_ms #=> 5000 (for Ollama) + # usage.service_tier #=> "standard" (for Anthropic) + # + # @see https://platform.openai.com/docs/api-reference/chat/object OpenAI Chat Completion + # @see https://docs.anthropic.com/en/api/messages Anthropic Messages API + # @see https://github.com/ollama/ollama/blob/main/docs/api.md Ollama API + class Usage < BaseModel + # @!attribute [rw] input_tokens + # Normalized from: + # - OpenAI Chat/Embeddings: prompt_tokens + # - OpenAI Responses API: input_tokens + # - Anthropic: input_tokens + # - Ollama: prompt_eval_count + # - OpenRouter: prompt_tokens + # + # @return [Integer] + attribute :input_tokens, :integer, default: 0 + + # @!attribute [rw] output_tokens + # Normalized from: + # - OpenAI Chat: completion_tokens + # - OpenAI Responses API: output_tokens + # - Anthropic: output_tokens + # - Ollama: eval_count + # - OpenRouter: completion_tokens + # - OpenAI Embeddings: 0 (no output tokens) + # + # @return [Integer] + attribute :output_tokens, :integer, default: 0 + + # @!attribute [rw] total_tokens + # Automatically calculated as input_tokens + output_tokens if not provided by provider. + # + # @return [Integer] + attribute :total_tokens, :integer + + # @!attribute [rw] cached_tokens + # Available from: + # - OpenAI: prompt_tokens_details.cached_tokens or input_tokens_details.cached_tokens + # - Anthropic: cache_read_input_tokens + # + # @return [Integer, nil] + attribute :cached_tokens, :integer + + # @!attribute [rw] reasoning_tokens + # Available from: + # - OpenAI Chat: completion_tokens_details.reasoning_tokens + # - OpenAI Responses: output_tokens_details.reasoning_tokens + # + # @return [Integer, nil] + attribute :reasoning_tokens, :integer + + # @!attribute [rw] audio_tokens + # Available from: + # - OpenAI: sum of prompt_tokens_details.audio_tokens and completion_tokens_details.audio_tokens + # + # @return [Integer, nil] + attribute :audio_tokens, :integer + + # @!attribute [rw] cache_creation_tokens + # Available from: + # - Anthropic: cache_creation_input_tokens + # + # @return [Integer, nil] + attribute :cache_creation_tokens, :integer + + # @!attribute [rw] service_tier + # Available from: + # - Anthropic: service_tier ("standard", "priority", "batch") + # + # @return [String, nil] + attribute :service_tier, :string + + # @!attribute [rw] duration_ms + # Available from: + # - Ollama: total_duration (converted from nanoseconds) + # + # @return [Integer, nil] + attribute :duration_ms, :integer + + # @!attribute [rw] provider_details + # Preserves provider-specific information that doesn't fit the normalized structure. + # Useful for debugging or provider-specific features. + # + # @return [Hash] + attribute :provider_details, default: -> { {} } + + # Automatically calculates total_tokens if not provided. 
+ # + # @param attributes [Hash] + # @option attributes [Integer] :input_tokens + # @option attributes [Integer] :output_tokens + # @option attributes [Integer] :total_tokens (calculated if not provided) + # @option attributes [Integer] :cached_tokens + # @option attributes [Integer] :reasoning_tokens + # @option attributes [Integer] :audio_tokens + # @option attributes [Integer] :cache_creation_tokens + # @option attributes [String] :service_tier + # @option attributes [Integer] :duration_ms + # @option attributes [Hash] :provider_details + def initialize(attributes = {}) + super + # Calculate total_tokens if not provided + self.total_tokens ||= (input_tokens || 0) + (output_tokens || 0) + end + + # Sums all token counts from two Usage objects. + # + # @param other [Usage] + # @return [Usage] + # + # @example + # usage1 = Usage.new(input_tokens: 100, output_tokens: 50) + # usage2 = Usage.new(input_tokens: 75, output_tokens: 25) + # combined = usage1 + usage2 + # combined.input_tokens #=> 175 + # combined.output_tokens #=> 75 + # combined.total_tokens #=> 250 + def +(other) + return self unless other + + self.class.new( + input_tokens: self.input_tokens + other.input_tokens, + output_tokens: self.output_tokens + other.output_tokens, + total_tokens: self.total_tokens + other.total_tokens, + cached_tokens: sum_optional(self.cached_tokens, other.cached_tokens), + cache_creation_tokens: sum_optional(self.cache_creation_tokens, other.cache_creation_tokens), + reasoning_tokens: sum_optional(self.reasoning_tokens, other.reasoning_tokens), + audio_tokens: sum_optional(self.audio_tokens, other.audio_tokens) + ) + end + + # Creates a Usage object from OpenAI Chat Completion usage data. + # + # @param usage_hash [Hash] + # @return [Usage] + # + # @example + # Usage.from_openai_chat({ + # "prompt_tokens" => 100, + # "completion_tokens" => 25, + # "total_tokens" => 125, + # "prompt_tokens_details" => { "cached_tokens" => 20 }, + # "completion_tokens_details" => { "reasoning_tokens" => 3 } + # }) + def self.from_openai_chat(usage_hash) + return nil unless usage_hash + + usage = usage_hash.deep_symbolize_keys + prompt_details = usage[:prompt_tokens_details] || {} + completion_details = usage[:completion_tokens_details] || {} + + audio_sum = [ + prompt_details[:audio_tokens], + completion_details[:audio_tokens] + ].compact.sum + + new( + **usage.slice(:total_tokens), + input_tokens: usage[:prompt_tokens] || 0, + output_tokens: usage[:completion_tokens] || 0, + cached_tokens: prompt_details[:cached_tokens], + reasoning_tokens: completion_details[:reasoning_tokens], + audio_tokens: audio_sum > 0 ? audio_sum : nil, + provider_details: usage.slice(:prompt_tokens_details, :completion_tokens_details).compact + ) + end + + # Creates a Usage object from OpenAI Embedding API usage data. + # + # @param usage_hash [Hash] + # @return [Usage] + # + # @example + # Usage.from_openai_embedding({ + # "prompt_tokens" => 8, + # "total_tokens" => 8 + # }) + def self.from_openai_embedding(usage_hash) + return nil unless usage_hash + + usage = usage_hash.deep_symbolize_keys + + new( + **usage.slice(:total_tokens), + input_tokens: usage[:prompt_tokens] || 0, + output_tokens: 0, # Embeddings don't generate output tokens + provider_details: usage.except(:prompt_tokens, :total_tokens) + ) + end + + # Creates a Usage object from OpenAI Responses API usage data. 
+ # + # @param usage_hash [Hash] + # @return [Usage] + # + # @example + # Usage.from_openai_responses({ + # "input_tokens" => 150, + # "output_tokens" => 75, + # "total_tokens" => 225, + # "input_tokens_details" => { "cached_tokens" => 50 }, + # "output_tokens_details" => { "reasoning_tokens" => 10 } + # }) + def self.from_openai_responses(usage_hash) + return nil unless usage_hash + + usage = usage_hash.deep_symbolize_keys + input_details = usage[:input_tokens_details] || {} + output_details = usage[:output_tokens_details] || {} + + new( + **usage.slice(:input_tokens, :output_tokens, :total_tokens), + input_tokens: usage[:input_tokens] || 0, + output_tokens: usage[:output_tokens] || 0, + cached_tokens: input_details[:cached_tokens], + reasoning_tokens: output_details[:reasoning_tokens], + provider_details: usage.slice(:input_tokens_details, :output_tokens_details).compact + ) + end + + # Creates a Usage object from Anthropic usage data. + # + # @param usage_hash [Hash] + # @return [Usage] + # + # @example + # Usage.from_anthropic({ + # "input_tokens" => 2095, + # "output_tokens" => 503, + # "cache_read_input_tokens" => 1500, + # "cache_creation_input_tokens" => 2051, + # "service_tier" => "standard" + # }) + def self.from_anthropic(usage_hash) + return nil unless usage_hash + + usage = usage_hash.deep_symbolize_keys + + new( + **usage.slice(:input_tokens, :output_tokens, :service_tier), + input_tokens: usage[:input_tokens] || 0, + output_tokens: usage[:output_tokens] || 0, + cached_tokens: usage[:cache_read_input_tokens], + cache_creation_tokens: usage[:cache_creation_input_tokens], + provider_details: usage.slice(:cache_creation, :server_tool_use).compact + ) + end + + # Creates a Usage object from Ollama usage data. + # + # @param usage_hash [Hash] + # @return [Usage] + # + # @example + # Usage.from_ollama({ + # "prompt_eval_count" => 50, + # "eval_count" => 25, + # "total_duration" => 5000000000, + # "load_duration" => 1000000000 + # }) + def self.from_ollama(usage_hash) + return nil unless usage_hash + + usage = usage_hash.deep_symbolize_keys + + new( + input_tokens: usage[:prompt_eval_count] || 0, + output_tokens: usage[:eval_count] || 0, + duration_ms: convert_nanoseconds_to_ms(usage[:total_duration]), + provider_details: { + load_duration_ms: convert_nanoseconds_to_ms(usage[:load_duration]), + prompt_eval_duration_ms: convert_nanoseconds_to_ms(usage[:prompt_eval_duration]), + eval_duration_ms: convert_nanoseconds_to_ms(usage[:eval_duration]), + tokens_per_second: calculate_tokens_per_second(usage[:eval_count], usage[:eval_duration]) + }.compact + ) + end + + # Creates a Usage object from OpenRouter usage data. + # + # OpenRouter uses the same format as OpenAI Chat Completion. + # + # @param usage_hash [Hash] + # @return [Usage] + # + # @example + # Usage.from_openrouter({ + # "prompt_tokens" => 14, + # "completion_tokens" => 4, + # "total_tokens" => 18 + # }) + def self.from_openrouter(usage_hash) + from_openai_chat(usage_hash) + end + + # Auto-detects the provider format and creates a normalized Usage object. + # + # @note Detection is based on hash structure rather than native gem types + # because we cannot force-load all provider gems. This allows the framework + # to work with only the gems the user has installed. 
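+      #
+      # @example Routing an Anthropic-style payload by its service_tier key (illustrative hash)
+      #   Usage.from_provider_usage("input_tokens" => 20, "output_tokens" => 5, "service_tier" => "standard")
+      #   # delegates to from_anthropic; service_tier is preserved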
+ # + # @param usage_hash [Hash] + # @return [Usage, nil] + # + # @example + # Usage.from_provider_usage(some_usage_hash) + def self.from_provider_usage(usage_hash) + return nil unless usage_hash.is_a?(Hash) + + usage = usage_hash.deep_symbolize_keys + + # Detect Ollama by presence of nanosecond duration fields + if usage.key?(:total_duration) + from_ollama(usage_hash) + # Detect Anthropic by presence of cache_creation or service_tier + elsif usage.key?(:cache_creation) || usage.key?(:service_tier) + from_anthropic(usage_hash) + # Detect OpenAI Responses API by input_tokens/output_tokens with details + elsif usage.key?(:input_tokens) && usage.key?(:input_tokens_details) + from_openai_responses(usage_hash) + # Detect OpenAI Chat/OpenRouter by prompt_tokens/completion_tokens + elsif usage.key?(:completion_tokens) + from_openai_chat(usage_hash) + # Detect OpenAI Embedding by prompt_tokens without completion_tokens + elsif usage.key?(:prompt_tokens) + from_openai_embedding(usage_hash) + # Default to raw initialization + else + new(usage_hash) + end + end + + private + + # @param a [Integer, nil] + # @param b [Integer, nil] + # @return [Integer, nil] nil if both inputs are nil + def sum_optional(a, b) + return nil if a.nil? && b.nil? + (a || 0) + (b || 0) + end + + # @param nanoseconds [Integer, nil] + # @return [Integer, nil] + def self.convert_nanoseconds_to_ms(nanoseconds) + return nil unless nanoseconds + + (nanoseconds / 1_000_000.0).round + end + + # @param tokens [Integer, nil] + # @param duration_ns [Integer, nil] + # @return [Float, nil] + def self.calculate_tokens_per_second(tokens, duration_ns) + return nil unless tokens && duration_ns && duration_ns > 0 + + (tokens.to_f / (duration_ns / 1_000_000_000.0)).round(2) + end + end + end + end +end diff --git a/lib/active_agent/providers/concerns/instrumentation.rb b/lib/active_agent/providers/concerns/instrumentation.rb new file mode 100644 index 00000000..d6ad276d --- /dev/null +++ b/lib/active_agent/providers/concerns/instrumentation.rb @@ -0,0 +1,263 @@ +# frozen_string_literal: true + +module ActiveAgent + module Providers + # Builds instrumentation event payloads for ActiveSupport::Notifications. + # + # Extracts request parameters and response metadata for monitoring, debugging, + # and APM integration (New Relic, DataDog, etc.). 
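+    #
+    # == Subscribing
+    #
+    # A minimal sketch of consuming the top-level event via
+    # ActiveSupport::Notifications (handler body is illustrative):
+    #
+    #   ActiveSupport::Notifications.subscribe("prompt.active_agent") do |event|
+    #     usage = event.payload[:usage]
+    #     Rails.logger.info("LLM tokens=#{usage[:total_tokens]}") if usage
+    #   end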
+ # + # == Event Payloads + # + # Top-Level Events (overall request lifecycle): + # + # prompt.active_agent:: + # Initial: `{ model:, temperature:, max_tokens:, message_count:, has_tools:, stream: }` + # Final: `{ usage: { input_tokens:, output_tokens:, total_tokens: }, finish_reason:, response_model:, response_id: }` + # Note: Usage is cumulative across all API calls in multi-turn conversations + # + # embed.active_agent:: + # Initial: `{ model:, input_size:, encoding_format:, dimensions: }` + # Final: `{ usage: { input_tokens:, total_tokens: }, embedding_count:, response_model:, response_id: }` + # + # Provider-Level Events (per API call): + # + # prompt.provider.active_agent:: + # Initial: `{ model:, temperature:, max_tokens:, message_count:, has_tools:, stream: }` + # Final: `{ usage: { input_tokens:, output_tokens:, total_tokens: }, finish_reason:, response_model:, response_id: }` + # Note: Usage is per individual API call + # + # embed.provider.active_agent:: + # Initial: `{ model:, input_size:, encoding_format:, dimensions: }` + # Final: `{ usage: { input_tokens:, total_tokens: }, embedding_count:, response_model:, response_id: }` + module Instrumentation + extend ActiveSupport::Concern + + # Builds and merges payload data for prompt instrumentation events. + # + # Populates both request parameters and response metadata for top-level and + # provider-level events. Usage data (tokens) is CRITICAL for APM cost tracking + # and performance monitoring. + # + # @param payload [Hash] instrumentation payload to merge into + # @param request [Request] request object with parameters + # @param response [Common::PromptResponse] completed response with normalized data + # @return [void] + def instrumentation_prompt_payload(payload, request, response) + # message_count: prefer the request/input messages (pre-call), fall back to + # response messages only if the request doesn't expose messages. New Relic + # expects parameters[:messages] to be the request messages and computes + # total message counts by adding response choices to that count. + message_count = safe_access(request, :messages)&.size + message_count = safe_access(response, :messages)&.size if message_count.nil? + + payload.merge!(trace_id: trace_id, message_count: message_count || 0, stream: !!safe_access(request, :stream)) + + # Common parameters: prefer response-normalized values, then request + payload[:model] = safe_access(response, :model) || safe_access(request, :model) + payload[:temperature] = safe_access(request, :temperature) + payload[:max_tokens] = safe_access(request, :max_tokens) + payload[:top_p] = safe_access(request, :top_p) + + # Tools / instructions + if (tools_val = safe_access(request, :tools)) + payload[:has_tools] = tools_val.respond_to?(:present?) ? tools_val.present? : !!tools_val + payload[:tool_count] = tools_val&.size || 0 + end + + if (instr_val = safe_access(request, :instructions)) + payload[:has_instructions] = instr_val.respond_to?(:present?) ? instr_val.present? 
: !!instr_val + end + + # Usage (normalized) + if response.usage + usage = response.usage + payload[:usage] = { + input_tokens: usage.input_tokens, + output_tokens: usage.output_tokens, + total_tokens: usage.total_tokens + } + + payload[:usage][:cached_tokens] = usage.cached_tokens if usage.cached_tokens + payload[:usage][:cache_creation_tokens] = usage.cache_creation_tokens if usage.cache_creation_tokens + payload[:usage][:reasoning_tokens] = usage.reasoning_tokens if usage.reasoning_tokens + payload[:usage][:audio_tokens] = usage.audio_tokens if usage.audio_tokens + end + + # Response metadata + payload[:finish_reason] = safe_access(response, :finish_reason) || response.finish_reason + payload[:response_model] = safe_access(response, :model) || response.model + payload[:response_id] = safe_access(response, :id) || response.id + + # Build messages list: prefer request messages; if unavailable use prior + # response messages (all but the final generated message). + if (req_msgs = safe_access(request, :messages)).is_a?(Array) + payload[:messages] = req_msgs.map { |m| extract_message_hash(m, false) } + else + prior = safe_access(response, :messages) + prior = prior[0...-1] if prior.is_a?(Array) && prior.size > 1 + if prior.is_a?(Array) && prior.any? + payload[:messages] = prior.map { |m| extract_message_hash(m, false) } + end + end + + # Build a parameters hash that mirrors what New Relic's OpenAI + # instrumentation expects. This makes it easy for APM adapters to + # map our provider payload to their LLM event constructors. + parameters = {} + parameters[:model] = payload[:model] if payload[:model] + parameters[:max_tokens] = payload[:max_tokens] if payload[:max_tokens] + parameters[:temperature] = payload[:temperature] if payload[:temperature] + parameters[:top_p] = payload[:top_p] if payload[:top_p] + parameters[:stream] = payload[:stream] + parameters[:messages] = payload[:messages] if payload[:messages] + + # Include tools/instructions where available — New Relic ignores unknown keys, + # but having them here makes the parameter shape closer to OpenAI's. + parameters[:tools] = begin request.tools rescue nil end if begin request.tools rescue nil end + parameters[:instructions] = begin request.instructions rescue nil end if begin request.instructions rescue nil end + + payload[:parameters] = parameters + + # Attach raw response (provider-specific) so downstream APM integrations + # can inspect the provider response if needed. Use the normalized raw_response + # available on the Common::Response when possible. + begin + payload[:response_raw] = response.raw_response if response.respond_to?(:raw_response) && response.raw_response + rescue StandardError + # ignore + end + end + + private + + # Safely attempt to call a method or lookup a key on an object. We avoid + # probing with `respond_to?` to prevent ActiveModel attribute casting side + # effects; instead we attempt the call and rescue failures. + def safe_access(obj, name) + return nil if obj.nil? + + begin + return obj.public_send(name) + rescue StandardError + end + + begin + return obj[name] + rescue StandardError + end + + begin + return obj[name.to_s] + rescue StandardError + end + + nil + end + + # NOTE: message access is handled via `safe_access(obj, :messages)` to + # avoid duplicating guarded lookup logic. + + # Extract a simple hash from a provider message object or hash-like value. 
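+      #
+      # For example (illustrative input), a hash-like message such as
+      # { role: "user", content: "Hi" } becomes
+      # { role: "user", content: "Hi", is_response: false }.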
+ def extract_message_hash(msg, is_response = false) + role = begin + if msg.respond_to?(:[]) + begin msg[:role] rescue (begin msg["role"] rescue nil end) end + elsif msg.respond_to?(:role) + msg.role + elsif msg.respond_to?(:type) + msg.type + end + rescue StandardError + begin msg.role rescue msg.type rescue nil end + end + + content = begin + if msg.respond_to?(:[]) + begin msg[:content] rescue (begin msg["content"] rescue nil end) end + elsif msg.respond_to?(:content) + msg.content + elsif msg.respond_to?(:text) + msg.text + elsif msg.respond_to?(:to_h) + begin msg.to_h[:content] rescue (begin msg.to_h["content"] rescue nil end) end + elsif msg.respond_to?(:to_s) + msg.to_s + end + rescue StandardError + begin msg.to_s rescue nil end + end + + { role: role, content: content, is_response: is_response } + end + + # Builds and merges payload data for embed instrumentation events. + # + # Embeddings typically only report input tokens (no output tokens). + # + # @param payload [Hash] instrumentation payload to merge into + # @param request [Request] request object with parameters + # @param response [Common::EmbedResponse] completed response with normalized data + # @return [void] + def instrumentation_embed_payload(payload, request, response) + # Add request parameters + payload[:trace_id] = trace_id + payload[:model] = request.model if request.respond_to?(:model) + + # Add input size if available + if request.respond_to?(:input) + begin + input = request.input + if input.is_a?(String) + payload[:input_size] = 1 + elsif input.is_a?(Array) + payload[:input_size] = input.size + end + rescue # OpenAI throws errors this for some reason when you try to look at the input. + payload[:input_size] = request[:input].size + end + end + + # Expose embedding input content similarly to message content. + # Use guarded access to avoid provider-specific errors. + begin + if (emb_input = safe_access(request, :input)) + # Keep the raw input (string or array) in the payload so APM adapters + # can inspect it. This matches how we include message content. + payload[:input] = emb_input + end + rescue StandardError + # ignore + end + + # Add encoding format if available (OpenAI) + payload[:encoding_format] = request.encoding_format if request.respond_to?(:encoding_format) + + # Add dimensions if available (OpenAI) + payload[:dimensions] = request.dimensions if request.respond_to?(:dimensions) + + # Add response data + payload[:embedding_count] = response.data&.size || 0 + + # Add usage data if available (CRITICAL for APM integration) + # Embeddings typically only have input tokens + if response.usage + payload[:usage] = { + input_tokens: response.usage.input_tokens, + total_tokens: response.usage.total_tokens + } + end + + # Add response metadata directly from response object + payload[:response_model] = response.model + payload[:response_id] = response.id + + # Build a parameters hash for embeddings to match New Relic's shape. + emb_params = {} + emb_params[:model] = payload[:model] if payload[:model] + emb_params[:input] = payload[:input] if payload.key?(:input) + payload[:parameters] = emb_params unless emb_params.empty? + end + end + end +end diff --git a/lib/active_agent/providers/log_subscriber.rb b/lib/active_agent/providers/log_subscriber.rb index 5dda0927..9525e0ad 100644 --- a/lib/active_agent/providers/log_subscriber.rb +++ b/lib/active_agent/providers/log_subscriber.rb @@ -4,100 +4,88 @@ module ActiveAgent module Providers - # Log subscriber for ActiveAgent provider operations. 
+ # Logs provider operations via ActiveSupport::Notifications events. # - # This subscriber listens to ActiveSupport::Notifications events published - # during provider operations and logs them in a consistent, configurable format. - # - # Events are automatically instrumented in the providers and can be customized - # or disabled through log level configuration. + # Subscribes to instrumented provider events and formats them consistently. + # Customize by subclassing and attaching your subscriber, or adjust log levels. # # @example Custom log formatting - # class MyLogSubscriber < ActiveAgent::LogSubscriber - # def prompt_start(event) - # info "🚀 Starting prompt: #{event.payload[:provider]}" + # class MyLogSubscriber < ActiveAgent::Providers::LogSubscriber + # def prompt(event) + # info "🚀 #{event.payload[:provider_module]}: #{event.duration}ms" # end # end # - # ActiveAgent::LogSubscriber.detach_from :active_agent_provider - # MyLogSubscriber.attach_to :active_agent_provider + # ActiveAgent::Providers::LogSubscriber.detach_from :active_agent + # MyLogSubscriber.attach_to :active_agent class LogSubscriber < ActiveSupport::LogSubscriber # self.namespace = "active_agent" # Rails 8.1 - # Logs the start of a prompt request + # Logs completed prompt with model, message count, token usage, and duration. # # @param event [ActiveSupport::Notifications::Event] - def prompt_start(event) + # @return [void] + def prompt(event) trace_id = event.payload[:trace_id] provider_module = event.payload[:provider_module] + model = event.payload[:model] + message_count = event.payload[:message_count] + stream = event.payload[:stream] + usage = event.payload[:usage] + finish_reason = event.payload[:finish_reason] + duration = event.duration.round(1) debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Starting prompt request" - end - end - # event_log_level :prompt_start, :debug # Rails 8.1 + parts = [ "[#{trace_id}]", "[ActiveAgent]", "[#{provider_module}]" ] + parts << "Prompt completed:" + parts << "model=#{model}" if model + parts << "messages=#{message_count}" + parts << "stream=#{stream}" - # Logs the start of an embedding request - # - # @param event [ActiveSupport::Notifications::Event] - def embed_start(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Starting embed request" - end - end - # event_log_level :embed_start, :debug # Rails 8.1 + if usage + tokens = "tokens=#{usage[:input_tokens]}/#{usage[:output_tokens]}" + tokens += " (cached: #{usage[:cached_tokens]})" if usage[:cached_tokens]&.positive? + tokens += " (reasoning: #{usage[:reasoning_tokens]})" if usage[:reasoning_tokens]&.positive? + parts << tokens + end - # Logs request preparation details - # - # @param event [ActiveSupport::Notifications::Event] - def request_prepared(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - message_count = event.payload[:message_count] + parts << "finish=#{finish_reason}" if finish_reason + parts << "#{duration}ms" - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Prepared request with #{message_count} message(s)" + parts.join(" ") end end - # event_log_level :request_prepared, :debug # Rails 8.1 + # event_log_level :prompt, :debug # Rails 8.1 - # Logs API call execution + # Logs completed embedding with model, input size, and token usage. 
# # @param event [ActiveSupport::Notifications::Event] - def api_call(event) - return unless logger.debug? - + # @return [void] + def embed(event) trace_id = event.payload[:trace_id] provider_module = event.payload[:provider_module] - streaming = event.payload[:streaming] + model = event.payload[:model] + input_size = event.payload[:input_size] + embedding_count = event.payload[:embedding_count] + usage = event.payload[:usage] duration = event.duration.round(1) debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] API call completed in #{duration}ms (streaming: #{streaming})" - end - end - # event_log_level :api_call, :debug # Rails 8.1 + parts = [ "[#{trace_id}]", "[ActiveAgent]", "[#{provider_module}]" ] + parts << "Embed completed:" + parts << "model=#{model}" if model + parts << "inputs=#{input_size}" if input_size + parts << "embeddings=#{embedding_count}" if embedding_count + parts << "tokens=#{usage[:input_tokens]}" if usage + parts << "#{duration}ms" - # Logs embed API call execution - # - # @param event [ActiveSupport::Notifications::Event] - def embed_call(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - duration = event.duration.round(1) - - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Embed API call completed in #{duration}ms" + parts.join(" ") end end - # event_log_level :embed_call, :debug # Rails 8.1 + # event_log_level :embed, :debug # Rails 8.1 - # Logs stream opening - # # @param event [ActiveSupport::Notifications::Event] + # @return [void] def stream_open(event) trace_id = event.payload[:trace_id] provider_module = event.payload[:provider_module] @@ -108,9 +96,8 @@ def stream_open(event) end # event_log_level :stream_open, :debug # Rails 8.1 - # Logs stream closing - # # @param event [ActiveSupport::Notifications::Event] + # @return [void] def stream_close(event) trace_id = event.payload[:trace_id] provider_module = event.payload[:provider_module] @@ -121,210 +108,41 @@ def stream_close(event) end # event_log_level :stream_close, :debug # Rails 8.1 - # Logs message extraction from API response - # # @param event [ActiveSupport::Notifications::Event] - def messages_extracted(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - message_count = event.payload[:message_count] - - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Extracted #{message_count} message(s) from API response" - end - end - # event_log_level :messages_extracted, :debug # Rails 8.1 - - # Logs tool/function call processing - # - # @param event [ActiveSupport::Notifications::Event] - def tool_calls_processing(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - tool_count = event.payload[:tool_count] - - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Processing #{tool_count} tool call(s)" - end - end - # event_log_level :tool_calls_processing, :debug # Rails 8.1 - - # Logs multi-turn conversation continuation - # - # @param event [ActiveSupport::Notifications::Event] - def multi_turn_continue(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Continuing multi-turn conversation after tool execution" - end - end - # event_log_level :multi_turn_continue, :debug # Rails 8.1 - - # Logs prompt completion - # - # @param event [ActiveSupport::Notifications::Event] - def prompt_complete(event) - trace_id = 
event.payload[:trace_id] - provider_module = event.payload[:provider_module] - message_count = event.payload[:message_count] - duration = event.duration.round(1) - - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Prompt completed with #{message_count} message(s) in stack (total: #{duration}ms)" - end - end - # event_log_level :prompt_complete, :debug # Rails 8.1 - - # Logs retry attempts - # - # @param event [ActiveSupport::Notifications::Event] - def retry_attempt(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - attempt = event.payload[:attempt] - max_retries = event.payload[:max_retries] - exception = event.payload[:exception] - backoff_time = event.payload[:backoff_time] - - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}:Retries] Attempt #{attempt}/#{max_retries} failed with #{exception}, retrying in #{backoff_time}s" - end - end - # event_log_level :retry_attempt, :debug # Rails 8.1 - - # Logs when max retries are exceeded - # - # @param event [ActiveSupport::Notifications::Event] - def retry_exhausted(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - max_retries = event.payload[:max_retries] - exception = event.payload[:exception] - - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}:Retries] Max retries (#{max_retries}) exceeded for #{exception}" - end - end - # event_log_level :retry_exhausted, :debug # Rails 8.1 - - # Logs tool execution - # - # @param event [ActiveSupport::Notifications::Event] - def tool_execution(event) + # @return [void] + def tool_call(event) trace_id = event.payload[:trace_id] provider_module = event.payload[:provider_module] tool_name = event.payload[:tool_name] + duration = event.duration.round(1) debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Executing tool: #{tool_name}" - end - end - # event_log_level :tool_execution, :debug # Rails 8.1 - - # Logs tool choice removal - # - # @param event [ActiveSupport::Notifications::Event] - def tool_choice_removed(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Removing tool_choice constraint after tool execution" - end - end - # event_log_level :tool_choice_removed, :debug # Rails 8.1 - - # Logs API request - # - # @param event [ActiveSupport::Notifications::Event] - def api_request(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - model = event.payload[:model] - streaming = event.payload[:streaming] - - - debug do - if streaming.nil? - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Executing request to #{model}" - else - mode = streaming ? 
"streaming" : "non-streaming" - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Executing #{mode} request to #{model}" - end + "[#{trace_id}] [ActiveAgent] [#{provider_module}] Tool call: #{tool_name} (#{duration}ms)" end end - # event_log_level :api_request, :debug # Rails 8.1 + # event_log_level :tool_call, :debug # Rails 8.1 - # Logs stream chunk processing - # # @param event [ActiveSupport::Notifications::Event] - def stream_chunk_processing(event) + # @return [void] + def stream_chunk(event) trace_id = event.payload[:trace_id] provider_module = event.payload[:provider_module] chunk_type = event.payload[:chunk_type] debug do if chunk_type - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Processing stream chunk: #{chunk_type}" + "[#{trace_id}] [ActiveAgent] [#{provider_module}] Stream chunk: #{chunk_type}" else - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Processing stream chunk" + "[#{trace_id}] [ActiveAgent] [#{provider_module}] Stream chunk" end end end - # event_log_level :stream_chunk_processing, :debug # Rails 8.1 + # event_log_level :stream_chunk, :debug # Rails 8.1 - # Logs stream finished - # - # @param event [ActiveSupport::Notifications::Event] - def stream_finished(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - finish_reason = event.payload[:finish_reason] - - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Stream finished with reason: #{finish_reason}" - end - end - # event_log_level :stream_finished, :debug # Rails 8.1 - - # Logs API routing decisions - # - # @param event [ActiveSupport::Notifications::Event] - def api_routing(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - api_type = event.payload[:api_type] - api_version = event.payload[:api_version] - has_audio = event.payload[:has_audio] - - debug do - if has_audio - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Routing to #{api_type.to_s.capitalize} API (api_version: #{api_version}, audio: #{has_audio})" - else - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Routing to #{api_type.to_s.capitalize} API (api_version: #{api_version})" - end - end - end - # event_log_level :api_routing, :debug # Rails 8.1 - - # Logs embeddings requests - # - # @param event [ActiveSupport::Notifications::Event] - def embeddings_request(event) - trace_id = event.payload[:trace_id] - provider_module = event.payload[:provider_module] - - debug do - "[#{trace_id}] [ActiveAgent] [#{provider_module}] Executing embeddings request" - end - end - # event_log_level :embeddings_request, :debug # Rails 8.1 - - # Logs connection errors + # Logs connection failures with service URI and error details. 
# # @param event [ActiveSupport::Notifications::Event] + # @return [void] def connection_error(event) trace_id = event.payload[:trace_id] provider_module = event.payload[:provider_module] @@ -340,8 +158,6 @@ def connection_error(event) private - # Use the logger configured for ActiveAgent::Base - # # @return [Logger] def logger ActiveAgent::Base.logger @@ -351,6 +167,8 @@ def logger end # region log_subscriber_attach +# Subscribe to both top-level (.active_agent) and provider-level (.provider.active_agent) events +ActiveAgent::Providers::LogSubscriber.attach_to :active_agent ActiveAgent::Providers::LogSubscriber.attach_to :"provider.active_agent" # endregion log_subscriber_attach diff --git a/lib/active_agent/providers/mock_provider.rb b/lib/active_agent/providers/mock_provider.rb index 1f9a8573..68065242 100644 --- a/lib/active_agent/providers/mock_provider.rb +++ b/lib/active_agent/providers/mock_provider.rb @@ -56,20 +56,20 @@ def api_prompt_execute(parameters) else # Return a complete response { - id: "mock-#{SecureRandom.hex(8)}", - type: "message", - role: "assistant", - content: [ + "id" => "mock-#{SecureRandom.hex(8)}", + "type" => "message", + "role" => "assistant", + "content" => [ { - type: "text", - text: pig_latin_content + "type" => "text", + "text" => pig_latin_content } ], - model: parameters[:model] || "mock-model", - stop_reason: "end_turn", - usage: { - input_tokens: content.length, - output_tokens: pig_latin_content.length + "model" => parameters[:model] || "mock-model", + "stop_reason" => "end_turn", + "usage" => { + "input_tokens" => content.length, + "output_tokens" => pig_latin_content.length } } end @@ -80,27 +80,27 @@ def api_prompt_execute(parameters) # Returns random embedding vectors for testing purposes. # # @param parameters [Hash] The embedding request parameters - # @return [Hash] A mock embedding response structure + # @return [Hash] A mock embedding response structure with symbol keys def api_embed_execute(parameters) input = parameters[:input] inputs = input.is_a?(Array) ? input : [ input ] dimensions = parameters[:dimensions] || 1536 { - object: "list", - data: inputs.map.with_index do |text, index| + "object" => "list", + "data" => inputs.map.with_index do |text, index| { - object: "embedding", - index: index, - embedding: generate_random_embedding(dimensions) + "object" => "embedding", + "index" => index, + "embedding" => generate_random_embedding(dimensions) } end, - model: parameters[:model] || "mock-embedding-model", - usage: { - prompt_tokens: inputs.sum { |text| text.to_s.length }, - total_tokens: inputs.sum { |text| text.to_s.length } + "model" => parameters[:model] || "mock-embedding-model", + "usage" => { + "prompt_tokens" => inputs.sum { |text| text.to_s.length }, + "total_tokens" => inputs.sum { |text| text.to_s.length } } - } + }.deep_symbolize_keys end # Processes streaming response chunks. 
@@ -112,7 +112,7 @@ def api_embed_execute(parameters) def process_stream_chunk(api_response_chunk) chunk_type = api_response_chunk[:type]&.to_sym - instrument("stream_chunk_processing.provider.active_agent", chunk_type: chunk_type) + instrument("stream_chunk.active_agent", chunk_type: chunk_type) broadcast_stream_open diff --git a/lib/active_agent/providers/ollama_provider.rb b/lib/active_agent/providers/ollama_provider.rb index b9ee6db4..bd589ad3 100644 --- a/lib/active_agent/providers/ollama_provider.rb +++ b/lib/active_agent/providers/ollama_provider.rb @@ -56,7 +56,6 @@ def api_prompt_execute(parameters) # @return [Hash] symbolized API response # @raise [OpenAI::Errors::APIConnectionError] when Ollama server unreachable def api_embed_execute(parameters) - instrument("embeddings_request.provider.active_agent") client.embeddings.create(**parameters).as_json.deep_symbolize_keys rescue ::OpenAI::Errors::APIConnectionError => exception diff --git a/lib/active_agent/providers/open_ai/_base.rb b/lib/active_agent/providers/open_ai/_base.rb index 76ab36f2..9525d6d6 100644 --- a/lib/active_agent/providers/open_ai/_base.rb +++ b/lib/active_agent/providers/open_ai/_base.rb @@ -49,8 +49,9 @@ def process_tool_call_function(api_function_call) name = api_function_call[:name] kwargs = JSON.parse(api_function_call[:arguments], symbolize_names: true) if api_function_call[:arguments] - instrument("tool_execution.provider.active_agent", tool_name: name) - tools_function.call(name, **kwargs) + instrument("tool_call.active_agent", tool_name: name) do + tools_function.call(name, **kwargs) + end end end end diff --git a/lib/active_agent/providers/open_ai/chat_provider.rb b/lib/active_agent/providers/open_ai/chat_provider.rb index 9cd35e87..75933901 100644 --- a/lib/active_agent/providers/open_ai/chat_provider.rb +++ b/lib/active_agent/providers/open_ai/chat_provider.rb @@ -30,6 +30,15 @@ def api_prompt_executer client.chat.completions end + # @see BaseProvider#api_response_normalize + # @param api_response [OpenAI::Models::ChatCompletion] + # @return [Hash] normalized response hash + def api_response_normalize(api_response) + return api_response unless api_response + + Chat::Transforms.gem_to_hash(api_response) + end + # Processes streaming response chunks from OpenAI's chat API # # Handles message deltas, content updates, and completion detection. 
@@ -46,7 +55,7 @@ def api_prompt_executer # @return [void] # @see Base#process_stream_chunk def process_stream_chunk(api_response_event) - instrument("stream_chunk_processing.provider.active_agent") + instrument("stream_chunk.active_agent") # Called Multiple Times: [Chunk, T] case api_response_event.type @@ -61,11 +70,6 @@ def process_stream_chunk(api_response_event) if api_message.delta.content broadcast_stream_update(message_stack.last, api_message.delta.content) end - - # If this is the last api_chunk to be processed - return unless api_message.finish_reason - - instrument("stream_finished.provider.active_agent", finish_reason: api_message.finish_reason) when :"content.delta" # Returns the deltas, without context # => {type: :"content.delta", delta: "", snapshot: "", parsed: nil} @@ -95,12 +99,13 @@ def process_stream_chunk(api_response_event) # @see Base#process_function_calls def process_function_calls(api_function_calls) api_function_calls.each do |api_function_call| - content = case api_function_call[:type] - when "function" - instrument("tool_execution.provider.active_agent", tool_name: api_function_call.dig(:function, :name)) - process_tool_call_function(api_function_call[:function]) - else - fail "Unexpected Tool Call Type: #{api_function_call[:type]}" + content = instrument("tool_call.active_agent", tool_name: api_function_call.dig(:function, :name)) do + case api_function_call[:type] + when "function" + process_tool_call_function(api_function_call[:function]) + else + fail "Unexpected Tool Call Type: #{api_function_call[:type]}" + end end # Create tool message using gem's message param class @@ -117,17 +122,27 @@ def process_function_calls(api_function_calls) end # Extracts messages from the completed API response. + # Converts OpenAI gem response object to hash for storage. # # @param api_response [OpenAI::Models::Chat::ChatCompletion] + # @return [Common::PromptResponse, nil] + def process_prompt_finished(api_response = nil) + # Convert gem object to hash so that raw_response["usage"] works + api_response_hash = api_response ? Chat::Transforms.gem_to_hash(api_response) : nil + super(api_response_hash) + end + + # Extracts messages from completed API response. + # + # @param api_response [Hash] converted response hash # @return [Array, nil] single-element array with message or nil if no message # @see Base#process_prompt_finished_extract_messages def process_prompt_finished_extract_messages(api_response) return unless api_response - api_message = api_response.choices[0].message - message = JSON.parse(api_message.to_json, symbolize_names: true) + api_message = api_response[:choices][0][:message] - [ message ] + [ api_message ] end # Extracts function calls from the last message in the stack. 
diff --git a/lib/active_agent/providers/open_ai/responses_provider.rb b/lib/active_agent/providers/open_ai/responses_provider.rb index 29248f77..b1b498e6 100644 --- a/lib/active_agent/providers/open_ai/responses_provider.rb +++ b/lib/active_agent/providers/open_ai/responses_provider.rb @@ -30,6 +30,15 @@ def api_prompt_executer client.responses end + # @see BaseProvider#api_response_normalize + # @param api_response [OpenAI::Models::Responses::Response] + # @return [Hash] normalized response hash + def api_response_normalize(api_response) + return api_response unless api_response + + Responses::Transforms.gem_to_hash(api_response) + end + # Processes streaming response chunks from the Responses API # # Event types handled: @@ -48,7 +57,7 @@ def api_prompt_executer # @return [void] # @see Base#process_stream_chunk def process_stream_chunk(api_response_event) - instrument("stream_chunk_processing.provider.active_agent", chunk_type: api_response_event.type) + instrument("stream_chunk.active_agent", chunk_type: api_response_event.type) case api_response_event.type # Response Created @@ -143,12 +152,14 @@ def process_stream_output_item_done(api_response_event) # @see Base#process_function_calls def process_function_calls(api_function_calls) api_function_calls.each do |api_function_call| - instrument("tool_execution.provider.active_agent", tool_name: api_function_call[:name]) + output = instrument("tool_call.active_agent", tool_name: api_function_call[:name]) do + process_tool_call_function(api_function_call).to_json + end # Create native gem input item for function call output message = ::OpenAI::Models::Responses::ResponseInputItem::FunctionCallOutput.new( call_id: api_function_call[:call_id], - output: process_tool_call_function(api_function_call).to_json + output: ) # Convert to hash for message_stack @@ -156,15 +167,25 @@ def process_function_calls(api_function_calls) end end - # Extracts messages from completed API response. + # Converts OpenAI gem response object to hash for storage. # # @param api_response [OpenAI::Models::Responses::Response] + # @return [Common::PromptResponse, nil] + def process_prompt_finished(api_response = nil) + # Convert gem object to hash so that raw_response["usage"] works + api_response_hash = api_response ? Responses::Transforms.gem_to_hash(api_response) : nil + super(api_response_hash) + end + + # Extracts messages from completed API response. + # + # @param api_response [Hash] converted response hash # @return [Array, nil] output array from response.output or nil def process_prompt_finished_extract_messages(api_response) return unless api_response - # Convert native gem output array to hash array for message_stack - api_response.output.map { |output| Responses::Transforms.gem_to_hash(output) } + # Response is already a hash from process_prompt_finished + api_response[:output] end # Extracts function calls from message stack. diff --git a/lib/active_agent/providers/open_ai_provider.rb b/lib/active_agent/providers/open_ai_provider.rb index e6d8d1a6..d96ae0ba 100644 --- a/lib/active_agent/providers/open_ai_provider.rb +++ b/lib/active_agent/providers/open_ai_provider.rb @@ -60,10 +60,8 @@ def initialize(kwargs = {}) # @see https://platform.openai.com/docs/guides/migrate-to-responses def prompt if api_version == :chat || context[:audio].present? - instrument("api_routing.provider.active_agent", api_type: :chat, api_version: api_version, has_audio: context[:audio].present?) 
OpenAI::ChatProvider.new(raw_options).prompt else # api_version == :responses || true - instrument("api_routing.provider.active_agent", api_type: :responses, api_version: api_version) OpenAI::ResponsesProvider.new(raw_options).prompt end end @@ -89,7 +87,6 @@ def preview # @param parameters [Hash] The embedding request parameters # @return [Object] The embedding response from OpenAI def api_embed_execute(parameters) - instrument("embeddings_request.provider.active_agent") client.embeddings.create(**parameters).as_json.deep_symbolize_keys end end diff --git a/lib/active_agent/providers/open_router_provider.rb b/lib/active_agent/providers/open_router_provider.rb index 5c822f24..cce5ff92 100644 --- a/lib/active_agent/providers/open_router_provider.rb +++ b/lib/active_agent/providers/open_router_provider.rb @@ -48,6 +48,15 @@ def message_merge_delta(message, delta) hash_merge_delta(message, delta) end + + # @see BaseProvider#api_response_normalize + # @param api_response [OpenAI::Models::ChatCompletion] + # @return [Hash] normalized response hash + def api_response_normalize(api_response) + return api_response unless api_response + + OpenAI::Chat::Transforms.gem_to_hash(api_response) + end end end end diff --git a/test/docs/actions/usage_examples_test.rb b/test/docs/actions/usage_examples_test.rb new file mode 100644 index 00000000..3c190832 --- /dev/null +++ b/test/docs/actions/usage_examples_test.rb @@ -0,0 +1,213 @@ +# frozen_string_literal: true + +require "test_helper" + +module Docs + module Actions + module Usage + class BasicUsageAgent < ApplicationAgent + generate_with :mock + + def chat + prompt(message: params[:message]) + end + end + + class UsageExamplesTest < ActiveAgentTestCase + test "accessing usage statistics" do + VCR.use_cassette("docs/actions/usage/accessing_usage") do + response = BasicUsageAgent.with(message: "Hello").chat.generate_now + + # region accessing_usage + # Normalized fields (available across all providers) + response.usage.input_tokens + response.usage.output_tokens + response.usage.total_tokens + # endregion accessing_usage + + assert_kind_of Integer, response.usage.input_tokens + assert_kind_of Integer, response.usage.output_tokens + assert_kind_of Integer, response.usage.total_tokens + end + end + + test "common fields across providers" do + VCR.use_cassette("docs/actions/usage/common_fields") do + response = BasicUsageAgent.with(message: "Hello").chat.generate_now + + # region common_fields + usage = response.usage + + # All providers support these + usage.input_tokens # Tokens in the prompt/input + usage.output_tokens # Tokens in the completion/output + usage.total_tokens # Total tokens used (auto-calculated if not provided) + # endregion common_fields + + assert usage.input_tokens >= 0 + assert usage.output_tokens >= 0 + assert usage.total_tokens >= 0 + end + end + + class OpenAIUsageAgent < ApplicationAgent + generate_with :openai, model: "gpt-4o-mini" + + def chat + prompt(message: params[:message]) + end + end + + test "OpenAI provider-specific fields" do + VCR.use_cassette("docs/actions/usage/provider_specific_openai") do + response = OpenAIUsageAgent.with(message: "Hello").chat.generate_now + + # region provider_specific_openai + usage = response.usage + + # OpenAI-specific fields + usage.cached_tokens # Prompt tokens served from cache + usage.reasoning_tokens # Tokens used for reasoning (o1 models) + usage.audio_tokens # Tokens for audio input/output + # endregion provider_specific_openai + + assert_respond_to usage, :cached_tokens + 
assert_respond_to usage, :reasoning_tokens + assert_respond_to usage, :audio_tokens + end + end + + class AnthropicUsageAgent < ApplicationAgent + generate_with :anthropic, model: "claude-3-5-haiku-20241022" + + def chat + prompt(message: params[:message]) + end + end + + test "Anthropic provider-specific fields" do + VCR.use_cassette("docs/actions/usage/provider_specific_anthropic") do + response = AnthropicUsageAgent.with(message: "Hello").chat.generate_now + + # region provider_specific_anthropic + usage = response.usage + + # Anthropic-specific fields + usage.cached_tokens # Tokens read from cache + usage.cache_creation_tokens # Tokens written to cache + usage.service_tier # "standard" or "prioritized" + # endregion provider_specific_anthropic + + assert_respond_to usage, :cached_tokens + assert_respond_to usage, :cache_creation_tokens + assert_respond_to usage, :service_tier + end + end + + class OllamaUsageAgent < ApplicationAgent + generate_with :ollama, model: "llama3.2" + + def chat + prompt(message: params[:message]) + end + end + + test "Ollama provider-specific fields" do + skip "Requires local Ollama server" + + response = OllamaUsageAgent.with(message: "Hello").chat.generate_now + + # region provider_specific_ollama + usage = response.usage + + # Ollama-specific fields + usage.duration_ms # Total request duration in ms + usage.provider_details[:tokens_per_second] # Generation throughput + # endregion provider_specific_ollama + + assert usage.duration_ms > 0 + assert usage.provider_details[:tokens_per_second] > 0 + end + + test "OpenAI provider details" do + VCR.use_cassette("docs/actions/usage/provider_details_openai") do + response = OpenAIUsageAgent.with(message: "Hello").chat.generate_now + + # region provider_details_openai + usage = response.usage + + # Access raw provider-specific data + usage.provider_details + # Contains: prompt_tokens_details, completion_tokens_details, etc. 
+ # endregion provider_details_openai + + assert usage.provider_details.is_a?(Hash) + end + end + + test "Ollama timing breakdown" do + skip "Requires local Ollama server" + + response = OllamaUsageAgent.with(message: "Hello").chat.generate_now + + # region provider_details_ollama + usage = response.usage + + # Ollama provides detailed timing metrics + usage.provider_details[:load_duration_ms] # Model load time + usage.provider_details[:prompt_eval_duration_ms] # Prompt processing time + usage.provider_details[:eval_duration_ms] # Generation time + # endregion provider_details_ollama + + assert usage.provider_details[:load_duration_ms] + assert usage.provider_details[:prompt_eval_duration_ms] + assert usage.provider_details[:eval_duration_ms] + end + + test "cost tracking calculation" do + VCR.use_cassette("docs/actions/usage/cost_tracking") do + response = BasicUsageAgent.with(message: "Analyze this data").chat.generate_now + + # region cost_tracking + INPUT_PRICE_PER_TOKEN = 0.00001 + OUTPUT_PRICE_PER_TOKEN = 0.00003 + CACHE_DISCOUNT_PER_TOKEN = 0.000005 + + # Track usage per request + input_cost = response.usage.input_tokens * INPUT_PRICE_PER_TOKEN + output_cost = response.usage.output_tokens * OUTPUT_PRICE_PER_TOKEN + total_cost = input_cost + output_cost + + # Account for cached tokens (reduced cost) + if response.usage.cached_tokens + cache_savings = response.usage.cached_tokens * CACHE_DISCOUNT_PER_TOKEN + total_cost -= cache_savings + end + # endregion cost_tracking + + assert total_cost > 0 + assert input_cost >= 0 + assert output_cost >= 0 + end + end + + test "embeddings have zero output tokens" do + VCR.use_cassette("docs/actions/usage/embeddings_usage") do + response = BasicUsageAgent.embed(input: "Search text").embed_now + + # region embeddings_usage + # Embeddings only consume input tokens + response.usage.input_tokens # Text vectorized + response.usage.output_tokens # Always 0 for embeddings + response.usage.total_tokens # Same as input_tokens + # endregion embeddings_usage + + assert response.usage.input_tokens > 0 + assert_equal 0, response.usage.output_tokens + assert response.usage.total_tokens > 0 + end + end + end + end + end +end diff --git a/test/docs/framework/providers_examples_test.rb b/test/docs/framework/providers_examples_test.rb index 04c23e7d..a93a7c88 100644 --- a/test/docs/framework/providers_examples_test.rb +++ b/test/docs/framework/providers_examples_test.rb @@ -74,8 +74,8 @@ class CustomHostAgent < ActiveAgent::Base # Access usage statistics (if available) usage = response.usage - prompt_tokens = response.prompt_tokens - completion_tokens = response.completion_tokens + input_tokens = response.usage.input_tokens + output_tokens = response.usage.output_tokens # endregion generation_response_usage doc_example_output(response) @@ -83,10 +83,12 @@ class CustomHostAgent < ActiveAgent::Base assert_not_nil content assert_equal "assistant", role assert messages.is_a?(Array) + assert_kind_of Integer, input_tokens + assert_kind_of Integer, output_tokens assert context.is_a?(Hash) - assert usage.is_a?(Hash) if usage - assert prompt_tokens.is_a?(Integer) if prompt_tokens - assert completion_tokens.is_a?(Integer) if completion_tokens + assert_kind_of ActiveAgent::Providers::Common::Usage, usage if usage + assert_kind_of Integer, input_tokens if input_tokens + assert_kind_of Integer, output_tokens if output_tokens end end end diff --git a/test/fixtures/vcr_cassettes/docs/actions/usage/accessing_usage.yml 
b/test/fixtures/vcr_cassettes/docs/actions/usage/accessing_usage.yml new file mode 100644 index 00000000..31013f51 --- /dev/null +++ b/test/fixtures/vcr_cassettes/docs/actions/usage/accessing_usage.yml @@ -0,0 +1,171 @@ +--- +http_interactions: +- request: + method: post + uri: https://api.openai.com/v1/responses + body: + encoding: UTF-8 + string: '{"model":"gpt-4o-mini","input":"Hello"}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - api.openai.com + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + Authorization: + - Bearer ACCESS_TOKEN + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '39' + response: + status: + code: 200 + message: OK + headers: + Date: + - Wed, 12 Nov 2025 23:29:00 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + Connection: + - keep-alive + X-Ratelimit-Limit-Requests: + - '30000' + X-Ratelimit-Limit-Tokens: + - '150000000' + X-Ratelimit-Remaining-Requests: + - '29999' + X-Ratelimit-Remaining-Tokens: + - '149999972' + X-Ratelimit-Reset-Requests: + - 2ms + X-Ratelimit-Reset-Tokens: + - 0s + Openai-Version: + - '2020-10-01' + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + X-Request-Id: + - req_62e82dbaa46043b68bc915bf9b34862d + Openai-Processing-Ms: + - '1231' + X-Envoy-Upstream-Service-Time: + - '1235' + Cf-Cache-Status: + - DYNAMIC + Set-Cookie: + - __cf_bm=jOQZKgjXVgzzXkJpsTIM8IcZrHmNMDv8gvYREQVg2tc-1762990140-1.0.1.1-JvvOr91wMqUGWrgxUz_rEn6RYuQwszhiXPKZad3YgxZQTDz7orI8proaUEMGVhljE9FFB_qVHbw0sQ6vsej4NJUoOk8wCNr4inuslNbtpuI; + path=/; expires=Wed, 12-Nov-25 23:59:00 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=uMhThWMYpKj2a6BxY9l6Er2bQJ.P4PINt1WKJBTRftw-1762990140010-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Strict-Transport-Security: + - max-age=31536000; includeSubDomains; preload + X-Content-Type-Options: + - nosniff + Server: + - cloudflare + Cf-Ray: + - 99d9cf0b6fc12832-SJC + Alt-Svc: + - h3=":443"; ma=86400 + body: + encoding: ASCII-8BIT + string: |- + { + "id": "resp_0abb6ef00617114b006915183ac54481988bfb21f9c31318da", + "object": "response", + "created_at": 1762990138, + "status": "completed", + "background": false, + "billing": { + "payer": "developer" + }, + "error": null, + "incomplete_details": null, + "instructions": null, + "max_output_tokens": null, + "max_tool_calls": null, + "model": "gpt-4o-mini-2024-07-18", + "output": [ + { + "id": "msg_0abb6ef00617114b006915183bab448198956d8f6deb9b7c12", + "type": "message", + "status": "completed", + "content": [ + { + "type": "output_text", + "annotations": [], + "logprobs": [], + "text": "Hello! How can I assist you today?" 
+ } + ], + "role": "assistant" + } + ], + "parallel_tool_calls": true, + "previous_response_id": null, + "prompt_cache_key": null, + "prompt_cache_retention": null, + "reasoning": { + "effort": null, + "summary": null + }, + "safety_identifier": null, + "service_tier": "default", + "store": true, + "temperature": 1.0, + "text": { + "format": { + "type": "text" + }, + "verbosity": "medium" + }, + "tool_choice": "auto", + "tools": [], + "top_logprobs": 0, + "top_p": 1.0, + "truncation": "disabled", + "usage": { + "input_tokens": 8, + "input_tokens_details": { + "cached_tokens": 0 + }, + "output_tokens": 10, + "output_tokens_details": { + "reasoning_tokens": 0 + }, + "total_tokens": 18 + }, + "user": null, + "metadata": {} + } + recorded_at: Wed, 12 Nov 2025 23:28:59 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/docs/actions/usage/common_fields.yml b/test/fixtures/vcr_cassettes/docs/actions/usage/common_fields.yml new file mode 100644 index 00000000..27048458 --- /dev/null +++ b/test/fixtures/vcr_cassettes/docs/actions/usage/common_fields.yml @@ -0,0 +1,171 @@ +--- +http_interactions: +- request: + method: post + uri: https://api.openai.com/v1/responses + body: + encoding: UTF-8 + string: '{"model":"gpt-4o-mini","input":"Hello"}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - api.openai.com + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + Authorization: + - Bearer ACCESS_TOKEN + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '39' + response: + status: + code: 200 + message: OK + headers: + Date: + - Wed, 12 Nov 2025 23:29:01 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + Connection: + - keep-alive + X-Ratelimit-Limit-Requests: + - '30000' + X-Ratelimit-Limit-Tokens: + - '150000000' + X-Ratelimit-Remaining-Requests: + - '29999' + X-Ratelimit-Remaining-Tokens: + - '149999972' + X-Ratelimit-Reset-Requests: + - 2ms + X-Ratelimit-Reset-Tokens: + - 0s + Openai-Version: + - '2020-10-01' + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + X-Request-Id: + - req_b80e7fca6487446caed8d3ebcc994781 + Openai-Processing-Ms: + - '1821' + X-Envoy-Upstream-Service-Time: + - '1827' + Cf-Cache-Status: + - DYNAMIC + Set-Cookie: + - __cf_bm=bBWzd_FUEAjpdkFec6IZ2VYIAGyCSFT3JRdB9.9BheI-1762990141-1.0.1.1-emI31PxOfuGU.1qQr8ZPaF254RU1o2U66Y2g7ubcUv8Tcj0pgniCRpjyvjdCBF5lLtonk4RQ47Gh_h5jUv.T5ccnu371YPmvit3Y5xfM8_M; + path=/; expires=Wed, 12-Nov-25 23:59:01 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=unEnj2VshMfN7lSF77iQgl3MYjjx1XV5ZYGxtzY78tA-1762990141936-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Strict-Transport-Security: + - max-age=31536000; includeSubDomains; preload + X-Content-Type-Options: + - nosniff + Server: + - cloudflare + Cf-Ray: + - 99d9cf175bd0171e-SJC + Alt-Svc: + - h3=":443"; ma=86400 + body: + encoding: ASCII-8BIT + string: |- + { + "id": "resp_08b1b213774d685b006915183c1d04819b9471e7689b6fd652", + "object": "response", + "created_at": 1762990140, + "status": "completed", + "background": false, + "billing": { + "payer": "developer" 
+ }, + "error": null, + "incomplete_details": null, + "instructions": null, + "max_output_tokens": null, + "max_tool_calls": null, + "model": "gpt-4o-mini-2024-07-18", + "output": [ + { + "id": "msg_08b1b213774d685b006915183d67ac819bbf1bbbf83d4a8da0", + "type": "message", + "status": "completed", + "content": [ + { + "type": "output_text", + "annotations": [], + "logprobs": [], + "text": "Hello! How can I assist you today?" + } + ], + "role": "assistant" + } + ], + "parallel_tool_calls": true, + "previous_response_id": null, + "prompt_cache_key": null, + "prompt_cache_retention": null, + "reasoning": { + "effort": null, + "summary": null + }, + "safety_identifier": null, + "service_tier": "default", + "store": true, + "temperature": 1.0, + "text": { + "format": { + "type": "text" + }, + "verbosity": "medium" + }, + "tool_choice": "auto", + "tools": [], + "top_logprobs": 0, + "top_p": 1.0, + "truncation": "disabled", + "usage": { + "input_tokens": 8, + "input_tokens_details": { + "cached_tokens": 0 + }, + "output_tokens": 10, + "output_tokens_details": { + "reasoning_tokens": 0 + }, + "total_tokens": 18 + }, + "user": null, + "metadata": {} + } + recorded_at: Wed, 12 Nov 2025 23:29:01 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/docs/actions/usage/cost_tracking.yml b/test/fixtures/vcr_cassettes/docs/actions/usage/cost_tracking.yml new file mode 100644 index 00000000..c94dc2fc --- /dev/null +++ b/test/fixtures/vcr_cassettes/docs/actions/usage/cost_tracking.yml @@ -0,0 +1,171 @@ +--- +http_interactions: +- request: + method: post + uri: https://api.openai.com/v1/responses + body: + encoding: UTF-8 + string: '{"model":"gpt-4o-mini","input":"Analyze this data"}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - api.openai.com + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + Authorization: + - Bearer ACCESS_TOKEN + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '51' + response: + status: + code: 200 + message: OK + headers: + Date: + - Wed, 12 Nov 2025 23:29:04 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + Connection: + - keep-alive + X-Ratelimit-Limit-Requests: + - '30000' + X-Ratelimit-Limit-Tokens: + - '150000000' + X-Ratelimit-Remaining-Requests: + - '29999' + X-Ratelimit-Remaining-Tokens: + - '149999972' + X-Ratelimit-Reset-Requests: + - 2ms + X-Ratelimit-Reset-Tokens: + - 0s + Openai-Version: + - '2020-10-01' + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + X-Request-Id: + - req_3aa3f5b81cb74a5f96f6919c33fb6021 + Openai-Processing-Ms: + - '1588' + X-Envoy-Upstream-Service-Time: + - '1591' + Cf-Cache-Status: + - DYNAMIC + Set-Cookie: + - __cf_bm=SYGePNCdAXcPQvzXZ.pqgzwECm3935Z3ScTnq9ZMWGY-1762990144-1.0.1.1-QdUwsugAIC_eFAjPEf65AvOpl497qrIgLFaqYhJkRjd.5CUA_VsaWuKVeuRDCKLdhRkZWYKBvsloSVMFFG4ezbM38XtbIwFgGjLMCp7JbIE; + path=/; expires=Wed, 12-Nov-25 23:59:04 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=.gLFiyetPJIqjeiWnkcFbXATnesDECFa6NDAim0ENaY-1762990144017-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + 
Strict-Transport-Security: + - max-age=31536000; includeSubDomains; preload + X-Content-Type-Options: + - nosniff + Server: + - cloudflare + Cf-Ray: + - 99d9cf235bcacf27-SJC + Alt-Svc: + - h3=":443"; ma=86400 + body: + encoding: ASCII-8BIT + string: |- + { + "id": "resp_00aba42807ba86ca006915183e6edc819bbd6f4f680f25ed62", + "object": "response", + "created_at": 1762990142, + "status": "completed", + "background": false, + "billing": { + "payer": "developer" + }, + "error": null, + "incomplete_details": null, + "instructions": null, + "max_output_tokens": null, + "max_tool_calls": null, + "model": "gpt-4o-mini-2024-07-18", + "output": [ + { + "id": "msg_00aba42807ba86ca006915183f3874819b893d43666b711090", + "type": "message", + "status": "completed", + "content": [ + { + "type": "output_text", + "annotations": [], + "logprobs": [], + "text": "I don't have the ability to view images directly. However, if you provide the data in text form or describe it, I can help analyze it!" + } + ], + "role": "assistant" + } + ], + "parallel_tool_calls": true, + "previous_response_id": null, + "prompt_cache_key": null, + "prompt_cache_retention": null, + "reasoning": { + "effort": null, + "summary": null + }, + "safety_identifier": null, + "service_tier": "default", + "store": true, + "temperature": 1.0, + "text": { + "format": { + "type": "text" + }, + "verbosity": "medium" + }, + "tool_choice": "auto", + "tools": [], + "top_logprobs": 0, + "top_p": 1.0, + "truncation": "disabled", + "usage": { + "input_tokens": 10, + "input_tokens_details": { + "cached_tokens": 0 + }, + "output_tokens": 31, + "output_tokens_details": { + "reasoning_tokens": 0 + }, + "total_tokens": 41 + }, + "user": null, + "metadata": {} + } + recorded_at: Wed, 12 Nov 2025 23:29:03 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/docs/actions/usage/embeddings_usage.yml b/test/fixtures/vcr_cassettes/docs/actions/usage/embeddings_usage.yml new file mode 100644 index 00000000..338e041f --- /dev/null +++ b/test/fixtures/vcr_cassettes/docs/actions/usage/embeddings_usage.yml @@ -0,0 +1,1665 @@ +--- +http_interactions: +- request: + method: post + uri: https://api.openai.com/v1/embeddings + body: + encoding: UTF-8 + string: '{"model":"text-embedding-3-small","input":"Search text"}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - api.openai.com + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + Authorization: + - Bearer ACCESS_TOKEN + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '56' + response: + status: + code: 200 + message: OK + headers: + Date: + - Wed, 12 Nov 2025 23:29:04 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + Connection: + - keep-alive + Access-Control-Allow-Origin: + - "*" + Access-Control-Expose-Headers: + - X-Request-ID + Openai-Model: + - text-embedding-3-small + Openai-Organization: + - ORGANIZATION_ID + Openai-Processing-Ms: + - '152' + Openai-Project: + - PROJECT_ID + Openai-Version: + - '2020-10-01' + Strict-Transport-Security: + - max-age=31536000; includeSubDomains; preload + Via: + - envoy-router-6889f5648c-65kr7 + X-Envoy-Upstream-Service-Time: + - 
'329' + X-Ratelimit-Limit-Requests: + - '10000' + X-Ratelimit-Limit-Tokens: + - '10000000' + X-Ratelimit-Remaining-Requests: + - '9999' + X-Ratelimit-Remaining-Tokens: + - '9999998' + X-Ratelimit-Reset-Requests: + - 6ms + X-Ratelimit-Reset-Tokens: + - 0s + X-Request-Id: + - req_5985a0b646c24509b5b430c2ec74d2b0 + X-Openai-Proxy-Wasm: + - v0.1 + Cf-Cache-Status: + - DYNAMIC + Set-Cookie: + - __cf_bm=e67wbjwg7vCcNaO74UqMy2nwPzoTr7YitFJZkNArfkM-1762990144-1.0.1.1-iQIxhj9.yy.HUDi.kLp6LJQlW7Jxq_L8Z1e4mIYf2nE6wbc48Rl0hQtlKZEWfC8xwu8GeT4OfURTdqnmyC0s0lYDqPLm7cvtyuDGs1hysw8; + path=/; expires=Wed, 12-Nov-25 23:59:04 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=FYoTso4gNjNODYQXeOEUO0FIVdqOft32G9DVViW0.gs-1762990144722-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + X-Content-Type-Options: + - nosniff + Server: + - cloudflare + Cf-Ray: + - 99d9cf307bd7fac2-SJC + Alt-Svc: + - h3=":443"; ma=86400 + body: + encoding: ASCII-8BIT + string: | + { + "object": "list", + "data": [ + { + "object": "embedding", + "index": 0, + "embedding": [ + 0.017561762, + 0.04001304, + -0.00036797966, + -0.031073036, + -0.045973048, + -0.010408309, + -0.04128605, + 0.053263925, + -0.0047014602, + -0.016057294, + 0.014183943, + -0.02722507, + 0.021308463, + 0.0032819808, + 0.034978863, + 0.028657207, + -0.032577503, + -0.012578214, + 0.004972698, + 0.034747407, + 0.010437242, + -0.008296269, + -0.01640448, + 0.011739184, + 0.012882001, + -0.0033362284, + -0.010466173, + 0.021134872, + 0.066717334, + -0.015522052, + 0.006231243, + -0.039174013, + 0.0050414116, + 0.0076886956, + -0.018936034, + 0.014668556, + -0.005243936, + 0.010878456, + -0.008542191, + -0.046233434, + -0.00280641, + 0.024809243, + 0.015724575, + 0.016635936, + -0.01657807, + -0.047737904, + -0.04909771, + -0.01508807, + 0.047332853, + 0.034949932, + 0.05236703, + -0.021062542, + -0.028744005, + 0.03706197, + 0.015608847, + -0.022248756, + -0.0614517, + 0.007178768, + 0.04224081, + 0.035297114, + -0.011645155, + 0.06266685, + -0.0035604518, + 0.0055513396, + 0.007243865, + -0.007515103, + -0.03466061, + 0.05300354, + -0.015579916, + 0.0027810945, + -0.022147493, + -0.012165932, + -0.020064386, + -0.040765274, + -0.0019420647, + -0.014321371, + 0.05222237, + 0.041662168, + -0.016939722, + 0.0045206347, + 0.043513823, + 0.0037213864, + -0.03144915, + -0.026964681, + -0.020570695, + 0.025416818, + -0.06521287, + 0.018863704, + -0.025518078, + -0.0523381, + -0.05922393, + -0.013619768, + 0.01676613, + -0.006643525, + 0.025503613, + 0.017460499, + -0.018675646, + -0.015478653, + -0.01647681, + 0.05164373, + 0.0038117992, + -0.03315614, + 0.040852074, + -0.04559693, + -0.008650687, + -0.0002942933, + -0.0041842996, + -0.056735773, + -0.00062701205, + -0.033358667, + -0.07342958, + -0.038392846, + -0.026892351, + 0.037727408, + -0.013858458, + -0.017590694, + 0.017518364, + -0.052829947, + -0.036136147, + 0.002949262, + 0.009605445, + -0.014285206, + -0.0076742293, + -0.019558074, + -0.0044555375, + 0.0107120965, + -0.03874003, + -0.034602746, + -0.0066073597, + -0.03480527, + 0.043282364, + -0.009026803, + -0.007428307, + 0.001581318, + -0.052627422, + -0.008628988, + -0.035441775, + 0.028932063, + -0.040852074, + 0.042848386, + 0.00047828315, + -0.000022306507, + -0.005135441, + 0.026270313, + -0.07267734, + -0.03127556, + -0.026733225, + -0.00043963172, + 0.02974216, + -0.0013136965, + -0.00750787, + -0.015232731, + -0.077827245, + 0.030291868, + 0.0070594233, + -0.011189475, + 0.025980992, + 
-0.008108211, + -0.043311298, + -0.007905686, + -0.0039202943, + -0.0028733155, + 0.06341908, + -0.021771377, + 0.0010171428, + -0.02988682, + 0.0095765125, + -0.002905864, + -0.055433832, + 0.023189047, + -0.018285064, + -0.018097006, + -0.003211459, + 0.026082255, + -0.02453439, + 0.036830515, + 0.029857889, + 0.039231878, + -0.024838176, + 0.03648333, + 0.005135441, + 0.03926081, + 0.015825838, + -0.021988368, + 0.059252862, + -0.011898311, + 0.048200816, + -0.0042783287, + -0.02930818, + -0.013337681, + 0.028150896, + 0.015391857, + 0.055578493, + 0.010531271, + -0.010032193, + 0.0147553515, + -0.0124841845, + 0.004658062, + -0.015478653, + 0.00033068442, + 0.04348489, + 0.004231314, + -0.026371574, + 0.021366328, + -0.09090454, + 0.0011491453, + -0.022610407, + 0.028266625, + -0.023449436, + -0.010090057, + 0.09622804, + -0.017865548, + 0.00002294222, + -0.031478085, + -0.03188313, + 0.0012269003, + 0.00031802666, + -0.025460215, + 0.035268184, + -0.040707413, + -0.0013516698, + -0.012983263, + 0.028483614, + 0.01640448, + 0.023319243, + 0.031333424, + 0.020454967, + -0.016057294, + -0.018140404, + -0.022914194, + 0.047911495, + 0.032461774, + 0.043282364, + -0.010111756, + 0.034718476, + 0.011449863, + 0.013930788, + 0.008780881, + 0.0065277964, + -0.032751095, + -0.108321644, + -0.05236703, + 0.009822435, + 0.03142022, + 0.0077754916, + 0.0080358805, + 0.03535498, + -0.040909935, + 0.045105085, + 0.005627286, + -0.002556871, + 0.034168765, + 0.0032711313, + -0.058384903, + 0.07059423, + 0.0013625193, + -0.039463334, + 0.03254857, + 0.03874003, + -0.014111613, + -0.008925541, + -0.011985106, + -0.020975744, + -0.034429155, + -0.061393835, + -0.026429439, + 0.03801673, + -0.026371574, + 0.01153666, + -0.047361787, + -0.044005666, + 0.021626716, + 0.064460635, + -0.01657807, + -0.050399654, + 0.049734216, + -0.022827396, + 0.02180031, + 0.027384197, + 0.007536802, + -0.061393835, + -0.0019764216, + -0.011189475, + -0.016968654, + 0.01206467, + 0.015883703, + 0.043166637, + -0.030783715, + 0.024274, + 0.045220815, + -0.00075313775, + -0.04221188, + 0.046956737, + -0.03590469, + -0.006488015, + 0.035412844, + 0.038248185, + -0.013800594, + 0.041604307, + 0.03087051, + 0.083324336, + 0.024664583, + 0.027803712, + 0.003233158, + -0.00052077713, + 0.06584937, + 0.025358953, + 0.002130123, + 0.048229747, + -0.023116717, + -0.009952629, + -0.021453124, + 0.01844419, + -0.06116238, + -0.040909935, + -0.05375577, + 0.04796936, + 0.037958864, + 0.013735496, + -0.056504317, + 0.066890925, + -0.027977305, + 0.054970916, + 0.0025659122, + -0.0029564952, + 0.024028078, + -0.02329031, + -0.016809529, + -0.0021626716, + -0.011182242, + 0.010061124, + -0.0204839, + -0.03422663, + -0.046985667, + -0.024809243, + -0.044323917, + -0.0038732798, + 0.058008786, + -0.011522193, + -0.025416818, + -0.013547438, + -0.029105654, + -0.021453124, + -0.015232731, + 0.036165077, + -0.005967238, + -0.03633867, + -0.006715855, + -0.0083252005, + 0.011594524, + 0.0071281367, + -0.021756912, + 0.023319243, + -0.018400792, + 0.006770103, + -0.06492355, + 0.0031391287, + 0.007572967, + -0.00485697, + 0.0295541, + 0.015333993, + -0.048779458, + 0.019514676, + -0.022841863, + 0.015377391, + 0.021670114, + -0.019775065, + 0.011652388, + 0.024433127, + 0.0034447236, + 0.0056055873, + -0.009807969, + 0.007544035, + 0.037119836, + -0.012267195, + 0.013265351, + -0.02252361, + 0.030234005, + 0.015305061, + 0.045683727, + 0.008520492, + -0.008448162, + -0.031796336, + -0.058153447, + 0.011131611, + -0.015059139, + 0.04151751, + 
0.03983945, + -0.0070666564, + -0.007797191, + -0.0117898155, + 0.0027901358, + 0.04224081, + -0.002479116, + -0.0017106081, + 0.018632248, + 0.038826827, + 0.048171885, + 0.019803997, + -0.021525454, + 0.031478085, + -0.030754782, + -0.0272974, + -0.020802153, + -0.027977305, + 0.022060698, + -0.01913856, + 0.026545167, + 0.010719329, + -0.0021500138, + 0.0029962766, + -0.035383914, + 0.021988368, + 0.05190412, + -0.001032513, + -0.015536518, + 0.038913622, + 0.06793248, + 0.017257975, + 0.00061751873, + 0.012831369, + -0.0020198196, + -0.034487016, + 0.0030252088, + -0.008368599, + 0.04632023, + 0.07429753, + -0.037380226, + 0.0630719, + -0.014972342, + -0.04064955, + 0.020093318, + -0.009605445, + -0.028772935, + -0.040244497, + -0.040736344, + -0.037582748, + 0.05499985, + -0.022190891, + -0.023463903, + -0.019615939, + -0.0098586, + -0.007388525, + 0.008759182, + -0.010082823, + 0.034429155, + 0.03518139, + 0.026617497, + -0.043195568, + -0.028816333, + 0.02485264, + -0.02475138, + -0.0060287183, + -0.01709885, + 0.018820306, + -0.009084667, + 0.0061589126, + -0.025720604, + 0.02259594, + 0.020643026, + 0.008231171, + 0.028845266, + 0.0032657066, + 0.031796336, + 0.057314415, + -0.0326643, + 0.014285206, + 0.055752084, + 0.04646489, + -0.013193021, + -0.017518364, + 0.050717905, + -0.030147208, + -0.014704721, + -0.005497092, + -0.006509714, + -0.03153595, + -0.04082314, + -0.022769533, + -0.06770103, + 0.0103649115, + -0.019008365, + 0.009323357, + 0.016390013, + -0.02741313, + 0.09946843, + -0.020122249, + 0.0017603352, + 0.010401077, + -0.03561537, + -0.030639054, + 0.0151604, + -0.010632533, + -0.017518364, + -0.0318542, + -0.026603032, + -0.0637084, + -0.017793218, + 0.0041915327, + -0.021713512, + -0.007258331, + 0.0036743719, + -0.022176426, + 0.012643311, + 0.031159831, + 0.02751439, + 0.013627001, + -0.0025821866, + 0.023550699, + 0.0054790094, + -0.0032819808, + -0.007392142, + 0.021424193, + 0.030407598, + 0.041922558, + 0.0013552863, + -0.0034935465, + 0.020657493, + -0.022190891, + -0.016216421, + 0.04921344, + 0.0010505955, + 0.029221382, + -0.066601604, + -0.012947097, + -0.035268184, + -0.018964967, + -0.0492713, + -0.011247339, + -0.021771377, + -0.014567293, + 0.010176853, + 0.03312721, + 0.014212876, + -0.00091994915, + -0.0387111, + -0.03408197, + 0.0037937167, + -0.027138274, + 0.031333424, + -0.0010261841, + 0.03547071, + 0.0033597357, + -0.009612678, + -0.007883987, + 0.0072619477, + -0.0004032406, + 0.003198801, + 0.017012052, + -0.014581759, + 0.02875847, + -0.037438087, + 0.00849156, + -0.003184335, + 0.014719186, + 0.015608847, + -0.023203515, + -0.011254572, + 0.015247197, + 0.0054753926, + 0.0014420825, + -0.018487588, + 0.006889447, + -0.02242235, + -0.013641467, + 0.0069762436, + -0.011760883, + -0.012390155, + 0.0069039133, + -0.041054595, + 0.010581901, + -0.018820306, + 0.008180541, + 0.013858458, + -0.022509145, + 0.0025731453, + -0.015290595, + 0.0038696632, + -0.062088206, + -0.01683846, + -0.019803997, + 0.025055166, + 0.022161959, + 0.020281376, + -0.027818177, + -0.017880015, + 0.0036454399, + 0.011898311, + -0.0030234004, + 0.0041445177, + 0.061509565, + 0.0043542753, + 0.017865548, + -0.009337823, + -0.007171535, + -0.01957254, + -0.0026183517, + -0.05864529, + 0.0010713905, + 0.0035658767, + 0.024881573, + -0.018256132, + 0.03309828, + 0.009149765, + -0.030205073, + 0.0059419223, + 0.00065051933, + -0.01727244, + 0.021192735, + -0.031940997, + -0.021279532, + 0.03072585, + -0.02187264, + 0.00933059, + 0.015753508, + 0.011572825, + -0.0356443, 
+ -0.024375262, + -0.006437384, + 0.01767749, + -0.0086723855, + -0.029394975, + 0.017547296, + 0.0545948, + -0.007037724, + -0.012947097, + 0.008310735, + -0.019297685, + 0.04151751, + 0.0003112457, + 0.011659621, + 0.056591112, + 0.026588565, + -0.03607828, + -0.038942557, + -0.0030378664, + 0.06897403, + -0.019413413, + -0.019095162, + 0.010610834, + 0.014126079, + 0.022567008, + 0.02861381, + -0.045278676, + 0.017634092, + 0.011514961, + 0.017012052, + 0.011594524, + 0.026516235, + 0.001787459, + -0.010748261, + 0.007450006, + 0.030291868, + -0.015044672, + 0.020382637, + 0.01657807, + -0.0054428442, + -0.0061986945, + -0.008354133, + -0.007949084, + 0.0034176, + 0.048432272, + -0.0013561904, + -0.06706452, + -0.037611682, + 0.006401219, + -0.010972485, + -0.01069763, + -0.013945254, + 0.021192735, + -0.04417926, + 0.009554814, + 0.03312721, + 0.014277972, + 0.038045663, + 0.022190891, + 0.058818884, + -0.04067848, + 0.022755068, + 0.0036635224, + -0.008166075, + 0.013901856, + -0.030841578, + 0.0039058283, + 0.028772935, + 0.014567293, + 0.0060395678, + 0.016418945, + 0.00827457, + -0.0036906463, + -0.010835057, + -0.02511303, + 0.033734784, + 0.009511416, + 0.025908662, + -0.029800024, + -0.016635936, + 0.024722448, + -0.006730321, + -0.021192735, + 0.020527298, + 0.019731667, + -0.0046761446, + 0.00019167492, + -0.002471883, + -0.03590469, + -0.006730321, + -0.0017069917, + 0.037235565, + 0.0076163653, + -0.008064812, + -0.03301148, + -0.0033579275, + -0.03425556, + -0.0042421636, + 0.035875756, + 0.003710537, + -0.0227406, + 0.012476952, + -0.036020417, + -0.00697986, + 0.021120405, + -0.00019890793, + -0.020093318, + -0.012100835, + -0.052453827, + 0.0029402208, + -0.017764287, + -0.015898168, + 0.036975175, + 0.013127923, + 0.007522336, + 0.010205785, + 0.02609672, + -0.0062203933, + 0.035991486, + -0.055896744, + -0.011196708, + 0.010806126, + -0.01858885, + 0.0025858032, + 0.00026016252, + -0.011211175, + 0.003419408, + -0.018053606, + -0.03309828, + 0.02624138, + -0.024346331, + -0.05300354, + -0.023796622, + 0.004744858, + 0.0136487, + -0.002171713, + -0.034747407, + 0.024259534, + -0.06584937, + -0.008484327, + 0.024997301, + 0.026964681, + 0.014364769, + 0.049039844, + 0.007905686, + -0.032577503, + 0.009656075, + -0.012708409, + -0.0016943339, + -0.024114873, + -0.021568852, + 0.00069346535, + 0.01093632, + -0.013735496, + -0.042154014, + -0.007572967, + 0.023521766, + 0.0039998577, + -0.0025677206, + -0.024187203, + 0.019558074, + -0.004003474, + 0.033619057, + -0.0019981205, + 0.00964161, + -0.017417101, + 0.0032440075, + -0.032230318, + -0.013858458, + 0.032780025, + -0.01662147, + -0.0010433625, + -0.0018552685, + 0.025431283, + -0.020368172, + -0.002200645, + -0.027731381, + -0.017952345, + 0.021308463, + -0.028179828, + -0.04093887, + 0.01379336, + 0.00040346663, + 0.013113457, + -0.009026803, + -0.019832928, + 0.004506169, + 0.018545453, + 0.008780881, + 0.03518139, + 0.0016681142, + 0.02343497, + 0.018473122, + 0.015898168, + -0.028527014, + 0.03986838, + 0.06862685, + -0.011580057, + -0.0055079414, + -0.024389729, + -0.003894979, + 0.017518364, + 0.01434307, + 0.007016025, + -0.023102252, + -0.029655363, + -0.020541765, + 0.020165646, + 0.036396533, + -0.004914834, + -0.00024479238, + 0.01705545, + 0.015319527, + 0.036946245, + -0.0074066077, + -0.019268753, + 0.010762727, + -0.030436529, + 0.0012865727, + -0.02055623, + -0.011449863, + -0.03142022, + -0.0079780165, + 0.02868614, + 0.011977874, + 0.014068215, + -0.02172798, + 0.005428378, + 0.021279532, + 
0.013185787, + -0.0019131326, + -0.026588565, + 0.024216136, + -0.014769818, + -0.0017847465, + 0.008679619, + 0.0032874055, + 0.0013435326, + -0.01000326, + -0.007609132, + -0.010191319, + 0.0032458156, + -0.024693515, + 0.010719329, + 0.017460499, + -0.023246912, + -0.0387111, + 0.0035170538, + 0.028671674, + 0.02650177, + 0.01372103, + 0.0041698334, + -0.023637494, + 0.050168198, + 0.010437242, + -0.020657493, + -0.036396533, + -0.007265564, + 0.036975175, + -0.026226914, + 0.037264496, + -0.028179828, + 0.009128066, + 0.032751095, + 0.0119634075, + -0.027760314, + 0.0013878349, + -0.016115159, + -0.029611966, + -0.03506566, + 0.04614664, + -0.030986238, + 0.024389729, + 0.03379265, + 0.010516805, + 0.013829526, + -0.0073740594, + -0.026125653, + 0.022277689, + 0.0067918017, + 0.014928944, + 0.011283504, + -0.013091758, + -0.061220244, + 0.005905757, + 0.0364544, + 0.0023977447, + 0.018183801, + 0.015232731, + -0.014249041, + 0.02376769, + 0.015464188, + 0.02897546, + -0.043021977, + 0.00189505, + 0.019977588, + 0.03251964, + 0.010176853, + -0.025633806, + -0.02686342, + 0.0056453687, + 0.00447362, + 0.026125653, + -0.020845551, + -0.02624138, + -0.031796336, + -0.0040794206, + 0.02758672, + 0.030667987, + 0.011037582, + 0.018805841, + 0.018559918, + 0.011956175, + -0.014661322, + 0.00023823745, + -0.026892351, + 0.030639054, + 0.036309738, + 0.017330306, + 0.052685287, + 0.018415257, + -0.0026509003, + -0.036020417, + 0.03735129, + -0.021453124, + 0.027167207, + -0.038248185, + 0.0071317535, + -0.027499925, + -0.019196423, + 0.021655649, + 0.00056553143, + -0.0016997587, + -0.016592538, + -0.031507015, + 0.008853211, + -0.018068073, + -0.0058732084, + 0.0045495667, + 0.026776623, + 0.013265351, + -0.00016330792, + 0.02938051, + -0.00025473777, + 0.016346615, + -0.0007151644, + -0.033300802, + 0.0126939425, + -0.008694084, + 0.032722164, + -0.016607003, + 0.042472266, + -0.028469149, + 0.013221952, + 0.02314565, + 0.023854485, + 0.050052468, + -0.044150326, + 0.0142924385, + 0.032085657, + -0.014509429, + -0.02070089, + -0.040504888, + 0.040041976, + 0.032317113, + 0.014299672, + -0.026386041, + 0.005280101, + 0.027022546, + 0.0341977, + 0.005370514, + 0.033069346, + -0.0033000633, + 0.012795204, + -0.006205927, + 0.018198267, + 0.0025514462, + -0.010806126, + -0.026892351, + -0.027051479, + 0.017474966, + 0.00036775364, + 0.030494394, + 0.02861381, + -0.020729823, + -0.0019565306, + -0.0005854222, + -0.039347604, + -0.02194497, + 0.02453439, + -0.0011066514, + 0.0047991057, + 0.0038154158, + 0.0182272, + 0.04640703, + 0.010965251, + -0.021351863, + 0.030494394, + -0.02172798, + 0.032317113, + -0.0050667273, + -0.03746702, + 0.008549425, + 0.038537506, + -0.0011355834, + 0.025764002, + -0.011384767, + 0.015131469, + -0.015984964, + 0.018719044, + 0.016390013, + -0.0079780165, + -0.002479116, + 0.006950928, + 0.014719186, + 0.0049220673, + -0.028150896, + -0.0037575515, + 0.03127556, + -0.023782155, + -0.015073604, + 0.022147493, + 0.015536518, + 0.008289035, + 0.013518506, + 0.007761026, + 0.0038262652, + 0.001388739, + -0.013865691, + 0.021684581, + -0.0052873343, + 0.0040794206, + -0.009547581, + -0.008245638, + -0.00993093, + -0.0033271872, + -0.016635936, + -0.00004721238, + -0.0019275986, + -0.0023489217, + 0.021742444, + 0.006187845, + 0.0065965103, + 0.018024676, + 0.03101517, + 0.037611682, + 0.012462486, + -0.0023561548, + 0.018574383, + -0.0027792861, + -0.0060070194, + 0.011052048, + 0.011218407, + 0.030667987, + -0.011435398, + 0.018502053, + 0.00009318158, + -0.004535101, + 
0.023724291, + 0.009012338, + -0.06544433, + 0.03596255, + 0.040881004, + 0.01657807, + 0.003804566, + -0.037235565, + 0.012882001, + 0.023391573, + 0.012455253, + -0.018574383, + 0.0021409725, + -0.020758756, + 0.004036023, + -0.011493262, + -0.0013236419, + 0.010784426, + -0.008006948, + 0.018704578, + 0.032432843, + 0.004415756, + 0.027977305, + 0.042125084, + 0.013308749, + 0.020874484, + -0.00424578, + -0.020541765, + -0.00020625396, + -0.01662147, + -0.011500495, + -0.026689827, + 0.0394344, + -0.009120832, + 0.02671876, + 0.009540347, + -0.01709885, + 0.00485697, + 0.00066182093, + 0.0056019705, + -0.0142345745, + -0.0057032327, + -0.0016265244, + -0.006827967, + 0.020180114, + 0.010285348, + 0.004437455, + 0.015131469, + 0.028859733, + -0.029149052, + -0.008260104, + -0.02602439, + 0.016881859, + 0.0051679895, + -0.0038262652, + 0.00820224, + -0.0013525739, + -0.003419408, + -0.0031246627, + -0.041662168, + -0.024577787, + 0.017634092, + -0.029250314, + -0.018342927, + -0.022321086, + -0.021380793, + 0.021279532, + 0.0303208, + 0.007522336, + 0.020295842, + 0.052135576, + 0.0009208533, + 0.10311387, + 0.019876327, + -0.03576003, + -0.03130449, + -0.01921089, + 0.0013697523, + -0.01705545, + 0.002200645, + 0.015145934, + -0.0037901, + -0.012469719, + -0.023449436, + -0.010451707, + 0.019297685, + 0.009945396, + 0.0031011554, + 0.00820224, + -0.017474966, + 0.005334349, + -0.037004106, + 0.0083252005, + 0.014263507, + 0.026516235, + 0.02412934, + -0.009699474, + -0.009229328, + -0.011080979, + -0.0105385035, + 0.03440022, + 0.047072466, + 0.03170954, + -0.017503897, + -0.011601757, + 0.022841863, + -0.017995743, + -0.022118561, + -0.0064699324, + 0.00016138666, + -0.008845978, + -0.0039709257, + 0.009938164, + 0.03254857, + 0.032577503, + 0.037524886, + 0.0058298106, + 0.0022259606, + 0.05482626, + -0.0051028924, + 0.013807827, + 0.0021680964, + -0.042761587, + 0.025358953, + 0.0039419937, + -0.0025188976, + -0.01350404, + -0.019905258, + -0.020049918, + -0.02525769, + 0.0356443, + -0.030234005, + -0.024924971, + -0.00508481, + 0.022104096, + 0.015522052, + 0.009735639, + -0.037727408, + -0.023796622, + 0.020541765, + -0.016418945, + -0.0097501045, + -0.011601757, + 0.01578244, + -0.018342927, + -0.0076742293, + 0.006972627, + 0.008506026, + 0.014379235, + 0.015102536, + -0.007999715, + -0.0076163653, + 0.0072510983, + 0.06203034, + 0.00044257013, + 0.035991486, + 0.024288466, + -0.029611966, + -0.016520208, + -0.006650758, + 0.024592252, + 0.005576655, + -0.033358667, + 0.0049907807, + 0.039607994, + 0.049010914, + -0.00825287, + 0.014249041, + -0.011507728, + 0.01501574, + 0.024881573, + -0.038797896, + 0.0033778183, + 0.027528858, + -0.021655649, + 0.026689827, + 0.01923982, + 0.005034179, + -0.021438658, + -0.002030669, + -0.015117003, + -0.024302932, + 0.014523895, + 0.012824137, + 0.0024519924, + -0.014531128, + -0.022219824, + 0.011688553, + -0.03874003, + -0.03451595, + -0.019471278, + -0.030407598, + -0.0021988368, + -0.003126471, + -0.028801868, + 0.004636363, + -0.011080979, + -0.00713537, + -0.013062826, + 0.010777193, + 0.022176426, + 0.009728406, + 0.0016979504, + -0.020802153, + 0.011355834, + 0.014126079, + 0.024722448, + 0.008751948, + -0.0048605865, + -0.004285562, + 0.062261797, + -0.038797896, + -0.005070344, + 0.01705545, + -0.020498365, + 0.011645155, + 0.020686425, + -0.003909445, + 0.00918593, + 0.003446532, + -0.0056345193, + -0.051354412, + -0.044555377, + 0.017417101, + -0.037148766, + 0.008578356, + 0.037843138, + 0.049936738, + -0.04825868, + -0.0159271, 
+ 0.003952843, + 0.022219824, + 0.0006871365, + 0.030899443, + -0.018357394, + 0.0056706844, + 0.0387111, + 0.0041409014, + -0.0055115577, + 0.020194579, + -0.0020758754, + -0.013417244, + 0.026660895, + -0.011196708, + -0.013113457, + 0.011457097, + -0.022292154, + 0.007493404, + -0.021062542, + -0.0031771022, + -0.007833356, + -0.016100693, + 0.008715784, + -0.049473826, + -0.007457239, + -0.026371574, + -0.012288894, + 0.025532546, + -0.0045893486, + -0.02300099, + 0.018386325, + 0.0056996164, + 0.0040179403, + -0.03845071, + 0.015984964, + 0.0014348495, + -0.006610976, + 0.011015883, + -0.019890793, + -0.018140404, + -0.019181957, + 0.010155153, + 0.027543323, + 0.014538362, + 0.027543323, + -0.0011084597, + -0.019847395, + -0.0140031185, + -0.027832644, + 0.015059139, + -0.0103649115, + -0.0032964468, + -0.0026725992, + -0.027311867, + 0.0056743007, + 0.009540347, + -0.018285064, + 0.012860302, + 0.01968827, + 0.04224081, + -0.014126079, + 0.001958339, + 0.0015053714, + -0.00371777, + -0.024100408, + 0.0054175286, + -0.0058298106, + 0.008585589, + -0.002784711, + -0.026197983, + -0.0071028215, + 0.008137142, + -0.020368172, + 0.009149765, + 0.0074029914, + -0.023406038, + -0.0011997764, + -0.0046544457, + -0.022465747, + -0.00083496125, + 0.0023597714, + 0.008216706, + -0.018473122, + -0.007044957, + -0.016028363, + 0.00036639743, + -0.01508807, + 0.031767406, + 0.0070304913, + 0.00394561, + 0.0063469713, + 0.015030206, + 0.009612678, + 0.038045663, + 0.000018675872, + -0.03596255, + -0.027355265, + -0.010589135, + 0.013164088, + -0.014155012, + -0.016650401, + 0.0041047363, + -0.010719329, + 0.0052584023, + 0.021207202, + 0.009323357, + -0.00851326, + -0.023782155, + 0.0057321647, + 0.014769818, + -0.027543323, + 0.055867814, + 0.017923413, + -0.0341109, + 0.0036797966, + 0.02722507, + -0.011601757, + -0.02478031, + 0.034920998, + 0.0053451983, + -0.008361366, + -0.0045568, + -0.0056996164, + -0.03518139, + 0.029800024, + -0.023203515, + 0.0182272, + -0.018285064, + -0.013070059, + -0.008665153, + 0.031217694, + 0.0029329879, + 0.017720887, + 0.011247339, + -0.0076669967, + 0.01599943, + -0.003996241, + -0.0041698334, + -0.01683846, + -0.025228757, + -0.036367603, + -0.020643026, + 0.008383065, + -0.0030794563, + 0.036685854, + -0.010184086, + -0.03072585, + 0.017648557, + 0.017576227, + 0.011486028, + 0.023174582, + 0.051846255, + -0.0037792507, + 0.0010487873, + -0.0038298818, + 0.002949262, + -0.0003333968, + 0.037582748, + 0.021019144, + 0.024577787, + -0.04064955, + -0.011037582, + 0.03017614, + -0.021655649, + -0.0072764135, + 0.009128066, + -0.017301373, + -0.05066004, + -0.02281293, + 0.0003797333, + 0.046956737, + 0.017229043, + 0.011240106, + 0.024765845, + -0.00050450285, + 0.0069400785, + -0.016664868, + 0.0032494322, + 0.010466173, + -0.01571011, + -0.0029709612, + -0.009265493, + 0.014335837, + 0.015956033, + 0.017171178, + 0.014299672, + 0.015883703, + 0.0017106081, + -0.0062999567, + 0.024476524, + 0.00029158095, + -0.03518139, + 0.002627393, + 0.0059238398, + 0.0056706844, + -0.021366328, + -0.023536233, + 0.015594382, + 0.017344771, + -0.011572825, + 0.012144233, + 0.011073747, + -0.013164088, + 0.008694084, + -0.003981775, + 0.026342643, + -0.018140404, + 0.02398468, + 0.015348459, + -0.00095204567, + 0.0076886956, + 0.011478796, + -0.030552257, + 0.029771091, + 0.010415542, + -0.015811373, + 0.007243865, + 0.025764002, + -0.000776645, + 0.017489431, + -0.036685854, + -0.050428584, + 0.019876327, + -0.032606434, + 0.018010208, + 0.004260246, + 0.004343426, + 
0.014538362, + 0.012144233, + 0.0028733155, + -0.022161959 + ] + } + ], + "model": "text-embedding-3-small", + "usage": { + "prompt_tokens": 2, + "total_tokens": 2 + } + } + recorded_at: Wed, 12 Nov 2025 23:29:04 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/docs/actions/usage/provider_details_openai.yml b/test/fixtures/vcr_cassettes/docs/actions/usage/provider_details_openai.yml new file mode 100644 index 00000000..14b7460b --- /dev/null +++ b/test/fixtures/vcr_cassettes/docs/actions/usage/provider_details_openai.yml @@ -0,0 +1,171 @@ +--- +http_interactions: +- request: + method: post + uri: https://api.openai.com/v1/responses + body: + encoding: UTF-8 + string: '{"model":"gpt-4o-mini","input":"Hello"}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - api.openai.com + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + Authorization: + - Bearer ACCESS_TOKEN + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '39' + response: + status: + code: 200 + message: OK + headers: + Date: + - Wed, 12 Nov 2025 23:30:15 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + Connection: + - keep-alive + X-Ratelimit-Limit-Requests: + - '30000' + X-Ratelimit-Limit-Tokens: + - '150000000' + X-Ratelimit-Remaining-Requests: + - '29999' + X-Ratelimit-Remaining-Tokens: + - '149999975' + X-Ratelimit-Reset-Requests: + - 2ms + X-Ratelimit-Reset-Tokens: + - 0s + Openai-Version: + - '2020-10-01' + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + X-Request-Id: + - req_10ea69985d0e44fc9be010b3c59c6402 + Openai-Processing-Ms: + - '3421' + X-Envoy-Upstream-Service-Time: + - '3444' + Cf-Cache-Status: + - DYNAMIC + Set-Cookie: + - __cf_bm=xDwW7Oiq7k50et3ajNss6_XNRhLgCYpk9U9AsYo_Ceg-1762990215-1.0.1.1-rWT2W.lXVhAVLCQl.L7bmRg_0HmBMGf7rKmpCYmQFobVcvHg0MSwI0AtbyBYXThpSBLuul4LST70O_PVzUxh.OB244nXJ0caw51KuV9sRu4; + path=/; expires=Thu, 13-Nov-25 00:00:15 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=MN2DBqkNsphmsqY6SmpnGH566otRz5H4iylgetwWOLA-1762990215441-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Strict-Transport-Security: + - max-age=31536000; includeSubDomains; preload + X-Content-Type-Options: + - nosniff + Server: + - cloudflare + Cf-Ray: + - 99d9d0d3fe205c19-SJC + Alt-Svc: + - h3=":443"; ma=86400 + body: + encoding: ASCII-8BIT + string: |- + { + "id": "resp_05bf124409484823006915188420b88193b8ecd210eb403639", + "object": "response", + "created_at": 1762990212, + "status": "completed", + "background": false, + "billing": { + "payer": "developer" + }, + "error": null, + "incomplete_details": null, + "instructions": null, + "max_output_tokens": null, + "max_tool_calls": null, + "model": "gpt-4o-mini-2024-07-18", + "output": [ + { + "id": "msg_05bf1244094848230069151886f6e0819396ec614e58093b1b", + "type": "message", + "status": "completed", + "content": [ + { + "type": "output_text", + "annotations": [], + "logprobs": [], + "text": "Hello! How can I assist you today?" 
+ } + ], + "role": "assistant" + } + ], + "parallel_tool_calls": true, + "previous_response_id": null, + "prompt_cache_key": null, + "prompt_cache_retention": null, + "reasoning": { + "effort": null, + "summary": null + }, + "safety_identifier": null, + "service_tier": "default", + "store": true, + "temperature": 1.0, + "text": { + "format": { + "type": "text" + }, + "verbosity": "medium" + }, + "tool_choice": "auto", + "tools": [], + "top_logprobs": 0, + "top_p": 1.0, + "truncation": "disabled", + "usage": { + "input_tokens": 8, + "input_tokens_details": { + "cached_tokens": 0 + }, + "output_tokens": 10, + "output_tokens_details": { + "reasoning_tokens": 0 + }, + "total_tokens": 18 + }, + "user": null, + "metadata": {} + } + recorded_at: Wed, 12 Nov 2025 23:30:15 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/docs/actions/usage/provider_specific_anthropic.yml b/test/fixtures/vcr_cassettes/docs/actions/usage/provider_specific_anthropic.yml new file mode 100644 index 00000000..d46172ad --- /dev/null +++ b/test/fixtures/vcr_cassettes/docs/actions/usage/provider_specific_anthropic.yml @@ -0,0 +1,102 @@ +--- +http_interactions: +- request: + method: post + uri: https://api.anthropic.com/v1/messages + body: + encoding: UTF-8 + string: '{"model":"claude-sonnet-4-5-20250929","messages":[{"content":"Hello","role":"user"}],"max_tokens":4096}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - Anthropic::Client/Ruby 1.14.0 + Host: + - api.anthropic.com + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 1.14.0 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Anthropic-Version: + - '2023-06-01' + X-Api-Key: + - ACCESS_TOKEN + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '103' + response: + status: + code: 200 + message: OK + headers: + Date: + - Wed, 12 Nov 2025 23:30:17 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + Connection: + - keep-alive + Anthropic-Ratelimit-Input-Tokens-Limit: + - '2000000' + Anthropic-Ratelimit-Input-Tokens-Remaining: + - '2000000' + Anthropic-Ratelimit-Input-Tokens-Reset: + - '2025-11-12T23:30:17Z' + Anthropic-Ratelimit-Output-Tokens-Limit: + - '400000' + Anthropic-Ratelimit-Output-Tokens-Remaining: + - '400000' + Anthropic-Ratelimit-Output-Tokens-Reset: + - '2025-11-12T23:30:17Z' + Anthropic-Ratelimit-Requests-Limit: + - '4000' + Anthropic-Ratelimit-Requests-Remaining: + - '3999' + Anthropic-Ratelimit-Requests-Reset: + - '2025-11-12T23:30:15Z' + Retry-After: + - '45' + Anthropic-Ratelimit-Tokens-Limit: + - '2400000' + Anthropic-Ratelimit-Tokens-Remaining: + - '2400000' + Anthropic-Ratelimit-Tokens-Reset: + - '2025-11-12T23:30:17Z' + Request-Id: + - req_011CV4s9kUdtGgpToN2GFVdE + Strict-Transport-Security: + - max-age=31536000; includeSubDomains; preload + Anthropic-Organization-Id: + - 2557c2f2-bcfa-4054-9fa8-ae13b3b47d6b + X-Envoy-Upstream-Service-Time: + - '1942' + Cf-Cache-Status: + - DYNAMIC + X-Robots-Tag: + - none + Server: + - cloudflare + Cf-Ray: + - 99d9d0ef98dd67f2-SJC + body: + encoding: ASCII-8BIT + string: '{"model":"claude-sonnet-4-5-20250929","id":"msg_014ftmnS91EH3yGspxrY4nK7","type":"message","role":"assistant","content":[{"type":"text","text":"Hello! 
+ How can I help you today?"}],"stop_reason":"end_turn","stop_sequence":null,"usage":{"input_tokens":8,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":12,"service_tier":"standard"}}' + recorded_at: Wed, 12 Nov 2025 23:30:17 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/docs/actions/usage/provider_specific_openai.yml b/test/fixtures/vcr_cassettes/docs/actions/usage/provider_specific_openai.yml new file mode 100644 index 00000000..5fd575fe --- /dev/null +++ b/test/fixtures/vcr_cassettes/docs/actions/usage/provider_specific_openai.yml @@ -0,0 +1,171 @@ +--- +http_interactions: +- request: + method: post + uri: https://api.openai.com/v1/responses + body: + encoding: UTF-8 + string: '{"model":"gpt-4o-mini","input":"Hello"}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - api.openai.com + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + Authorization: + - Bearer ACCESS_TOKEN + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '39' + response: + status: + code: 200 + message: OK + headers: + Date: + - Wed, 12 Nov 2025 23:30:19 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + Connection: + - keep-alive + X-Ratelimit-Limit-Requests: + - '30000' + X-Ratelimit-Limit-Tokens: + - '150000000' + X-Ratelimit-Remaining-Requests: + - '29999' + X-Ratelimit-Remaining-Tokens: + - '149999972' + X-Ratelimit-Reset-Requests: + - 2ms + X-Ratelimit-Reset-Tokens: + - 0s + Openai-Version: + - '2020-10-01' + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + X-Request-Id: + - req_16c74cedf9ea44d898017d37375b608b + Openai-Processing-Ms: + - '1312' + X-Envoy-Upstream-Service-Time: + - '1315' + Cf-Cache-Status: + - DYNAMIC + Set-Cookie: + - __cf_bm=En_qHTTt6lDr2tmYjcbnW8Qgkqc92rSSej8cav45MnE-1762990219-1.0.1.1-eIKNDzths7Ju3yeW3KFjD4AqPE3LK_zqqG9QSSJPzz.pxT4eWiI9.UwFOBWC5zQwI__.nhj483Yd2tXzg7CesclB6ajKAI9jnjkRJovCHyc; + path=/; expires=Thu, 13-Nov-25 00:00:19 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=_On__tK7Yo9tVSGuUJ.hPbWpNOXcolg4WxoIh9Zz5as-1762990219557-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Strict-Transport-Security: + - max-age=31536000; includeSubDomains; preload + X-Content-Type-Options: + - nosniff + Server: + - cloudflare + Cf-Ray: + - 99d9d0fce99aeb2c-SJC + Alt-Svc: + - h3=":443"; ma=86400 + body: + encoding: ASCII-8BIT + string: |- + { + "id": "resp_0054c3d74facf1fc006915188a3fa0819b8a586f696c00c805", + "object": "response", + "created_at": 1762990218, + "status": "completed", + "background": false, + "billing": { + "payer": "developer" + }, + "error": null, + "incomplete_details": null, + "instructions": null, + "max_output_tokens": null, + "max_tool_calls": null, + "model": "gpt-4o-mini-2024-07-18", + "output": [ + { + "id": "msg_0054c3d74facf1fc006915188b38ac819b9a45b57ed8302e7a", + "type": "message", + "status": "completed", + "content": [ + { + "type": "output_text", + "annotations": [], + "logprobs": [], + "text": "Hello! 
How can I assist you today?" + } + ], + "role": "assistant" + } + ], + "parallel_tool_calls": true, + "previous_response_id": null, + "prompt_cache_key": null, + "prompt_cache_retention": null, + "reasoning": { + "effort": null, + "summary": null + }, + "safety_identifier": null, + "service_tier": "default", + "store": true, + "temperature": 1.0, + "text": { + "format": { + "type": "text" + }, + "verbosity": "medium" + }, + "tool_choice": "auto", + "tools": [], + "top_logprobs": 0, + "top_p": 1.0, + "truncation": "disabled", + "usage": { + "input_tokens": 8, + "input_tokens_details": { + "cached_tokens": 0 + }, + "output_tokens": 10, + "output_tokens_details": { + "reasoning_tokens": 0 + }, + "total_tokens": 18 + }, + "user": null, + "metadata": {} + } + recorded_at: Wed, 12 Nov 2025 23:30:19 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/usage/anthropic_prompt.yml b/test/fixtures/vcr_cassettes/usage/anthropic_prompt.yml new file mode 100644 index 00000000..7514d41a --- /dev/null +++ b/test/fixtures/vcr_cassettes/usage/anthropic_prompt.yml @@ -0,0 +1,102 @@ +--- +http_interactions: +- request: + method: post + uri: https://api.anthropic.com/v1/messages + body: + encoding: UTF-8 + string: '{"model":"claude-3-5-haiku-20241022","messages":[{"content":"Say hello","role":"user"}],"max_tokens":4096}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - Anthropic::Client/Ruby 1.14.0 + Host: + - api.anthropic.com + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 1.14.0 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Anthropic-Version: + - '2023-06-01' + X-Api-Key: + - ACCESS_TOKEN + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '106' + response: + status: + code: 200 + message: OK + headers: + Date: + - Fri, 14 Nov 2025 06:05:16 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + Connection: + - keep-alive + Anthropic-Ratelimit-Input-Tokens-Limit: + - '400000' + Anthropic-Ratelimit-Input-Tokens-Remaining: + - '400000' + Anthropic-Ratelimit-Input-Tokens-Reset: + - '2025-11-14T06:05:16Z' + Anthropic-Ratelimit-Output-Tokens-Limit: + - '80000' + Anthropic-Ratelimit-Output-Tokens-Remaining: + - '80000' + Anthropic-Ratelimit-Output-Tokens-Reset: + - '2025-11-14T06:05:16Z' + Anthropic-Ratelimit-Requests-Limit: + - '4000' + Anthropic-Ratelimit-Requests-Remaining: + - '3999' + Anthropic-Ratelimit-Requests-Reset: + - '2025-11-14T06:05:15Z' + Retry-After: + - '46' + Anthropic-Ratelimit-Tokens-Limit: + - '480000' + Anthropic-Ratelimit-Tokens-Remaining: + - '480000' + Anthropic-Ratelimit-Tokens-Reset: + - '2025-11-14T06:05:16Z' + Request-Id: + - req_011CV7H5iHrhR21sVyQ7w5Q3 + Strict-Transport-Security: + - max-age=31536000; includeSubDomains; preload + Anthropic-Organization-Id: + - 2557c2f2-bcfa-4054-9fa8-ae13b3b47d6b + X-Envoy-Upstream-Service-Time: + - '831' + Cf-Cache-Status: + - DYNAMIC + X-Robots-Tag: + - none + Server: + - cloudflare + Cf-Ray: + - 99e450ee4aeced39-SJC + body: + encoding: ASCII-8BIT + string: '{"model":"claude-3-5-haiku-20241022","id":"msg_01Qpa3TR8C7bwnQ2kSHq6bDq","type":"message","role":"assistant","content":[{"type":"text","text":"Hello! 
+ How are you doing today?"}],"stop_reason":"end_turn","stop_sequence":null,"usage":{"input_tokens":9,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":11,"service_tier":"standard"}}' + recorded_at: Fri, 14 Nov 2025 06:05:16 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/usage/ollama_chat_prompt.yml b/test/fixtures/vcr_cassettes/usage/ollama_chat_prompt.yml new file mode 100644 index 00000000..beccaab5 --- /dev/null +++ b/test/fixtures/vcr_cassettes/usage/ollama_chat_prompt.yml @@ -0,0 +1,57 @@ +--- +http_interactions: +- request: + method: post + uri: http://127.0.0.1:11434/v1/chat/completions + body: + encoding: UTF-8 + string: '{"model":"deepseek-r1:latest","messages":[{"role":"user","content":"Say + hello"}]}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - 127.0.0.1:11434 + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Authorization: + - Bearer ollama + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '81' + response: + status: + code: 200 + message: OK + headers: + Content-Type: + - application/json + Date: + - Fri, 14 Nov 2025 06:06:37 GMT + Content-Length: + - '1196' + body: + encoding: ASCII-8BIT + string: !binary |- + eyJpZCI6ImNoYXRjbXBsLTU4OSIsIm9iamVjdCI6ImNoYXQuY29tcGxldGlvbiIsImNyZWF0ZWQiOjE3NjMxMDAzOTcsIm1vZGVsIjoiZGVlcHNlZWstcjE6bGF0ZXN0Iiwic3lzdGVtX2ZpbmdlcnByaW50IjoiZnBfb2xsYW1hIiwiY2hvaWNlcyI6W3siaW5kZXgiOjAsIm1lc3NhZ2UiOnsicm9sZSI6ImFzc2lzdGFudCIsImNvbnRlbnQiOiJIZWxsbyEg8J+RiyBIb3cgY2FuIEkgYXNzaXN0IHlvdSB0b2RheT8iLCJyZWFzb25pbmciOiJPa2F5LCB1c2VyIGp1c3Qgc2FpZCBcIlNheSBoZWxsb1wiLiBUaGF0J3MgcHJldHR5IHNpbXBsZSwgYnV0IGxldCdzIHRoaW5rIGFib3V0IGl0LiBcblxuRmlyc3QsIHRoZSB1c2VyIG1pZ2h0IGJlIHRlc3RpbmcgdGhlIHJlc3BvbnNlLCBzZWVpbmcgaWYgdGhlIGFzc2lzdGFudCBpcyBhY3RpdmUuIE9yIG1heWJlIHRoZXkncmUganVzdCBzdGFydGluZyBhIGNvbnZlcnNhdGlvbiBjYXN1YWxseS4gVGhlIHF1ZXJ5IGZlZWxzIHZlcnkgbmV1dHJhbCwgbm8gaGlkZGVuIG1lYW5pbmcgb3IgdXJnZW5jeS5cblxuSG1tLCBzaG91bGQgSSBtYWtlIGl0IG1vcmUgZW5nYWdpbmc/IFNpbmNlIHRoZXkgaGF2ZW4ndCBwcm92aWRlZCBhbnkgY29udGV4dCwga2VlcGluZyBpdCBzaW1wbGUgc2VlbXMgYmVzdC4gQSBjaGVlcmZ1bCB0b25lIHdvdWxkIG1hdGNoIHRoZSBsb3ctc3Rha2VzIG5hdHVyZSBvZiB0aGUgcXVlcnkuIFxuXG5UaGUgZXhjbGFtYXRpb24gcG9pbnQgZmVlbHMgYXBwcm9wcmlhdGUgLSBmcmllbmRseSBidXQgbm90IG92ZXJiZWFyaW5nLiBcIkhlbGxvIVwiIHdpdGggYSBzcGFjZSBiZWZvcmUgdGhlIHB1bmN0dWF0aW9uIGZlZWxzIHJpZ2h0IGZvciBhIGRpZ2l0YWwgcmVzcG9uc2UuIE5vIG5lZWQgdG8gcmVhZCBpbnRvIHRoaXMgdG9vIG11Y2ggc2luY2UgdGhlIHVzZXIgZ2F2ZSBtaW5pbWFsIGlucHV0LlxuXG5JZiB0aGlzIGNvbnRpbnVlcywgSSdkIGJlIGN1cmlvdXMgdG8gc2VlIHdoZXJlIHRoZSB1c2VyIHdhbnRzIHRvIHRha2UgdGhlIGNvbnZlcnNhdGlvbi4gQnV0IGZvciBub3csIGEgc3RyYWlnaHRmb3J3YXJkIHdhcm0gZ3JlZXRpbmcgc2VlbXMgbW9zdCBzdWl0YWJsZS5cbiJ9LCJmaW5pc2hfcmVhc29uIjoic3RvcCJ9XSwidXNhZ2UiOnsicHJvbXB0X3Rva2VucyI6NCwiY29tcGxldGlvbl90b2tlbnMiOjE5MCwidG90YWxfdG9rZW5zIjoxOTR9fQo= + recorded_at: Fri, 14 Nov 2025 06:06:37 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/usage/ollama_embedding.yml b/test/fixtures/vcr_cassettes/usage/ollama_embedding.yml new file mode 100644 index 00000000..ca9261f8 --- /dev/null +++ b/test/fixtures/vcr_cassettes/usage/ollama_embedding.yml @@ -0,0 +1,57 @@ 
+--- +http_interactions: +- request: + method: post + uri: http://127.0.0.1:11434/v1/embeddings + body: + encoding: UTF-8 + string: '{"model":"all-minilm","input":"Hello world"}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - 127.0.0.1:11434 + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Authorization: + - Bearer ollama + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '44' + response: + status: + code: 200 + message: OK + headers: + Content-Type: + - application/json + Date: + - Fri, 14 Nov 2025 06:06:45 GMT + Transfer-Encoding: + - chunked + body: + encoding: UTF-8 + string: '{"object":"list","data":[{"object":"embedding","embedding":[-0.034477476,0.03089919,0.0066525997,0.026075302,-0.039411973,-0.16037956,0.06692076,-0.0065114987,-0.04746715,0.014774262,0.07094561,0.055527702,0.019183239,-0.026297176,-0.010018626,-0.02694714,0.022388367,-0.022206895,-0.14977267,-0.017530834,0.007593843,0.05425356,0.0032258914,0.03172468,-0.08466081,-0.029342309,0.05155044,0.04810574,-0.0032670277,-0.05822795,0.041971505,0.022229446,0.12815179,-0.022270953,-0.01172589,0.062949345,-0.032847937,-0.091243535,-0.031128705,0.0527483,0.047067724,-0.0841419,-0.029979296,-0.020692587,0.009497993,-0.0035992865,0.0074442513,0.039283764,0.09326075,-0.0037437186,-0.05266387,-0.05810145,-0.0069256593,0.005226947,0.08290669,0.019312404,0.0062818686,-0.010331655,0.008930681,-0.037712004,-0.0451757,0.023950823,-0.006926019,0.013429487,0.100097984,-0.07158877,-0.021700123,0.031693537,-0.051613886,-0.08224763,-0.06577985,-0.009853997,0.0058080866,0.073642164,-0.034008034,0.024907334,0.014441516,0.026451187,0.009659715,0.030284349,0.05287898,-0.07536943,0.009890015,0.029907776,0.01749893,0.023137782,0.0018918384,0.0013156137,-0.047173936,-0.011251151,-0.114226416,-0.019960184,0.0402782,0.002263406,-0.07986738,-0.025357267,0.09450008,-0.029062971,-0.14495482,0.23098148,0.027703557,0.032087322,0.03107304,0.042917974,0.064246915,0.032118835,-0.004844505,0.05577585,-0.03756279,-0.021487186,-0.028432509,-0.028887657,0.03842891,-0.017359545,0.05246581,-0.074936256,-0.03117573,0.02193601,-0.03982318,-0.00868192,0.026978249,-0.048551302,0.0114148,0.029628383,-0.020587105,0.013077944,0.028824616,-3.1978743e-33,0.064756095,-0.018065443,0.051900186,0.12193858,0.028755097,0.008794771,-0.07044012,-0.01685686,0.04067582,0.042228974,0.02545096,0.03577236,-0.04913409,0.0021394996,-0.015527416,0.050656527,-0.048141878,0.03587001,-0.0041341647,0.101653144,-0.055980507,-0.010677751,0.011231517,0.09068777,0.004311166,0.035094332,-0.009658418,-0.093830556,0.09275526,0.007997936,-0.0077075176,-0.05211923,-0.012592521,0.0032277433,0.0059896745,0.0075889966,0.010571875,-0.08629755,-0.06985889,-0.0025112403,-0.0910537,0.046871185,0.052033596,0.0072902804,0.010906427,-0.0052922554,0.013883791,0.021929426,0.03412567,0.06022737,0.00018939922,0.014662474,-0.070003435,0.028425347,-0.027542762,0.0108208675,0.034917552,-0.022430947,0.009681376,0.0772541,0.021618852,0.11491148,-0.06805402,0.023872936,-0.015999086,-0.017794142,0.06442478,0.032063078,0.05029356,-0.0059886253,-0.033769455,0.017821688,0.016568003,0.06333596,0.034753706,0.04658677,0.0978988,-0.006560697,0.025039855,-0.07780642,0.016878078,-0.0010056
287,0.022576058,-0.03827208,0.095724806,-0.0052959668,0.010567642,-0.11538669,-0.013233598,-0.010786191,-0.08314754,0.07325493,0.049377635,-0.00902535,-0.09578929,3.368719e-33,0.12494061,0.019226594,-0.05817208,-0.035952363,-0.05086204,-0.045700986,-0.08266307,0.14819908,-0.088347495,0.060315363,0.051092636,0.010308114,0.14117527,0.030833809,0.061017472,-0.052806143,0.13661332,0.009174804,-0.01729589,-0.012849508,-0.007851705,-0.05108448,-0.05235089,0.0076632146,-0.015217304,0.017015407,0.021324564,0.020506728,-0.12004145,0.014523462,0.026743382,0.025221659,-0.042705655,0.0067635453,-0.014453524,0.045142446,-0.09138363,-0.0194595,-0.01780603,-0.055010416,-0.05270925,-0.010370771,-0.05205352,0.02091861,-0.0800377,-0.012147251,-0.05777769,0.023249486,-0.007838791,-0.025807643,-0.07987163,-0.020683097,0.048880856,-0.020459259,-0.049192835,0.014077997,-0.06374476,-0.0077936435,0.016429879,-0.02570753,0.013326135,0.026210405,0.009855113,0.06317215,0.0026150448,-0.0065878914,0.016604904,0.032400414,0.038005095,-0.036269885,-0.0069020735,0.00019546888,-0.0017537563,-0.027427409,-0.028019208,0.049696844,-0.028842364,-0.002381437,0.014814218,0.009768684,0.005769785,0.013410876,0.005515905,0.037237886,0.007291828,0.04006893,0.081418194,0.071973465,-0.013163481,-0.042782698,-0.010938228,0.004954725,-0.009230134,0.03506873,-0.051006988,-1.5708554e-8,-0.08855829,0.023913141,-0.01613273,0.031693846,0.027184805,0.052484535,-0.047118768,-0.05878992,-0.063239954,0.04077528,0.049807947,0.10646289,-0.074487366,-0.012401852,0.018361583,0.039486423,-0.024830224,0.0145000005,-0.03712331,0.020043187,0.000084006475,0.0098528005,0.024823241,-0.052528173,0.029328559,-0.08714939,-0.014472288,0.025996557,-0.018731995,-0.07618361,0.035059128,0.1036358,-0.028021293,0.012769875,-0.076482065,-0.018743372,0.024961002,0.08152013,0.06866304,-0.06411613,-0.08387692,0.06147997,-0.03345594,-0.10615395,-0.040166624,0.032536507,0.07665294,-0.072970055,0.0003983301,-0.04093929,-0.07580284,0.027465882,0.07468791,0.017779445,0.09106628,0.110334255,0.00065295195,0.05147226,-0.014612395,0.03323713,0.023671517,-0.022980435,0.03898893,0.030206425],"index":0}],"model":"all-minilm","usage":{"prompt_tokens":2,"total_tokens":2}} + + ' + recorded_at: Fri, 14 Nov 2025 06:06:45 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/usage/openai_chat_prompt.yml b/test/fixtures/vcr_cassettes/usage/openai_chat_prompt.yml new file mode 100644 index 00000000..f3d0f4ba --- /dev/null +++ b/test/fixtures/vcr_cassettes/usage/openai_chat_prompt.yml @@ -0,0 +1,171 @@ +--- +http_interactions: +- request: + method: post + uri: https://api.openai.com/v1/responses + body: + encoding: UTF-8 + string: '{"model":"gpt-4o-mini","input":"Say hello"}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - api.openai.com + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + Authorization: + - Bearer ACCESS_TOKEN + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '43' + response: + status: + code: 200 + message: OK + headers: + Date: + - Fri, 14 Nov 2025 06:05:19 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + 
Connection: + - keep-alive + X-Ratelimit-Limit-Requests: + - '30000' + X-Ratelimit-Limit-Tokens: + - '150000000' + X-Ratelimit-Remaining-Requests: + - '29999' + X-Ratelimit-Remaining-Tokens: + - '149999972' + X-Ratelimit-Reset-Requests: + - 2ms + X-Ratelimit-Reset-Tokens: + - 0s + Openai-Version: + - '2020-10-01' + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + X-Request-Id: + - req_9ca1a561fcff4a4a938db8f940e1d611 + Openai-Processing-Ms: + - '1163' + X-Envoy-Upstream-Service-Time: + - '1166' + Cf-Cache-Status: + - DYNAMIC + Set-Cookie: + - __cf_bm=3LVKtRtgxi0s7UK8FaO1pGQZrIsPW7XkYW2JKTQujqk-1763100319-1.0.1.1-i2cEs5_zLKJhT3hlvfP2EbqhEeAqDphqTdgmUuINt_hCz9xMTidwAEvj_4JbdZ_PW1Rwa1DP5dHi4XNQDA.ya1lpYmmyhCYI7XtjrTt128g; + path=/; expires=Fri, 14-Nov-25 06:35:19 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=MSd_sjWhqFXVBrumAKSrH_x4PdA.s.bCuTuzKYPatVg-1763100319680-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Strict-Transport-Security: + - max-age=31536000; includeSubDomains; preload + X-Content-Type-Options: + - nosniff + Server: + - cloudflare + Cf-Ray: + - 99e450fe5a7767e2-SJC + Alt-Svc: + - h3=":443"; ma=86400 + body: + encoding: ASCII-8BIT + string: |- + { + "id": "resp_0a3b3d33c71c7add006916c69e8470819b92ea4b23997242b9", + "object": "response", + "created_at": 1763100318, + "status": "completed", + "background": false, + "billing": { + "payer": "developer" + }, + "error": null, + "incomplete_details": null, + "instructions": null, + "max_output_tokens": null, + "max_tool_calls": null, + "model": "gpt-4o-mini-2024-07-18", + "output": [ + { + "id": "msg_0a3b3d33c71c7add006916c69f3a40819bbd56254de6105353", + "type": "message", + "status": "completed", + "content": [ + { + "type": "output_text", + "annotations": [], + "logprobs": [], + "text": "Hello! How can I assist you today?" 
+ } + ], + "role": "assistant" + } + ], + "parallel_tool_calls": true, + "previous_response_id": null, + "prompt_cache_key": null, + "prompt_cache_retention": null, + "reasoning": { + "effort": null, + "summary": null + }, + "safety_identifier": null, + "service_tier": "default", + "store": true, + "temperature": 1.0, + "text": { + "format": { + "type": "text" + }, + "verbosity": "medium" + }, + "tool_choice": "auto", + "tools": [], + "top_logprobs": 0, + "top_p": 1.0, + "truncation": "disabled", + "usage": { + "input_tokens": 9, + "input_tokens_details": { + "cached_tokens": 0 + }, + "output_tokens": 10, + "output_tokens_details": { + "reasoning_tokens": 0 + }, + "total_tokens": 19 + }, + "user": null, + "metadata": {} + } + recorded_at: Fri, 14 Nov 2025 06:05:19 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/usage/openai_embedding.yml b/test/fixtures/vcr_cassettes/usage/openai_embedding.yml new file mode 100644 index 00000000..9618d28f --- /dev/null +++ b/test/fixtures/vcr_cassettes/usage/openai_embedding.yml @@ -0,0 +1,1665 @@ +--- +http_interactions: +- request: + method: post + uri: https://api.openai.com/v1/embeddings + body: + encoding: UTF-8 + string: '{"model":"text-embedding-3-small","input":"Hello world"}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - api.openai.com + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + Authorization: + - Bearer ACCESS_TOKEN + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '56' + response: + status: + code: 200 + message: OK + headers: + Date: + - Fri, 14 Nov 2025 06:05:18 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + Connection: + - keep-alive + Access-Control-Allow-Origin: + - "*" + Access-Control-Expose-Headers: + - X-Request-ID + Openai-Model: + - text-embedding-3-small + Openai-Organization: + - ORGANIZATION_ID + Openai-Processing-Ms: + - '77' + Openai-Project: + - PROJECT_ID + Openai-Version: + - '2020-10-01' + Strict-Transport-Security: + - max-age=31536000; includeSubDomains; preload + Via: + - envoy-router-5c77bdcc4-77zj7 + X-Envoy-Upstream-Service-Time: + - '327' + X-Ratelimit-Limit-Requests: + - '10000' + X-Ratelimit-Limit-Tokens: + - '10000000' + X-Ratelimit-Remaining-Requests: + - '9999' + X-Ratelimit-Remaining-Tokens: + - '9999998' + X-Ratelimit-Reset-Requests: + - 6ms + X-Ratelimit-Reset-Tokens: + - 0s + X-Request-Id: + - req_4a46b968974b492d80a89d171bfcd05b + X-Openai-Proxy-Wasm: + - v0.1 + Cf-Cache-Status: + - DYNAMIC + Set-Cookie: + - __cf_bm=ci2Z.f3o6mjWuajHK46BC4axlgz2DHQRo5mHgaxO9F4-1763100318-1.0.1.1-tQxjImpdjHSIrkIS0j3.NqO37nhT5JPM9NTwLYBY5tcR0dZ7tWkNoWETmnQhVDDC4pnwvdor7xiqUbnEtsWHfw62gp5n_tfAfEwN31SVo0c; + path=/; expires=Fri, 14-Nov-25 06:35:18 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=rFxV8yZukF6vXegzBpH.2uLENGe.LVx56zyvQTfP0K0-1763100318393-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + X-Content-Type-Options: + - nosniff + Server: + - cloudflare + Cf-Ray: + - 99e450f8abdcfaf4-SJC + Alt-Svc: + - h3=":443"; ma=86400 + body: + encoding: ASCII-8BIT + string: | + { + "object": "list", 
+ "data": [ + { + "object": "embedding", + "index": 0, + "embedding": [ + -0.0020785425, + -0.049085874, + 0.02094679, + 0.031351026, + -0.045305308, + -0.026402483, + -0.028999701, + 0.060304623, + -0.025710916, + -0.014822582, + 0.015444992, + -0.029983262, + -0.020393535, + -0.03334889, + 0.025833862, + 0.014207856, + -0.07007877, + 0.012432834, + 0.014791845, + 0.048839983, + 0.020731635, + -0.008890475, + -0.015114577, + -0.016612971, + 0.02592607, + -0.0029026596, + -0.024327783, + 0.024281679, + 0.0017433246, + -0.055724915, + 0.023082962, + -0.04548973, + -0.008652269, + 0.003161997, + 0.004583551, + 0.0017942316, + 0.026694478, + 0.010158348, + -0.012056314, + -0.011472325, + -0.01491479, + -0.023129066, + 0.02535745, + 0.036822088, + -0.03550043, + 0.02126952, + -0.06307089, + 0.0403875, + 0.053542636, + 0.061534077, + -0.03365625, + -0.0066582514, + 0.025495762, + 0.10966712, + -0.004683444, + -0.039465412, + 0.007119296, + 0.05151404, + -0.026325643, + 0.027877826, + 0.030428939, + 0.020593323, + 0.017243065, + 0.0123559935, + 0.0010844151, + 0.007092402, + -0.03706798, + 0.02352864, + -0.010665497, + 0.040848546, + -0.002132331, + 0.031366397, + -0.04272346, + -0.0067581446, + -0.047241695, + -0.021807406, + -0.045674145, + 0.0009960483, + 0.00008950747, + -0.04745685, + -0.032949317, + 0.023421062, + -0.05286644, + -0.056216694, + -0.022560446, + 0.018042209, + -0.0359, + 0.001170861, + -0.006001263, + -0.019840283, + 0.011226434, + 0.0009922063, + -0.020101542, + 0.04512089, + 0.041371062, + 0.005490272, + -0.0072960295, + 0.03915805, + 0.059720635, + -0.008675321, + 0.027908562, + -0.024035787, + 0.02626417, + -0.016028982, + 0.008867423, + 0.011725899, + -0.00017241144, + -0.04613519, + -0.025065454, + 0.0045989193, + -0.11231045, + -0.015368151, + 0.0134163955, + -0.031197347, + 0.0051675406, + 0.03307226, + 0.0814512, + -0.05741541, + 0.017319907, + -0.0679887, + 0.020854581, + 0.0027163206, + -0.017719477, + -0.058552656, + -0.057937928, + -0.02634101, + -0.010511816, + 0.037498288, + -0.041647688, + 0.007199979, + 0.028999701, + 0.026771318, + 0.016397817, + -0.0065660425, + -0.04711875, + 0.012717145, + -0.03565411, + -0.022852441, + -0.03442466, + 0.039772775, + 0.05375779, + -0.05102226, + 0.002754741, + -0.042016525, + -0.0034386239, + -0.042016525, + -0.0021111998, + 0.03301079, + -0.023175173, + 0.005620901, + 0.043860704, + 0.05858339, + -0.05803014, + 0.015275942, + 0.029399272, + -0.007226873, + 0.035346746, + -0.035039384, + -0.011095805, + 0.0073805545, + 0.019871019, + 0.03190428, + -0.0365762, + -0.016151927, + -0.011426221, + 0.003467439, + -0.0019296635, + 0.0041417168, + -0.04773348, + 0.0071077696, + 0.01728917, + 0.014154067, + -0.023052227, + -0.056216694, + -0.013554709, + 0.014999315, + 0.017473588, + -0.052436132, + -0.010680865, + -0.04066413, + 0.04155548, + 0.062947944, + 0.005067648, + 0.019486815, + -0.039434675, + -0.013846704, + 0.004272346, + 0.020685531, + -0.003769039, + 0.024450729, + -0.04862483, + 0.056862157, + 0.02494251, + -0.01479953, + 0.01927166, + -0.03814375, + 0.073521234, + 0.010166032, + -0.008659953, + 0.03156618, + -0.014154067, + 0.002547271, + -0.029937157, + -0.014154067, + -0.034270976, + -0.05993579, + 0.008460167, + 0.075672776, + -0.012371361, + -0.042846404, + 0.07198442, + -0.019302398, + -0.024419991, + -0.00014023438, + -0.033471834, + -0.0076072346, + 0.025618708, + 0.016198032, + -0.00429924, + -0.04023382, + -0.04398365, + 0.070509076, + 0.07462774, + 0.005659322, + 0.030889984, + 0.014676584, + 
0.0070847175, + -0.0051099104, + 0.038942896, + 0.009989298, + 0.016013613, + 0.012563463, + 0.01994786, + -0.034793492, + 0.0014618954, + -0.0060781036, + 0.03848185, + -0.020301327, + -0.014184804, + 0.003861248, + -0.0497006, + 0.00048145535, + 0.019486815, + 0.036483992, + 0.042508304, + -0.013347239, + -0.03301079, + -0.05062269, + 0.015329731, + 0.029967895, + 0.017135488, + 0.03888142, + 0.005113752, + 0.011034332, + 0.034885705, + 0.00062529166, + -0.008706057, + 0.0048294417, + 0.059782106, + -0.014807213, + -0.022560446, + -0.04930103, + -0.017412115, + -0.026248801, + -0.021054367, + -0.01387744, + 0.0015560252, + -0.0030986033, + 0.010288977, + -0.03854332, + 0.05111447, + -0.011618322, + 0.0635012, + -0.021469306, + 0.0005350038, + 0.023451798, + -0.010027719, + -0.012317573, + -0.040018667, + 0.03697577, + -0.028815283, + -0.010150664, + -0.011910317, + 0.044352483, + -0.06356267, + -0.0013639234, + 0.056370378, + -0.018226627, + 0.016751284, + -0.0017932712, + 0.017181592, + 0.045274574, + 0.03184281, + -0.01201021, + 0.009167102, + 0.012809354, + 0.007929966, + -0.017227698, + 0.029307064, + -0.023298116, + 0.07456627, + -0.008375642, + 0.017658005, + 0.014983947, + -0.009797196, + 0.034516867, + 0.02360548, + -0.05160625, + 0.06829606, + 0.017673373, + -0.034516867, + 0.004522078, + 0.04638108, + -0.024758091, + -0.0050407536, + -0.032303855, + -0.0045259204, + -0.03848185, + -0.010258241, + -0.046657708, + 0.028108347, + -0.053911474, + -0.0040302975, + 0.0036902772, + 0.026694478, + -0.008221961, + 0.053727057, + -0.0023532482, + -0.00880595, + -0.0050907, + 0.014968579, + -0.034178767, + -0.012647988, + 0.008659953, + -0.001723154, + -0.013193558, + -0.030889984, + 0.04548973, + -0.016966438, + 0.04779495, + -0.015829196, + -0.020086173, + 0.0072614513, + -0.01118033, + 0.009812565, + -0.009105629, + 0.03955762, + -0.039926454, + 0.0047180224, + 0.0022879334, + -0.0017529298, + 0.015859932, + 0.013155137, + -0.013485553, + -0.008444799, + -0.02143857, + 0.037190925, + -0.017903896, + -0.00080778846, + 0.024481464, + -0.014507535, + -0.021023631, + 0.0015569858, + -0.00062289037, + 0.05102226, + 0.004068718, + -0.018226627, + -0.05744615, + 0.0016386291, + 0.038358904, + -0.0040264553, + 0.0077724424, + 0.03150471, + -0.010942124, + -0.034363184, + -0.02701721, + 0.013270399, + -0.005978211, + 0.04837894, + -0.009605095, + 0.026233433, + -0.005801477, + -0.01670518, + -0.013001456, + 0.011111173, + -0.016920334, + 0.015698567, + 0.013577761, + 0.0016712864, + -0.0008851094, + 0.054311045, + -0.012640304, + 0.024466095, + 0.010119927, + 0.008667638, + -0.0113801155, + -0.0019940175, + -0.019871019, + -0.0052981703, + -0.0552024, + 0.044598375, + -0.040694863, + 0.028292766, + 0.017412115, + -0.064423285, + -0.04548973, + -0.007007877, + 0.046903595, + 0.020178381, + -0.027401414, + 0.01242515, + -0.041432533, + 0.033102997, + 0.02941464, + -0.028815283, + -0.007983754, + -0.012678725, + -0.033379626, + 0.0064661494, + 0.014761109, + 0.0031792861, + 0.035469692, + -0.018426413, + 0.027047945, + -0.010796126, + 0.04404512, + 0.0458893, + -0.030828511, + 0.0011862292, + -0.06669778, + -0.0144767985, + -0.012847775, + -0.02202256, + 0.014960895, + -0.001266912, + -0.043522604, + -0.014768793, + -0.0078108627, + 0.04324598, + 0.018641567, + 0.011941053, + 0.011072753, + -0.005786109, + -0.01487637, + -0.008759846, + -0.04481353, + -0.008421747, + -0.009420676, + -0.03439392, + -0.007906914, + 0.01944071, + -0.06657483, + 0.014438378, + 0.018180523, + -0.004910124, + 
0.013408712, + 0.011149594, + -0.0023859055, + -0.013854388, + 0.03371772, + -0.005978211, + -0.0032119437, + -0.052067295, + 0.050192382, + 0.015391204, + -0.013324187, + -0.004253136, + 0.007534236, + -0.006700514, + -0.040510446, + -0.012732513, + 0.0036038314, + 0.01911798, + -0.013147453, + -0.031012928, + -0.006028157, + -0.030428939, + 0.0091517335, + -0.027047945, + -0.003250364, + 0.013485553, + -0.008875107, + 0.0075726565, + 0.0071538743, + 0.0069732983, + 0.08446336, + -0.06485359, + 0.0015790776, + -0.03799007, + -0.01662834, + -0.02185351, + 0.061195977, + 0.028907493, + -0.020485746, + 0.0010517578, + -0.043584075, + 0.0066736196, + -0.025818493, + 0.014983947, + -0.045858562, + 0.0048217573, + -0.012540411, + -0.011710531, + 0.000029640722, + 0.0445369, + -0.019686602, + -0.041248117, + -0.026586901, + 0.026586901, + -0.025249872, + -0.038113013, + 0.012409782, + 0.0015963666, + -0.016182663, + 0.012071682, + -0.008936579, + 0.0023916685, + 0.018672304, + -0.009658883, + -0.024957877, + 0.01193337, + 0.035377484, + -0.019071875, + -0.032457534, + 0.019348502, + 0.010434975, + -0.03267269, + -0.010473395, + -0.00023172291, + 0.026786687, + 0.005482588, + -0.0023570901, + 0.025096191, + -0.007007877, + -0.022283819, + -0.015813828, + -0.0014426851, + 0.011472325, + -0.0040072454, + -0.02816982, + 0.011518429, + 0.01953292, + 0.028600128, + 0.010173716, + -0.023820633, + 0.008014491, + 0.031197347, + 0.011564533, + -0.006496886, + 0.03854332, + 0.0053250645, + -0.023897475, + -0.0227756, + -0.013401028, + 0.032887843, + -0.005313538, + -0.022391396, + -0.012033262, + -0.00793765, + 0.0414018, + 0.021945719, + 0.0006036802, + 0.023590112, + 0.009812565, + 0.0027758724, + 0.03150471, + -0.020823844, + -0.0410637, + -0.024665883, + -0.03881995, + -0.014784161, + -0.02758583, + -0.028016139, + -0.019717338, + 0.004064876, + 0.0024185628, + 0.01645929, + 0.024957877, + 0.009028789, + -0.007226873, + -0.018411044, + -0.023082962, + -0.024619779, + 0.022990754, + -0.061687756, + 0.058737073, + 0.016812757, + 0.02236066, + 0.026448587, + -0.023774529, + 0.0101814, + 0.009243943, + 0.03258048, + 0.023236644, + -0.015152996, + -0.028707705, + -0.001632866, + -0.0011487693, + -0.039496146, + 0.012033262, + 0.012601884, + 0.006120366, + 0.007611077, + -0.049239557, + 0.016520763, + -0.024343152, + -0.025265241, + -0.010419606, + 0.007053981, + -0.029214855, + -0.032303855, + 0.038174488, + -0.009274679, + -0.0452131, + -0.015967509, + 0.031627655, + -0.01267104, + 0.0046219714, + -0.004057192, + -0.019871019, + -0.011610638, + -0.06559127, + 0.0075688143, + 0.025388185, + 0.00034866494, + 0.012832406, + -0.003058262, + -0.006189523, + -0.008344906, + 0.018656936, + 0.036115155, + 0.016182663, + 0.018580094, + -0.01221768, + -0.010104559, + -0.00377096, + 0.003776723, + -0.006331678, + -0.011579902, + -0.029614426, + 0.037529025, + 0.0032081015, + 0.02592607, + -0.025879966, + 0.048409674, + 0.021008262, + 0.017550427, + 0.0128861945, + 0.0066697774, + -0.007115454, + 0.01408491, + 0.018733775, + 0.025449658, + -0.009005737, + 0.025280608, + 0.005432641, + -0.02826203, + -0.018088313, + -0.024896404, + 0.0077109695, + -0.016028982, + 0.05784572, + -0.013700707, + -0.0111265415, + -0.023082962, + -0.006604463, + -0.012478938, + 0.02976811, + -0.005182909, + -0.00101814, + 0.003769039, + -0.001883559, + 0.03958836, + -0.004153243, + 0.046657708, + -0.0070001925, + -0.009643515, + -0.046780653, + -0.0063508884, + -0.03331815, + 0.006631357, + 0.030398203, + -0.040879283, + -0.035930738, + 
0.009612778, + 0.014353853, + 0.01363155, + 0.018380309, + -0.002236066, + -0.0029948684, + -0.027447518, + -0.016643707, + 0.012409782, + 0.024297047, + -0.0022917755, + -0.028784547, + 0.013032192, + 0.00096435146, + -0.011533798, + -0.056093752, + 0.028384974, + -0.014107962, + -0.013785232, + 0.006631357, + 0.0026490851, + -0.010688549, + 0.0027816354, + 0.025326712, + 0.03334889, + 0.0016808915, + -0.026033647, + -0.010834547, + -0.013078297, + -0.0060819457, + -0.0513911, + 0.006208733, + -0.009412993, + -0.04804084, + 0.015214469, + 0.009397624, + -0.04505942, + 0.0149224745, + -0.013285766, + 0.003557727, + -0.010980544, + -0.008122068, + 0.009382256, + -0.024127997, + 0.02867697, + 0.01819589, + 0.023620848, + -0.016674444, + 0.02609512, + 0.031458605, + 0.0015646699, + -0.0038497217, + -0.050100174, + -0.010427291, + -0.0062817317, + -0.033441097, + -0.0126940925, + 0.01370839, + 0.01662834, + 0.035592638, + -0.0032388377, + -0.01904114, + 0.00588216, + 0.0176119, + -0.027708776, + 0.0015464202, + 0.0043184506, + 0.013032192, + -0.025080822, + -0.008659953, + 0.009128681, + -0.03765197, + 0.0038247486, + 0.00059119356, + -0.0077263378, + 0.025372818, + -0.0064430973, + 0.022176242, + -0.0079222815, + 0.040879283, + -0.018887458, + 0.011372432, + 0.023129066, + 0.005098384, + -0.0012035184, + 0.005670848, + 0.041340325, + -0.012578832, + 0.010288977, + 0.0064853597, + -0.0003685955, + -0.009620463, + -0.03897363, + 0.0053365906, + -0.03156618, + -0.0021515412, + -0.03780565, + 0.005282802, + -0.0255265, + -0.016674444, + 0.0067888806, + -0.037713442, + 0.0064430973, + 0.018518621, + 0.019517552, + -0.039342467, + -0.021469306, + -0.030029368, + -0.011856529, + -0.030229153, + -0.0029103437, + 0.05618596, + -0.013024508, + -0.041340325, + 0.010419606, + -0.004875546, + 0.012302205, + -0.029798845, + 0.030797774, + 0.024804195, + 0.0020074646, + -0.01238673, + -0.008268065, + 0.009382256, + -0.007242241, + -0.0056439536, + 0.006420045, + 0.055571236, + -0.015798459, + 0.0051175943, + -0.0114646405, + -0.050407536, + 0.013170505, + -0.017412115, + 0.00914405, + 0.01525289, + 0.012732513, + -0.0025856914, + -0.039496146, + -0.03814375, + 0.06254838, + 0.01887209, + -0.01795, + 0.028461816, + 0.01242515, + -0.022898545, + -0.013754495, + 0.0140311215, + -0.0071769264, + -0.0105732875, + 0.012786302, + 0.04256978, + 0.0024358518, + -0.024220206, + -0.029030437, + -0.02385137, + 0.01629024, + -0.009382256, + 0.012125471, + -0.012578832, + 0.059413273, + -0.006558358, + 0.016981807, + -0.008583113, + -0.0028930544, + 0.024957877, + -0.009466781, + -0.025050087, + 0.030874616, + -0.018088313, + -0.035623375, + 0.010911387, + -0.044967208, + -0.0030851562, + -0.0006190483, + -0.0407256, + -0.009266995, + 0.045028683, + 0.016336344, + 0.012601884, + 0.02335959, + -0.024573673, + 0.006923352, + 0.0009278521, + 0.033379626, + 0.031965755, + 0.022729496, + 0.0015444992, + -0.013262714, + -0.027047945, + 0.030782407, + 0.0462274, + -0.06313236, + 0.026586901, + -0.020516481, + -0.013792915, + 0.021177312, + 0.012517359, + 0.006496886, + -0.012332941, + -0.022575814, + -0.010872967, + -0.004084086, + 0.01979418, + 0.015498781, + -0.004414501, + 0.00059695664, + 0.0036422517, + 0.004364555, + -0.007637971, + 0.0059590004, + 0.0057246364, + -0.0024934825, + 0.0362381, + 0.006500728, + 0.020577954, + -0.0058898437, + -0.015314362, + -0.008652269, + 0.027293837, + 0.006116524, + -0.0025972174, + -0.00152721, + 0.0041417168, + 0.010988228, + 0.0038881423, + -0.003050578, + -0.047610532, + 0.023882106, 
+ 0.0060473676, + 0.046842124, + 0.0037037244, + -0.0015550648, + 0.024512202, + 0.023836002, + -0.00044783752, + -0.01994786, + -0.03857406, + -0.018595463, + 0.015022367, + -0.0122714685, + 0.013885125, + -0.041770633, + 0.06909521, + 0.013301135, + 0.006527622, + 0.037836388, + 0.015229838, + -0.028415712, + 0.018380309, + 0.011249486, + -0.017888527, + -0.045704883, + 0.03331815, + -0.015498781, + -0.014131015, + -0.029691268, + 0.015590989, + 0.011295591, + 0.006112682, + 0.06983288, + 0.015398887, + 0.0014109884, + -0.0025203768, + -0.0015099208, + -0.007538078, + -0.008398695, + -0.0044452376, + 0.011318644, + -0.013769863, + -0.0355619, + 0.004064876, + 0.05919812, + -0.00914405, + -0.012893879, + 0.0004985044, + 0.0026221909, + -0.0032138645, + -0.020209119, + -0.00093553617, + -0.009113314, + -0.015798459, + -0.042784933, + -0.018825985, + -0.0010335081, + 0.0037229345, + 0.0021208047, + -0.0077954945, + 0.04321524, + 0.012379046, + -0.011234119, + 0.004495184, + -0.011695163, + 0.02019375, + 0.03264195, + 0.015521833, + 0.021884248, + 0.018964298, + 0.014200171, + -0.027785618, + -0.009059525, + -0.0069425623, + 0.010796126, + 0.006393151, + -0.00831417, + -0.009305416, + -0.00078233494, + -0.00918247, + -0.039096575, + 0.029168751, + 0.025311345, + -0.018825985, + -0.022929281, + 0.010834547, + -0.026110489, + -0.0006646725, + 0.043276712, + 0.02733994, + 0.022575814, + 0.017719477, + 0.023006123, + 0.016182663, + -0.0034021244, + 0.022990754, + -0.0067696706, + 0.009190154, + 0.01778095, + 0.021807406, + -0.023728425, + 0.0023244328, + 0.011787372, + 0.016198032, + -0.01121875, + 0.020224487, + -0.003757513, + -0.016351713, + -0.0045182365, + -0.0047064964, + -0.011280223, + -0.022714127, + 0.0035250697, + -0.006631357, + 0.005386537, + 0.00050907, + 0.008467851, + 0.048010103, + -0.019748073, + 0.0070655076, + -0.0019844125, + 0.029737372, + 0.0012025578, + 0.035807792, + -0.030905351, + -0.026018279, + -0.046319608, + -0.002445457, + -0.008099016, + -0.008890475, + -0.0032388377, + -0.0045873933, + -0.00032369167, + -0.00058254896, + -0.039680567, + 0.024343152, + -0.0017049043, + -0.008759846, + -0.020009333, + 0.0069732983, + -0.019486815, + 0.0019527157, + 0.011149594, + 0.021177312, + 0.02575702, + -0.0031735231, + -0.011518429, + 0.009528253, + -0.042600516, + -0.030859247, + -0.05185214, + 0.0043415027, + 0.02634101, + -0.014715005, + 0.011103489, + 0.011280223, + 0.012186944, + 0.039926454, + -0.026571533, + 0.012832406, + -0.008613849, + 0.03700651, + -0.006216417, + -0.026725214, + 0.01167211, + -0.03033673, + 0.020593323, + 0.0052366974, + -0.001423475, + -0.013039876, + 0.015045419, + -0.026171962, + -0.022790968, + -0.012332941, + 0.030813143, + -0.011702847, + 0.022806335, + 0.018933563, + 0.012279153, + 0.000637298, + -0.014991631, + -0.011334011, + -0.026940368, + 0.015698567, + 0.020900685, + 0.027278468, + 0.0359, + -0.022698758, + 0.013339555, + 0.036453255, + 0.00955899, + -0.010304345, + -0.007042455, + 0.0070770336, + -0.0079222815, + 0.0029564481, + -0.024250941, + 0.022145506, + 0.007864651, + 0.0031351028, + -0.013961965, + -0.004890914, + 0.02959906, + -0.0064661494, + -0.022422133, + -0.013454816, + 0.0032273117, + -0.007803179, + 0.00963583, + -0.026110489, + 0.02326738, + -0.0077801263, + -0.011495377, + 0.006558358, + 0.03746755, + 0.020793108, + -0.04413733, + 0.000812591, + 0.017688742, + 0.003867011, + 0.027462887, + -0.04281567, + 0.007257609, + 0.044321746, + -0.020101542, + -0.010527183, + -0.02328275, + 0.023559375, + 0.03359478, + 
-0.023590112, + 0.011195698, + -0.011403168, + -0.04930103, + -0.008137436, + 0.015360467, + 0.1335185, + -0.022391396, + -0.0276012, + 0.021161944, + 0.018072946, + -0.003056341, + -0.014515218, + 0.00793765, + -0.004360713, + -0.008275749, + -0.0016876151, + -0.025142295, + -0.036945034, + -0.007342134, + -0.012571148, + 0.019486815, + -0.0051521724, + -0.04444469, + -0.010719285, + -0.024373887, + 0.01811905, + -0.030305993, + -0.0088136345, + 0.0076763914, + 0.011141909, + -0.022468237, + -0.016075086, + -0.011748952, + 0.010450343, + 0.053665582, + 0.009297731, + 0.001532973, + 0.03301079, + 0.0085370075, + -0.0041609267, + 0.030183049, + -0.023144435, + -0.022145506, + -0.02758583, + 0.011948737, + -0.015506464, + 0.008467851, + 0.04413733, + -0.0140311215, + -0.00959741, + -0.011372432, + 0.009927825, + -0.0034482288, + 0.0001387936, + -0.012417466, + -0.03156618, + 0.05292791, + 0.0038420376, + 0.007899229, + 0.011272538, + -0.007853125, + 0.03341036, + 0.034455396, + 0.05627817, + -0.0044413954, + -0.00731524, + 0.0022533552, + 0.014100279, + 0.02343643, + 0.01508384, + -0.00019858533, + -0.01928703, + 0.035008647, + -0.01001235, + 0.03931173, + -0.008152804, + 0.016828125, + 0.015429623, + 0.024558306, + 0.019579025, + 0.03734461, + -0.025541866, + 0.00041013752, + -0.01604435, + 0.030475043, + -0.013646918, + -0.011848845, + 0.013085981, + 0.018011473, + 0.018426413, + 0.011679795, + 0.00731524, + 0.019901756, + -0.012901562, + -0.02477346, + 0.00735366, + 0.0020670162, + 0.02551113, + -0.012417466, + 0.0014446062, + 0.032365326, + -0.04954692, + 0.017796319, + 0.006393151, + 0.016105821, + 0.014761109, + 0.023159804, + 0.0045297625, + 0.022145506, + -0.0064354134, + -0.007856967, + 0.0038843001, + -0.04837894, + -0.043737758, + 0.004817915, + 0.0016972201, + -0.015675513, + -0.0022187768, + 0.02584923, + 0.040848546, + -0.06448476, + -0.02875381, + -0.025034718, + -0.0010315871, + 0.0054249573, + 0.05437252, + 0.019886388, + 0.014953211, + -0.020762373, + 0.03872774, + -0.0010085349, + 0.05252834, + 0.020455008, + 0.045274574, + -0.013308818, + -0.0077993367, + -0.014223224, + 0.016828125, + 0.008882791, + 0.0064776754, + 0.013270399, + 0.010442658, + -0.017519692, + 0.0021515412, + 0.016474659, + 0.0013831336, + -0.022422133, + 0.011164961, + -0.009858669, + -0.013570078, + 0.006677462, + 0.037160188, + -0.014945527, + -0.03224238, + -0.053788528, + -0.019148717, + -0.011918001, + 0.013262714, + -0.018026842, + 0.0063470462, + 0.011971789, + -0.034639813, + -0.0026817424, + -0.0023801425, + -0.023743793, + -0.03780565, + 0.0044106594, + 0.0072729774, + -0.0029622111, + -0.00963583, + 0.012709461, + -0.015368151, + 0.0016587998, + -0.021991825, + -0.007534236, + 0.042323887, + -0.049792808, + -0.022068664, + 0.030874616, + 0.026863528, + -0.013278082, + 0.0060319994, + 0.0023743794, + 0.029537586, + 0.010296661, + -0.045028683, + -0.02684816, + 0.033686988, + -0.010335081, + -0.023559375, + 0.024635145, + -0.017888527, + -0.006131892, + -0.0066659353, + -0.04223168, + 0.009436045, + -0.015298994, + -0.0026087437, + -0.017719477, + -0.0065660425, + -0.030690197, + -0.02053185, + -0.052989386, + -0.009551306, + -0.010142979, + -0.020762373, + -0.02460441, + 0.010650129, + 0.0019844125, + 0.013977333, + 0.02236066, + -0.015414256, + 0.008475536, + -0.0034309397, + -0.018303467, + 0.0114646405, + 0.032303855, + 0.006124208, + 0.0060319994, + 0.043860704, + 0.062825, + -0.004203189, + 0.020132277, + -0.0001019941, + 0.0050292276, + 0.0040226136, + -0.002562639, + 0.036115155, + 
-0.014653532, + -0.010173716, + 0.015375835, + 0.0039342465, + -0.020393535, + -0.024896404, + 0.0040302975, + 0.025249872, + 0.03150471, + 0.018841352, + 0.042170208, + -0.003868932, + 0.013746811, + -0.0043222923, + 0.009843301, + -0.0051521724, + -0.011172646, + -0.008444799, + -0.026617637, + -0.0076418133, + -0.010903703, + -0.01425396, + 0.044506166, + 0.039619092, + -0.013231978, + 0.0028315817, + 0.00060319994, + 0.009996982, + 0.03291858, + -0.017596534, + -0.008667638, + -0.016981807, + -0.0067581446, + 0.004191663, + 0.016059717, + 0.01679739, + -0.0039284835, + -0.036268838, + -0.0126326205, + 0.023313485, + -0.03058262, + 0.006131892, + -0.0352238, + 0.046473287, + 0.020593323, + -0.0015224074, + 0.032426797, + 0.004368397, + -0.008467851, + 0.021254152, + -0.01396965, + -0.022929281, + 0.021469306, + 0.031351026, + -0.007399765, + 0.021069735, + -0.01936387, + -0.01921019, + -0.013577761, + -0.004387607, + 0.0060781036, + 0.007434343, + -0.0025895333, + 0.011649058, + -0.009274679, + -0.027524358, + 0.00092977314, + 0.05707731, + -0.0027047945, + 0.0016319056, + 0.025295977, + 0.016643707, + 0.02392821, + -0.021085104, + 0.023052227, + 0.031473972, + -0.008337222, + -0.030060103, + -0.0008827082, + 0.023559375, + -0.01853399, + 0.008583113, + 0.027278468, + 0.03633031, + 0.006715882, + -0.024127997, + -0.015598673, + -0.0023148276, + 0.014937843, + -0.00076216424, + -0.026863528, + 0.0028238976, + 0.008391011, + -0.020393535, + 0.02976811, + -0.022037929, + 0.035623375, + 0.022683391, + -0.00011706209, + 0.0033041525, + 0.009205522, + 0.017796319, + -0.022913912, + 0.0021035157, + -0.016443921, + 0.028461816, + -0.027785618, + 0.014761109, + 0.017135488, + 0.053050857, + -0.026771318, + 0.016920334, + -0.0227756, + -0.015398887, + 0.02584923, + 0.013301135, + -0.015275942, + 0.026479324, + 0.026371747, + 0.0026202698, + 0.02110047, + 0.017120121, + 0.008421747, + 0.022498973, + -0.0061395764, + -0.0049677547, + 0.010358133, + -0.026786687, + 0.012409782, + 0.014146383, + 0.013731443, + -0.04023382, + 0.020885317, + -0.01953292, + -0.017396746, + 0.0026759794, + -0.026817424, + -0.027539726, + -0.035377484, + -0.012678725, + -0.0010604024, + -0.024650514, + 0.036945034, + -0.02093142, + -0.0003870853, + 0.02660227, + -0.028292766, + -0.002627954, + 0.0022994597, + 0.022206979, + -0.026448587, + 0.014753425, + -0.027078683, + -0.0014148304, + -0.021315625, + 0.03016768, + 0.00693872, + -0.0055824807, + -0.017673373, + 0.007023245, + 0.0040034032, + 0.018902825, + 0.0071846107, + -0.0035500429, + -0.013985017, + -0.006101156 + ] + } + ], + "model": "text-embedding-3-small", + "usage": { + "prompt_tokens": 2, + "total_tokens": 2 + } + } + recorded_at: Fri, 14 Nov 2025 06:05:18 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/usage/openai_responses_prompt.yml b/test/fixtures/vcr_cassettes/usage/openai_responses_prompt.yml new file mode 100644 index 00000000..5139c40f --- /dev/null +++ b/test/fixtures/vcr_cassettes/usage/openai_responses_prompt.yml @@ -0,0 +1,171 @@ +--- +http_interactions: +- request: + method: post + uri: https://api.openai.com/v1/responses + body: + encoding: UTF-8 + string: '{"model":"gpt-4o-mini","input":"Say hello"}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - api.openai.com + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + 
X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + Authorization: + - Bearer ACCESS_TOKEN + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '43' + response: + status: + code: 200 + message: OK + headers: + Date: + - Fri, 14 Nov 2025 06:05:15 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + Connection: + - keep-alive + X-Ratelimit-Limit-Requests: + - '30000' + X-Ratelimit-Limit-Tokens: + - '150000000' + X-Ratelimit-Remaining-Requests: + - '29999' + X-Ratelimit-Remaining-Tokens: + - '149999970' + X-Ratelimit-Reset-Requests: + - 2ms + X-Ratelimit-Reset-Tokens: + - 0s + Openai-Version: + - '2020-10-01' + Openai-Organization: + - ORGANIZATION_ID + Openai-Project: + - PROJECT_ID + X-Request-Id: + - req_8388e44c541b45eca79b641b8a46a00b + Openai-Processing-Ms: + - '649' + X-Envoy-Upstream-Service-Time: + - '651' + Cf-Cache-Status: + - DYNAMIC + Set-Cookie: + - __cf_bm=bVABSb7LDjpMUp4dy9Zzfr57XEdUgm4v97N3hHGlHAA-1763100315-1.0.1.1-pK8Ux99kUBftiPe18nanjkj0R6DxYGL3Rl8clhxXBycfqAHm.yGKJf7mxIdbWoUhBWSzMil1asqfVUTBqwoiXbNHz2koZ8xOZRPO1SOF_88; + path=/; expires=Fri, 14-Nov-25 06:35:15 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=YArILfluJUtha7lkAS2zBRCrGiU9DVyQm9cdlR73Fls-1763100315807-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Strict-Transport-Security: + - max-age=31536000; includeSubDomains; preload + X-Content-Type-Options: + - nosniff + Server: + - cloudflare + Cf-Ray: + - 99e450e94cb2251d-SJC + Alt-Svc: + - h3=":443"; ma=86400 + body: + encoding: ASCII-8BIT + string: |- + { + "id": "resp_05d10aa207a7b1d7006916c69b24e481909761ace7f2d8fa25", + "object": "response", + "created_at": 1763100315, + "status": "completed", + "background": false, + "billing": { + "payer": "developer" + }, + "error": null, + "incomplete_details": null, + "instructions": null, + "max_output_tokens": null, + "max_tool_calls": null, + "model": "gpt-4o-mini-2024-07-18", + "output": [ + { + "id": "msg_05d10aa207a7b1d7006916c69b7fb081908663e66ac0628f16", + "type": "message", + "status": "completed", + "content": [ + { + "type": "output_text", + "annotations": [], + "logprobs": [], + "text": "Hello! How can I assist you today?" 
+ } + ], + "role": "assistant" + } + ], + "parallel_tool_calls": true, + "previous_response_id": null, + "prompt_cache_key": null, + "prompt_cache_retention": null, + "reasoning": { + "effort": null, + "summary": null + }, + "safety_identifier": null, + "service_tier": "default", + "store": true, + "temperature": 1.0, + "text": { + "format": { + "type": "text" + }, + "verbosity": "medium" + }, + "tool_choice": "auto", + "tools": [], + "top_logprobs": 0, + "top_p": 1.0, + "truncation": "disabled", + "usage": { + "input_tokens": 9, + "input_tokens_details": { + "cached_tokens": 0 + }, + "output_tokens": 10, + "output_tokens_details": { + "reasoning_tokens": 0 + }, + "total_tokens": 19 + }, + "user": null, + "metadata": {} + } + recorded_at: Fri, 14 Nov 2025 06:05:15 GMT +recorded_with: VCR 6.3.1 diff --git a/test/fixtures/vcr_cassettes/usage/openrouter_chat_prompt.yml b/test/fixtures/vcr_cassettes/usage/openrouter_chat_prompt.yml new file mode 100644 index 00000000..1db7e0c4 --- /dev/null +++ b/test/fixtures/vcr_cassettes/usage/openrouter_chat_prompt.yml @@ -0,0 +1,78 @@ +--- +http_interactions: +- request: + method: post + uri: https://openrouter.ai/api/v1/chat/completions + body: + encoding: UTF-8 + string: '{"model":"anthropic/claude-3.5-haiku","messages":[{"role":"user","content":"Say + hello"}]}' + headers: + Accept-Encoding: + - gzip;q=1.0,deflate;q=0.6,identity;q=0.3 + Accept: + - application/json + User-Agent: + - OpenAI::Client/Ruby 0.35.1 + Host: + - openrouter.ai + X-Stainless-Arch: + - arm64 + X-Stainless-Lang: + - ruby + X-Stainless-Os: + - MacOS + X-Stainless-Package-Version: + - 0.35.1 + X-Stainless-Runtime: + - ruby + X-Stainless-Runtime-Version: + - 3.4.7 + Content-Type: + - application/json + Authorization: + - Bearer ACCESS_TOKEN + Http-Referer: + - https://example.com + X-Title: + - Dummy + X-Stainless-Retry-Count: + - '0' + X-Stainless-Timeout: + - '600.0' + Content-Length: + - '89' + response: + status: + code: 200 + message: OK + headers: + Date: + - Fri, 14 Nov 2025 06:05:20 GMT + Content-Type: + - application/json + Transfer-Encoding: + - chunked + Connection: + - keep-alive + Access-Control-Allow-Origin: + - "*" + Vary: + - Accept-Encoding + Permissions-Policy: + - payment=(self "https://checkout.stripe.com" "https://connect-js.stripe.com" + "https://js.stripe.com" "https://*.js.stripe.com" "https://hooks.stripe.com") + Referrer-Policy: + - no-referrer, strict-origin-when-cross-origin + X-Content-Type-Options: + - nosniff + Server: + - cloudflare + Cf-Ray: + - 99e45106586af8f3-SJC + body: + encoding: ASCII-8BIT + string: "\n \n\n \n{\"id\":\"gen-1763100319-JJEJM7pKTJ7IYvFUSSsY\",\"provider\":\"Anthropic\",\"model\":\"anthropic/claude-3.5-haiku\",\"object\":\"chat.completion\",\"created\":1763100320,\"choices\":[{\"logprobs\":null,\"finish_reason\":\"stop\",\"native_finish_reason\":\"stop\",\"index\":0,\"message\":{\"role\":\"assistant\",\"content\":\"Hello! 
+ How are you doing today?\",\"refusal\":null,\"reasoning\":null}}],\"usage\":{\"prompt_tokens\":9,\"completion_tokens\":11,\"total_tokens\":20}}" + recorded_at: Fri, 14 Nov 2025 06:05:21 GMT +recorded_with: VCR 6.3.1 diff --git a/test/providers/common/usage_test.rb b/test/providers/common/usage_test.rb new file mode 100644 index 00000000..c8b187fb --- /dev/null +++ b/test/providers/common/usage_test.rb @@ -0,0 +1,248 @@ +# frozen_string_literal: true + +require "test_helper" +require "active_agent/providers/common/usage" + +module ActiveAgent + module Providers + module Common + class UsageTest < ActiveSupport::TestCase + test "normalizes OpenAI Chat completion usage" do + usage_hash = { + "prompt_tokens" => 100, + "completion_tokens" => 25, + "total_tokens" => 125, + "prompt_tokens_details" => { "cached_tokens" => 20 }, + "completion_tokens_details" => { "reasoning_tokens" => 3, "audio_tokens" => 5 } + } + + usage = Usage.from_openai_chat(usage_hash) + + assert_equal 100, usage.input_tokens + assert_equal 25, usage.output_tokens + assert_equal 125, usage.total_tokens + assert_equal 20, usage.cached_tokens + assert_equal 3, usage.reasoning_tokens + assert_equal 5, usage.audio_tokens + end + + test "normalizes OpenAI Embedding usage with no output tokens" do + usage_hash = { + "prompt_tokens" => 8, + "total_tokens" => 8 + } + + usage = Usage.from_openai_embedding(usage_hash) + + assert_equal 8, usage.input_tokens + assert_equal 0, usage.output_tokens + assert_equal 8, usage.total_tokens + end + + test "normalizes OpenAI Responses API usage" do + usage_hash = { + "input_tokens" => 150, + "output_tokens" => 75, + "total_tokens" => 225, + "input_tokens_details" => { "cached_tokens" => 50 }, + "output_tokens_details" => { "reasoning_tokens" => 10 } + } + + usage = Usage.from_openai_responses(usage_hash) + + assert_equal 150, usage.input_tokens + assert_equal 75, usage.output_tokens + assert_equal 225, usage.total_tokens + assert_equal 50, usage.cached_tokens + assert_equal 10, usage.reasoning_tokens + end + + test "normalizes Anthropic usage and calculates total_tokens" do + usage_hash = { + "input_tokens" => 2095, + "output_tokens" => 503, + "cache_read_input_tokens" => 1500, + "cache_creation_input_tokens" => 2051, + "service_tier" => "standard" + } + + usage = Usage.from_anthropic(usage_hash) + + assert_equal 2095, usage.input_tokens + assert_equal 503, usage.output_tokens + assert_equal 2598, usage.total_tokens # Calculated + assert_equal 1500, usage.cached_tokens + assert_equal 2051, usage.cache_creation_tokens + assert_equal "standard", usage.service_tier + end + + test "normalizes Ollama usage and converts nanoseconds to milliseconds" do + usage_hash = { + "prompt_eval_count" => 50, + "eval_count" => 25, + "total_duration" => 5_000_000_000, + "load_duration" => 1_000_000_000, + "prompt_eval_duration" => 500_000_000, + "eval_duration" => 2_000_000_000 + } + + usage = Usage.from_ollama(usage_hash) + + assert_equal 50, usage.input_tokens + assert_equal 25, usage.output_tokens + assert_equal 75, usage.total_tokens # Calculated + assert_equal 5000, usage.duration_ms + assert_equal 1000, usage.provider_details[:load_duration_ms] + assert_equal 500, usage.provider_details[:prompt_eval_duration_ms] + assert_equal 2000, usage.provider_details[:eval_duration_ms] + assert_equal 12.5, usage.provider_details[:tokens_per_second] + end + + test "normalizes OpenRouter usage (same as OpenAI Chat)" do + usage_hash = { + "prompt_tokens" => 14, + "completion_tokens" => 4, + "total_tokens" => 18 + } + + 
usage = Usage.from_openrouter(usage_hash) + + assert_equal 14, usage.input_tokens + assert_equal 4, usage.output_tokens + assert_equal 18, usage.total_tokens + end + + test "auto-detects provider format for OpenAI Chat" do + usage_hash = { + "prompt_tokens" => 100, + "completion_tokens" => 25, + "total_tokens" => 125 + } + + usage = Usage.from_provider_usage(usage_hash) + + assert_equal 100, usage.input_tokens + assert_equal 25, usage.output_tokens + assert_equal 125, usage.total_tokens + end + + test "auto-detects provider format for Anthropic" do + usage_hash = { + "input_tokens" => 2095, + "output_tokens" => 503, + "service_tier" => "standard" + } + + usage = Usage.from_provider_usage(usage_hash) + + assert_equal 2095, usage.input_tokens + assert_equal 503, usage.output_tokens + assert_equal 2598, usage.total_tokens + end + + test "auto-detects provider format for Ollama" do + usage_hash = { + "prompt_eval_count" => 50, + "eval_count" => 25, + "total_duration" => 5_000_000_000 + } + + usage = Usage.from_provider_usage(usage_hash) + + assert_equal 50, usage.input_tokens + assert_equal 25, usage.output_tokens + assert_equal 75, usage.total_tokens + end + + test "auto-detects provider format for OpenAI Responses API" do + usage_hash = { + "input_tokens" => 150, + "output_tokens" => 75, + "total_tokens" => 225, + "input_tokens_details" => { "cached_tokens" => 50 } + } + + usage = Usage.from_provider_usage(usage_hash) + + assert_equal 150, usage.input_tokens + assert_equal 75, usage.output_tokens + assert_equal 225, usage.total_tokens + end + + test "auto-detects provider format for OpenAI Embeddings" do + usage_hash = { + "prompt_tokens" => 8, + "total_tokens" => 8 + } + + usage = Usage.from_provider_usage(usage_hash) + + assert_equal 8, usage.input_tokens + assert_equal 0, usage.output_tokens + assert_equal 8, usage.total_tokens + end + + test "calculates total_tokens if not provided" do + usage = Usage.new(input_tokens: 100, output_tokens: 25) + + assert_equal 125, usage.total_tokens + end + + test "works with symbol keys" do + usage_hash = { + prompt_tokens: 100, + completion_tokens: 25, + total_tokens: 125 + } + + usage = Usage.from_openai_chat(usage_hash) + + assert_equal 100, usage.input_tokens + assert_equal 25, usage.output_tokens + assert_equal 125, usage.total_tokens + end + + test "returns nil for nil input" do + assert_nil Usage.from_openai_chat(nil) + assert_nil Usage.from_anthropic(nil) + assert_nil Usage.from_ollama(nil) + assert_nil Usage.from_provider_usage(nil) + end + + test "handles missing optional fields gracefully" do + usage_hash = { + "prompt_tokens" => 100, + "completion_tokens" => 25, + "total_tokens" => 125 + } + + usage = Usage.from_openai_chat(usage_hash) + + assert_nil usage.cached_tokens + assert_nil usage.reasoning_tokens + assert_nil usage.audio_tokens + end + + test "preserves provider-specific details" do + usage_hash = { + "input_tokens" => 2095, + "output_tokens" => 503, + "cache_creation" => { + "ephemeral_5m_input_tokens" => 1000, + "ephemeral_1h_input_tokens" => 500 + }, + "server_tool_use" => { + "web_fetch_requests" => 2, + "web_search_requests" => 1 + } + } + + usage = Usage.from_anthropic(usage_hash) + + assert_equal 1000, usage.provider_details[:cache_creation][:ephemeral_5m_input_tokens] + assert_equal 2, usage.provider_details[:server_tool_use][:web_fetch_requests] + end + end + end + end +end diff --git a/test/providers/log_subscriber_test.rb b/test/providers/log_subscriber_test.rb index 3a65bbbd..8da48104 100644 --- 
a/test/providers/log_subscriber_test.rb +++ b/test/providers/log_subscriber_test.rb @@ -18,140 +18,145 @@ class LogSubscriberTest < ActiveSupport::TestCase assert ActiveSupport::LogSubscriber.log_subscribers.any? { _1.is_a?(ActiveAgent::Providers::LogSubscriber) } end - test "prompt_start event is logged" do - ActiveSupport::Notifications.instrument("prompt_start.provider.active_agent", - provider: "OpenAI", + test "prompt event is logged with model and message count" do + ActiveSupport::Notifications.instrument("prompt.active_agent", + trace_id: "test-123", provider_module: "OpenAI", - trace_id: "test-123") - - assert_match(/Starting prompt request/, @log_output.string) - assert_match(/OpenAI/, @log_output.string) - assert_match(/test-123/, @log_output.string) - end - - test "embed_start event is logged" do - ActiveSupport::Notifications.instrument("embed_start.provider.active_agent", - provider: "OpenAI", - provider_module: "OpenAI", - trace_id: "test-456") + model: "gpt-4", + message_count: 3, + stream: false, + finish_reason: "stop") do + sleep 0.01 # Simulate work for duration + end - assert_match(/Starting embed request/, @log_output.string) - assert_match(/OpenAI/, @log_output.string) + assert_match(/\[test-123\]/, @log_output.string) + assert_match(/\[ActiveAgent\]/, @log_output.string) + assert_match(/\[OpenAI\]/, @log_output.string) + assert_match(/Prompt completed:/, @log_output.string) + assert_match(/model=gpt-4/, @log_output.string) + assert_match(/messages=3/, @log_output.string) + assert_match(/stream=false/, @log_output.string) + assert_match(/finish=stop/, @log_output.string) + assert_match(/\d+\.\d+ms/, @log_output.string) end - test "request_prepared event is logged" do - ActiveSupport::Notifications.instrument("request_prepared.provider.active_agent", - provider: "Anthropic", + test "prompt event includes usage information" do + ActiveSupport::Notifications.instrument("prompt.active_agent", + trace_id: "test-usage", provider_module: "Anthropic", - trace_id: "test-789", - message_count: 5) - - assert_match(/Prepared request with 5 message/, @log_output.string) - assert_match(/Anthropic/, @log_output.string) - end - - test "api_call event is logged with duration" do - ActiveSupport::Notifications.instrument("api_call.provider.active_agent", - provider: "OpenAI", + model: "claude-3-5-sonnet-20241022", + message_count: 2, + stream: false, + usage: { + input_tokens: 100, + output_tokens: 50, + cached_tokens: 25, + reasoning_tokens: 10 + }) + + assert_match(/tokens=100\/50/, @log_output.string) + assert_match(/cached: 25/, @log_output.string) + assert_match(/reasoning: 10/, @log_output.string) + end + + test "embed event is logged with model and input size" do + ActiveSupport::Notifications.instrument("embed.active_agent", + trace_id: "test-456", provider_module: "OpenAI", - trace_id: "test-api", - streaming: true) do - sleep 0.01 # Simulate some work + model: "text-embedding-ada-002", + input_size: 5, + embedding_count: 5, + usage: { input_tokens: 150 }) do + sleep 0.01 # Simulate work end - assert_match(/API call completed in \d+\.\d+ms/, @log_output.string) - assert_match(/streaming: true/, @log_output.string) + assert_match(/\[test-456\]/, @log_output.string) + assert_match(/\[OpenAI\]/, @log_output.string) + assert_match(/Embed completed:/, @log_output.string) + assert_match(/model=text-embedding-ada-002/, @log_output.string) + assert_match(/inputs=5/, @log_output.string) + assert_match(/embeddings=5/, @log_output.string) + assert_match(/tokens=150/, @log_output.string) 
end test "stream_open event is logged" do - ActiveSupport::Notifications.instrument("stream_open.provider.active_agent", - provider: "Anthropic", - provider_module: "Anthropic", - trace_id: "test-stream") + ActiveSupport::Notifications.instrument("stream_open.active_agent", + trace_id: "test-stream", + provider_module: "Anthropic") + assert_match(/\[test-stream\]/, @log_output.string) + assert_match(/\[Anthropic\]/, @log_output.string) assert_match(/Opening stream/, @log_output.string) end test "stream_close event is logged" do - ActiveSupport::Notifications.instrument("stream_close.provider.active_agent", - provider: "Anthropic", - provider_module: "Anthropic", - trace_id: "test-stream") + ActiveSupport::Notifications.instrument("stream_close.active_agent", + trace_id: "test-stream", + provider_module: "Anthropic") + assert_match(/\[test-stream\]/, @log_output.string) + assert_match(/\[Anthropic\]/, @log_output.string) assert_match(/Closing stream/, @log_output.string) end - test "messages_extracted event is logged" do - ActiveSupport::Notifications.instrument("messages_extracted.provider.active_agent", - provider: "OpenAI", - provider_module: "OpenAI", - trace_id: "test-msg", - message_count: 3) - - assert_match(/Extracted 3 message/, @log_output.string) - end - - test "tool_calls_processing event is logged" do - ActiveSupport::Notifications.instrument("tool_calls_processing.provider.active_agent", - provider: "OpenAI", - provider_module: "OpenAI", + test "tool_call event is logged" do + ActiveSupport::Notifications.instrument("tool_call.active_agent", trace_id: "test-tool", - tool_count: 2) - - assert_match(/Processing 2 tool call/, @log_output.string) - end - - test "multi_turn_continue event is logged" do - ActiveSupport::Notifications.instrument("multi_turn_continue.provider.active_agent", - provider: "Anthropic", provider_module: "Anthropic", - trace_id: "test-turn") + tool_name: "weather_lookup") do + sleep 0.01 # Simulate work + end - assert_match(/Continuing multi-turn conversation/, @log_output.string) + assert_match(/\[test-tool\]/, @log_output.string) + assert_match(/\[Anthropic\]/, @log_output.string) + assert_match(/Tool call: weather_lookup/, @log_output.string) + assert_match(/\d+\.\d+ms/, @log_output.string) end - test "prompt_complete event is logged with duration" do - ActiveSupport::Notifications.instrument("prompt_complete.provider.active_agent", - provider: "OpenAI", - provider_module: "OpenAI", - trace_id: "test-complete", - message_count: 4) do - sleep 0.01 # Simulate some work - end + test "stream_chunk event is logged" do + ActiveSupport::Notifications.instrument("stream_chunk.active_agent", + trace_id: "test-chunk", + provider_module: "Anthropic", + chunk_type: "content_block_delta") - assert_match(/Prompt completed with 4 message/, @log_output.string) - assert_match(/total: \d+\.\d+ms/, @log_output.string) + assert_match(/\[test-chunk\]/, @log_output.string) + assert_match(/\[Anthropic\]/, @log_output.string) + assert_match(/Stream chunk: content_block_delta/, @log_output.string) end - test "retry_attempt event is logged" do - ActiveSupport::Notifications.instrument("retry_attempt.provider.active_agent", - provider_module: "OpenAI", - attempt: 2, - max_retries: 3, - exception: "TimeoutError", - backoff_time: 2.5) + test "stream_chunk event without chunk_type" do + ActiveSupport::Notifications.instrument("stream_chunk.active_agent", + trace_id: "test-chunk2", + provider_module: "OpenAI") - assert_match(/Attempt 2\/3 failed with TimeoutError/, @log_output.string) 
- assert_match(/retrying in 2.5s/, @log_output.string) + assert_match(/Stream chunk/, @log_output.string) + refute_match(/Stream chunk:/, @log_output.string) end - test "retry_exhausted event is logged" do - ActiveSupport::Notifications.instrument("retry_exhausted.provider.active_agent", - provider_module: "OpenAI", - max_retries: 3, - exception: "SocketError") + test "connection_error event is logged" do + ActiveSupport::Notifications.instrument("connection_error.active_agent", + trace_id: "test-error", + provider_module: "Ollama", + uri_base: "http://localhost:11434", + exception: "Errno::ECONNREFUSED", + message: "Connection refused") - assert_match(/Max retries \(3\) exceeded/, @log_output.string) - assert_match(/SocketError/, @log_output.string) + assert_match(/\[test-error\]/, @log_output.string) + assert_match(/\[Ollama\]/, @log_output.string) + assert_match(/Unable to connect to http:\/\/localhost:11434/, @log_output.string) + assert_match(/Errno::ECONNREFUSED/, @log_output.string) + assert_match(/Connection refused/, @log_output.string) end test "logs nothing when logger level is above debug" do ActiveAgent::Base.logger.level = Logger::INFO - ActiveSupport::Notifications.instrument("prompt_start.provider.active_agent", - provider: "OpenAI", + ActiveSupport::Notifications.instrument("prompt.active_agent", + trace_id: "test-level", provider_module: "OpenAI", - trace_id: "test-level") + model: "gpt-4", + message_count: 1, + stream: false) assert_empty @log_output.string end @@ -160,13 +165,17 @@ class LogSubscriberTest < ActiveSupport::TestCase events = [] custom_subscriber = ->(event) { events << event } - subscription = ActiveSupport::Notifications.subscribe("prompt_start.provider.active_agent", custom_subscriber) + subscription = ActiveSupport::Notifications.subscribe("prompt.active_agent", custom_subscriber) - ActiveSupport::Notifications.instrument("prompt_start.provider.active_agent", provider: "Test") + ActiveSupport::Notifications.instrument("prompt.active_agent", + trace_id: "test-custom", + provider_module: "Test", + message_count: 1, + stream: false) assert_equal 1, events.size - assert_equal "prompt_start.provider.active_agent", events.first.name - assert_equal "Test", events.first.payload[:provider] + assert_equal "prompt.active_agent", events.first.name + assert_equal "Test", events.first.payload[:provider_module] ensure ActiveSupport::Notifications.unsubscribe(subscription) if subscription end diff --git a/test/providers/usage_test.rb b/test/providers/usage_test.rb new file mode 100644 index 00000000..423cd1f9 --- /dev/null +++ b/test/providers/usage_test.rb @@ -0,0 +1,221 @@ +# frozen_string_literal: true + +require "test_helper" + +# Smoke tests to verify usage payload instrumentation works across all providers +class UsageTest < ActiveSupport::TestCase + # Anthropic Provider Tests + + class AnthropicTestAgent < ActiveAgent::Base + generate_with :anthropic, model: "claude-3-5-haiku-20241022" + + def chat + prompt(message: params[:message]) + end + end + + test "Anthropic provider includes usage in instrumentation payload" do + VCR.use_cassette("usage/anthropic_prompt") do + received_payload = nil + + subscription = ActiveSupport::Notifications.subscribe("prompt.provider.active_agent") do |event| + received_payload = event.payload if event.payload[:provider] == "Anthropic" + end + + response = AnthropicTestAgent.with(message: "Say hello").chat.generate_now + + assert response.success? 
+ assert_not_nil received_payload, "Should receive provider-level event" + assert_not_nil received_payload[:usage], "Provider-level event should have usage" + assert_kind_of Integer, received_payload[:usage][:input_tokens] + assert_kind_of Integer, received_payload[:usage][:output_tokens] + assert_kind_of Integer, received_payload[:usage][:total_tokens] + assert received_payload[:usage][:input_tokens] > 0 + assert received_payload[:usage][:output_tokens] > 0 + ensure + ActiveSupport::Notifications.unsubscribe(subscription) if subscription + end + end + + # OpenAI Chat Provider Tests + + class OpenAIChatTestAgent < ActiveAgent::Base + generate_with :openai, model: "gpt-4o-mini" + + def chat + prompt(message: params[:message]) + end + end + + test "OpenAI Chat provider includes usage in instrumentation payload" do + VCR.use_cassette("usage/openai_chat_prompt") do + received_payload = nil + + subscription = ActiveSupport::Notifications.subscribe("prompt.provider.active_agent") do |event| + received_payload = event.payload if event.payload[:provider] == "OpenAI" + end + + response = OpenAIChatTestAgent.with(message: "Say hello").chat.generate_now + + assert response.success? + assert_not_nil received_payload, "Should receive provider-level event" + assert_not_nil received_payload[:usage], "Provider-level event should have usage" + assert_kind_of Integer, received_payload[:usage][:input_tokens] + assert_kind_of Integer, received_payload[:usage][:output_tokens] + assert_kind_of Integer, received_payload[:usage][:total_tokens] + assert received_payload[:usage][:input_tokens] > 0 + assert received_payload[:usage][:output_tokens] > 0 + ensure + ActiveSupport::Notifications.unsubscribe(subscription) if subscription + end + end + + # OpenAI Responses Provider Tests + + class OpenAIResponsesTestAgent < ActiveAgent::Base + generate_with :openai, model: "gpt-4o-mini", api_version: :responses + + def chat + prompt(message: params[:message]) + end + end + + test "OpenAI Responses provider includes usage in instrumentation payload" do + VCR.use_cassette("usage/openai_responses_prompt") do + received_payload = nil + + subscription = ActiveSupport::Notifications.subscribe("prompt.provider.active_agent") do |event| + received_payload = event.payload if event.payload[:provider] == "OpenAI" + end + + response = OpenAIResponsesTestAgent.with(message: "Say hello").chat.generate_now + + assert response.success? 
+ assert_not_nil received_payload, "Should receive provider-level event" + assert_not_nil received_payload[:usage], "Provider-level event should have usage" + assert_kind_of Integer, received_payload[:usage][:input_tokens] + assert_kind_of Integer, received_payload[:usage][:output_tokens] + assert_kind_of Integer, received_payload[:usage][:total_tokens] + assert received_payload[:usage][:input_tokens] > 0 + assert received_payload[:usage][:output_tokens] > 0 + ensure + ActiveSupport::Notifications.unsubscribe(subscription) if subscription + end + end + + # OpenAI Embedding Provider Tests + + class OpenAIEmbeddingTestAgent < ActiveAgent::Base + embed_with :openai, model: "text-embedding-3-small" + end + + test "OpenAI Embedding provider includes usage in instrumentation payload" do + VCR.use_cassette("usage/openai_embedding") do + received_payload = nil + + subscription = ActiveSupport::Notifications.subscribe("embed.provider.active_agent") do |event| + received_payload = event.payload if event.payload[:provider] == "OpenAI" + end + + response = OpenAIEmbeddingTestAgent.embed(input: "Hello world").generate_now + + assert response.success? + assert_not_nil received_payload, "Should receive provider-level event" + assert_not_nil received_payload[:usage], "Provider-level event should have usage" + assert_kind_of Integer, received_payload[:usage][:input_tokens] + assert_kind_of Integer, received_payload[:usage][:total_tokens] + assert received_payload[:usage][:input_tokens] > 0 + ensure + ActiveSupport::Notifications.unsubscribe(subscription) if subscription + end + end + + # Ollama Provider Tests + + class OllamaTestAgent < ActiveAgent::Base + generate_with :ollama, model: "deepseek-r1:latest" + + def chat + prompt(message: params[:message]) + end + end + + test "Ollama Chat provider includes usage in instrumentation payload" do + VCR.use_cassette("usage/ollama_chat_prompt") do + received_payload = nil + + subscription = ActiveSupport::Notifications.subscribe("prompt.provider.active_agent") do |event| + received_payload = event.payload if event.payload[:provider] == "Ollama" + end + + response = OllamaTestAgent.with(message: "Say hello").chat.generate_now + + assert response.success? + assert_not_nil received_payload, "Should receive provider-level event" + assert_not_nil received_payload[:usage], "Provider-level event should have usage" + assert_kind_of Integer, received_payload[:usage][:input_tokens] + assert_kind_of Integer, received_payload[:usage][:output_tokens] + assert_kind_of Integer, received_payload[:usage][:total_tokens] + ensure + ActiveSupport::Notifications.unsubscribe(subscription) if subscription + end + end + + class OllamaEmbeddingTestAgent < ActiveAgent::Base + embed_with :ollama, model: "all-minilm" + end + + test "Ollama Embedding provider includes usage in instrumentation payload" do + VCR.use_cassette("usage/ollama_embedding") do + received_payload = nil + + subscription = ActiveSupport::Notifications.subscribe("embed.provider.active_agent") do |event| + received_payload = event.payload if event.payload[:provider] == "Ollama" + end + + response = OllamaEmbeddingTestAgent.embed(input: "Hello world").generate_now + + assert response.success? 
+ assert_not_nil received_payload, "Should receive provider-level event" + # Note: Ollama may or may not include usage data for embeddings + if received_payload[:usage] + assert_kind_of Integer, received_payload[:usage][:input_tokens] + end + ensure + ActiveSupport::Notifications.unsubscribe(subscription) if subscription + end + end + + # OpenRouter Provider Tests + + class OpenRouterTestAgent < ActiveAgent::Base + generate_with :open_router, model: "anthropic/claude-3.5-haiku" + + def chat + prompt(message: params[:message]) + end + end + + test "OpenRouter Chat provider includes usage in instrumentation payload" do + VCR.use_cassette("usage/openrouter_chat_prompt") do + received_payload = nil + + subscription = ActiveSupport::Notifications.subscribe("prompt.provider.active_agent") do |event| + received_payload = event.payload if event.payload[:provider] == "OpenRouter" + end + + response = OpenRouterTestAgent.with(message: "Say hello").chat.generate_now + + assert response.success? + assert_not_nil received_payload, "Should receive provider-level event" + assert_not_nil received_payload[:usage], "Provider-level event should have usage" + assert_kind_of Integer, received_payload[:usage][:input_tokens] + assert_kind_of Integer, received_payload[:usage][:output_tokens] + assert_kind_of Integer, received_payload[:usage][:total_tokens] + assert received_payload[:usage][:input_tokens] > 0 + assert received_payload[:usage][:output_tokens] > 0 + ensure + ActiveSupport::Notifications.unsubscribe(subscription) if subscription + end + end +end