Module: Braintrust::Contrib::RubyLLM::Instrumentation::Common
Defined in: lib/braintrust/contrib/ruby_llm/instrumentation/common.rb
Overview
Common utilities for RubyLLM instrumentation.
Class Method Summary
- .parse_usage_tokens(usage) ⇒ Hash<String, Integer>
  Parse RubyLLM usage tokens into normalized Braintrust metrics.
Class Method Details
.parse_usage_tokens(usage) ⇒ Hash<String, Integer>
Parse RubyLLM usage tokens into normalized Braintrust metrics. RubyLLM normalizes token fields from all providers (OpenAI, Anthropic, etc.) into a consistent format:
- input_tokens: prompt tokens sent
- output_tokens: completion tokens received
- cached_tokens: tokens read from cache
- cache_creation_tokens: tokens written to cache
# File 'lib/braintrust/contrib/ruby_llm/instrumentation/common.rb', line 19

def self.parse_usage_tokens(usage)
  metrics = {}
  return metrics unless usage

  usage_hash = usage.respond_to?(:to_h) ? usage.to_h : usage
  return metrics unless usage_hash.is_a?(Hash)

  # RubyLLM normalized field mappings → Braintrust metrics
  field_map = {
    "input_tokens" => "prompt_tokens",
    "output_tokens" => "completion_tokens",
    "cached_tokens" => "prompt_cached_tokens",
    "cache_creation_tokens" => "prompt_cache_creation_tokens"
  }

  usage_hash.each do |key, value|
    next unless value.is_a?(Numeric)
    key_str = key.to_s
    target = field_map[key_str]
    metrics[target] = value.to_i if target
  end

  # Accumulate cache tokens into prompt_tokens (matching TS/Python SDKs)
  prompt_tokens = (metrics["prompt_tokens"] || 0) +
    (metrics["prompt_cached_tokens"] || 0) +
    (metrics["prompt_cache_creation_tokens"] || 0)
  metrics["prompt_tokens"] = prompt_tokens if prompt_tokens > 0

  # Calculate total
  if metrics.key?("prompt_tokens") && metrics.key?("completion_tokens")
    metrics["tokens"] = metrics["prompt_tokens"] + metrics["completion_tokens"]
  end

  metrics
end
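To illustrate the normalization end to end, here is a minimal standalone sketch of the same mapping logic. It reproduces the method body shown above as a plain top-level method (rather than requiring the braintrust gem), so the constant name `FIELD_MAP` and the sample input are illustrative assumptions, not part of the gem's public API:

```ruby
# Sketch of the token-normalization logic; the real method is
# Braintrust::Contrib::RubyLLM::Instrumentation::Common.parse_usage_tokens.
FIELD_MAP = {
  "input_tokens" => "prompt_tokens",
  "output_tokens" => "completion_tokens",
  "cached_tokens" => "prompt_cached_tokens",
  "cache_creation_tokens" => "prompt_cache_creation_tokens"
}.freeze

def parse_usage_tokens(usage)
  metrics = {}
  return metrics unless usage

  usage_hash = usage.respond_to?(:to_h) ? usage.to_h : usage
  return metrics unless usage_hash.is_a?(Hash)

  usage_hash.each do |key, value|
    next unless value.is_a?(Numeric)
    target = FIELD_MAP[key.to_s]  # symbol or string keys both work
    metrics[target] = value.to_i if target
  end

  # Cache reads/writes are folded into prompt_tokens (matching TS/Python SDKs)
  prompt = (metrics["prompt_tokens"] || 0) +
           (metrics["prompt_cached_tokens"] || 0) +
           (metrics["prompt_cache_creation_tokens"] || 0)
  metrics["prompt_tokens"] = prompt if prompt > 0

  if metrics.key?("prompt_tokens") && metrics.key?("completion_tokens")
    metrics["tokens"] = metrics["prompt_tokens"] + metrics["completion_tokens"]
  end
  metrics
end

# A hypothetical usage hash as RubyLLM would normalize it (symbol keys):
usage = { input_tokens: 100, output_tokens: 50, cached_tokens: 30 }
metrics = parse_usage_tokens(usage)
metrics["prompt_tokens"]     # => 130 (100 input + 30 cached, accumulated)
metrics["completion_tokens"] # => 50
metrics["tokens"]            # => 180
```

Note that because cached tokens are accumulated into `prompt_tokens`, the reported prompt count (130) exceeds the raw `input_tokens` value (100); this mirrors how the Braintrust TS/Python SDKs report cache usage.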