Class: RubyPi::LLM::Response

Inherits:
Object
Defined in:
lib/ruby_pi/llm/response.rb

Overview

A normalized response object returned by all LLM providers after a completion request. Encapsulates the generated text content, any tool calls the model wants to invoke, token usage statistics, and the reason the model stopped generating.

Examples:

Accessing response data

response = provider.complete(messages: messages)
puts response.content
response.tool_calls.each { |tc| handle_tool(tc) }
puts "Tokens used: #{response.usage[:total_tokens]}"

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(content: nil, tool_calls: [], usage: {}, finish_reason: nil) ⇒ Response

Creates a new Response instance.

Parameters:

  • content (String, nil) (defaults to: nil)

    the generated text content

  • tool_calls (Array<RubyPi::LLM::ToolCall>) (defaults to: [])

    list of tool invocations

  • usage (Hash) (defaults to: {})

    token usage statistics

  • finish_reason (String, nil) (defaults to: nil)

    why the model stopped generating



# File 'lib/ruby_pi/llm/response.rb', line 42

def initialize(content: nil, tool_calls: [], usage: {}, finish_reason: nil)
  @content = content
  @tool_calls = Array(tool_calls)
  @usage = usage
  @finish_reason = finish_reason
end
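A construction sketch. The stand-in class below is reconstructed from the snippets on this page so the example runs standalone; in an application you would require the gem and use the real `RubyPi::LLM::Response`. Note that `Array(tool_calls)` means a single tool call can be passed without wrapping it in an array.

```ruby
# Minimal stand-in reconstructed from the code shown on this page,
# so the example runs on its own; a real app would require "ruby_pi".
module RubyPi
  module LLM
    class Response
      attr_reader :content, :tool_calls, :usage, :finish_reason

      def initialize(content: nil, tool_calls: [], usage: {}, finish_reason: nil)
        @content = content
        @tool_calls = Array(tool_calls) # a bare tool call is wrapped in an Array
        @usage = usage
        @finish_reason = finish_reason
      end
    end
  end
end

response = RubyPi::LLM::Response.new(
  content: "Hello!",
  usage: { prompt_tokens: 12, completion_tokens: 3, total_tokens: 15 },
  finish_reason: "stop"
)

puts response.content             # => Hello!
puts response.tool_calls.inspect  # => []
```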

Instance Attribute Details

#content ⇒ String? (readonly)

Returns the generated text content from the model.

Returns:

  • (String, nil)

    the generated text content from the model



# File 'lib/ruby_pi/llm/response.rb', line 23

def content
  @content
end

#finish_reason ⇒ String? (readonly)

Returns the reason the model stopped generating (e.g., "stop", "tool_calls", "max_tokens").

Returns:

  • (String, nil)

    the reason the model stopped generating (e.g., "stop", "tool_calls", "max_tokens")



# File 'lib/ruby_pi/llm/response.rb', line 34

def finish_reason
  @finish_reason
end

#tool_calls ⇒ Array&lt;RubyPi::LLM::ToolCall&gt; (readonly)

Returns tool calls the model wants to invoke.

Returns:

  • (Array<RubyPi::LLM::ToolCall>)

    tool calls the model wants to invoke

# File 'lib/ruby_pi/llm/response.rb', line 26

def tool_calls
  @tool_calls
end

#usage ⇒ Hash (readonly)

Returns token usage statistics with keys like :prompt_tokens, :completion_tokens, :total_tokens.

Returns:

  • (Hash)

    token usage statistics with keys like :prompt_tokens, :completion_tokens, :total_tokens



# File 'lib/ruby_pi/llm/response.rb', line 30

def usage
  @usage
end

Instance Method Details

#to_h ⇒ Hash

Returns a hash representation of the response for serialization.

Returns:

  • (Hash)

    the response as a plain hash



# File 'lib/ruby_pi/llm/response.rb', line 59

def to_h
  {
    content: @content,
    tool_calls: @tool_calls.map(&:to_h),
    usage: @usage,
    finish_reason: @finish_reason
  }
end

#to_s ⇒ String

Also known as: inspect

Returns a human-readable string representation of the response.

Returns:

  • (String)


# File 'lib/ruby_pi/llm/response.rb', line 71

def to_s
  parts = []
  parts << "content=#{@content.inspect}" if @content
  parts << "tool_calls=#{@tool_calls.length}" if tool_calls?
  parts << "finish_reason=#{@finish_reason}" if @finish_reason
  "#<RubyPi::LLM::Response #{parts.join(', ')}>"
end

#tool_calls? ⇒ Boolean

Returns true if the response includes one or more tool calls.

Returns:

  • (Boolean)


# File 'lib/ruby_pi/llm/response.rb', line 52

def tool_calls?
  !@tool_calls.empty?
end
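A sketch of the dispatch branch `#tool_calls?` typically enables in an agent loop. `Response` is again reconstructed minimally from this page; the `dispatch` helper is illustrative, not part of the library, and a real loop would feed tool results back to the model.

```ruby
# Minimal stand-in reconstructed from this page; dispatch is a
# hypothetical helper showing the branch tool_calls? supports.
module RubyPi
  module LLM
    class Response
      attr_reader :content, :tool_calls

      def initialize(content: nil, tool_calls: [])
        @content = content
        @tool_calls = Array(tool_calls)
      end

      def tool_calls?
        !@tool_calls.empty?
      end
    end
  end
end

def dispatch(response)
  if response.tool_calls?
    # Run each requested tool; results would normally be sent back
    # to the model as follow-up messages.
    response.tool_calls.map { |tc| "ran #{tc}" }
  else
    response.content
  end
end

puts dispatch(RubyPi::LLM::Response.new(content: "Done"))                # => Done
puts dispatch(RubyPi::LLM::Response.new(tool_calls: ["search"])).inspect # => ["ran search"]
```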