Class: RubyPi::LLM::Response
- Inherits: Object
- Defined in: lib/ruby_pi/llm/response.rb
Overview
A normalized response object returned by all LLM providers after a completion request. Encapsulates the generated text content, any tool calls the model wants to invoke, token usage statistics, and the reason the model stopped generating.
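For illustration, a minimal sketch of building and inspecting a response, using a trimmed-down re-definition of the class based on the source shown below (the content, usage, and finish_reason values are hypothetical):

```ruby
# Trimmed-down re-definition of RubyPi::LLM::Response for a
# self-contained example; mirrors the documented constructor
# and #tool_calls? predicate.
module RubyPi
  module LLM
    class Response
      attr_reader :content, :tool_calls, :usage, :finish_reason

      def initialize(content: nil, tool_calls: [], usage: {}, finish_reason: nil)
        @content = content
        @tool_calls = Array(tool_calls)
        @usage = usage
        @finish_reason = finish_reason
      end

      def tool_calls?
        !@tool_calls.empty?
      end
    end
  end
end

response = RubyPi::LLM::Response.new(
  content: "Hello!",
  usage: { prompt_tokens: 12, completion_tokens: 3, total_tokens: 15 },
  finish_reason: "stop"
)

p response.content      # => "Hello!"
p response.tool_calls?  # => false
```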
Instance Attribute Summary
- #content ⇒ String? (readonly)
  The generated text content from the model.
- #finish_reason ⇒ String? (readonly)
  The reason the model stopped generating (e.g., “stop”, “tool_calls”, “max_tokens”).
- #tool_calls ⇒ Array<RubyPi::LLM::ToolCall> (readonly)
  Tool calls the model wants to invoke.
- #usage ⇒ Hash (readonly)
  Token usage statistics with keys like :prompt_tokens, :completion_tokens, :total_tokens.
Instance Method Summary
- #initialize(content: nil, tool_calls: [], usage: {}, finish_reason: nil) ⇒ Response (constructor)
  Creates a new Response instance.
- #to_h ⇒ Hash
  Returns a hash representation of the response for serialization.
- #to_s ⇒ String (also: #inspect)
  Returns a human-readable string representation of the response.
- #tool_calls? ⇒ Boolean
  Returns true if the response includes one or more tool calls.
Constructor Details
#initialize(content: nil, tool_calls: [], usage: {}, finish_reason: nil) ⇒ Response
Creates a new Response instance.
# File 'lib/ruby_pi/llm/response.rb', line 42

def initialize(content: nil, tool_calls: [], usage: {}, finish_reason: nil)
  @content = content
  @tool_calls = Array(tool_calls)
  @usage = usage
  @finish_reason = finish_reason
end
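Note that the constructor passes tool_calls through Kernel#Array, so nil, a single object, or an array all normalize to an array. A small illustration of that coercion:

```ruby
# Kernel#Array normalizes its argument: nil becomes an empty array,
# a single object is wrapped, and an existing array passes through.
p Array(nil)          # => []
p Array("one_call")   # => ["one_call"]
p Array(%w[a b])      # => ["a", "b"]
```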
Instance Attribute Details
#content ⇒ String? (readonly)
Returns the generated text content from the model.
# File 'lib/ruby_pi/llm/response.rb', line 23

def content
  @content
end
#finish_reason ⇒ String? (readonly)
Returns the reason the model stopped generating (e.g., “stop”, “tool_calls”, “max_tokens”).
# File 'lib/ruby_pi/llm/response.rb', line 34

def finish_reason
  @finish_reason
end
#tool_calls ⇒ Array<RubyPi::LLM::ToolCall> (readonly)
Returns tool calls the model wants to invoke.
# File 'lib/ruby_pi/llm/response.rb', line 26

def tool_calls
  @tool_calls
end
#usage ⇒ Hash (readonly)
Returns token usage statistics with keys like :prompt_tokens, :completion_tokens, :total_tokens.
# File 'lib/ruby_pi/llm/response.rb', line 30

def usage
  @usage
end
Instance Method Details
#to_h ⇒ Hash
Returns a hash representation of the response for serialization.
# File 'lib/ruby_pi/llm/response.rb', line 59

def to_h
  {
    content: @content,
    tool_calls: @tool_calls.map(&:to_h),
    usage: @usage,
    finish_reason: @finish_reason
  }
end
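Because #to_h returns a plain hash, the result can go straight into JSON serialization. This sketch builds a hash in the same shape by hand (values are hypothetical) rather than going through the class:

```ruby
require "json"

# Hand-built hash in the shape #to_h returns.
response_hash = {
  content: "Hello!",
  tool_calls: [],
  usage: { prompt_tokens: 12, completion_tokens: 3, total_tokens: 15 },
  finish_reason: "stop"
}

puts JSON.generate(response_hash)
```

Note that JSON.parse returns string keys by default, so a round-trip back to symbol keys needs symbolize_names: true.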
#to_s ⇒ String (also known as: #inspect)
Returns a human-readable string representation of the response.
# File 'lib/ruby_pi/llm/response.rb', line 71

def to_s
  parts = []
  parts << "content=#{@content.inspect}" if @content
  parts << "tool_calls=#{@tool_calls.length}" if tool_calls?
  parts << "finish_reason=#{@finish_reason}" if @finish_reason
  "#<RubyPi::LLM::Response #{parts.join(', ')}>"
end
#tool_calls? ⇒ Boolean
Returns true if the response includes one or more tool calls.
# File 'lib/ruby_pi/llm/response.rb', line 52

def tool_calls?
  !@tool_calls.empty?
end
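A typical consumer branches on this predicate to decide between executing tools and returning text. A sketch using a Struct as a stand-in for RubyPi::LLM::ToolCall (the field names are assumptions, not the real ToolCall API):

```ruby
# Hypothetical stand-in for RubyPi::LLM::ToolCall.
ToolCall = Struct.new(:name, :arguments, keyword_init: true)

# Mirrors the logic of Response#tool_calls? for a bare array.
def tool_calls?(calls)
  !calls.empty?
end

calls = [ToolCall.new(name: "get_weather", arguments: { city: "Paris" })]

if tool_calls?(calls)
  calls.each { |call| puts "would invoke #{call.name}" }
else
  puts "plain text response"
end
```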