Module: LLM::OpenAI::ResponseAdapter::Responds
- Includes:
- Contract::Completion
- Defined in:
- lib/llm/providers/openai/response_adapter/responds.rb
Constant Summary
Constants included from Contract
Instance Method Summary collapse
- #annotations ⇒ Array<Hash>
- #content ⇒ String
  Returns the LLM response.
- #content! ⇒ Hash
  Returns the LLM response after parsing it as JSON.
- #input_tokens ⇒ Integer (also: #prompt_tokens)
  Returns the number of input tokens.
- #messages ⇒ Array<LLM::Message> (also: #choices)
  Returns one or more messages.
- #model ⇒ String
  Returns the model name.
- #output_text ⇒ String
  Returns the aggregated text content from the response outputs.
- #output_tokens ⇒ Integer (also: #completion_tokens)
  Returns the number of output tokens.
- #reasoning_content ⇒ String?
  Returns the reasoning content when the provider exposes it.
- #reasoning_tokens ⇒ Integer
  Returns the number of reasoning tokens.
- #response_id ⇒ String
- #system_fingerprint ⇒ nil
  OpenAI’s Responses API does not expose a system fingerprint.
- #total_tokens ⇒ Integer
  Returns the total number of tokens.
- #usage ⇒ LLM::Usage
  Returns usage information.
Methods included from Contract
Instance Method Details
#annotations ⇒ Array<Hash>
# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 20

def annotations = messages[0].annotations
#content ⇒ String
Returns the LLM response.

# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 79

def content
  super || ""
end
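The `super || ""` pattern guarantees callers always receive a String, even when the underlying response carried no text. A minimal sketch of that pattern; the module and class names here are illustrative stand-ins, not part of the library:

```ruby
# Illustrative stand-ins for the adapter's inheritance chain; these
# names are assumptions, not the library's actual modules.
module RawContent
  def content
    @content # may be nil when the response carried no text
  end
end

module SafeContent
  include RawContent

  # Mirrors the adapter's pattern: `super || ""` guarantees a String.
  def content
    super || ""
  end
end

class FakeResponse
  include SafeContent

  def initialize(content)
    @content = content
  end
end

FakeResponse.new("hello").content # => "hello"
FakeResponse.new(nil).content     # => ""
```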
#content! ⇒ Hash
Returns the LLM response after parsing it as JSON.

# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 85

def content!
  super
end
#input_tokens ⇒ Integer Also known as: prompt_tokens
Returns the number of input tokens.

# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 24

def input_tokens
  body.usage&.input_tokens || 0
end
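The safe-navigation fallback matters because a response body may arrive without a `usage` object. A sketch of the behavior, using hypothetical `OpenStruct` bodies in place of real API responses:

```ruby
require "ostruct"

# Hypothetical response bodies; the real adapter parses them from the API.
with_usage    = OpenStruct.new(usage: OpenStruct.new(input_tokens: 42))
without_usage = OpenStruct.new(usage: nil)

# Mirrors the adapter: `&.` short-circuits to nil, `|| 0` supplies a default.
def input_tokens(body)
  body.usage&.input_tokens || 0
end

input_tokens(with_usage)    # => 42
input_tokens(without_usage) # => 0
```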
#messages ⇒ Array<LLM::Message> Also known as: choices
Returns one or more messages.

# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 7

def messages
  [...]
end
#model ⇒ String
Returns the model name.

# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 59

def model
  body.model
end
#output_text ⇒ String
Returns the aggregated text content from the response outputs.
# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 73

def output_text
  content
end
#output_tokens ⇒ Integer Also known as: completion_tokens
Returns the number of output tokens.

# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 31

def output_tokens
  body.usage&.output_tokens || 0
end
#reasoning_content ⇒ String?
Returns the reasoning content when the provider exposes it.

# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 91

def reasoning_content
  super
end
#reasoning_tokens ⇒ Integer
Returns the number of reasoning tokens.

# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 38

def reasoning_tokens
  body
    .usage
    &.output_tokens_details
    &.reasoning_tokens || 0
end
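Here each `&.` guards one level of the chain, so the method returns 0 whether `usage` or only `output_tokens_details` is missing. A sketch with hypothetical `OpenStruct` bodies standing in for parsed responses:

```ruby
require "ostruct"

# Hypothetical bodies: full chain, details missing, usage missing entirely.
full    = OpenStruct.new(usage: OpenStruct.new(
  output_tokens_details: OpenStruct.new(reasoning_tokens: 128)
))
partial = OpenStruct.new(usage: OpenStruct.new(output_tokens_details: nil))
empty   = OpenStruct.new(usage: nil)

# Mirrors the adapter: each `&.` guards one level of the chain.
def reasoning_tokens(body)
  body.usage&.output_tokens_details&.reasoning_tokens || 0
end

reasoning_tokens(full)    # => 128
reasoning_tokens(partial) # => 0
reasoning_tokens(empty)   # => 0
```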
#response_id ⇒ String
# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 14

def response_id
  respond_to?(:response) ? response["id"] : id
end
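The `respond_to?` ternary lets one module serve two response shapes: an object exposing a `response` hash and a plain body exposing `id` directly. A sketch with hypothetical stand-in classes (the class names and ids are assumptions, not library types):

```ruby
# Hypothetical stand-ins for the two shapes the adapter can wrap.
class WrappedResult
  # Exposes a `response` hash, so response_id reads response["id"].
  def response
    { "id" => "resp_wrapped_123" }
  end

  def response_id
    respond_to?(:response) ? response["id"] : id
  end
end

class PlainResult
  # Exposes `id` directly; respond_to?(:response) is false here.
  def id
    "resp_plain_456"
  end

  def response_id
    respond_to?(:response) ? response["id"] : id
  end
end

WrappedResult.new.response_id # => "resp_wrapped_123"
PlainResult.new.response_id   # => "resp_plain_456"
```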
#system_fingerprint ⇒ nil
OpenAI’s Responses API does not expose a system fingerprint.
# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 66

def system_fingerprint
  nil
end
#total_tokens ⇒ Integer
Returns the total number of tokens.

# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 47

def total_tokens
  body.usage&.total_tokens || 0
end
#usage ⇒ LLM::Usage
Returns usage information.

# File 'lib/llm/providers/openai/response_adapter/responds.rb', line 53

def usage
  super
end