Class: Async::Ollama::Chat
- Inherits: Object
- Defined in: lib/async/ollama/chat.rb
Overview
Represents a chat response from the Ollama API, including message content, model, and timing information.
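The accessors documented below all read keys out of the underlying response hash. As an illustration of that pattern, here is a minimal stand-in class (a sketch for this documentation, not the library's actual implementation; the sample `model` and `message` values are invented):

```ruby
# Minimal stand-in mirroring the accessor pattern of Async::Ollama::Chat:
# the object wraps a response hash, and each accessor reads one key.
class ChatSketch
	def initialize(value)
		@value = value
	end

	# The underlying response hash.
	attr_reader :value

	def model
		value[:model]
	end

	def message
		value[:message]
	end

	# The assistant's reply text, if a message is present.
	def response
		if message = self.message
			message[:content]
		end
	end
end

chat = ChatSketch.new(
	model: "llama3",
	message: {role: "assistant", content: "Hello!"}
)

chat.model    # => "llama3"
chat.response # => "Hello!"
```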
Instance Method Summary
- #error ⇒ Object
- #eval_count ⇒ Integer | nil
  The number of tokens in the response.
- #eval_duration ⇒ Integer | nil
  The time spent generating the response, in nanoseconds.
- #load_duration ⇒ Integer | nil
  The time spent loading the model, in nanoseconds.
- #message ⇒ Object
- #model ⇒ Object
- #prompt_eval_count ⇒ Integer | nil
  The number of tokens in the prompt.
- #prompt_eval_duration ⇒ Integer | nil
  The time spent evaluating the prompt, in nanoseconds.
- #response ⇒ Object
- #token_count ⇒ Integer
  The sum of prompt and response token counts.
- #tool_calls ⇒ Object
- #total_duration ⇒ Integer | nil
  The total time spent generating the response, in nanoseconds.
Instance Method Details
#error ⇒ Object

	# File 'lib/async/ollama/chat.rb', line 26
	def error
		self.value[:error]
	end
#eval_count ⇒ Integer | nil

Returns the number of tokens in the response.

	# File 'lib/async/ollama/chat.rb', line 63
	def eval_count
		self.value[:eval_count]
	end
#eval_duration ⇒ Integer | nil

Returns the time spent generating the response, in nanoseconds.

	# File 'lib/async/ollama/chat.rb', line 68
	def eval_duration
		self.value[:eval_duration]
	end
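Because `eval_count` and `eval_duration` together describe generation speed, a tokens-per-second figure can be derived from them. A sketch (the helper name and sample numbers are illustrative, not part of the library):

```ruby
# Derive generation throughput from a chat response's timing fields.
# eval_duration is reported in nanoseconds, so convert to seconds first.
NANOSECONDS_PER_SECOND = 1_000_000_000.0

def tokens_per_second(eval_count, eval_duration)
	# Both fields may be nil (e.g. on an error response).
	return nil unless eval_count && eval_duration

	eval_count / (eval_duration / NANOSECONDS_PER_SECOND)
end

tokens_per_second(128, 2_000_000_000) # => 64.0
```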
#load_duration ⇒ Integer | nil

Returns the time spent loading the model, in nanoseconds.

	# File 'lib/async/ollama/chat.rb', line 48
	def load_duration
		self.value[:load_duration]
	end
#message ⇒ Object

	# File 'lib/async/ollama/chat.rb', line 14
	def message
		self.value[:message]
	end
#model ⇒ Object

	# File 'lib/async/ollama/chat.rb', line 38
	def model
		self.value[:model]
	end
#prompt_eval_count ⇒ Integer | nil

Returns the number of tokens in the prompt.

	# File 'lib/async/ollama/chat.rb', line 53
	def prompt_eval_count
		self.value[:prompt_eval_count]
	end
#prompt_eval_duration ⇒ Integer | nil

Returns the time spent evaluating the prompt, in nanoseconds.

	# File 'lib/async/ollama/chat.rb', line 58
	def prompt_eval_duration
		self.value[:prompt_eval_duration]
	end
#response ⇒ Object

	# File 'lib/async/ollama/chat.rb', line 19
	def response
		if message = self.message
			message[:content]
		end
	end
#token_count ⇒ Integer

Returns the sum of prompt and response token counts.

	# File 'lib/async/ollama/chat.rb', line 73
	def token_count
		count = 0

		if prompt_eval_count = self.prompt_eval_count
			count += prompt_eval_count
		end

		if eval_count = self.eval_count
			count += eval_count
		end

		return count
	end
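The summation above tolerates either count being missing (each field may be nil on an incomplete or error response). The same nil-tolerant behavior can be sketched as a standalone helper (illustrative only, not the library's code):

```ruby
# Sum prompt and response token counts, ignoring any that are missing.
# Mirrors the semantics of Chat#token_count: a nil count contributes nothing.
def token_count(prompt_eval_count, eval_count)
	[prompt_eval_count, eval_count].compact.sum
end

token_count(26, 103)  # => 129
token_count(nil, 103) # => 103
```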
#tool_calls ⇒ Object

	# File 'lib/async/ollama/chat.rb', line 31
	def tool_calls
		if message = self.message
			message[:tool_calls]
		end
	end
#total_duration ⇒ Integer | nil

Returns the total time spent generating the response, in nanoseconds.

	# File 'lib/async/ollama/chat.rb', line 43
	def total_duration
		self.value[:total_duration]
	end