Class: Async::Ollama::Chat
- Inherits: Object
- Defined in: lib/async/ollama/chat.rb
Overview
Represents a chat response from the Ollama API, including message content, model, and timing information.
Instance Method Summary
- #error ⇒ Object
- #eval_count ⇒ Integer | nil
  The number of tokens in the response.
- #eval_duration ⇒ Integer | nil
  The time spent generating the response, in nanoseconds.
- #load_duration ⇒ Integer | nil
  The time spent loading the model, in nanoseconds.
- #message ⇒ Object
- #model ⇒ Object
- #prompt_eval_count ⇒ Integer | nil
  The number of tokens in the prompt.
- #prompt_eval_duration ⇒ Integer | nil
  The time spent evaluating the prompt, in nanoseconds.
- #response ⇒ Object
- #token_count ⇒ Integer
  The sum of prompt and response token counts.
- #tool_calls ⇒ Object
- #total_duration ⇒ Integer | nil
  The total time spent generating the response, in nanoseconds.
Instance Method Details
#error ⇒ Object
# File 'lib/async/ollama/chat.rb', line 25

def error
  self.value[:error]
end
#eval_count ⇒ Integer | nil
Returns the number of tokens in the response.
# File 'lib/async/ollama/chat.rb', line 62

def eval_count
  self.value[:eval_count]
end
#eval_duration ⇒ Integer | nil
Returns the time spent generating the response, in nanoseconds.
# File 'lib/async/ollama/chat.rb', line 67

def eval_duration
  self.value[:eval_duration]
end
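Because #eval_count is a token count and #eval_duration is in nanoseconds, a generation speed can be derived from the two. A minimal sketch; the helper name and plain-integer arguments are illustrative, not part of the gem:

```ruby
NANOSECONDS_PER_SECOND = 1_000_000_000.0

# Derive tokens generated per second from the two chat-response metrics.
# Both accessors may return nil (e.g. on error), so guard for that.
def tokens_per_second(eval_count, eval_duration)
  return nil unless eval_count && eval_duration && eval_duration > 0

  eval_count / (eval_duration / NANOSECONDS_PER_SECOND)
end

tokens_per_second(120, 2_000_000_000) # => 60.0
tokens_per_second(nil, nil)           # => nil
```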
#load_duration ⇒ Integer | nil
Returns the time spent loading the model, in nanoseconds.
# File 'lib/async/ollama/chat.rb', line 47

def load_duration
  self.value[:load_duration]
end
#message ⇒ Object
# File 'lib/async/ollama/chat.rb', line 14

def message
  self.value[:message]
end
#model ⇒ Object
# File 'lib/async/ollama/chat.rb', line 37

def model
  self.value[:model]
end
#prompt_eval_count ⇒ Integer | nil
Returns the number of tokens in the prompt.
# File 'lib/async/ollama/chat.rb', line 52

def prompt_eval_count
  self.value[:prompt_eval_count]
end
#prompt_eval_duration ⇒ Integer | nil
Returns the time spent evaluating the prompt, in nanoseconds.
# File 'lib/async/ollama/chat.rb', line 57

def prompt_eval_duration
  self.value[:prompt_eval_duration]
end
#response ⇒ Object
# File 'lib/async/ollama/chat.rb', line 18

def response
  if message = self.message
    message[:content]
  end
end
#token_count ⇒ Integer
Returns the sum of prompt and response token counts.
# File 'lib/async/ollama/chat.rb', line 72

def token_count
  count = 0

  if prompt_eval_count = self.prompt_eval_count
    count += prompt_eval_count
  end

  if eval_count = self.eval_count
    count += eval_count
  end

  return count
end
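The summation above treats a missing count as zero, since either field may be absent from the server's response. The same logic can be sketched against a plain hash standing in for the response value (the hash and helper below are illustrative only):

```ruby
# Mirror of the #token_count logic, operating on a plain hash in place
# of the chat response value. A nil count contributes nothing.
def token_count(value)
  count = 0
  count += value[:prompt_eval_count] if value[:prompt_eval_count]
  count += value[:eval_count] if value[:eval_count]
  count
end

token_count({prompt_eval_count: 26, eval_count: 298}) # => 324
token_count({eval_count: 298})                        # => 298
token_count({})                                       # => 0
```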
#tool_calls ⇒ Object
# File 'lib/async/ollama/chat.rb', line 30

def tool_calls
  if message = self.message
    message[:tool_calls]
  end
end
#total_duration ⇒ Integer | nil
Returns the total time spent generating the response, in nanoseconds.
# File 'lib/async/ollama/chat.rb', line 42

def total_duration
  self.value[:total_duration]
end
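All four duration accessors return nanoseconds (or nil), so a small conversion helps when logging human-readable timings. A hedged sketch, not part of the gem:

```ruby
# Convert a nanosecond duration (as returned by #total_duration,
# #load_duration, #prompt_eval_duration, or #eval_duration) to seconds.
# Passes nil through unchanged.
def nanoseconds_to_seconds(duration)
  duration && duration / 1_000_000_000.0
end

nanoseconds_to_seconds(2_500_000_000) # => 2.5
nanoseconds_to_seconds(nil)           # => nil
```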