Class: LLM::Stream
Inherits: Object
Defined in: lib/llm/stream.rb,
            lib/llm/stream/queue.rb
Overview
Note: The `on_*` callbacks run inline with the streaming parser. They therefore block streaming progress and should generally return as quickly as possible.
The LLM::Stream class provides the callback interface for streamed model output in llm.rb.
A stream object can be an instance of LLM::Stream or a subclass that overrides the callbacks it needs. For basic streaming, llm.rb also accepts any object that implements `#<<`. #queue provides a small helper for collecting asynchronous tool work started from a callback, and #tool_not_found returns an in-band tool error when a streamed tool cannot be resolved.
The most common callback is #on_content, which also maps to #<<. Providers may also call #on_reasoning_content and #on_tool_call when that data is available.
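Since llm.rb accepts any object implementing `#<<` for basic streaming, a minimal stream can be sketched without subclassing LLM::Stream at all. The `BufferStream` class below is a hypothetical stand-in (not part of llm.rb) that collects streamed chunks into a string:

```ruby
# Minimal sketch of a stream object: llm.rb accepts any object that
# implements #<<, so no gem dependency is needed to illustrate the protocol.
class BufferStream
  attr_reader :text

  def initialize
    @text = +"" # mutable string buffer for streamed chunks
  end

  # Called once per chunk of visible assistant output.
  def <<(content)
    @text << content
    nil
  end
end

stream = BufferStream.new
stream << "Hello, "
stream << "world"
stream.text # => "Hello, world"
```

A subclass of LLM::Stream would instead override `#on_content`, which is aliased to `#<<`.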
Defined Under Namespace
Classes: Queue
Public callbacks
- #on_content(content) ⇒ nil (also: #<<)
  Called when visible assistant output is streamed.
- #on_reasoning_content(content) ⇒ nil
  Called when reasoning output is streamed separately from visible content.
- #on_tool_call(tool, error) ⇒ nil
  Called when a streamed tool call has been fully constructed.
- #on_tool_return(tool, result) ⇒ nil
  Called when queued streamed tool work returns.
Error handlers
- #tool_not_found(tool) ⇒ LLM::Function::Return
  Returns a function return describing a streamed tool that could not be resolved.
Instance Method Summary
- #queue ⇒ LLM::Stream::Queue
  Returns a lazily-initialized queue for tool results or spawned work.
- #wait(strategy) ⇒ Array<LLM::Function::Return>
  Waits for queued tool work to finish and returns function results.
Instance Method Details
#on_content(content) ⇒ nil Also known as: <<
Called when visible assistant output is streamed.
# File 'lib/llm/stream.rb', line 48

def on_content(content)
  nil
end
#on_reasoning_content(content) ⇒ nil
Called when reasoning output is streamed separately from visible content.
# File 'lib/llm/stream.rb', line 58

def on_reasoning_content(content)
  nil
end
#on_tool_call(tool, error) ⇒ nil
A stream implementation may start tool execution here, for example by pushing `tool.spawn(:thread)`, `tool.spawn(:fiber)`, or `tool.spawn(:task)` onto #queue. When a streamed tool cannot be resolved, `error` is passed as an LLM::Function::Return. It can be sent back to the model, allowing the tool-call path to recover and the session to continue. Tool resolution depends on Function.registry, which covers LLM::Tool subclasses (including MCP tools) but not functions defined with LLM.function.
Called when a streamed tool call has been fully constructed.
# File 'lib/llm/stream.rb', line 78

def on_tool_call(tool, error)
  nil
end
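The queue-based pattern described above can be sketched without the gem. This is a hedged illustration, not llm.rb's implementation: `SketchStream` is hypothetical, a stdlib Queue stands in for LLM::Stream::Queue, and a plain Thread stands in for `tool.spawn(:thread)`:

```ruby
# Sketch of starting tool work from #on_tool_call. In a real LLM::Stream
# subclass, tool.spawn(:thread) would be pushed onto #queue instead of a
# bare Thread; the overall flow is the same.
class SketchStream
  def queue
    @queue ||= ::Queue.new # stand-in for LLM::Stream::Queue
  end

  # `tool` is any object responding to #call; `error` is the in-band
  # function return passed when the tool could not be resolved.
  def on_tool_call(tool, error)
    if error
      queue << error # send the error result back to the model later
    else
      queue << Thread.new { tool.call } # stand-in for tool.spawn(:thread)
    end
    nil
  end
end

stream = SketchStream.new
fake_tool = -> { 6 * 7 } # hypothetical tool
stream.on_tool_call(fake_tool, nil)
result = stream.queue.pop.value # join the thread and read its result
result # => 42
```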
#on_tool_return(tool, result) ⇒ nil
This callback runs when #wait resolves work that was queued from #on_tool_call, such as values returned by `tool.spawn(:thread)`, `tool.spawn(:fiber)`, or `tool.spawn(:task)`.
Called when queued streamed tool work returns.
# File 'lib/llm/stream.rb', line 92

def on_tool_return(tool, result)
  nil
end
#queue ⇒ LLM::Stream::Queue
Returns a lazily-initialized queue for tool results or spawned work.
# File 'lib/llm/stream.rb', line 28

def queue
  @queue ||= Queue.new(self)
end
#tool_not_found(tool) ⇒ LLM::Function::Return
This is mainly useful as a fallback from #on_tool_call. It should be uncommon in normal use, since streamed tool callbacks only run for tools already defined in the context.
Returns a function return describing a streamed tool that could not be resolved.
# File 'lib/llm/stream.rb', line 108

def tool_not_found(tool)
  LLM::Function::Return.new(tool.id, tool.name, {
    error: true,
    type: LLM::NoSuchToolError.name,
    message: "tool not found"
  })
end
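The payload this fallback produces can be sketched with stand-ins. Both Structs below are hypothetical substitutes (the real LLM::Function::Return and the streamed tool object come from llm.rb); the sketch only shows the shape of the error result the model would receive:

```ruby
# Stand-ins for llm.rb types, used only to make the payload visible.
FunctionReturn = Struct.new(:id, :name, :value) # stand-in for LLM::Function::Return
ToolCall       = Struct.new(:id, :name)         # stand-in for the streamed tool

def tool_not_found(tool)
  FunctionReturn.new(tool.id, tool.name, {
    error: true,
    type: "LLM::NoSuchToolError", # LLM::NoSuchToolError.name in llm.rb
    message: "tool not found"
  })
end

ret = tool_not_found(ToolCall.new("call_123", "lookup_weather"))
ret.value[:message] # => "tool not found"
```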
#wait(strategy) ⇒ Array<LLM::Function::Return>
Waits for queued tool work to finish and returns function results.
# File 'lib/llm/stream.rb', line 37

def wait(strategy)
  queue.wait(strategy)
end
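What #wait does for thread-based work can be sketched with a stand-in queue. `SketchQueue` below is hypothetical (the real LLM::Stream::Queue lives in lib/llm/stream/queue.rb); it resolves queued Ruby threads to their return values the way a `:thread` strategy plausibly would:

```ruby
# Hedged sketch of waiting on queued tool work. Real results would be
# LLM::Function::Return objects; symbols stand in here.
class SketchQueue
  def initialize
    @items = []
  end

  def <<(item)
    @items << item
    self
  end

  # Join each queued thread and collect its return value.
  def wait(strategy)
    raise ArgumentError, "sketch only handles :thread" unless strategy == :thread
    @items.map { |item| item.is_a?(Thread) ? item.value : item }
  end
end

q = SketchQueue.new
q << Thread.new { :first_result }
q << Thread.new { :second_result }
results = q.wait(:thread)
results # => [:first_result, :second_result]
```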