Class: LLM::Context
- Inherits: Object
- Includes:
- Deserializer, Serializer
- Defined in:
- lib/llm/context.rb,
lib/llm/context/serializer.rb,
lib/llm/context/deserializer.rb
Overview
LLM::Context is the stateful execution boundary in llm.rb.
It holds the evolving runtime state for an LLM workflow: conversation history, tool calls and returns, schema and streaming configuration, accumulated usage, and request ownership for interruption.
This is broader than prompt context alone. A context is the object that lets one-off prompts, streaming turns, tool execution, persistence, retries, and serialized long-lived workflows all run through the same model.
A context can drive the chat completions API that all providers support or the Responses API on providers that expose it.
Defined Under Namespace
Modules: Deserializer, Serializer
Instance Attribute Summary
-
#compacted ⇒ Boolean
(also: #compacted?)
private
Returns whether the context has been compacted and no later model response has cleared that state.
-
#llm ⇒ LLM::Provider
readonly
Returns a provider.
-
#messages ⇒ LLM::Buffer<LLM::Message>
readonly
Returns the accumulated message history for this context.
-
#mode ⇒ Symbol
readonly
Returns the context mode.
Instance Method Summary
-
#call(target) ⇒ Array<LLM::Function::Return>
Calls a named collection of work through the context.
-
#compactor ⇒ LLM::Compactor
Returns a context compactor. This feature is inspired by the compaction approach developed by General Intelligence Systems in Brute (github.com/general-intelligence-systems/brute).
-
#compactor=(compactor) ⇒ LLM::Compactor, ...
Sets a context compactor or compactor config.
-
#context_window ⇒ Integer
Returns the model’s context window.
-
#cost ⇒ LLM::Cost
Returns an approximate cost for a given context based on both the provider and the model.
-
#functions ⇒ Array<LLM::Function>
Returns an array of functions that can be called.
-
#guard ⇒ #call?
Returns a guard, if configured.
-
#guard=(guard) ⇒ #call, ...
Sets a guard or guard config.
-
#image_url(url) ⇒ LLM::Object
Recognize an object as a URL to an image.
-
#initialize(llm, params = {}) ⇒ Context
constructor
A new instance of Context.
- #inspect ⇒ String
-
#interrupt! ⇒ nil
(also: #cancel!)
Interrupt the active request, if any.
-
#local_file(path) ⇒ LLM::Object
Recognize an object as a local file.
-
#model ⇒ String
Returns the model a Context is actively using.
-
#params ⇒ Hash
Returns the default params for this context.
-
#prompt(&b) ⇒ LLM::Prompt
(also: #build_prompt)
Build a role-aware prompt for a single request.
-
#remote_file(res) ⇒ LLM::Object
Recognize an object as a remote file.
-
#respond(prompt, params = {}) ⇒ LLM::Response
Interact with the context via the responses API.
-
#returns ⇒ Array<LLM::Function::Return>
Returns tool returns accumulated in this context.
-
#serialize(path:) ⇒ void
(also: #save)
Save the current context state.
-
#spawn(function, strategy) ⇒ LLM::Function::Return, LLM::Function::Task
Spawns a function through the context.
-
#talk(prompt, params = {}) ⇒ LLM::Response
(also: #chat)
Interact with the context via the chat completions API.
- #to_h ⇒ Hash
- #to_json ⇒ String
-
#tracer ⇒ LLM::Tracer
Returns an LLM tracer.
-
#transformer ⇒ #call?
Returns a transformer, if configured.
-
#transformer=(transformer) ⇒ #call?
Sets a transformer.
-
#usage ⇒ LLM::Object
Returns token usage accumulated in this context.
-
#wait(strategy) ⇒ Array<LLM::Function::Return>
Waits for queued tool work to finish.
Methods included from Deserializer
#deserialize, #deserialize_message
Constructor Details
#initialize(llm, params = {}) ⇒ Context
Returns a new instance of Context.
# File 'lib/llm/context.rb', line 84

def initialize(llm, params = {})
  @llm = llm
  @mode = params.delete(:mode) || :completions
  @compactor = params.delete(:compactor)
  @guard = params.delete(:guard)
  @transformer = params.delete(:transformer)
  tools = [*params.delete(:tools), *load_skills(params.delete(:skills))]
  @params = {model: llm.default_model, schema: nil}.compact.merge!(params)
  @params[:tools] = tools unless tools.empty?
  @messages = LLM::Buffer.new(llm)
end
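The constructor separates control options from request defaults: `:mode`, `:compactor`, `:guard`, `:transformer`, `:tools` and `:skills` are removed from the params hash, and whatever remains becomes the context's default request params. A minimal plain-Ruby sketch of that splitting pattern, with made-up model names (only the option names come from the listing above):

```ruby
# Plain-Ruby sketch of the option-splitting pattern used by the
# constructor above: control options are deleted out of the params
# hash, and the remainder becomes the default request params.
params = {mode: :responses, guard: true, model: "example-model", temperature: 0.2}

mode  = params.delete(:mode) || :completions  # control option with a default
guard = params.delete(:guard)                 # control option

# What is left merges over the provider defaults; a caller-supplied
# :model overrides the default, and nil-valued defaults are compacted away.
defaults = {model: "provider-default", schema: nil}.compact.merge!(params)

mode      # => :responses
defaults  # => {model: "example-model", temperature: 0.2}
```

The upshot is that `:mode` and `:guard` never leak into the request params sent to the provider.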
Instance Attribute Details
#compacted ⇒ Boolean Also known as: compacted?
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
Returns whether the context has been compacted and no later model response has cleared that state.
# File 'lib/llm/context.rb', line 120

def compacted
  @compacted
end
#llm ⇒ LLM::Provider (readonly)
Returns a provider
# File 'lib/llm/context.rb', line 59

def llm
  @llm
end
#messages ⇒ LLM::Buffer<LLM::Message> (readonly)
Returns the accumulated message history for this context
# File 'lib/llm/context.rb', line 54

def messages
  @messages
end
#mode ⇒ Symbol (readonly)
Returns the context mode
# File 'lib/llm/context.rb', line 64

def mode
  @mode
end
Instance Method Details
#call(target) ⇒ Array<LLM::Function::Return>
Calls a named collection of work through the context.
This currently supports `:functions`, forwarding to `functions.call`.
# File 'lib/llm/context.rb', line 268

def call(target)
  case target
  when :functions then guarded_returns || functions.call
  else raise ArgumentError, "Unknown target: #{target.inspect}. Expected :functions"
  end
end
#compactor ⇒ LLM::Compactor
Returns a context compactor. This feature is inspired by the compaction approach developed by General Intelligence Systems in Brute (github.com/general-intelligence-systems/brute).
# File 'lib/llm/context.rb', line 102

def compactor
  @compactor = LLM::Compactor.new(self, @compactor || {}) unless LLM::Compactor === @compactor
  @compactor
end
#compactor=(compactor) ⇒ LLM::Compactor, ...
Sets a context compactor or compactor config
# File 'lib/llm/context.rb', line 111

def compactor=(compactor)
  @compactor = compactor
end
#context_window ⇒ Integer
This method returns 0 when the provider or model can’t be found within Registry.
Returns the model’s context window. The context window is the maximum amount of input and output tokens a model can consider in a single request.
# File 'lib/llm/context.rb', line 478

def context_window
  LLM
    .registry_for(llm)
    .limit(model:)
    .context
rescue LLM::NoSuchModelError, LLM::NoSuchRegistryError
  0
end
#cost ⇒ LLM::Cost
Returns an approximate cost for a given context based on both the provider and the model.
# File 'lib/llm/context.rb', line 461

def cost
  cost = LLM.registry_for(llm).cost(model:)
  input_cost = (cost.input.to_f / 1_000_000.0) * usage.input_tokens
  output_cost = (cost.output.to_f / 1_000_000.0) * usage.output_tokens
  LLM::Cost.new(input_cost, output_cost)
rescue LLM::NoSuchModelError, LLM::NoSuchRegistryError
  LLM::Cost.new(0, 0)
end
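The method converts per-million-token prices into dollars. A self-contained sketch of that arithmetic, with made-up prices and token counts (no real provider rates are implied):

```ruby
# Sketch of the per-million-token cost arithmetic used above.
# Prices and token counts are illustrative, not real rates.
input_price  = 2.5   # USD per 1M input tokens (assumed)
output_price = 10.0  # USD per 1M output tokens (assumed)
input_tokens  = 12_000
output_tokens = 3_400

input_cost  = (input_price / 1_000_000.0) * input_tokens    # => 0.03
output_cost = (output_price / 1_000_000.0) * output_tokens  # => 0.034
```

When the provider or model is unknown to the registry, the method degrades to a zero cost rather than raising.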
#functions ⇒ Array<LLM::Function>
Returns an array of functions that can be called
# File 'lib/llm/context.rb', line 247

def functions
  return_ids = returns.map(&:id)
  @messages
    .select(&:assistant?)
    .flat_map do |msg|
      fns = msg.functions.select { _1.pending? && !return_ids.include?(_1.id) }
      fns.each do |fn|
        fn.tracer = tracer
        fn.model = msg.model
      end
    end.extend(LLM::Function::Array)
end
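The filter above keeps only pending tool calls whose id has no matching return yet. That selection can be sketched in isolation with stand-in structs (`Call` and the ids are illustrative, not llm.rb classes):

```ruby
# Stand-in sketch of the pending-call filter in #functions: a call
# survives only if it is still pending and no return with the same
# id has been recorded in the context.
Call = Struct.new(:id, :pending) do
  def pending?
    pending
  end
end

return_ids = ["a"]  # ids of tool returns already in the context
calls = [Call.new("a", true), Call.new("b", true), Call.new("c", false)]

pending = calls.select { _1.pending? && !return_ids.include?(_1.id) }
pending.map(&:id)  # => ["b"]
```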
#guard ⇒ #call?
Returns a guard, if configured.
Guards are context-level supervisors for agentic execution. A guard can inspect the runtime state and decide whether pending tool work should be blocked before the context keeps looping.
The built-in implementation is LLM::LoopGuard, which detects repeated tool-call patterns and turns them into in-band LLM::GuardError tool returns.
# File 'lib/llm/context.rb', line 135

def guard
  return if @guard.nil? || @guard == false
  @guard = LLM::LoopGuard.new if @guard == true
  @guard = LLM::LoopGuard.new(@guard) if Hash === @guard
  @guard
end
#guard=(guard) ⇒ #call, ...
Sets a guard or guard config.
Guards must implement `call(ctx)` and return either `nil` or a warning string. Returning a warning tells the context to block pending tool work with guarded tool errors instead of continuing the loop.
# File 'lib/llm/context.rb', line 151

def guard=(guard)
  @guard = guard
end
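Any object that responds to `call(ctx)` can serve as a guard, per the contract above. A hypothetical `TurnLimitGuard` (illustrative, not part of llm.rb) that blocks tool work after a fixed number of assistant turns:

```ruby
# Hypothetical guard honouring the documented contract: #call(ctx)
# returns nil to continue, or a warning string to block pending tool
# work. TurnLimitGuard is illustrative and not part of llm.rb.
class TurnLimitGuard
  def initialize(limit)
    @limit = limit
  end

  def call(ctx)
    turns = ctx.messages.count(&:assistant?)
    "turn limit (#{@limit}) reached" if turns >= @limit
  end
end
```

It would be installed with `ctx.guard = TurnLimitGuard.new(10)`; the built-in `LLM::LoopGuard` follows the same shape.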
#image_url(url) ⇒ LLM::Object
Recognize an object as a URL to an image.
# File 'lib/llm/context.rb', line 387

def image_url(url)
  LLM::Object.from(value: url, kind: :image_url)
end
#inspect ⇒ String
# File 'lib/llm/context.rb', line 238

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@llm=#{@llm.class}, @mode=#{@mode.inspect}, @params=#{@params.inspect}, " \
  "@messages=#{@messages.inspect}>"
end
#interrupt! ⇒ nil Also known as: cancel!
Interrupt the active request, if any. This is inspired by Go’s context cancellation model.
# File 'lib/llm/context.rb', line 333

def interrupt!
  pending = functions.to_a
  llm.interrupt!(@owner)
  queue&.interrupt!
  return if pending.empty?
  pending.each(&:interrupt!)
  returns = pending.map { _1.cancel(reason: "function call cancelled") }
  @messages << LLM::Message.new(@llm.tool_role, returns)
  nil
end
#local_file(path) ⇒ LLM::Object
Recognize an object as a local file.
# File 'lib/llm/context.rb', line 397

def local_file(path)
  LLM::Object.from(value: LLM.File(path), kind: :local_file)
end
#model ⇒ String
Returns the model a Context is actively using
# File 'lib/llm/context.rb', line 421

def model
  @messages.find(&:assistant?)&.model || @params[:model]
end
#params ⇒ Hash
Returns the default params for this context
# File 'lib/llm/context.rb', line 69

def params
  @params.dup
end
#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt
Build a role-aware prompt for a single request.
Prefer this method over #build_prompt. The older method name is kept for backward compatibility.
# File 'lib/llm/context.rb', line 376

def prompt(&b)
  LLM::Prompt.new(@llm, &b)
end
#remote_file(res) ⇒ LLM::Object
Recognize an object as a remote file.
# File 'lib/llm/context.rb', line 407

def remote_file(res)
  LLM::Object.from(value: res, kind: :remote_file)
end
#respond(prompt, params = {}) ⇒ LLM::Response
Not all LLM providers support this API
Interact with the context via the responses API. This method immediately sends a request to the LLM and returns the response.
# File 'lib/llm/context.rb', line 220

def respond(prompt, params = {})
  @owner = @llm.request_owner
  compactor.compact!(prompt) if compactor.compact?(prompt)
  params = @params.merge(params)
  prompt, params = transform(prompt, params)
  bind!(params[:stream], params[:model], params[:tools])
  res_id = params[:store] == false ? nil : @messages.find(&:assistant?)&.response&.response_id
  params = params.merge(previous_response_id: res_id, input: @messages.to_a).compact
  res = @llm.responses.create(prompt, params)
  self.compacted = false
  role = params[:role] || @llm.user_role
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end
#returns ⇒ Array<LLM::Function::Return>
Returns tool returns accumulated in this context
# File 'lib/llm/context.rb', line 293

def returns
  @messages
    .select(&:tool_return?)
    .flat_map do |msg|
      LLM::Function::Return === msg.content ? [msg.content] : [*msg.content].grep(LLM::Function::Return)
    end
end
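The method flattens tool-return messages whose content may be a single return object or an array mixing returns with other values. The `grep`-based extraction can be sketched with stand-ins (`Ret` is illustrative, not an llm.rb class):

```ruby
# Stand-in sketch of the flattening in #returns: content is either a
# single return object or an array from which returns are grepped out
# by case equality.
Ret = Struct.new(:id)

contents = [Ret.new("a"), [Ret.new("b"), "noise", Ret.new("c")]]
returns = contents.flat_map do |content|
  Ret === content ? [content] : [*content].grep(Ret)
end
returns.map(&:id)  # => ["a", "b", "c"]
```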
#serialize(path:) ⇒ void Also known as: save
This method returns an undefined value.
Save the current context state
# File 'lib/llm/context.rb', line 452

def serialize(path:)
  ::File.binwrite path, LLM.json.dump(to_h)
end
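Serialization writes the `#to_h` payload as JSON. A self-contained round-trip sketch of the documented shape, where the message hash is an illustrative stand-in and the stdlib `JSON` module stands in for `LLM.json`:

```ruby
require "json"
require "tmpdir"

# Round-trip sketch of the serialized shape documented under #to_h:
# schema_version, model, compacted flag, and serialized messages.
state = {
  schema_version: 1,
  model: "example-model",
  compacted: false,
  messages: [{role: "user", content: "hello"}]  # illustrative stand-in
}

path = File.join(Dir.mktmpdir, "context.json")
File.binwrite(path, JSON.dump(state))      # what #serialize(path:) does
restored = JSON.parse(File.binread(path))  # a deserializer rebuilds messages from this

restored["schema_version"]  # => 1
```

This is what allows long-lived workflows to be saved to disk and resumed later via the Deserializer module.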
#spawn(function, strategy) ⇒ LLM::Function::Return, LLM::Function::Task
Spawns a function through the context.
When a guard is configured, this method can return an in-band guarded tool error instead of spawning work.
# File 'lib/llm/context.rb', line 284

def spawn(function, strategy)
  warning = guard&.call(self)
  return guarded_return_for(function, warning) if warning
  function.spawn(strategy)
end
#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat
Interact with the context via the chat completions API. This method immediately sends a request to the LLM and returns the response.
# File 'lib/llm/context.rb', line 189

def talk(prompt, params = {})
  return respond(prompt, params) if mode == :responses
  @owner = @llm.request_owner
  compactor.compact!(prompt) if compactor.compact?(prompt)
  params = params.merge(messages: @messages.to_a)
  params = @params.merge(params)
  prompt, params = transform(prompt, params)
  bind!(params[:stream], params[:model], params[:tools])
  res = @llm.complete(prompt, params)
  self.compacted = false
  role = params[:role] || @llm.user_role
  role = @llm.tool_role if params[:role].nil? && [*prompt].grep(LLM::Function::Return).any?
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end
#to_h ⇒ Hash
# File 'lib/llm/context.rb', line 427

def to_h
  {
    schema_version: 1,
    model:,
    compacted:,
    messages: @messages.map { serialize_message(_1) }
  }
end
#to_json ⇒ String
# File 'lib/llm/context.rb', line 438

def to_json(...)
  to_h.to_json(...)
end
#tracer ⇒ LLM::Tracer
Returns an LLM tracer
# File 'lib/llm/context.rb', line 414

def tracer
  @llm.tracer
end
#transformer ⇒ #call?
Returns a transformer, if configured.
Transformers can rewrite outgoing prompts and params before a request is sent to the provider.
# File 'lib/llm/context.rb', line 162

def transformer
  @transformer
end
#transformer=(transformer) ⇒ #call?
Sets a transformer.
Transformers must implement `call(ctx, prompt, params)` and return a two-element array of `[prompt, params]`.
# File 'lib/llm/context.rb', line 174

def transformer=(transformer)
  @transformer = transformer
end
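Any callable with the documented `call(ctx, prompt, params)` signature works; a lambda is enough. A hypothetical redacting transformer (illustrative, not part of llm.rb):

```ruby
# Hypothetical transformer honouring the documented contract:
# call(ctx, prompt, params) must return [prompt, params]. This one
# redacts 16-digit numbers from string prompts before the request
# is sent to the provider.
redact = lambda do |ctx, prompt, params|
  prompt = prompt.gsub(/\b\d{16}\b/, "[REDACTED]") if String === prompt
  [prompt, params]
end

prompt, params = redact.call(nil, "card: 4242424242424242", {model: "m"})
prompt  # => "card: [REDACTED]"
```

It would be installed with `ctx.transformer = redact`.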
#usage ⇒ LLM::Object
Returns token usage accumulated in this context
# File 'lib/llm/context.rb', line 348

def usage
  if usage = @messages.find(&:assistant?)&.usage
    LLM::Object.from(
      input_tokens: usage.input_tokens || 0,
      output_tokens: usage.output_tokens || 0,
      reasoning_tokens: usage.reasoning_tokens || 0,
      total_tokens: usage.total_tokens || 0
    )
  else
    ZERO_USAGE
  end
end
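Providers may omit some counters, so each one is normalized to 0. A sketch of that normalization with an illustrative stand-in for a provider usage object:

```ruby
# Sketch of the nil-to-zero normalization in #usage above. Usage is
# an illustrative stand-in for a provider-reported usage object.
Usage = Struct.new(:input_tokens, :output_tokens, :reasoning_tokens, :total_tokens)
raw = Usage.new(120, nil, nil, 120)  # provider omitted some counters

normalized = {
  input_tokens: raw.input_tokens || 0,
  output_tokens: raw.output_tokens || 0,
  reasoning_tokens: raw.reasoning_tokens || 0,
  total_tokens: raw.total_tokens || 0
}
normalized[:output_tokens]  # => 0
```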
#wait(strategy) ⇒ Array<LLM::Function::Return>
Waits for queued tool work to finish.
This prefers queued streamed tool work when the configured stream exposes a non-empty queue. Otherwise it falls back to waiting on the context’s pending functions directly.
# File 'lib/llm/context.rb', line 315

def wait(strategy)
  if LLM::Stream === stream && !stream.queue.empty?
    @queue = stream.queue
    @queue.wait(strategy)
  else
    return guarded_returns if guarded_returns
    @queue = functions.spawn(strategy)
    @queue.wait
  end
ensure
  @queue = nil
  @stream = nil
end