Class: LLM::Agent

Inherits: Object
Defined in:
lib/llm/agent.rb

Overview

LLM::Agent provides a class-level DSL for defining reusable, preconfigured assistants with defaults for model, tools, schema, and instructions.

It wraps the same stateful runtime surface as LLM::Context: message history, usage, persistence, streaming parameters, and provider-backed requests still flow through an underlying context. The defining behavior of an agent is that it automatically resolves pending tool calls for you during `talk` and `respond`, instead of leaving tool loops to the caller.

Notes:

  • Instructions are injected once unless a system message is already present.

  • An agent automatically executes tool loops (unlike LLM::Context).

  • The automatic tool loop enables the wrapped context’s `guard` by default. The built-in LLM::LoopGuard detects repeated tool-call patterns and blocks stuck execution before more tool work is queued.

  • The default tool attempt budget is `25`. After that, the agent sends advisory tool errors back through the model and keeps the loop in-band. Set `tool_attempts: nil` to disable that advisory behavior.

  • Tool loop execution can be configured with `concurrency :call`, `:thread`, `:task`, `:fiber`, `:fork`, or `:ractor`.

Examples:

class SystemAdmin < LLM::Agent
  model "gpt-4.1-nano"
  instructions "You are a Linux system admin"
  tools Shell
  schema Result
end

llm = LLM.openai(key: ENV["KEY"])
agent = SystemAdmin.new(llm)
agent.talk("Run 'date'")

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(llm, params = {}) ⇒ Agent

Returns a new instance of Agent.

Parameters:

  • llm (LLM::Provider)

    A provider

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Options Hash (params):

  • :model (String)

    Defaults to the provider’s default model

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil

  • :skills (Array<String>, nil)

    Defaults to nil

  • :schema (#to_json, nil)

    Defaults to nil

  • :tracer (LLM::Tracer, Proc, nil)

    Optional tracer override for this agent instance

  • :concurrency (Symbol, Array<Symbol>, nil)

    Defaults to the agent class concurrency



# File 'lib/llm/agent.rb', line 173

def initialize(llm, params = {})
  defaults = {model: self.class.model, tools: self.class.tools, skills: self.class.skills, schema: self.class.schema}.compact
  @concurrency = params.delete(:concurrency) || self.class.concurrency
  @llm = llm
  tracer = params.key?(:tracer) ? params.delete(:tracer) : self.class.tracer
  stream = params.key?(:stream) ? params.delete(:stream) : self.class.stream
  @tracer = resolve_option(tracer) unless tracer.nil?
  params[:stream] = resolve_option(stream) unless stream.nil?
  @ctx = LLM::Context.new(llm, defaults.merge({guard: true}).merge(params))
end
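The merge order above means class-level defaults are applied first, the implicit `guard: true` is layered on, and per-instance params win last. A standalone sketch of that precedence (the values are illustrative, not the gem's real defaults):

```ruby
# Mirrors the merge order in #initialize: class defaults, then the
# implicit guard, then caller-supplied params (illustrative values).
defaults = {model: "gpt-4.1-nano", tools: nil, schema: nil}.compact
params   = {model: "gpt-4.1"}  # per-instance override
resolved = defaults.merge({guard: true}).merge(params)
resolved # => {model: "gpt-4.1", guard: true}
```

Note that `compact` drops unset class defaults, so a `nil` class-level tool list never shadows a caller-supplied one.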

Instance Attribute Details

#llm ⇒ LLM::Provider (readonly)

Returns a provider

Returns:



# File 'lib/llm/agent.rb', line 43

def llm
  @llm
end

Class Method Details

.concurrency(concurrency = nil) ⇒ Symbol, ...

Set or get the tool execution concurrency.

Parameters:

  • concurrency (Symbol, Array<Symbol>, nil) (defaults to: nil)

    Controls how pending tool loops are executed:

    • `:call`: sequential calls

    • `:thread`: concurrent threads

    • `:task`: concurrent async tasks

    • `:fiber`: concurrent scheduler-backed fibers

    • `:fork`: forked child processes

    • `:ractor`: concurrent Ruby ractors for class-based tools; MCP tools are not supported, and this mode is especially useful for CPU-bound tool work

    Usually pass a single strategy. Arrays are only for advanced mixed-work cases and are not needed for normal queued stream tool loops.

Returns:

  • (Symbol, Array<Symbol>, nil)


# File 'lib/llm/agent.rb', line 115

def self.concurrency(concurrency = nil)
  return @concurrency if concurrency.nil?
  @concurrency = concurrency
end
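The body above is the get-or-set accessor pattern shared by the class-level DSL (concurrency, model, instructions, and schema all have this shape). A standalone sketch, with `ExampleAgent` as an illustrative stand-in:

```ruby
# Get-or-set accessor: calling with no argument reads the stored value,
# calling with an argument writes it.
class ExampleAgent
  def self.concurrency(value = nil)
    return @concurrency if value.nil?  # no argument: getter
    @concurrency = value               # argument given: setter
  end
end

ExampleAgent.concurrency(:thread)
ExampleAgent.concurrency # => :thread
```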

.instructions(instructions = nil) ⇒ String?

Set or get the default instructions

Parameters:

  • instructions (String, nil) (defaults to: nil)

    The system instructions

Returns:

  • (String, nil)

    Returns the current instructions when no argument is provided



# File 'lib/llm/agent.rb', line 95

def self.instructions(instructions = nil)
  return @instructions if instructions.nil?
  @instructions = instructions
end

.model(model = nil) ⇒ String?

Set or get the default model

Parameters:

  • model (String, nil) (defaults to: nil)

    The model identifier

Returns:

  • (String, nil)

    Returns the current model when no argument is provided



# File 'lib/llm/agent.rb', line 51

def self.model(model = nil)
  return @model if model.nil?
  @model = model
end

.schema(schema = nil) ⇒ #to_json?

Set or get the default schema

Parameters:

  • schema (#to_json, nil) (defaults to: nil)

    The schema

Returns:

  • (#to_json, nil)

    Returns the current schema when no argument is provided



# File 'lib/llm/agent.rb', line 84

def self.schema(schema = nil)
  return @schema if schema.nil?
  @schema = schema
end

.skills(*skills) ⇒ Array<String>?

Set or get the default skills

Parameters:

  • skills (Array<String>, nil)

    One or more skill directories

Returns:

  • (Array<String>, nil)

    Returns the current skills when no argument is provided



# File 'lib/llm/agent.rb', line 73

def self.skills(*skills)
  return @skills if skills.empty?
  @skills = skills.flatten
end
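Because the method splats and flattens its arguments, skill directories can be passed as separate arguments or as a single array. A standalone sketch (`ExampleAgent` is illustrative):

```ruby
# Splat-and-flatten accessor, matching the shape of .skills and .tools.
class ExampleAgent
  def self.skills(*skills)
    return @skills if skills.empty?
    @skills = skills.flatten
  end
end

ExampleAgent.skills("skills/linux", "skills/network")
ExampleAgent.skills # => ["skills/linux", "skills/network"]
ExampleAgent.skills(["skills/linux"]) # a single array works the same way
```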

.stream(stream = nil, &block) ⇒ Object, ...

Set or get the default stream.

When a block is provided, it is stored and evaluated lazily against the agent instance during initialization so it can build a fresh stream for each agent.

Examples:

class Agent < LLM::Agent
  stream { MyStream.new }
end

Parameters:

  • stream (Object, Proc, nil) (defaults to: nil)

Yield Returns:

Returns:



# File 'lib/llm/agent.rb', line 155

def self.stream(stream = nil, &block)
  return @stream if stream.nil? && !block
  @stream = block || stream
end
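A standalone sketch of the lazy-block behavior described above; the resolve step in `initialize` here is illustrative of what happens during agent initialization, not the gem's exact code:

```ruby
require "stringio"

# Stores either a ready-made stream or a block; a block is evaluated
# per instance so each agent gets a fresh stream object.
class ExampleAgent
  def self.stream(stream = nil, &block)
    return @stream if stream.nil? && !block
    @stream = block || stream
  end

  attr_reader :stream

  def initialize
    stored = self.class.stream
    @stream = stored.respond_to?(:call) ? instance_exec(&stored) : stored
  end
end

ExampleAgent.stream { StringIO.new }
a, b = ExampleAgent.new, ExampleAgent.new
a.stream.equal?(b.stream) # => false, each instance gets a fresh stream
```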

.tools(*tools) ⇒ Array<LLM::Function>

Set or get the default tools

Parameters:

Returns:

  • (Array<LLM::Function>)

    Returns the current tools when no argument is provided



# File 'lib/llm/agent.rb', line 62

def self.tools(*tools)
  return @tools || [] if tools.empty?
  @tools = tools.flatten
end

.tracer(tracer = nil, &block) ⇒ LLM::Tracer, ...

Set or get the default tracer.

When a block is provided, it is stored and evaluated lazily against the agent instance during initialization so it can build a tracer from the resolved provider.

Examples:

class Agent < LLM::Agent
  tracer { LLM::Tracer::Logger.new(llm, io: $stdout) }
end

Parameters:

Yield Returns:

Returns:



# File 'lib/llm/agent.rb', line 135

def self.tracer(tracer = nil, &block)
  return @tracer if tracer.nil? && !block
  @tracer = block || tracer
end

Instance Method Details

#concurrency ⇒ Symbol, ...

Returns the configured tool execution concurrency.

Returns:

  • (Symbol, Array<Symbol>, nil)


# File 'lib/llm/agent.rb', line 332

def concurrency
  @concurrency
end

#context_window ⇒ Integer

Returns:

  • (Integer)

See Also:



# File 'lib/llm/agent.rb', line 346

def context_window
  @ctx.context_window
end

#cost ⇒ LLM::Cost

Returns:

See Also:



# File 'lib/llm/agent.rb', line 339

def cost
  @ctx.cost
end

#deserialize(**kw) ⇒ Object Also known as: restore



# File 'lib/llm/agent.rb', line 381

def deserialize(**kw)
  @ctx.deserialize(**kw)
end

#functions ⇒ Array<LLM::Function>

Returns:



# File 'lib/llm/agent.rb', line 234

def functions
  @tracer ? @llm.with_tracer(@tracer) { @ctx.functions } : @ctx.functions
end

#image_url(url) ⇒ LLM::Object

Returns a tagged object

Parameters:

  • url (String)

    The URL

Returns:



# File 'lib/llm/agent.rb', line 280

def image_url(url)
  @ctx.image_url(url)
end

#inspect ⇒ String

Returns:

  • (String)


# File 'lib/llm/agent.rb', line 365

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@llm=#{@llm.class}, @mode=#{mode.inspect}, @messages=#{messages.inspect}>"
end

#interrupt! ⇒ nil Also known as: cancel!

Interrupt the active request, if any.

Returns:

  • (nil)


# File 'lib/llm/agent.rb', line 261

def interrupt!
  @ctx.interrupt!
end

#local_file(path) ⇒ LLM::Object

Returns a tagged object

Parameters:

  • path (String)

    The path

Returns:



# File 'lib/llm/agent.rb', line 289

def local_file(path)
  @ctx.local_file(path)
end

#messages ⇒ LLM::Buffer<LLM::Message>



# File 'lib/llm/agent.rb', line 228

def messages
  @ctx.messages
end

#mode ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/agent.rb', line 325

def mode
  @ctx.mode
end

#model ⇒ String

Returns the model an Agent is actively using

Returns:

  • (String)


# File 'lib/llm/agent.rb', line 319

def model
  @ctx.model
end

#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt

Parameters:

  • b (Proc)

    A block that composes messages. If it takes one argument, it receives the prompt object. Otherwise it runs in prompt context.

Returns:

See Also:



# File 'lib/llm/agent.rb', line 270

def prompt(&b)
  @ctx.prompt(&b)
end
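The arity-based dispatch described for the block can be sketched in isolation (`ExamplePrompt` is illustrative, not the gem's prompt class):

```ruby
# One-argument blocks receive the prompt object; zero-argument blocks
# are evaluated in the prompt's own context via instance_exec.
class ExamplePrompt
  def messages
    @messages ||= []
  end

  def compose(&b)
    b.arity == 1 ? b.call(self) : instance_exec(&b)
    self
  end
end

ExamplePrompt.new.compose { |p| p.messages << "hi" }.messages # => ["hi"]
ExamplePrompt.new.compose { messages << "hey" }.messages      # => ["hey"]
```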

#remote_file(res) ⇒ LLM::Object

Returns a tagged object

Parameters:

Returns:



# File 'lib/llm/agent.rb', line 298

def remote_file(res)
  @ctx.remote_file(res)
end

#respond(prompt, params = {}) ⇒ LLM::Response

Note:

Not all LLM providers support this API

Maintain a conversation via the responses API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
agent = LLM::Agent.new(llm)
res = agent.respond("What is the capital of France?")
puts res.output_text

Parameters:

  • params (Hash) (defaults to: {})

    The params passed to the provider, including optional :stream, :tools, :schema etc.

  • prompt (String)

    The input prompt to be completed

Options Hash (params):

  • :tool_attempts (Integer)

    The maximum number of tool call iterations before the agent sends in-band advisory tool errors back through the model (default 25). Set to `nil` to disable advisory tool-limit returns.

Returns:

  • (LLM::Response)

    Returns the LLM’s response for this turn.



# File 'lib/llm/agent.rb', line 222

def respond(prompt, params = {})
  run_loop(:respond, prompt, params)
end

#returns ⇒ Array<LLM::Function::Return>

Returns:

See Also:



# File 'lib/llm/agent.rb', line 241

def returns
  @ctx.returns
end

#serialize(**kw) ⇒ void Also known as: save

This method returns an undefined value.



# File 'lib/llm/agent.rb', line 373

def serialize(**kw)
  @ctx.serialize(**kw)
end

#stream ⇒ LLM::Stream, ...

Returns a stream object, or nil

Returns:

  • (LLM::Stream, #<<, nil)

    Returns a stream object, or nil



# File 'lib/llm/agent.rb', line 312

def stream
  @ctx.stream
end

#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat

Maintain a conversation via the chat completions API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
agent = LLM::Agent.new(llm)
response = agent.talk("Hello, what is your name?")
puts response.choices[0].content

Parameters:

  • params (Hash) (defaults to: {})

    The params passed to the provider, including optional :stream, :tools, :schema etc.

  • prompt (String)

    The input prompt to be completed

Options Hash (params):

  • :tool_attempts (Integer)

    The maximum number of tool call iterations before the agent sends in-band advisory tool errors back through the model (default 25). Set to `nil` to disable advisory tool-limit returns.

Returns:

  • (LLM::Response)

    Returns the LLM’s response for this turn.



# File 'lib/llm/agent.rb', line 200

def talk(prompt, params = {})
  run_loop(:talk, prompt, params)
end
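The difference from LLM::Context is the automatic loop behind run_loop: talk keeps resolving pending tool calls until none remain or the attempt budget triggers advisory handling. A simplified, self-contained sketch of that control flow (not the gem's implementation):

```ruby
# Simplified tool-loop control flow: resolve pending tool calls until
# none remain, or until the attempt budget is exhausted.
def run_tool_loop(tool_attempts: 25)
  attempts = 0
  loop do
    pending = yield(attempts)  # pending tool calls for this iteration
    return :done if pending.empty?
    attempts += 1
    return :advisory_error if tool_attempts && attempts > tool_attempts
  end
end

run_tool_loop { |n| n < 3 ? [:tool_call] : [] }                   # => :done
run_tool_loop(tool_attempts: 2) { |n| n < 3 ? [:tool_call] : [] } # => :advisory_error
```

Passing `tool_attempts: nil` in this sketch disables the budget check entirely, mirroring the documented `tool_attempts: nil` behavior.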

#to_h ⇒ Hash

Returns:

  • (Hash)

See Also:



# File 'lib/llm/agent.rb', line 353

def to_h
  @ctx.to_h
end

#to_json ⇒ String

Returns:

  • (String)


# File 'lib/llm/agent.rb', line 359

def to_json(...)
  to_h.to_json(...)
end

#tracer ⇒ LLM::Tracer

Returns an LLM tracer

Returns:



# File 'lib/llm/agent.rb', line 305

def tracer
  @tracer || @ctx.tracer
end

#usage ⇒ LLM::Object

Returns:



# File 'lib/llm/agent.rb', line 254

def usage
  @ctx.usage
end

#wait ⇒ Array<LLM::Function::Return>

Returns:

See Also:



# File 'lib/llm/agent.rb', line 248

def wait(...)
  @tracer ? @llm.with_tracer(@tracer) { @ctx.wait(...) } : @ctx.wait(...)
end