Class: LLM::Agent

Inherits:
  Object
Defined in:
lib/llm/agent.rb

Overview

LLM::Agent provides a class-level DSL for defining reusable, preconfigured assistants with defaults for model, tools, schema, and instructions.

It wraps the same stateful runtime surface as LLM::Context: message history, usage, persistence, streaming parameters, and provider-backed requests still flow through an underlying context. The defining behavior of an agent is that it automatically resolves pending tool calls for you during `talk` and `respond`, instead of leaving tool loops to the caller.

Notes:

  • Instructions are injected once unless a system message is already present.

  • An agent automatically executes tool loops (unlike LLM::Context).

  • The automatic tool loop enables the wrapped context’s `guard` by default. The built-in LLM::LoopGuard detects repeated tool-call patterns and blocks stuck execution before more tool work is queued.

  • The default tool attempt budget is `25`. After that, the agent sends advisory tool errors back through the model and keeps the loop in-band. Set `tool_attempts: nil` to disable that advisory behavior.

  • Tool loop execution can be configured with `concurrency :call`, `:thread`, `:task`, `:fiber`, `:ractor`, or a list of queued task types such as `[:thread, :ractor]`.

Examples:

class SystemAdmin < LLM::Agent
  model "gpt-4.1-nano"
  instructions "You are a Linux system admin"
  tools Shell
  schema Result
end

llm = LLM.openai(key: ENV["KEY"])
agent = SystemAdmin.new(llm)
agent.talk("Run 'date'")
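The notes above can be combined in a single subclass. A sketch, assuming a `Shell` tool class like the one in the example above:

```ruby
class SystemAdmin < LLM::Agent
  model "gpt-4.1-nano"
  instructions "You are a Linux system admin"
  tools Shell                    # hypothetical tool class
  concurrency :thread            # run pending tool calls on threads
end

llm = LLM.openai(key: ENV["KEY"])
agent = SystemAdmin.new(llm)
# Cap the tool loop at 10 attempts for this turn
agent.talk("Summarize disk usage", tool_attempts: 10)
```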

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(llm, params = {}) ⇒ Agent

Returns a new instance of Agent.

Parameters:

  • llm (LLM::Provider)

    A provider

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not just those listed here.

Options Hash (params):

  • :model (String)

    Defaults to the provider’s default model

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil

  • :skills (Array<String>, nil)

    Defaults to nil

  • :schema (#to_json, nil)

    Defaults to nil

  • :concurrency (Symbol, Array<Symbol>, nil)

    Defaults to the agent class concurrency



# File 'lib/llm/agent.rb', line 153

def initialize(llm, params = {})
  defaults = {model: self.class.model, tools: self.class.tools, skills: self.class.skills, schema: self.class.schema}.compact
  @concurrency = params.delete(:concurrency) || self.class.concurrency
  @llm = llm
  @tracer = resolve_option(self.class.tracer) unless self.class.tracer.nil?
  @ctx = LLM::Context.new(llm, defaults.merge({guard: true}).merge(params))
end
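Per-instance parameters override the class-level DSL defaults, as the merge in `initialize` shows. A sketch (the `temperature` option is an assumption about what the provider accepts):

```ruby
llm = LLM.openai(key: ENV["KEY"])
# Options given here win over SystemAdmin's class-level defaults
agent = SystemAdmin.new(llm, model: "gpt-4.1-mini",
                        concurrency: :task, temperature: 0.2)
```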

Instance Attribute Details

#llm ⇒ LLM::Provider (readonly)

Returns a provider

Returns:



# File 'lib/llm/agent.rb', line 44

def llm
  @llm
end

Class Method Details

.concurrency(concurrency = nil) ⇒ Symbol, ...

Set or get the tool execution concurrency.

Parameters:

  • concurrency (Symbol, Array<Symbol>, nil) (defaults to: nil)

    Controls how pending tool loops are executed:

    • `:call`: sequential calls

    • `:thread`: concurrent threads

    • `:task`: concurrent async tasks

    • `:fiber`: concurrent raw fibers

    • `:ractor`: concurrent Ruby ractors for class-based tools; MCP tools are not supported, and this mode is especially useful for CPU-bound tool work

    • `[:thread, :ractor]`: the concurrency strategies to wait on, in the given order. This is useful for mixed tool sets or when work may have been spawned with more than one concurrency strategy.

Returns:

  • (Symbol, Array<Symbol>, nil)


# File 'lib/llm/agent.rb', line 116

def self.concurrency(concurrency = nil)
  return @concurrency if concurrency.nil?
  @concurrency = concurrency
end
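For example, a subclass whose tools run under more than one strategy might declare an ordered list to wait on (the tool classes named here are hypothetical):

```ruby
class MixedAgent < LLM::Agent
  model "gpt-4.1-nano"
  tools Checksum, WebSearch      # hypothetical tool classes
  # Wait on thread-spawned work first, then ractor-spawned work
  concurrency [:thread, :ractor]
end
```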

.instructions(instructions = nil) ⇒ String?

Set or get the default instructions

Parameters:

  • instructions (String, nil) (defaults to: nil)

    The system instructions

Returns:

  • (String, nil)

    Returns the current instructions when no argument is provided



# File 'lib/llm/agent.rb', line 96

def self.instructions(instructions = nil)
  return @instructions if instructions.nil?
  @instructions = instructions
end

.model(model = nil) ⇒ String?

Set or get the default model

Parameters:

  • model (String, nil) (defaults to: nil)

    The model identifier

Returns:

  • (String, nil)

    Returns the current model when no argument is provided



# File 'lib/llm/agent.rb', line 52

def self.model(model = nil)
  return @model if model.nil?
  @model = model
end

.schema(schema = nil) ⇒ #to_json?

Set or get the default schema

Parameters:

  • schema (#to_json, nil) (defaults to: nil)

    The schema

Returns:

  • (#to_json, nil)

    Returns the current schema when no argument is provided



# File 'lib/llm/agent.rb', line 85

def self.schema(schema = nil)
  return @schema if schema.nil?
  @schema = schema
end

.skills(*skills) ⇒ Array<String>?

Set or get the default skills

Parameters:

  • skills (Array<String>, nil)

    One or more skill directories

Returns:

  • (Array<String>, nil)

    Returns the current skills when no argument is provided



# File 'lib/llm/agent.rb', line 74

def self.skills(*skills)
  return @skills if skills.empty?
  @skills = skills.flatten
end

.tools(*tools) ⇒ Array<LLM::Function>

Set or get the default tools

Parameters:

Returns:

  • (Array<LLM::Function>)

    Returns the current tools when no argument is provided



# File 'lib/llm/agent.rb', line 63

def self.tools(*tools)
  return @tools || [] if tools.empty?
  @tools = tools.flatten
end

.tracer(tracer = nil, &block) ⇒ LLM::Tracer, ...

Set or get the default tracer.

When a block is provided, it is stored and evaluated lazily against the agent instance during initialization so it can build a tracer from the resolved provider.

Examples:

class Agent < LLM::Agent
  tracer { LLM::Tracer::Logger.new(llm, io: $stdout) }
end

Parameters:

Yield Returns:

Returns:



# File 'lib/llm/agent.rb', line 136

def self.tracer(tracer = nil, &block)
  return @tracer if tracer.nil? && !block
  @tracer = block || tracer
end

Instance Method Details

#call(...) ⇒ Object

Returns:

See Also:



# File 'lib/llm/agent.rb', line 225

def call(...)
  @tracer ? @llm.with_tracer(@tracer) { @ctx.call(...) } : @ctx.call(...)
end

#concurrency ⇒ Symbol, ...

Returns the configured tool execution concurrency.

Returns:

  • (Symbol, Array<Symbol>, nil)


# File 'lib/llm/agent.rb', line 309

def concurrency
  @concurrency
end

#context_window ⇒ Integer

Returns:

  • (Integer)

See Also:



# File 'lib/llm/agent.rb', line 323

def context_window
  @ctx.context_window
end

#cost ⇒ LLM::Cost

Returns:

See Also:



# File 'lib/llm/agent.rb', line 316

def cost
  @ctx.cost
end

#deserialize(**kw) ⇒ Object Also known as: restore



# File 'lib/llm/agent.rb', line 358

def deserialize(**kw)
  @ctx.deserialize(**kw)
end

#functions ⇒ Array<LLM::Function>

Returns:



# File 'lib/llm/agent.rb', line 211

def functions
  @tracer ? @llm.with_tracer(@tracer) { @ctx.functions } : @ctx.functions
end

#image_url(url) ⇒ LLM::Object

Returns a tagged object

Parameters:

  • url (String)

    The URL

Returns:



# File 'lib/llm/agent.rb', line 264

def image_url(url)
  @ctx.image_url(url)
end

#inspect ⇒ String

Returns:

  • (String)


# File 'lib/llm/agent.rb', line 342

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@llm=#{@llm.class}, @mode=#{mode.inspect}, @messages=#{messages.inspect}>"
end

#interrupt! ⇒ nil Also known as: cancel!

Interrupt the active request, if any.

Returns:

  • (nil)


# File 'lib/llm/agent.rb', line 245

def interrupt!
  @ctx.interrupt!
end

#local_file(path) ⇒ LLM::Object

Returns a tagged object

Parameters:

  • path (String)

    The path

Returns:



# File 'lib/llm/agent.rb', line 273

def local_file(path)
  @ctx.local_file(path)
end
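Tagged objects are meant to be sent along with a prompt; a sketch, assuming `talk` accepts an array of prompt parts:

```ruby
agent.talk([
  "What does this image show?",
  agent.image_url("https://example.com/cat.png")
])

agent.talk([
  "Summarize this document",
  agent.local_file("/tmp/report.txt")
])
```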

#messages ⇒ LLM::Buffer<LLM::Message>



# File 'lib/llm/agent.rb', line 205

def messages
  @ctx.messages
end

#mode ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/agent.rb', line 302

def mode
  @ctx.mode
end

#model ⇒ String

Returns the model the agent is actively using

Returns:

  • (String)


# File 'lib/llm/agent.rb', line 296

def model
  @ctx.model
end

#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt

Parameters:

  • b (Proc)

    A block that composes messages. If it takes one argument, it receives the prompt object. Otherwise it runs in prompt context.

Returns:

See Also:



# File 'lib/llm/agent.rb', line 254

def prompt(&b)
  @ctx.prompt(&b)
end
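A sketch of both block forms described above (the `user` message helper inside the prompt context is an assumption):

```ruby
# No block argument: runs in prompt context
p1 = agent.prompt do
  user "Describe this file"
  user local_file("/tmp/report.txt")
end

# One block argument: receives the prompt object
p2 = agent.prompt do |p|
  p.user "Describe this file"
end
```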

#remote_file(res) ⇒ LLM::Object

Returns a tagged object

Parameters:

Returns:



# File 'lib/llm/agent.rb', line 282

def remote_file(res)
  @ctx.remote_file(res)
end

#respond(prompt, params = {}) ⇒ LLM::Response

Note:

Not all LLM providers support this API

Maintain a conversation via the responses API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
agent = LLM::Agent.new(llm)
res = agent.respond("What is the capital of France?")
puts res.output_text

Parameters:

  • params (Hash) (defaults to: {})

    The params passed to the provider, including optional :stream, :tools, :schema etc.

  • prompt (String)

    The input prompt to be completed

Options Hash (params):

  • :tool_attempts (Integer)

    The maximum number of tool call iterations before the agent sends in-band advisory tool errors back through the model (default 25). Set to `nil` to disable advisory tool-limit returns.

Returns:

  • (LLM::Response)

    Returns the LLM’s response for this turn.



# File 'lib/llm/agent.rb', line 199

def respond(prompt, params = {})
  run_loop(:respond, prompt, params)
end

#returns ⇒ Array<LLM::Function::Return>

Returns:

See Also:



# File 'lib/llm/agent.rb', line 218

def returns
  @ctx.returns
end

#serialize(**kw) ⇒ void Also known as: save

This method returns an undefined value.



# File 'lib/llm/agent.rb', line 350

def serialize(**kw)
  @ctx.serialize(**kw)
end

#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat

Maintain a conversation via the chat completions API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
agent = LLM::Agent.new(llm)
response = agent.talk("Hello, what is your name?")
puts response.choices[0].content

Parameters:

  • params (Hash) (defaults to: {})

    The params passed to the provider, including optional :stream, :tools, :schema etc.

  • prompt (String)

    The input prompt to be completed

Options Hash (params):

  • :tool_attempts (Integer)

    The maximum number of tool call iterations before the agent sends in-band advisory tool errors back through the model (default 25). Set to `nil` to disable advisory tool-limit returns.

Returns:

  • (LLM::Response)

    Returns the LLM’s response for this turn.



# File 'lib/llm/agent.rb', line 177

def talk(prompt, params = {})
  run_loop(:talk, prompt, params)
end
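Since `params` are forwarded to the provider, a streaming turn can be sketched like this (the `stream: $stdout` form is an assumption about how the provider accepts the option):

```ruby
llm = LLM.openai(key: ENV["KEY"])
agent = SystemAdmin.new(llm)
# Tokens stream to stdout while the tool loop resolves automatically
res = agent.talk("Tail the system log", stream: $stdout)
puts res.choices[0].content
```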

#to_h ⇒ Hash

Returns:

  • (Hash)

See Also:



# File 'lib/llm/agent.rb', line 330

def to_h
  @ctx.to_h
end

#to_json(...) ⇒ String

Returns:

  • (String)


# File 'lib/llm/agent.rb', line 336

def to_json(...)
  to_h.to_json(...)
end

#tracer ⇒ LLM::Tracer

Returns an LLM tracer

Returns:



# File 'lib/llm/agent.rb', line 289

def tracer
  @tracer || @ctx.tracer
end

#usage ⇒ LLM::Object

Returns:



# File 'lib/llm/agent.rb', line 238

def usage
  @ctx.usage
end

#wait(...) ⇒ Array<LLM::Function::Return>

Returns:

See Also:



# File 'lib/llm/agent.rb', line 232

def wait(...)
  @tracer ? @llm.with_tracer(@tracer) { @ctx.wait(...) } : @ctx.wait(...)
end