Class: LLM::Agent

Inherits:
  Object
Defined in:
lib/llm/agent.rb

Overview

LLM::Agent provides a class-level DSL for defining reusable, preconfigured assistants with defaults for model, tools, schema, and instructions.

It wraps the same stateful runtime surface as LLM::Context: message history, usage, persistence, streaming parameters, and provider-backed requests still flow through an underlying context. The defining behavior of an agent is that it automatically resolves pending tool calls for you during `talk` and `respond`, instead of leaving tool loops to the caller.

Notes:

  • Instructions are injected only on the first request.

  • An agent automatically executes tool loops (unlike LLM::Context).

  • Tool loop execution can be configured with `concurrency :call`, `:thread`, `:task`, `:fiber`, `:ractor`, or a list of queued task types such as `[:thread, :ractor]`.

Examples:

class SystemAdmin < LLM::Agent
  model "gpt-4.1-nano"
  instructions "You are a Linux system admin"
  tools Shell
  schema Result
end

llm = LLM.openai(key: ENV["KEY"])
agent = SystemAdmin.new(llm)
agent.talk("Run 'date'")

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(llm, params = {}) ⇒ Agent

Returns a new instance of Agent.

Parameters:

  • llm (LLM::Provider)

    A provider

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not just those listed here.

Options Hash (params):

  • :model (String)

    Defaults to the provider’s default model

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil

  • :schema (#to_json, nil)

    Defaults to nil

  • :concurrency (Symbol, Array<Symbol>, nil)

    Defaults to the agent class concurrency



# File 'lib/llm/agent.rb', line 115

def initialize(llm, params = {})
  defaults = {model: self.class.model, tools: self.class.tools, schema: self.class.schema}.compact
  @concurrency = params.delete(:concurrency) || self.class.concurrency
  @llm = llm
  @ctx = LLM::Context.new(llm, defaults.merge(params))
end
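As the source above shows, class-level defaults are collected first (with unset entries dropped by `compact`), then per-instance params are merged on top, so params passed to `new` always win. A small sketch of that precedence, using purely illustrative values:

```ruby
# Class-level defaults; compact drops keys the class never configured.
defaults = {model: "gpt-4.1-nano", tools: nil, schema: nil}.compact
# Per-instance params passed to Agent.new; these override the defaults.
params = {model: "gpt-4o-mini", temperature: 0.2}

merged = defaults.merge(params)
merged # => {model: "gpt-4o-mini", temperature: 0.2}
```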

Instance Attribute Details

#llm ⇒ LLM::Provider (readonly)

Returns a provider

Returns:



# File 'lib/llm/agent.rb', line 38

def llm
  @llm
end

Class Method Details

.concurrency(concurrency = nil) ⇒ Symbol, ...

Set or get the tool execution concurrency.

Parameters:

  • concurrency (Symbol, Array<Symbol>, nil) (defaults to: nil)

    Controls how pending tool loops are executed:

    • `:call`: sequential calls

    • `:thread`: concurrent threads

    • `:task`: concurrent async tasks

    • `:fiber`: concurrent raw fibers

    • `:ractor`: concurrent Ruby ractors for class-based tools; MCP tools are not supported, and this mode is especially useful for CPU-bound tool work

    • `[:thread, :ractor]`: the possible concurrency strategies to wait on, in the given order. This is useful for mixed tool sets or when work may have been spawned with more than one concurrency strategy.

Returns:

  • (Symbol, Array<Symbol>, nil)


# File 'lib/llm/agent.rb', line 99

def self.concurrency(concurrency = nil)
  return @concurrency if concurrency.nil?
  @concurrency = concurrency
end
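The get-or-set pattern above is shared by `.model`, `.instructions`, and `.schema`: calling the method with no argument reads a class-level instance variable, and calling it with an argument writes it. A minimal standalone sketch (`MiniAgent` and `Admin` are hypothetical stand-ins):

```ruby
# Minimal stand-in for the get-or-set class DSL used by .concurrency,
# .model, .instructions and .schema.
class MiniAgent
  def self.concurrency(concurrency = nil)
    return @concurrency if concurrency.nil?
    @concurrency = concurrency
  end
end

class Admin < MiniAgent
  concurrency :thread
end

Admin.concurrency     # => :thread
MiniAgent.concurrency # => nil (class-level ivars belong to each class)
```

Note that class-level instance variables are per-class, so setting `concurrency` on a subclass does not touch the parent.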

.instructions(instructions = nil) ⇒ String?

Set or get the default instructions

Parameters:

  • instructions (String, nil) (defaults to: nil)

    The system instructions

Returns:

  • (String, nil)

    Returns the current instructions when no argument is provided



# File 'lib/llm/agent.rb', line 79

def self.instructions(instructions = nil)
  return @instructions if instructions.nil?
  @instructions = instructions
end

.model(model = nil) ⇒ String?

Set or get the default model

Parameters:

  • model (String, nil) (defaults to: nil)

    The model identifier

Returns:

  • (String, nil)

    Returns the current model when no argument is provided



# File 'lib/llm/agent.rb', line 46

def self.model(model = nil)
  return @model if model.nil?
  @model = model
end

.schema(schema = nil) ⇒ #to_json?

Set or get the default schema

Parameters:

  • schema (#to_json, nil) (defaults to: nil)

    The schema

Returns:

  • (#to_json, nil)

    Returns the current schema when no argument is provided



# File 'lib/llm/agent.rb', line 68

def self.schema(schema = nil)
  return @schema if schema.nil?
  @schema = schema
end

.tools(*tools) ⇒ Array<LLM::Function>

Set or get the default tools

Parameters:

  • tools (Array<LLM::Function>)

    The tools to set

Returns:

  • (Array<LLM::Function>)

    Returns the current tools when no argument is provided



# File 'lib/llm/agent.rb', line 57

def self.tools(*tools)
  return @tools || [] if tools.empty?
  @tools = tools.flatten
end
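`.tools` differs slightly from the other class-level accessors: it takes a splat, flattens its arguments, and defaults to an empty array. A standalone sketch (`MiniAgent`, `A`, and `B` are hypothetical) of how the splat and flatten let callers pass tools individually or as one array:

```ruby
class MiniAgent
  def self.tools(*tools)
    return @tools || [] if tools.empty?
    @tools = tools.flatten
  end
end

# Both spellings produce the same flat list.
class A < MiniAgent; tools :shell, :files; end
class B < MiniAgent; tools [:shell, :files]; end

A.tools         # => [:shell, :files]
B.tools         # => [:shell, :files]
MiniAgent.tools # => []
```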

Instance Method Details

#call ⇒ Object

Returns:

See Also:



# File 'lib/llm/agent.rb', line 194

def call(...)
  @ctx.call(...)
end

#concurrency ⇒ Symbol, ...

Returns the configured tool execution concurrency.

Returns:

  • (Symbol, Array<Symbol>, nil)


# File 'lib/llm/agent.rb', line 278

def concurrency
  @concurrency
end

#context_window ⇒ Integer

Returns:

  • (Integer)

See Also:



# File 'lib/llm/agent.rb', line 292

def context_window
  @ctx.context_window
end

#cost ⇒ LLM::Cost

Returns:

See Also:



# File 'lib/llm/agent.rb', line 285

def cost
  @ctx.cost
end

#deserialize(**kw) ⇒ Object Also known as: restore



# File 'lib/llm/agent.rb', line 327

def deserialize(**kw)
  @ctx.deserialize(**kw)
end

#functions ⇒ Array<LLM::Function>

Returns:



# File 'lib/llm/agent.rb', line 180

def functions
  @ctx.functions
end

#image_url(url) ⇒ LLM::Object

Returns a tagged object

Parameters:

  • url (String)

    The URL

Returns:



# File 'lib/llm/agent.rb', line 233

def image_url(url)
  @ctx.image_url(url)
end

#inspect ⇒ String

Returns:

  • (String)


# File 'lib/llm/agent.rb', line 311

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@llm=#{@llm.class}, @mode=#{mode.inspect}, @messages=#{messages.inspect}>"
end

#interrupt! ⇒ nil Also known as: cancel!

Interrupt the active request, if any.

Returns:

  • (nil)


# File 'lib/llm/agent.rb', line 214

def interrupt!
  @ctx.interrupt!
end

#local_file(path) ⇒ LLM::Object

Returns a tagged object

Parameters:

  • path (String)

    The path

Returns:



# File 'lib/llm/agent.rb', line 242

def local_file(path)
  @ctx.local_file(path)
end

#messages ⇒ LLM::Buffer<LLM::Message>



# File 'lib/llm/agent.rb', line 174

def messages
  @ctx.messages
end

#mode ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/agent.rb', line 271

def mode
  @ctx.mode
end

#model ⇒ String

Returns the model an Agent is actively using

Returns:

  • (String)


# File 'lib/llm/agent.rb', line 265

def model
  @ctx.model
end

#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt

Parameters:

  • b (Proc)

    A block that composes messages. If it takes one argument, it receives the prompt object. Otherwise it runs in prompt context.

Returns:

See Also:



# File 'lib/llm/agent.rb', line 223

def prompt(&b)
  @ctx.prompt(&b)
end
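The block-arity dispatch described above can be sketched with a hypothetical `MiniPrompt`: a block that takes one argument receives the prompt object explicitly, while any other block is evaluated in the prompt's own context via `instance_eval`.

```ruby
class MiniPrompt
  attr_reader :lines

  def initialize
    @lines = []
  end

  def say(text)
    @lines << text
  end

  # Dispatch on block arity, mirroring the behavior documented above.
  def build(&b)
    b.arity == 1 ? b.call(self) : instance_eval(&b)
    self
  end
end

MiniPrompt.new.build { |p| p.say("hi") }.lines # => ["hi"]
MiniPrompt.new.build { say("hello") }.lines    # => ["hello"]
```

Both styles compose the same messages; the explicit-argument form is handy when the block also needs access to `self` from the enclosing scope.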

#remote_file(res) ⇒ LLM::Object

Returns a tagged object

Parameters:

Returns:



# File 'lib/llm/agent.rb', line 251

def remote_file(res)
  @ctx.remote_file(res)
end

#respond(prompt, params = {}) ⇒ LLM::Response

Note:

Not all LLM providers support this API

Maintain a conversation via the responses API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
agent = LLM::Agent.new(llm)
res = agent.respond("What is the capital of France?")
puts res.output_text

Parameters:

  • params (Hash) (defaults to: {})

    The params passed to the provider, including optional :stream, :tools, :schema etc.

  • prompt (String)

    The input prompt to be completed

Options Hash (params):

  • :tool_attempts (Integer)

    The maximum number of tool call iterations (default 10)

Returns:

  • (LLM::Response)

    Returns the LLM’s response for this turn.

Raises:



# File 'lib/llm/agent.rb', line 161

def respond(prompt, params = {})
  max = Integer(params.delete(:tool_attempts) || 10)
  res = @ctx.respond(apply_instructions(prompt), params)
  max.times do
    break if @ctx.functions.empty?
    res = @ctx.respond(call_functions, params)
  end
  raise LLM::ToolLoopError, "pending tool calls remain" unless @ctx.functions.empty?
  res
end
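The loop above keeps resolving pending tool calls until none remain, and raises LLM::ToolLoopError if calls are still pending after `:tool_attempts` rounds. A runnable sketch of that control flow, where `StubContext` and `drive` are hypothetical stand-ins for LLM::Context and the agent:

```ruby
ToolLoopError = Class.new(StandardError)

# Stub context: each respond consumes one "round"; functions reports how
# many tool calls are still pending in the current round.
class StubContext
  def initialize(pending_per_round)
    @pending = pending_per_round # e.g. [2, 1, 0]
  end

  def functions
    Array.new(@pending.first || 0)
  end

  def respond(_prompt)
    @pending.shift
    :response
  end
end

# Mirrors the tool loop in #respond: send the prompt, then keep
# resolving tool calls until none remain or attempts run out.
def drive(ctx, tool_attempts: 10)
  res = ctx.respond("prompt")
  tool_attempts.times do
    break if ctx.functions.empty?
    res = ctx.respond(:tool_returns)
  end
  raise ToolLoopError, "pending tool calls remain" unless ctx.functions.empty?
  res
end

drive(StubContext.new([2, 1, 0])) # => :response
```

A context that never drains its tool calls exhausts `tool_attempts` and raises, which is the backstop against runaway tool loops.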

#returns ⇒ Array<LLM::Function::Return>

Returns:

See Also:



# File 'lib/llm/agent.rb', line 187

def returns
  @ctx.returns
end

#serialize(**kw) ⇒ void Also known as: save

This method returns an undefined value.



# File 'lib/llm/agent.rb', line 319

def serialize(**kw)
  @ctx.serialize(**kw)
end

#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat

Maintain a conversation via the chat completions API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
agent = LLM::Agent.new(llm)
response = agent.talk("Hello, what is your name?")
puts response.choices[0].content

Parameters:

  • params (Hash) (defaults to: {})

    The params passed to the provider, including optional :stream, :tools, :schema etc.

  • prompt (String)

    The input prompt to be completed

Options Hash (params):

  • :tool_attempts (Integer)

    The maximum number of tool call iterations (default 10)

Returns:

  • (LLM::Response)

    Returns the LLM’s response for this turn.

Raises:



# File 'lib/llm/agent.rb', line 135

def talk(prompt, params = {})
  max = Integer(params.delete(:tool_attempts) || 10)
  res = @ctx.talk(apply_instructions(prompt), params)
  max.times do
    break if @ctx.functions.empty?
    res = @ctx.talk(call_functions, params)
  end
  raise LLM::ToolLoopError, "pending tool calls remain" unless @ctx.functions.empty?
  res
end

#to_h ⇒ Hash

Returns:

  • (Hash)

See Also:



# File 'lib/llm/agent.rb', line 299

def to_h
  @ctx.to_h
end

#to_json ⇒ String

Returns:

  • (String)


# File 'lib/llm/agent.rb', line 305

def to_json(...)
  to_h.to_json(...)
end

#tracer ⇒ LLM::Tracer

Returns an LLM tracer

Returns:



# File 'lib/llm/agent.rb', line 258

def tracer
  @ctx.tracer
end

#usage ⇒ LLM::Object

Returns:



# File 'lib/llm/agent.rb', line 207

def usage
  @ctx.usage
end

#wait ⇒ Array<LLM::Function::Return>

Returns:

See Also:



# File 'lib/llm/agent.rb', line 201

def wait(...)
  @ctx.wait(...)
end