Class: RubyLLM::Agents::BaseAgent
- Inherits: Object
- Extended by:
- DSL::Base, DSL::Caching, DSL::Knowledge, DSL::Queryable, DSL::Reliability
- Includes:
- CacheHelper, DSL::Knowledge::InstanceMethods
- Defined in:
- lib/ruby_llm/agents/base_agent.rb
Overview
Base class for all agents using the middleware pipeline architecture.
BaseAgent provides a unified foundation for building LLM-powered agents with configurable middleware for caching, reliability, instrumentation, budgeting, and multi-tenancy.
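The pipeline idea can be sketched in plain Ruby. The middleware classes below are invented for illustration, not the gem's actual middleware; they only show the wrap-and-delegate shape the architecture is built on:

```ruby
# A tiny middleware pipeline: each middleware wraps the next callable and
# may act before and after it runs.
class LoggingMiddleware
  def initialize(app)
    @app = app
  end

  def call(context)
    context[:log] << :before
    result = @app.call(context)
    context[:log] << :after
    result
  end
end

class UpcaseMiddleware
  def initialize(app)
    @app = app
  end

  def call(context)
    @app.call(context).upcase
  end
end

# The innermost callable stands in for the actual LLM request.
core = ->(context) { "echo: #{context[:input]}" }

# Build the chain outside-in, the way a pipeline executor might.
stack = [LoggingMiddleware, UpcaseMiddleware]
pipeline = stack.reverse.reduce(core) { |app, klass| klass.new(app) }

context = {input: "hi", log: []}
pipeline.call(context) # => "ECHO: HI", with context[:log] == [:before, :after]
```

Caching, reliability, budgeting, and tenancy each slot into such a chain as one more wrapper, which is why they can be enabled independently per agent class.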
Direct Known Subclasses
Constant Summary
Constants included from DSL::Base
DSL::Base::PLACEHOLDER_PATTERN
Constants included from DSL::Caching
DSL::Caching::DEFAULT_CACHE_TTL
Constants included from CacheHelper
Instance Attribute Summary
-
#client ⇒ RubyLLM::Chat
readonly
The configured RubyLLM client.
-
#model ⇒ String
readonly
The LLM model being used.
-
#temperature ⇒ Float
readonly
The temperature setting.
-
#tracked_tool_calls ⇒ Object
readonly
The tool calls tracked during agent execution.
Custom Middleware DSL
-
.agent_middleware ⇒ Array<Hash>
Returns custom middleware registered on this agent (including inherited).
-
.use_middleware(middleware_class, before: nil, after: nil) ⇒ void
Registers a custom middleware for this agent class.
Parameter DSL
-
.param(name, required: false, default: nil, type: nil, desc: nil, description: nil) ⇒ void
Defines a parameter for the agent.
-
.params ⇒ Hash{Symbol => Hash}
Returns all defined parameters including inherited ones.
Streaming DSL
-
.streaming(value = nil, overridable: nil) ⇒ Boolean
Enables or returns streaming mode for this agent.
Tools DSL
-
.tools(*tool_classes) ⇒ Array<Class>
Sets or returns the tools available to this agent.
Temperature DSL
-
.temperature(value = nil, overridable: nil) ⇒ Float
Sets or returns the temperature for LLM responses.
Thinking DSL
-
.thinking(effort: nil, budget: nil) ⇒ Hash?
Configures extended thinking/reasoning for this agent.
-
.thinking_config ⇒ Hash?
Returns the thinking configuration.
-
#call {|chunk| ... } ⇒ Object
Executes the agent through the middleware pipeline.
-
#initialize(model: self.class.model, temperature: self.class.temperature, **options) ⇒ BaseAgent
constructor
Creates a new agent instance.
Template Methods (override in subclasses)
-
#assistant_prompt ⇒ String?
Assistant prefill to prime the model’s response.
-
#messages ⇒ Array<Hash>
Conversation history for multi-turn conversations.
-
#process_response(response) ⇒ Object
Post-processes the LLM response.
-
#schema ⇒ RubyLLM::Schema?
Response schema for structured output.
-
#system_prompt ⇒ String?
System prompt for LLM instructions.
-
#user_prompt ⇒ String
User prompt to send to the LLM.
Class Method Summary
-
.agent_type ⇒ Symbol
Returns the agent type for this class.
-
.aliases(*names) ⇒ Array<String>
Declares previous class names for this agent.
-
.all_agent_names ⇒ Array<String>
Returns all known names for this agent (current + aliases).
-
.ask(message, with: nil, **kwargs) {|chunk| ... } ⇒ Result
Executes the agent with a freeform message as the user prompt.
-
.call(**kwargs) {|chunk| ... } ⇒ Object
Factory method to instantiate and execute an agent.
-
.config_summary ⇒ Hash
Returns a summary of the agent’s DSL configuration.
-
.stream(**kwargs) {|chunk| ... } ⇒ Result
Streams agent execution, yielding chunks as they arrive.
Instance Method Summary
-
#agent_cache_key ⇒ String
Generates the cache key for this agent invocation.
-
#cache_key_data ⇒ Hash
Returns data to include in cache key generation.
-
#cache_key_hash ⇒ String
Generates a hash of the cache key data.
-
#resolved_thinking ⇒ Hash?
Resolves thinking configuration.
Methods included from DSL::Base
active_overrides, assistant, assistant_config, cache_prompts, clear_override_cache!, description, overridable?, overridable_fields, prompt, returns, system, system_config, timeout, user, user_config
Methods included from DSL::Reliability
circuit_breaker, circuit_breaker_config, fallback_models, fallback_provider, fallback_providers, non_fallback_errors, on_failure, reliability, reliability_config, reliability_configured?, retries, retries_config, retryable_patterns, total_timeout
Methods included from DSL::Caching
cache, cache_enabled?, cache_for, cache_key_excludes, cache_key_includes, cache_ttl, caching_config
Methods included from DSL::Queryable
cost_by_model, executions, failures, last_run, stats, total_spent, with_params
Methods included from DSL::Knowledge
knowledge_entries, knowledge_path, knows
Methods included from CacheHelper
#cache_delete, #cache_exist?, #cache_increment, #cache_key, #cache_read, #cache_store, #cache_write
Methods included from DSL::Knowledge::InstanceMethods
Constructor Details
#initialize(model: self.class.model, temperature: self.class.temperature, **options) ⇒ BaseAgent
Creates a new agent instance
# File 'lib/ruby_llm/agents/base_agent.rb', line 347

def initialize(model: self.class.model, temperature: self.class.temperature, **options)
  # Merge tracker defaults (shared options like tenant); explicit opts win
  tracker = Thread.current[:ruby_llm_agents_tracker]
  if tracker
    options = tracker.defaults.merge(options)
    @_track_request_id = tracker.request_id
    @_track_tags = tracker.tags
  end

  @ask_message = options.delete(:_ask_message)
  @parent_execution_id = options.delete(:_parent_execution_id)
  @root_execution_id = options.delete(:_root_execution_id)
  @model = model
  @temperature = temperature
  @options = options
  @tracked_tool_calls = []
  @pending_tool_call = nil

  validate_required_params! unless @ask_message
end
Instance Attribute Details
#client ⇒ RubyLLM::Chat (readonly)
Returns the configured RubyLLM client.
# File 'lib/ruby_llm/agents/base_agent.rb', line 340

attr_reader :model, :temperature, :client, :tracked_tool_calls
#model ⇒ String (readonly)
Returns the LLM model being used.
# File 'lib/ruby_llm/agents/base_agent.rb', line 340

def model
  @model
end
#temperature ⇒ Float (readonly)
Returns the temperature setting.
# File 'lib/ruby_llm/agents/base_agent.rb', line 340

attr_reader :model, :temperature, :client, :tracked_tool_calls
#tracked_tool_calls ⇒ Object (readonly)
Returns the tool calls tracked during agent execution.
# File 'lib/ruby_llm/agents/base_agent.rb', line 340

attr_reader :model, :temperature, :client, :tracked_tool_calls
Class Method Details
.agent_middleware ⇒ Array<Hash>
Returns custom middleware registered on this agent (including inherited)
# File 'lib/ruby_llm/agents/base_agent.rb', line 197

def agent_middleware
  @agent_middleware ||
    (superclass.respond_to?(:agent_middleware) ? superclass.agent_middleware : []) ||
    []
end
.agent_type ⇒ Symbol
Returns the agent type for this class
Used by middleware to determine which tracking/budget config to use. Subclasses should override this method.
# File 'lib/ruby_llm/agents/base_agent.rb', line 121

def agent_type
  :conversation
end
.aliases(*names) ⇒ Array<String>
Declares previous class names for this agent
When an agent is renamed, old execution records still reference the previous class name. Declaring aliases allows scopes, analytics, and budget checks to automatically include records from all previous names.
# File 'lib/ruby_llm/agents/base_agent.rb', line 138

def aliases(*names)
  if names.any?
    @agent_aliases = names.map(&:to_s)
  end
  @agent_aliases || []
end
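The alias mechanism can be exercised in isolation. A minimal self-contained sketch (the agent class names are invented for illustration):

```ruby
# Sketch of the alias pattern: old class names are retained so queries
# against historical execution records still match after a rename.
class Agent
  def self.aliases(*names)
    @agent_aliases = names.map(&:to_s) if names.any?
    @agent_aliases || []
  end

  def self.all_agent_names
    [name, *aliases].compact.uniq
  end
end

class SupportAgent < Agent
  aliases "HelpdeskAgent", "TicketAgent"
end

SupportAgent.all_agent_names
# => ["SupportAgent", "HelpdeskAgent", "TicketAgent"]
```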
.all_agent_names ⇒ Array<String>
Returns all known names for this agent (current + aliases)
# File 'lib/ruby_llm/agents/base_agent.rb', line 148

def all_agent_names
  [name, *aliases].compact.uniq
end
.ask(message, with: nil, **kwargs) {|chunk| ... } ⇒ Result
Executes the agent with a freeform message as the user prompt
Designed for conversational agents that define a persona (system + optional assistant prefill) but accept freeform input at runtime. Also works on template agents as an escape hatch to bypass the user template.
# File 'lib/ruby_llm/agents/base_agent.rb', line 104

def ask(message, with: nil, **kwargs, &block)
  opts = kwargs.merge(_ask_message: message)
  opts[:with] = with if with

  if block
    stream(**opts, &block)
  else
    call(**opts)
  end
end
.call(**kwargs) {|chunk| ... } ⇒ Object
Factory method to instantiate and execute an agent
# File 'lib/ruby_llm/agents/base_agent.rb', line 64

def call(**kwargs, &block)
  new(**kwargs).call(&block)
end
.config_summary ⇒ Hash
Returns a summary of the agent’s DSL configuration
Useful for debugging in the Rails console to see how an agent is configured without instantiating it.
# File 'lib/ruby_llm/agents/base_agent.rb', line 160

def config_summary
  {
    agent_type: agent_type,
    model: model,
    temperature: temperature,
    timeout: timeout,
    streaming: streaming,
    system_prompt: system_config,
    user_prompt: user_config,
    assistant_prompt: assistant_config,
    description: description,
    schema: schema&.respond_to?(:name) ? schema.name : schema&.class&.name,
    tools: tools.map { |t| t.respond_to?(:name) ? t.name : t.to_s },
    parameters: params.transform_values { |v| v.slice(:type, :required, :default, :desc) },
    thinking: thinking_config,
    cache_prompts: cache_prompts || nil,
    caching: caching_config,
    reliability: reliability_configured? ? reliability_config : nil
  }.compact
end
.param(name, required: false, default: nil, type: nil, desc: nil, description: nil) ⇒ void
This method returns an undefined value.
Defines a parameter for the agent
Creates an accessor method for the parameter that retrieves values from the options hash, falling back to the default value.
# File 'lib/ruby_llm/agents/base_agent.rb', line 215

def param(name, required: false, default: nil, type: nil, desc: nil, description: nil)
  @params ||= {}
  @params[name] = {required: required, default: default, type: type, desc: desc || description}

  define_method(name) do
    @options[name] || @options[name.to_s] || self.class.params.dig(name, :default)
  end
end
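The accessor-generation pattern above can be demonstrated without the gem. A stripped-down sketch (class names and the reduced `param` signature are illustrative):

```ruby
# Sketch of a param-style DSL: each declaration records metadata on the
# class and defines an instance accessor that reads from the options hash,
# falling back to the declared default.
class Configurable
  def self.params
    parent = superclass.respond_to?(:params) ? superclass.params : {}
    parent.merge(@params || {})
  end

  def self.param(name, default: nil)
    (@params ||= {})[name] = {default: default}
    define_method(name) do
      @options[name] || @options[name.to_s] || self.class.params.dig(name, :default)
    end
  end

  def initialize(**options)
    @options = options
  end
end

class Greeter < Configurable
  param :greeting, default: "hello"
  param :subject, default: "world"
end

Greeter.new(subject: "ruby").greeting # => "hello" (falls back to default)
Greeter.new(subject: "ruby").subject  # => "ruby"  (explicit option wins)
```

Because `params` merges the superclass hash first, subclasses inherit declarations and may shadow them, matching the documented behavior of `.params`.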
.params ⇒ Hash{Symbol => Hash}
Returns all defined parameters including inherited ones
# File 'lib/ruby_llm/agents/base_agent.rb', line 226

def params
  parent = superclass.respond_to?(:params) ? superclass.params : {}
  parent.merge(@params || {})
end
.stream(**kwargs) {|chunk| ... } ⇒ Result
Streams agent execution, yielding chunks as they arrive
# File 'lib/ruby_llm/agents/base_agent.rb', line 74

def stream(**kwargs, &block)
  raise ArgumentError, "Block required for streaming" unless block_given?

  instance = new(**kwargs)
  instance.instance_variable_set(:@force_streaming, true)
  instance.call(&block)
end
.streaming(value = nil, overridable: nil) ⇒ Boolean
Enables or returns streaming mode for this agent
# File 'lib/ruby_llm/agents/base_agent.rb', line 240

def streaming(value = nil, overridable: nil)
  @streaming = value unless value.nil?
  register_overridable(:streaming) if overridable
  base = if @streaming.nil?
    superclass.respond_to?(:streaming) ? superclass.streaming : default_streaming
  else
    @streaming
  end
  apply_override(:streaming, base)
end
.temperature(value = nil, overridable: nil) ⇒ Float
Sets or returns the temperature for LLM responses
# File 'lib/ruby_llm/agents/base_agent.rb', line 274

def temperature(value = nil, overridable: nil)
  @temperature = value if value
  register_overridable(:temperature) if overridable
  base = @temperature ||
    (superclass.respond_to?(:temperature) ? superclass.temperature : default_temperature)
  apply_override(:temperature, base)
end
.thinking(effort: nil, budget: nil) ⇒ Hash?
Configures extended thinking/reasoning for this agent
# File 'lib/ruby_llm/agents/base_agent.rb', line 291

def thinking(effort: nil, budget: nil)
  if effort || budget
    @thinking_config = {}
    @thinking_config[:effort] = effort if effort
    @thinking_config[:budget] = budget if budget
  end
  thinking_config
end
.thinking_config ⇒ Hash?
Returns the thinking configuration
Falls back to global configuration default if not set at class level.
# File 'lib/ruby_llm/agents/base_agent.rb', line 305

def thinking_config
  return @thinking_config if @thinking_config
  return superclass.thinking_config if superclass.respond_to?(:thinking_config) && superclass.thinking_config

  # Fall back to global configuration default
  RubyLLM::Agents.configuration.default_thinking rescue nil
end
.tools(*tool_classes) ⇒ Array<Class>
Sets or returns the tools available to this agent
# File 'lib/ruby_llm/agents/base_agent.rb', line 260

def tools(*tool_classes)
  @tools = tool_classes.flatten if tool_classes.any?
  @tools || (superclass.respond_to?(:tools) ? superclass.tools : [])
end
.use_middleware(middleware_class, before: nil, after: nil) ⇒ void
This method returns an undefined value.
Registers a custom middleware for this agent class
# File 'lib/ruby_llm/agents/base_agent.rb', line 189

def use_middleware(middleware_class, before: nil, after: nil)
  @agent_middleware ||= []
  @agent_middleware << {klass: middleware_class, before: before, after: after}
end
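The registration only records `before:`/`after:` hints; the pipeline builder is what positions each entry. One plausible insertion strategy, sketched in plain Ruby (the middleware names are invented, and the gem's actual builder may differ):

```ruby
# Sketch: position a registered middleware relative to an existing entry,
# honoring before:/after: hints and appending when no anchor matches.
def insert_middleware(stack, entry)
  if entry[:before] && (i = stack.index { |m| m[:klass] == entry[:before] })
    stack.insert(i, entry)
  elsif entry[:after] && (i = stack.index { |m| m[:klass] == entry[:after] })
    stack.insert(i + 1, entry)
  else
    stack << entry
  end
end

registrations = [
  {klass: :caching, before: nil, after: nil},
  {klass: :instrumentation, before: nil, after: nil}
]

insert_middleware(registrations, {klass: :rate_limit, before: :instrumentation})
registrations.map { |m| m[:klass] }
# => [:caching, :rate_limit, :instrumentation]
```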
Instance Method Details
#agent_cache_key ⇒ String
Generates the cache key for this agent invocation
Cache keys are content-based, using a hash of the prompts and parameters. This automatically invalidates caches when prompts change.
# File 'lib/ruby_llm/agents/base_agent.rb', line 467

def agent_cache_key
  ["ruby_llm_agent", self.class.name, cache_key_hash].join("/")
end
#assistant_prompt ⇒ String?
Assistant prefill to prime the model’s response
If a class-level `assistant` DSL is defined, it will be used. Otherwise returns nil (no prefill).
# File 'lib/ruby_llm/agents/base_agent.rb', line 424

def assistant_prompt
  config = self.class.assistant_config
  return resolve_prompt_from_config(config) if config

  nil
end
#cache_key_data ⇒ Hash
Returns data to include in cache key generation
# File 'lib/ruby_llm/agents/base_agent.rb', line 481

def cache_key_data
  excludes = self.class.cache_key_excludes || %i[skip_cache dry_run with]
  base_data = @options.except(*excludes)

  # Include model and other relevant config
  base_data.merge(
    model: model,
    system_prompt: system_prompt,
    user_prompt: user_prompt,
    assistant_prompt: assistant_prompt
  )
end
#cache_key_hash ⇒ String
Generates a hash of the cache key data
# File 'lib/ruby_llm/agents/base_agent.rb', line 474

def cache_key_hash
  Digest::SHA256.hexdigest(cache_key_data.to_json)
end
#call {|chunk| ... } ⇒ Object
Executes the agent through the middleware pipeline
# File 'lib/ruby_llm/agents/base_agent.rb', line 371

def call(&block)
  return dry_run_response if @options[:dry_run]

  context = build_context(&block)
  result_context = Pipeline::Executor.execute(context)
  result_context.output
end
#messages ⇒ Array<Hash>
Conversation history for multi-turn conversations
# File 'lib/ruby_llm/agents/base_agent.rb', line 444

def messages
  []
end
#process_response(response) ⇒ Object
Post-processes the LLM response
# File 'lib/ruby_llm/agents/base_agent.rb', line 452

def process_response(response)
  content = response.content
  return content unless content.is_a?(Hash)

  content.deep_symbolize_keys
end
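The gem relies on ActiveSupport's `deep_symbolize_keys` for the Hash branch. For intuition, a plain-Ruby stand-in with the same effect:

```ruby
# Plain-Ruby equivalent of deep_symbolize_keys: recursively convert string
# keys to symbols, descending into nested hashes and arrays.
def deep_symbolize(obj)
  case obj
  when Hash  then obj.to_h { |k, v| [k.to_sym, deep_symbolize(v)] }
  when Array then obj.map { |v| deep_symbolize(v) }
  else obj
  end
end

deep_symbolize({"a" => {"b" => [1, {"c" => 2}]}})
# => {a: {b: [1, {c: 2}]}}
```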
#resolved_thinking ⇒ Hash?
Resolves thinking configuration
Public for testing and introspection.
# File 'lib/ruby_llm/agents/base_agent.rb', line 499

def resolved_thinking
  # Check for :none effort which means disabled
  if @options.key?(:thinking)
    thinking_option = @options[:thinking]
    return nil if thinking_option == false
    return nil if thinking_option.is_a?(Hash) && thinking_option[:effort] == :none
    return thinking_option if thinking_option.is_a?(Hash)
  end

  self.class.thinking_config
end
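The precedence rule is: a per-call `:thinking` option wins over the class-level config, and both `false` and `effort: :none` disable thinking entirely. A standalone sketch of that resolution (the class-level config value is invented for illustration):

```ruby
# Sketch of thinking resolution: per-call options take precedence over the
# class-level config; false and effort: :none both disable thinking.
CLASS_THINKING = {effort: :medium}

def resolve_thinking(options)
  if options.key?(:thinking)
    t = options[:thinking]
    return nil if t == false
    return nil if t.is_a?(Hash) && t[:effort] == :none
    return t if t.is_a?(Hash)
  end
  CLASS_THINKING
end

resolve_thinking({})                        # => {effort: :medium} (class config)
resolve_thinking(thinking: false)           # => nil (disabled per call)
resolve_thinking(thinking: {effort: :high}) # => {effort: :high} (override)
```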
#schema ⇒ RubyLLM::Schema?
Response schema for structured output
Delegates to the class-level schema DSL by default. Override in subclass instances to customize per-instance.
# File 'lib/ruby_llm/agents/base_agent.rb', line 437

def schema
  self.class.schema
end
#system_prompt ⇒ String?
System prompt for LLM instructions
If a class-level `system` DSL is defined, it will be used. Knowledge entries declared via `knows` are auto-appended.
# File 'lib/ruby_llm/agents/base_agent.rb', line 406

def system_prompt
  system_config = self.class.system_config
  base = system_config ? resolve_prompt_from_config(system_config) : nil
  knowledge = compiled_knowledge

  if knowledge.present?
    base ? "#{base}\n\n#{knowledge}" : knowledge
  else
    base
  end
end
#user_prompt ⇒ String
User prompt to send to the LLM
Resolution order:
1. Subclass method override (standard Ruby dispatch; this method is never called)
2. .ask(message) runtime message, which bypasses the template
3. Class-level `user` / `prompt` template, interpolated with placeholders
4. Inherited from superclass
5. NotImplementedError
# File 'lib/ruby_llm/agents/base_agent.rb', line 391

def user_prompt
  return @ask_message if @ask_message

  config = self.class.user_config
  return resolve_prompt_from_config(config) if config

  raise NotImplementedError,
    "#{self.class} must implement #user_prompt, use the `user` DSL, or call with .ask(message)"
end