Class: Agents::Runner
Inherits: Object
Defined in: lib/agents/runner.rb
Overview
The execution engine that orchestrates conversations between users and agents. Runner manages the conversation flow, handles tool execution through RubyLLM, coordinates handoffs between agents, and ensures thread-safe operation.
The Runner follows a turn-based execution model where each turn consists of:
1. Sending a message to the LLM with current context
2. Receiving a response that may include tool calls
3. Executing tools and getting results (handled by RubyLLM)
4. Checking for agent handoffs
5. Continuing until no more tools are called
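The turn loop above can be sketched in miniature. Everything here is an illustrative stand-in, not the gem's real API: a stub response either requests tools, which sends execution around the loop again, or carries final text, which ends the run.

```ruby
# Illustrative stand-in for an LLM response: it either requests
# tool calls (loop continues) or carries final text (loop ends).
FakeResponse = Struct.new(:content, :tool_calls) do
  def tool_call?
    !tool_calls.empty?
  end
end

# Minimal turn loop mirroring the steps above, with a max-turns guard.
def run_turns(responses, max_turns: 10)
  turn = 0
  responses.each do |response|
    turn += 1
    raise "Exceeded maximum turns: #{max_turns}" if turn > max_turns
    next if response.tool_call?       # tools executed; query the LLM again
    return [response.content, turn]   # no tool calls: this is the final answer
  end
end

final, turns = run_turns([
  FakeResponse.new(nil, [:lookup_order]),  # turn 1: a tool was called
  FakeResponse.new("Order shipped.", [])   # turn 2: plain text, loop ends
])
# final == "Order shipped.", turns == 2
```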
## Thread Safety

The Runner ensures thread safety by:
- Creating new context wrappers for each execution
- Using tool wrappers that pass context through parameters
- Never storing execution state in shared variables
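One common Ruby idiom for that per-execution isolation is to deep-copy the caller's context before running, so the agent mutates its own copy rather than shared state. This is a sketch of the idea using `Marshal`; the gem's actual `deep_copy_context` may be implemented differently.

```ruby
# Deep-copy a context hash so a running agent can mutate its copy
# without leaking changes into the caller's (possibly shared) hash.
def deep_copy_context(context)
  Marshal.load(Marshal.dump(context))
end

shared = { user: { name: "Ada" }, history: [] }
copy = deep_copy_context(shared)

copy[:user][:name] = "Grace"  # mutations touch only the copy...
copy[:history] << "turn 1"    # ...including nested structures
# shared is unchanged: { user: { name: "Ada" }, history: [] }
```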
## Integration with RubyLLM

We leverage RubyLLM for LLM communication and tool execution while maintaining our own context management and handoff logic.
Defined Under Namespace
Classes: AgentNotFoundError, MaxTurnsExceeded
Constant Summary
- DEFAULT_MAX_TURNS = 10
Class Method Summary
- .with_agents(*agents) ⇒ AgentRunner
  Create a thread-safe agent runner for multi-agent conversations.
Instance Method Summary
- #run(starting_agent, input, context: {}, registry: {}, max_turns: DEFAULT_MAX_TURNS, headers: nil, params: nil, callbacks: {}) ⇒ RunResult
  Execute an agent with the given input and context.
Class Method Details
.with_agents(*agents) ⇒ AgentRunner
Create a thread-safe agent runner for multi-agent conversations. The first agent becomes the default entry point for new conversations. All agents must be explicitly provided - no automatic discovery.
# File 'lib/agents/runner.rb', line 71

def self.with_agents(*agents)
  AgentRunner.new(agents)
end
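The contract this implies can be illustrated with stand-in classes (the `Agent` struct, `MiniAgentRunner`, and the agent names below are invented for illustration; the gem's real `Agent` and `AgentRunner` carry far more, such as instructions, tools, and a model): the first agent passed becomes the default entry point, and only explicitly provided agents end up in the registry.

```ruby
# Stand-ins for illustration only.
Agent = Struct.new(:name)

class MiniAgentRunner
  attr_reader :default_agent, :registry

  def initialize(agents)
    @default_agent = agents.first                # first agent = entry point
    @registry = agents.to_h { |a| [a.name, a] }  # explicit registry, no discovery
  end
end

def with_agents(*agents)
  MiniAgentRunner.new(agents)
end

runner = with_agents(Agent.new("Triage"), Agent.new("Billing"))
# runner.default_agent.name == "Triage"; both agents are registered
```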
Instance Method Details
#run(starting_agent, input, context: {}, registry: {}, max_turns: DEFAULT_MAX_TURNS, headers: nil, params: nil, callbacks: {}) ⇒ RunResult
Execute an agent with the given input and context. This is now called internally by AgentRunner and should not be used directly.
# File 'lib/agents/runner.rb', line 87

def run(starting_agent, input, context: {}, registry: {}, max_turns: DEFAULT_MAX_TURNS, headers: nil, params: nil, callbacks: {})
  # The starting_agent is already determined by AgentRunner based on conversation history
  current_agent = starting_agent

  # Create context wrapper with deep copy for thread safety
  context_copy = deep_copy_context(context)
  context_wrapper = RunContext.new(context_copy, callbacks: callbacks)
  current_turn = 0

  # Emit run start event
  context_wrapper.callback_manager.emit_run_start(current_agent.name, input, context_wrapper)

  runtime_headers = Helpers::HashNormalizer.normalize(headers, label: "headers")
  agent_headers = Helpers::HashNormalizer.normalize(current_agent.headers, label: "headers")
  runtime_params = Helpers::HashNormalizer.normalize(params, label: "params")
  agent_params = Helpers::HashNormalizer.normalize(current_agent.params, label: "params")

  # Create chat and restore conversation history
  chat = RubyLLM::Chat.new(model: current_agent.model)
  current_headers = Helpers::HashNormalizer.merge(agent_headers, runtime_headers)
  current_params = Helpers::HashNormalizer.merge(agent_params, runtime_params)
  apply_headers(chat, current_headers)
  apply_params(chat, current_params)
  configure_chat_for_agent(chat, current_agent, context_wrapper, replace: false)
  restore_conversation_history(chat, context_wrapper)
  input_already_in_history = input_already_in_history?(chat, input)

  context_wrapper.callback_manager.emit_chat_created(chat, current_agent.name, current_agent.model, context_wrapper)

  loop do
    current_turn += 1
    raise MaxTurnsExceeded, "Exceeded maximum turns: #{max_turns}" if current_turn > max_turns

    # Get response from LLM (RubyLLM handles tool execution with halting-based handoff detection)
    response = if current_turn == 1
                 # Emit agent thinking event for initial message
                 context_wrapper.callback_manager.emit_agent_thinking(current_agent.name, input, context_wrapper)
                 # If conversation history already ends with this user message (e.g. passed
                 # in via context from an external system), use complete to avoid duplicating it.
                 input_already_in_history ? chat.complete : chat.ask(input)
               else
                 # Emit agent thinking event for continuation
                 context_wrapper.callback_manager.emit_agent_thinking(current_agent.name, "(continuing conversation)", context_wrapper)
                 chat.complete
               end

    track_usage(response, context_wrapper)

    # Emit LLM call complete event with model and response for instrumentation
    context_wrapper.callback_manager.emit_llm_call_complete(
      current_agent.name, current_agent.model, response, context_wrapper
    )

    # Check for handoff via RubyLLM's halt mechanism
    if response.is_a?(RubyLLM::Tool::Halt) && context_wrapper.context[:pending_handoff]
      handoff_info = context_wrapper.context.delete(:pending_handoff)
      next_agent = handoff_info[:target_agent]

      # Validate that the target agent is in our registry
      # This prevents handoffs to agents that weren't explicitly provided
      unless registry[next_agent.name]
        error = AgentNotFoundError.new("Handoff failed: Agent '#{next_agent.name}' not found in registry")
        return finalize_run(chat, context_wrapper, current_agent, output: nil, error: error)
      end

      # Save current conversation state before switching
      save_conversation_state(chat, context_wrapper, current_agent)

      # Emit agent complete event before handoff
      context_wrapper.callback_manager.emit_agent_complete(current_agent.name, nil, nil, context_wrapper)

      # Emit agent handoff event
      context_wrapper.callback_manager.emit_agent_handoff(current_agent.name, next_agent.name, "handoff", context_wrapper)

      # Switch to new agent - store agent name for persistence
      current_agent = next_agent
      context_wrapper.context[:current_agent] = next_agent.name

      # Reconfigure existing chat for new agent - preserves conversation history automatically
      configure_chat_for_agent(chat, current_agent, context_wrapper, replace: true)
      agent_headers = Helpers::HashNormalizer.normalize(current_agent.headers, label: "headers")
      current_headers = Helpers::HashNormalizer.merge(agent_headers, runtime_headers)
      apply_headers(chat, current_headers)
      agent_params = Helpers::HashNormalizer.normalize(current_agent.params, label: "params")
      current_params = Helpers::HashNormalizer.merge(agent_params, runtime_params)
      apply_params(chat, current_params)

      context_wrapper.callback_manager.emit_chat_created(
        chat, current_agent.name, current_agent.model, context_wrapper
      )

      # Force the new agent to respond to the conversation context
      # This ensures the user gets a response from the new agent
      input = nil
      next
    end

    # Handle non-handoff halts - return the halt content as final response
    if response.is_a?(RubyLLM::Tool::Halt)
      return finalize_run(chat, context_wrapper, current_agent, output: response.content)
    end

    # If tools were called, continue the loop to let them execute
    next if response.tool_call?

    # If no tools were called, we have our final response
    return finalize_run(chat, context_wrapper, current_agent, output: response.content)
  end
rescue MaxTurnsExceeded => e
  finalize_run(chat, context_wrapper, current_agent, output: "Conversation ended: #{e.message}", error: e)
rescue StandardError => e
  finalize_run(chat, context_wrapper, current_agent, output: nil, error: e)
end
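The registry guard in the handoff branch can be isolated as a small sketch. The error class name matches the one documented above; the `validate_handoff!` helper is invented here for illustration and is not part of the gem.

```ruby
AgentNotFoundError = Class.new(StandardError)

# Hypothetical helper mirroring the handoff guard in #run: a handoff
# target must already be in the registry built from explicitly
# provided agents, or the run finishes with an error.
def validate_handoff!(registry, target_name)
  return registry[target_name] if registry.key?(target_name)

  raise AgentNotFoundError, "Handoff failed: Agent '#{target_name}' not found in registry"
end

registry = { "Billing" => :billing_agent }

ok = validate_handoff!(registry, "Billing")   # registered target: allowed
err = begin
  validate_handoff!(registry, "Refunds")      # unknown target: rejected
rescue AgentNotFoundError => e
  e.message
end
```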