Class: Phronomy::Agent::ReactAgent
- Defined in:
- lib/phronomy/agent/react_agent.rb
Overview
ReAct (Reasoning + Acting) pattern agent. Repeats the LLM <-> tool loop until the model makes no further tool calls.
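The loop can be sketched in plain Ruby. This is a minimal, self-contained illustration of the pattern, not Phronomy's API: `TOOLS`, `fake_llm`, and the message-hash format are all invented for the example.

```ruby
# Minimal ReAct loop sketch: call the model, execute any requested tool,
# feed the result back, and stop once the model returns no tool call.
# All names here (TOOLS, fake_llm, the message format) are illustrative.
TOOLS = {
  "add" => ->(args) { args.fetch("a") + args.fetch("b") }
}

# Stand-in for the LLM: requests one tool call, then answers.
def fake_llm(messages)
  if messages.none? { |m| m[:role] == :tool }
    { tool_call: { name: "add", args: { "a" => 2, "b" => 3 } } }
  else
    { content: "The sum is #{messages.last[:content]}." }
  end
end

def react_loop(input, max_iterations: 5)
  messages = [{ role: :user, content: input }]
  max_iterations.times do
    response = fake_llm(messages)
    tool_call = response[:tool_call]
    return response[:content] unless tool_call # done: no more tool calls

    result = TOOLS.fetch(tool_call[:name]).call(tool_call[:args])
    messages << { role: :tool, content: result }
  end
  messages.last[:content] # gave up after max_iterations
end

puts react_loop("What is 2 + 3?") # prints "The sum is 5."
```

The `max_iterations` cap mirrors the agent's bounded `max_iter.times` loop below: without it, a model that keeps requesting tools would never terminate.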
Instance Attribute Summary
Attributes inherited from Base
Instance Method Summary
- #invoke(input, config: {}) ⇒ Object
- #stream(input, config: {}) {|Phronomy::Agent::StreamEvent| ... } ⇒ Hash
Streaming version of #invoke for the ReAct loop.
Methods inherited from Base
#_add_handoff_tool, _before_completion, #_handoff_tools, _on_compact_callback, _on_compaction_trigger_callback, _on_trim_callback, #add_input_guardrail, #add_output_guardrail, before_completion, cache_instructions, context_overhead, context_window, instructions, max_iterations, max_output_tokens, model, #on_approval_required, on_compact, on_compaction_trigger, on_trim, provider, retry_policy, static_knowledge, static_knowledge_sources, temperature, tool_aliases, tools
Methods included from Runnable
Instance Method Details
#invoke(input, config: {}) ⇒ Object
# File 'lib/phronomy/agent/react_agent.rb', line 8

def invoke(input, config: {})
  attrs = {}
  attrs[:user_id] = config[:user_id] if config[:user_id]
  attrs[:session_id] = config[:session_id] if config[:session_id]

  trace("agent.invoke", input: input, **attrs) do |_span|
    # Run input guardrails before any LLM interaction.
    run_input_guardrails!(input)

    memory = config[:memory]
    thread_id = config[:thread_id]
    max_iter = self.class.max_iterations

    # Seed with persisted messages when memory is provided.
    history = if memory && thread_id
      load_from_memory(memory, thread_id: thread_id, query: (input))
    else
      []
    end

    messages = history.dup
    user_asked = false
    total_usage = Phronomy::TokenUsage.zero

    max_iter.times do
      response = step(messages, input, user_asked: user_asked, config: config)
      user_asked = true
      messages = response[:messages]
      total_usage += response[:usage]
      break if response[:done]
    end

    save_to_memory(memory, thread_id: thread_id, messages: messages) if memory && thread_id

    output = messages.last&.content
    # Run output guardrails before returning to the caller.
    run_output_guardrails!(output)

    result = {output: output, messages: messages, usage: total_usage}
    [result, total_usage]
  end
end
#stream(input, config: {}) {|Phronomy::Agent::StreamEvent| ... } ⇒ Hash
Streaming version of #invoke for the ReAct loop. Yields StreamEvent events while the LLM-tool loop runs.
# File 'lib/phronomy/agent/react_agent.rb', line 59

def stream(input, config: {}, &block)
  return invoke(input, config: config) unless block

  run_input_guardrails!(input)

  memory = config[:memory]
  thread_id = config[:thread_id]
  max_iter = self.class.max_iterations

  history = if memory && thread_id
    load_from_memory(memory, thread_id: thread_id, query: (input))
  else
    []
  end

  messages = history.dup
  user_asked = false
  total_usage = Phronomy::TokenUsage.zero

  max_iter.times do
    response = stream_step(messages, input, user_asked: user_asked, config: config, &block)
    user_asked = true
    messages = response[:messages]
    total_usage += response[:usage]
    break if response[:done]
  end

  save_to_memory(memory, thread_id: thread_id, messages: messages) if memory && thread_id

  output = messages.last&.content
  run_output_guardrails!(output)

  result = {output: output, messages: messages, usage: total_usage}
  block.call(StreamEvent.new(type: :done, payload: result))
  result
rescue => e
  block&.call(StreamEvent.new(type: :error, payload: {error: e}))
  raise
end
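A consumer of #stream typically dispatches on the event type. The sketch below uses a stand-in `StreamEvent` struct matching the `(type:, payload:)` keywords seen in the source; the real `Phronomy::Agent::StreamEvent` class may differ, and only the :done and :error event types are confirmed by the code above.

```ruby
# Stand-in for Phronomy::Agent::StreamEvent, mirroring the keyword
# arguments (type:, payload:) used in the source. Illustrative only.
StreamEvent = Struct.new(:type, :payload, keyword_init: true)

# Illustrative event handler: dispatch on event type. Only :done and
# :error are emitted in the source shown above; any intermediate event
# types yielded by stream_step are not covered here.
def handle(event, out: $stdout)
  case event.type
  when :done  then out.puts "output: #{event.payload[:output]}"
  when :error then out.puts "error: #{event.payload[:error].message}"
  end
end

handle(StreamEvent.new(type: :done, payload: { output: "42" }))
# prints "output: 42"
```

Note that the :error event is followed by a re-raise in #stream, so a consumer should expect the exception to propagate after the event is delivered.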