Class: Phronomy::Agent::Base
- Inherits: Object
- Includes: Runnable
- Defined in: lib/phronomy/agent/base.rb
Overview
Base class for all Phronomy agents.
Subclass this to create a conversational agent powered by an LLM. DSL class methods configure the model, instructions, tools, memory, and retry behaviour. Instance methods handle invocation.
Direct Known Subclasses
Class Attribute Summary
- ._retry_policy ⇒ Hash? (readonly)
  Returns the configured retry policy, or nil when none is set.
- ._sleep_proc ⇒ #call
  Injectable sleep callable for testing (shared with Tool::Base pattern).
Instance Attribute Summary
- #before_completion ⇒ #call?
  Instance-level before_completion hook.
Class Method Summary
- ._before_completion ⇒ #call?
- ._on_compact_callback ⇒ Proc?
- ._on_compaction_trigger_callback ⇒ Proc?
- ._on_trim_callback ⇒ Proc?
- .before_completion(callable = nil) ⇒ #call?
  Sets or reads the class-level before_completion hook.
- .cache_instructions(enabled = nil) ⇒ Object
  When enabled, attaches Anthropic prompt-cache markers to the system message so that the fixed instructions are served from cache on subsequent turns, reducing input-token costs.
- .context_overhead(val = nil) ⇒ Object
  Tokens reserved for the system prompt + tool definitions overhead.
- .context_window(val = nil) ⇒ Object
  Overrides the context window size used for token budget calculations.
- .instructions(text = nil) { ... } ⇒ String, ...
  Sets or reads the system instructions for this agent.
- .max_iterations(val = nil) ⇒ Integer
  Sets or reads the maximum number of LLM call cycles for ReAct agents.
- .max_output_tokens(val = nil) ⇒ Object
  Tokens to reserve for the model's output.
- .model(name = nil) ⇒ String?
  Sets or reads the LLM model identifier for this agent.
- .on_compact {|ctx| ... } ⇒ Object
  Registers a callback that performs the actual compaction when the +on_compaction_trigger+ callback fires.
- .on_compaction_trigger {|ctx| ... } ⇒ Boolean
  Registers a callback that decides whether compaction should run.
- .on_trim {|ctx| ... } ⇒ Object
  Registers a callback, invoked before every LLM call, that lets the application remove stale or irrelevant messages from the conversation history.
- .provider(name = nil) ⇒ Symbol?
  Sets or reads the LLM provider for this agent.
- .retry_policy(times: 0, wait: 0, base: 1.0) ⇒ Object
  Configures a retry policy that wraps the full #invoke call.
- .static_knowledge(*sources) ⇒ Object
  Registers one or more static knowledge sources on the agent class.
- .static_knowledge_sources ⇒ Array<Phronomy::KnowledgeSource::Base>
  Returns the registered static knowledge sources.
- .temperature(val = nil) ⇒ Float?
  Sets or reads the sampling temperature sent to the LLM.
- .tool_aliases ⇒ Hash{Class => String}
  Returns the alias map registered via the hash form of .tools.
- .tools(*args) ⇒ Object
  Registers tool classes for this agent.
Instance Method Summary
- #_add_handoff_tool(tool_class) ⇒ self
  Registers an anonymous handoff tool class on this agent instance.
- #_handoff_tools ⇒ Array<Class>
  Returns handoff tool classes registered on this instance by Runner.
- #add_input_guardrail(guardrail) ⇒ Object
  Attach a guardrail that validates input before every #invoke call.
- #add_output_guardrail(guardrail) ⇒ Object
  Attach a guardrail that validates output before it is returned.
- #invoke(input, config: {}) ⇒ Hash
  Invokes the agent with the given input and returns a result Hash.
- #on_approval_required(&block) ⇒ self
  Registers a callback that is invoked before executing any tool that has +requires_approval true+ set.
- #stream(input, config: {}) {|Phronomy::Agent::StreamEvent| ... } ⇒ Hash
  Streaming version of #invoke.
Methods included from Runnable
Class Attribute Details
._retry_policy ⇒ Hash? (readonly)
Returns the configured retry policy, or nil when none is set.
# File 'lib/phronomy/agent/base.rb', line 180

def _retry_policy
  @_retry_policy
end
._sleep_proc ⇒ #call
Injectable sleep callable for testing (shared with Tool::Base pattern).
# File 'lib/phronomy/agent/base.rb', line 184

def _sleep_proc
  @_sleep_proc || method(:sleep)
end
Instance Attribute Details
#before_completion ⇒ #call?
Instance-level before_completion hook. When set, takes precedence over the class-level hook for this specific agent instance only.
# File 'lib/phronomy/agent/base.rb', line 379

def before_completion
  @before_completion
end
Class Method Details
._before_completion ⇒ #call?
# File 'lib/phronomy/agent/base.rb', line 371

def _before_completion
  @before_completion
end
._on_compact_callback ⇒ Proc?
# File 'lib/phronomy/agent/base.rb', line 277

def _on_compact_callback
  @on_compact_callback
end
._on_compaction_trigger_callback ⇒ Proc?
# File 'lib/phronomy/agent/base.rb', line 255

def _on_compaction_trigger_callback
  @on_compaction_trigger_callback
end
._on_trim_callback ⇒ Proc?
# File 'lib/phronomy/agent/base.rb', line 232

def _on_trim_callback
  @on_trim_callback
end
.before_completion(callable = nil) ⇒ #call?
Sets or reads the class-level before_completion hook. The hook is called before every LLM request for instances of this class. Receives a Phronomy::Agent::BeforeCompletionContext; must return a Hash of params to merge into the LLM call, or nil to pass through unchanged.
# File 'lib/phronomy/agent/base.rb', line 362

def before_completion(callable = nil)
  if callable.nil? && !block_given?
    @before_completion
  else
    @before_completion = callable
  end
end
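A usage sketch, assuming only the contract described above (the hook receives a context object and returns a Hash of params to merge, or nil). The class name and the :metadata key are illustrative, not taken from this page:

```ruby
require "securerandom"

# Hypothetical usage sketch: merge a per-request parameter into every LLM
# call. AuditedAgent and the :metadata key are illustrative; the hook
# contract (return a Hash to merge, or nil to pass through) is documented.
class AuditedAgent < Phronomy::Agent::Base
  before_completion ->(ctx) {
    {metadata: {request_id: SecureRandom.uuid}}
  }
end
```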
.cache_instructions(enabled = nil) ⇒ Object
When enabled, attaches Anthropic prompt-cache markers to the system message so that the fixed instructions are served from cache on subsequent turns, reducing input-token costs.
Only has an effect when the agent also declares provider :anthropic. The cache_control field is provider-specific (the format differs between Anthropic direct, Bedrock, etc.), so the agent must explicitly declare its provider via the DSL rather than having it inferred from the model name.
# File 'lib/phronomy/agent/base.rb', line 296

def cache_instructions(enabled = nil)
  if enabled.nil?
    @cache_instructions
  else
    @cache_instructions = enabled
  end
end
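A configuration sketch. As the note above says, the flag only takes effect alongside an explicit provider declaration; the model id below is illustrative:

```ruby
# Hypothetical configuration sketch. cache_instructions only takes effect
# because provider :anthropic is declared explicitly. The model id is
# illustrative, not taken from this page.
class CachedAgent < Phronomy::Agent::Base
  provider :anthropic
  model "claude-sonnet-4-0"   # illustrative model id
  cache_instructions true
  instructions "You are a careful research assistant."
end
```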
.context_overhead(val = nil) ⇒ Object
Tokens reserved for the system prompt + tool definitions overhead. This value is subtracted from the context window before the memory budget is computed.
# File 'lib/phronomy/agent/base.rb', line 343

def context_overhead(val = nil)
  if val.nil?
    @context_overhead || 0
  else
    @context_overhead = val.to_i
  end
end
.context_window(val = nil) ⇒ Object
Overrides the context window size used for token budget calculations. When set, this value takes precedence over the RubyLLM model registry, which is useful for locally-hosted models (e.g. LM Studio) where the actually-loaded context length may differ from the catalogue value.
# File 'lib/phronomy/agent/base.rb', line 328

def context_window(val = nil)
  if val.nil?
    @context_window
  else
    @context_window = val.to_i
  end
end
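How context_window and context_overhead interact can be sketched as plain arithmetic. This illustrates the documented relationship (overhead is subtracted from the window before the memory budget is computed); the helper name and the additional subtraction of max_output_tokens are assumptions, not the library's actual budget code:

```ruby
# Hypothetical sketch: deriving a memory token budget from the DSL settings.
# The real build_token_budget implementation is not shown on this page, so
# treat the exact formula as an assumption.
def sketch_memory_budget(context_window:, context_overhead: 0, max_output_tokens: 0)
  # Reserve space for system prompt + tool definitions, then for the reply.
  context_window - context_overhead - max_output_tokens
end

# e.g. a locally-hosted model with an 8k window:
budget = sketch_memory_budget(context_window: 8_192,
                              context_overhead: 1_000,
                              max_output_tokens: 1_024)
```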
.instructions(text = nil) { ... } ⇒ String, ...
Sets or reads the system instructions for this agent. Accepts a String, a PromptTemplate, or a block (Proc). When used as a reader (no argument, no block), returns the stored value.
# File 'lib/phronomy/agent/base.rb', line 65

def instructions(text = nil, &block)
  if text || block_given?
    @instructions = text || block
  else
    @instructions
  end
end
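A sketch of the two writer forms accepted by the signature above; both class names are illustrative:

```ruby
require "date"

# Hypothetical sketch: instructions accepts a String or a block. The block
# form defers evaluation, which is useful for state that changes at runtime.
class GreeterAgent < Phronomy::Agent::Base
  instructions "Reply in one short sentence."
end

class DatedAgent < Phronomy::Agent::Base
  instructions { "Today is #{Date.today}. Answer accordingly." }
end
```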
.max_iterations(val = nil) ⇒ Integer
Sets or reads the maximum number of LLM call cycles for ReAct agents. Each tool call and follow-up counts as one iteration. Defaults to 10.
# File 'lib/phronomy/agent/base.rb', line 155

def max_iterations(val = nil)
  if val
    @max_iterations = val
  else
    @max_iterations || 10
  end
end
.max_output_tokens(val = nil) ⇒ Object
Tokens to reserve for the model's output. When nil, the model's max_output_tokens from the registry is used.
# File 'lib/phronomy/agent/base.rb', line 311

def max_output_tokens(val = nil)
  if val.nil?
    @max_output_tokens
  else
    @max_output_tokens = val.to_i
  end
end
.model(name = nil) ⇒ String?
Sets or reads the LLM model identifier for this agent. When called without an argument, returns the stored model or the global default from Phronomy.configuration.
# File 'lib/phronomy/agent/base.rb', line 42

def model(name = nil)
  if name
    @model = name
  else
    @model || Phronomy.configuration.default_model
  end
end
.on_compact {|ctx| ... } ⇒ Object
Registers a callback that performs the actual compaction when the +on_compaction_trigger+ callback fires. The block receives a Context::CompactionContext and should call +ctx.compact+ to specify which messages to summarise.
# File 'lib/phronomy/agent/base.rb', line 272

def on_compact(&block)
  @on_compact_callback = block
end
.on_compaction_trigger {|ctx| ... } ⇒ Boolean
Registers a callback that decides whether compaction should run. Evaluated before every LLM call (after on_trim). If the block returns truthy AND an +on_compact+ callback is also registered, the compact pipeline is executed.
The block receives a read-only Context::TriggerContext.
# File 'lib/phronomy/agent/base.rb', line 250

def on_compaction_trigger(&block)
  @on_compaction_trigger_callback = block
end
.on_trim {|ctx| ... } ⇒ Object
Registers a callback that is invoked before every LLM call so the application can remove stale or irrelevant messages from the conversation history.
The block receives a Context::TrimContext and may call +ctx.remove(seqs)+ to drop messages by seq number. Changes affect only the current invocation; the underlying memory store is unchanged.
# File 'lib/phronomy/agent/base.rb', line 227

def on_trim(&block)
  @on_trim_callback = block
end
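The three context-management hooks (on_trim, on_compaction_trigger, on_compact) are designed to work together. A hedged sketch using only the calls documented above (ctx.remove for trim, a boolean return from the trigger, ctx.compact for compaction); all selection logic is illustrative:

```ruby
# Hypothetical sketch of the three hooks cooperating. Only ctx.remove, the
# trigger's boolean return, and ctx.compact are documented on this page;
# everything else is illustrative.
class LongRunningAgent < Phronomy::Agent::Base
  on_trim do |ctx|
    stale_seqs = []        # seq numbers the application decides are stale
    ctx.remove(stale_seqs) # affects only the current invocation
  end

  on_compaction_trigger do |ctx|
    false                  # return truthy to run the on_compact pipeline
  end

  on_compact do |ctx|
    # ctx.compact(...) selects which messages to summarise; its argument
    # shape is not shown on this page
  end
end
```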
.provider(name = nil) ⇒ Symbol?
Sets or reads the LLM provider for this agent. Required when using a model not registered in RubyLLM's model registry (e.g. locally-hosted models via LM Studio or Ollama).
# File 'lib/phronomy/agent/base.rb', line 121

def provider(name = nil)
  if name
    @provider = name
  else
    @provider
  end
end
.retry_policy(times: 0, wait: 0, base: 1.0) ⇒ Object
Configures a retry policy that wraps the full #invoke call. GuardrailError is never retried regardless of this setting.
# File 'lib/phronomy/agent/base.rb', line 174

def retry_policy(times: 0, wait: 0, base: 1.0)
  @_retry_policy = {times: times, wait: wait, base: base}
end
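The wait schedule implied by times/wait/base can be sketched as exponential backoff. compute_agent_retry_wait's actual formula is not shown on this page, so the exact expression below (wait * base**attempt) is an assumption, chosen because it is consistent with the parameter names and with base: 1.0 defaulting to a constant wait:

```ruby
# Hypothetical sketch of a backoff schedule consistent with the times/wait/
# base parameters. The library's compute_agent_retry_wait is not shown on
# this page, so the formula here is an assumption.
def sketch_retry_wait(wait, base, attempt)
  wait * (base**attempt)
end

# retry_policy times: 3, wait: 2, base: 2.0 would then sleep 2s, 4s, 8s:
waits = (0...3).map { |attempt| sketch_retry_wait(2, 2.0, attempt) }
```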
.static_knowledge(*sources) ⇒ Object
Registers one or more static knowledge sources on the agent class. Static sources are fetched once per agent instance and their content is cached in ContextVersionCache keyed by a fingerprint of the instruction text + source content. The cache is invalidated automatically when the fingerprint changes (e.g. because a source was updated).
# File 'lib/phronomy/agent/base.rb', line 203

def static_knowledge(*sources)
  @static_knowledge_sources = sources.flatten
end
.static_knowledge_sources ⇒ Array<Phronomy::KnowledgeSource::Base>
Returns the registered static knowledge sources.
# File 'lib/phronomy/agent/base.rb', line 209

def static_knowledge_sources
  @static_knowledge_sources || []
end
.temperature(val = nil) ⇒ Float?
Sets or reads the sampling temperature sent to the LLM. When nil, the provider's default is used.
# File 'lib/phronomy/agent/base.rb', line 138

def temperature(val = nil)
  if val
    @temperature = val
  else
    @temperature
  end
end
.tool_aliases ⇒ Hash{Class => String}
Returns the alias map registered via the hash form of .tools.
# File 'lib/phronomy/agent/base.rb', line 106

def tool_aliases
  @tool_aliases ||= {}
end
.tools(*args) ⇒ Object
Registers tool classes for this agent.
Accepts either a splat of classes (backward-compatible) or a Hash mapping each class to an explicit alias name (String) or nil (use tool's own name). The alias form is useful when two tools share the same auto-generated name (e.g. two SearchTool classes from different modules).
# File 'lib/phronomy/agent/base.rb', line 89

def tools(*args)
  if args.empty?
    return @tools || []
  end

  if args.length == 1 && args.first.is_a?(Hash)
    hash = args.first
    @tools = hash.keys
    @tool_aliases = hash.transform_values { |v| v&.to_s }.reject { |_, v| v.nil? }
  else
    @tools = args
    @tool_aliases = {}
  end
end
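The alias normalisation in the hash branch above can be demonstrated standalone: nil values mean "use the tool's own name" and are dropped from the alias map. The two dummy classes below are placeholders for real tool classes:

```ruby
# Standalone demonstration of the alias-map normalisation shown in the
# source above. ModuleA_Search / ModuleB_Search stand in for two tool
# classes whose auto-generated names would otherwise collide.
ModuleA_Search = Class.new
ModuleB_Search = Class.new

hash    = {ModuleA_Search => "web_search", ModuleB_Search => nil}
tools   = hash.keys
aliases = hash.transform_values { |v| v&.to_s }.reject { |_, v| v.nil? }
# tools   => [ModuleA_Search, ModuleB_Search]
# aliases => {ModuleA_Search => "web_search"}  (nil entry dropped)
```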
Instance Method Details
#_add_handoff_tool(tool_class) ⇒ self
Registers an anonymous handoff tool class on this agent instance. Called by Runner during construction when routes are configured.
# File 'lib/phronomy/agent/base.rb', line 385

def _add_handoff_tool(tool_class)
  @_handoff_tools ||= []
  @_handoff_tools << tool_class
  self
end
#_handoff_tools ⇒ Array<Class>
Returns handoff tool classes registered on this instance by Runner.
# File 'lib/phronomy/agent/base.rb', line 393

def _handoff_tools
  @_handoff_tools || []
end
#add_input_guardrail(guardrail) ⇒ Object
Attach a guardrail that validates input before every #invoke call.
# File 'lib/phronomy/agent/base.rb', line 523

def add_input_guardrail(guardrail)
  @input_guardrails ||= []
  @input_guardrails << guardrail
  self
end
#add_output_guardrail(guardrail) ⇒ Object
Attach a guardrail that validates output before it is returned.
# File 'lib/phronomy/agent/base.rb', line 531

def add_output_guardrail(guardrail)
  @output_guardrails ||= []
  @output_guardrails << guardrail
  self
end
#invoke(input, config: {}) ⇒ Hash
Invokes the agent with the given input and returns a result Hash. Applies the retry policy configured via retry_policy when transient errors occur. GuardrailError is never retried.
# File 'lib/phronomy/agent/base.rb', line 414

def invoke(input, config: {})
  policy = self.class._retry_policy
  attempt = 0
  begin
    invoke_once(input, config: config)
  rescue Phronomy::GuardrailError
    raise
  rescue
    if policy && attempt < policy[:times]
      wait = compute_agent_retry_wait(policy[:wait], policy[:base], attempt)
      self.class._sleep_proc.call(wait) if wait > 0
      attempt += 1
      retry
    end
    raise
  end
end
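The retry structure above can be exercised standalone by swapping the Phronomy-specific pieces for stand-ins (invoke_once becomes a block, GuardrailError becomes a local error class, and the backoff formula is assumed):

```ruby
# Standalone sketch of the retry loop in #invoke, with stand-ins for the
# Phronomy-specific parts. GuardStop plays the role of GuardrailError,
# which is never retried.
GuardStop = Class.new(StandardError)

def with_retries(policy, sleep_proc: ->(_s) {})
  attempt = 0
  begin
    yield
  rescue GuardStop
    raise                                            # never retried
  rescue
    if policy && attempt < policy[:times]
      wait = policy[:wait] * (policy[:base]**attempt) # assumed backoff shape
      sleep_proc.call(wait) if wait > 0
      attempt += 1
      retry
    end
    raise
  end
end

calls = 0
result = with_retries({times: 2, wait: 0, base: 1.0}) do
  calls += 1
  raise "transient" if calls < 3
  :ok
end
# Two transient failures are absorbed; the third call succeeds.
```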
#on_approval_required(&block) ⇒ self
Registers a callback that is invoked before executing any tool that has +requires_approval true+ set. The block receives the tool name (String) and the arguments Hash, and must return a truthy value to allow execution. Returning a falsy value causes the tool to return a denial message instead of executing.
When no handler is registered, tools with +requires_approval+ execute without interruption (backward-compatible behaviour).
# File 'lib/phronomy/agent/base.rb', line 516

def on_approval_required(&block)
  @approval_handler = block
  self
end
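A usage sketch, assuming only the documented contract (the block receives the tool name and argument Hash, and a truthy return allows execution). MyAgent and the console prompt are illustrative:

```ruby
# Hypothetical usage sketch: gate tools marked requires_approval behind a
# console prompt. MyAgent is an illustrative subclass name.
agent = MyAgent.new
agent.on_approval_required do |tool_name, args|
  print "Allow #{tool_name} with #{args.inspect}? [y/N] "
  gets.to_s.strip.downcase == "y"  # falsy => tool returns a denial message
end
```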
#stream(input, config: {}) {|Phronomy::Agent::StreamEvent| ... } ⇒ Hash
Streaming version of #invoke. Yields StreamEvent objects as they are produced by the underlying LLM.
Events emitted (in order):
- :token: each content delta from the LLM
- :tool_call: when the LLM requests a tool (ReactAgent subclasses only)
- :tool_result: after a tool completes (ReactAgent subclasses only)
- :done: final event carrying output, messages, and usage
- :error: if an unrecoverable error occurs
# File 'lib/phronomy/agent/base.rb', line 446

def stream(input, config: {}, &block)
  return invoke(input, config: config) unless block

  run_input_guardrails!(input)

  memory = config[:memory]
  thread_id = config[:thread_id]
  chat = build_chat

  # Assemble context via Assembler (same as invoke_once).
  assembler = Context::Assembler.new(budget: build_token_budget)
  system_msg = build_instructions(input)
  assembler.add_instruction(system_msg) if system_msg

  Array(config[:knowledge_sources]).each do |ks|
    ks.fetch(query: input).each do |chunk|
      assembler.add_knowledge(chunk[:content], type: chunk[:type], source: chunk[:source])
    end
  end

  if memory && thread_id
    msgs = load_from_memory(memory, thread_id: thread_id, query: input)
    assembler.add_memory(msgs)
  end

  context = assembler.build
  apply_instructions(chat, context[:system]) if context[:system]
  context[:messages].each { |msg| chat.messages << msg }

  # Wire per-event callbacks to yield StreamEvents.
  chat.on_tool_call { |tool_call| block.call(StreamEvent.new(type: :tool_call, payload: {tool_call: tool_call})) }
  chat.on_tool_result { |tool_result| block.call(StreamEvent.new(type: :tool_result, payload: {tool_result: tool_result})) }

  # Run before_completion hooks (global → class → instance) before the LLM call.
  run_before_completion_hooks!(chat, config)

  response = chat.ask(input) do |chunk|
    block.call(StreamEvent.new(type: :token, payload: {content: chunk.content}))
  end

  save_to_memory(memory, thread_id: thread_id, messages: chat.messages) if memory && thread_id

  output = response.content
  usage = Phronomy::TokenUsage.from_tokens(response.tokens)
  run_output_guardrails!(output)

  result = {output: output, messages: chat.messages, usage: usage}
  block.call(StreamEvent.new(type: :done, payload: result))
  result
rescue => e
  block&.call(StreamEvent.new(type: :error, payload: {error: e}))
  raise
end
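A hedged consumption sketch. It assumes StreamEvent exposes type and payload readers matching the constructor keywords in the source above; MyAgent is an illustrative subclass:

```ruby
# Hypothetical usage sketch of consuming stream events. Assumes StreamEvent
# has type/payload readers matching its constructor keywords; MyAgent is an
# illustrative subclass name.
agent = MyAgent.new
result = agent.stream("Summarise the latest report") do |event|
  case event.type
  when :token       then print event.payload[:content]
  when :tool_call   then warn "tool requested: #{event.payload[:tool_call]}"
  when :done        then puts "\nusage: #{event.payload[:usage].inspect}"
  when :error       then warn "stream failed: #{event.payload[:error]}"
  end
end
```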