Module: Legion::Extensions::Agentic::Social::MoralReasoning::Helpers::LlmEnhancer
- Defined in:
- lib/legion/extensions/agentic/social/moral_reasoning/helpers/llm_enhancer.rb
Constant Summary
- SYSTEM_PROMPT =

  <<~PROMPT
    You are the moral reasoning engine for an autonomous AI agent built on LegionIO.
    You apply ethical frameworks to evaluate actions and resolve dilemmas.
    Be rigorous, analytical, and fair. Consider multiple perspectives.
    Output structured reasoning, not opinions. Be concise.
  PROMPT
Class Method Summary
- .available? ⇒ Boolean
- .evaluate_action(action:, description:, foundations:) ⇒ Object
- .resolve_dilemma(dilemma_description:, options:, framework:) ⇒ Object
Class Method Details
.available? ⇒ Boolean
# File 'lib/legion/extensions/agentic/social/moral_reasoning/helpers/llm_enhancer.rb', line 19

def available?
  !!(defined?(Legion::LLM) && Legion::LLM.respond_to?(:started?) && Legion::LLM.started?)
rescue StandardError => _e
  false
end
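The availability check above guards against three failure modes: the `Legion::LLM` constant not being loaded, the module not exposing `started?`, and the service not being started. A minimal self-contained sketch of that pattern (with `Legion::LLM` stubbed here purely for illustration; the real module comes from LegionIO):

```ruby
# Stub standing in for the real Legion::LLM module, so this sketch runs on its own.
module Legion
  module LLM
    def self.started?
      true
    end
  end
end

# Same triple guard as LlmEnhancer.available?: constant defined, method present, service up.
# The !! coerces the result to a strict true/false Boolean.
def llm_available?
  !!(defined?(Legion::LLM) && Legion::LLM.respond_to?(:started?) && Legion::LLM.started?)
end

puts llm_available? # => true
```

Because any `StandardError` is rescued to `false`, callers can use this predicate freely without their own error handling.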
.evaluate_action(action:, description:, foundations:) ⇒ Object
# File 'lib/legion/extensions/agentic/social/moral_reasoning/helpers/llm_enhancer.rb', line 25

def evaluate_action(action:, description:, foundations:)
  response = llm_ask(build_evaluate_action_prompt(action: action, description: description, foundations: foundations))
  parse_evaluate_action_response(response)
rescue StandardError => e
  Legion::Logging.warn "[moral_reasoning:llm] evaluate_action failed: #{e.message}"
  nil
end
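Note the error contract: any failure in the LLM call or response parsing is logged as a warning and surfaced to the caller as `nil`, never as a raised exception. A runnable sketch of that rescue-log-nil shape (the raise simulates an LLM backend failure, and a stdlib `Logger` stands in for `Legion::Logging`):

```ruby
require "logger"

LOG = Logger.new($stdout)

# Minimal stand-in illustrating evaluate_action's error contract:
# failures are logged and swallowed, and the method returns nil.
def evaluate_action(action:, description:, foundations:)
  raise "backend down" # simulate llm_ask failing
rescue StandardError => e
  LOG.warn "[moral_reasoning:llm] evaluate_action failed: #{e.message}"
  nil
end

p evaluate_action(action: "x", description: "y", foundations: []) # => nil
```

Callers should therefore treat `nil` as "no LLM verdict available" and fall back to local heuristics.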
.resolve_dilemma(dilemma_description:, options:, framework:) ⇒ Object
# File 'lib/legion/extensions/agentic/social/moral_reasoning/helpers/llm_enhancer.rb', line 34

def resolve_dilemma(dilemma_description:, options:, framework:)
  response = llm_ask(build_resolve_dilemma_prompt(dilemma_description: dilemma_description, options: options, framework: framework))
  parse_resolve_dilemma_response(response)
rescue StandardError => e
  Legion::Logging.warn "[moral_reasoning:llm] resolve_dilemma failed: #{e.message}"
  nil
end
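A hypothetical caller-side sketch of the keyword-argument shape `resolve_dilemma` expects. The body here is a canned stub so the example runs standalone; the real helper builds a prompt, asks the LLM, and parses the response (argument values and the returned hash keys are illustrative, not from the source):

```ruby
# Stub mirroring resolve_dilemma's signature; the real version calls
# llm_ask(build_resolve_dilemma_prompt(...)) and parses the LLM response.
def resolve_dilemma(dilemma_description:, options:, framework:)
  { chosen: options.first, framework: framework, rationale: "stubbed reasoning" }
end

result = resolve_dilemma(
  dilemma_description: "Disclose a vulnerability now or after the patch ships?",
  options: ["disclose_now", "wait_for_patch"],
  framework: :utilitarian
)
p result[:chosen] # => "disclose_now"
```

As with `evaluate_action`, a real call can return `nil` on any LLM failure, so callers should nil-check before reading the parsed structure.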