Module: Legion::Extensions::Agentic::Self::SelfTalk::Helpers::LlmEnhancer
- Defined in:
- lib/legion/extensions/agentic/self/self_talk/helpers/llm_enhancer.rb
Constant Summary
- SYSTEM_PROMPT =
  <<~PROMPT
    You are an internal cognitive voice in an autonomous AI agent's inner dialogue system.

    When asked to speak as a specific voice type, adopt that perspective fully:
    - critic: skeptical, identifies flaws and risks
    - encourager: supportive, finds reasons for optimism
    - analyst: data-driven, logical, weighs evidence
    - devils_advocate: challenges assumptions, plays the contrarian
    - pragmatist: focuses on what's actionable and achievable
    - visionary: thinks big picture, sees future possibilities
    - caretaker: concerned about wellbeing and sustainability
    - rebel: questions authority, pushes for unconventional approaches

    Be concise (1-3 sentences). Stay in character. Take a clear position.
  PROMPT
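This page does not show how SYSTEM_PROMPT is wired into a request, so the following is only a sketch, assuming a conventional chat-style system/user message pairing; `build_messages` is a hypothetical helper, not part of this module.

```ruby
# Abbreviated stand-in for the documented constant.
SYSTEM_PROMPT = "You are an internal cognitive voice in an autonomous AI agent's inner dialogue system.".freeze

# Hypothetical helper: pair the shared system prompt with a user message
# naming the requested voice and topic (assumed message shape).
def build_messages(voice_type, topic)
  [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: "Speak as the #{voice_type} voice about: #{topic}" }
  ]
end

messages = build_messages(:critic, "shipping the release early")
```

The system prompt stays constant across voices; only the user message selects the persona.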
Class Method Summary
- .available? ⇒ Boolean
- .generate_turn(voice_type:, topic:, prior_turns:) ⇒ Object
- .summarize_dialogue(topic:, turns:) ⇒ Object
Class Method Details
.available? ⇒ Boolean
# File 'lib/legion/extensions/agentic/self/self_talk/helpers/llm_enhancer.rb', line 26

def available?
  !!(defined?(Legion::LLM) && Legion::LLM.respond_to?(:started?) && Legion::LLM.started?)
rescue StandardError => _e
  false
end
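The guard pattern in `available?` can be sketched in isolation: the dependency counts as present only if its constant is defined, it responds to the probe method, and the probe returns true, with any exception coerced to false. `FakeLLM` below is a stand-in for illustration, not part of Legion.

```ruby
# Stand-in dependency used only to demonstrate the guard pattern.
module FakeLLM
  def self.started?
    true
  end
end

# Same defensive shape as available?: constant defined, probe method present,
# probe truthy; any StandardError (e.g. a half-loaded dependency) means false.
def llm_available?
  !!(defined?(FakeLLM) && FakeLLM.respond_to?(:started?) && FakeLLM.started?)
rescue StandardError
  false
end
```

The `!!` coerces the result to a strict true/false, matching the `⇒ Boolean` signature.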
.generate_turn(voice_type:, topic:, prior_turns:) ⇒ Object
# File 'lib/legion/extensions/agentic/self/self_talk/helpers/llm_enhancer.rb', line 32

def generate_turn(voice_type:, topic:, prior_turns:)
  prompt = build_generate_turn_prompt(voice_type: voice_type, topic: topic, prior_turns: prior_turns)
  response = llm_ask(prompt)
  parse_generate_turn_response(response)
rescue StandardError => e
  Legion::Logging.warn "[self_talk:llm] generate_turn failed: #{e.message}"
  nil
end
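The failure-isolation pattern `generate_turn` follows (build, ask, parse, and rescue everything to nil) can be sketched with stand-in steps; `safe_pipeline` below is hypothetical and only illustrates the shape, where a nil return signals the caller to fall back to a non-LLM turn.

```ruby
# Sketch of generate_turn's error handling: any StandardError anywhere in the
# pipeline is logged (here via warn) and converted to nil rather than raised.
def safe_pipeline(input)
  raise ArgumentError, "no input" if input.nil?
  "processed: #{input}"
rescue StandardError => e
  warn "[sketch] pipeline failed: #{e.message}"
  nil
end
```

Returning nil instead of raising keeps a flaky LLM backend from breaking the surrounding inner-dialogue loop.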
.summarize_dialogue(topic:, turns:) ⇒ Object
# File 'lib/legion/extensions/agentic/self/self_talk/helpers/llm_enhancer.rb', line 41

def summarize_dialogue(topic:, turns:)
  prompt = build_summarize_dialogue_prompt(topic: topic, turns: turns)
  response = llm_ask(prompt)
  parse_summarize_dialogue_response(response)
rescue StandardError => e
  Legion::Logging.warn "[self_talk:llm] summarize_dialogue failed: #{e.message}"
  nil
end
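The doc does not show `build_summarize_dialogue_prompt`, so the following is only a sketch of how a turns array might be flattened into a transcript before summarization; the per-turn `{voice:, text:}` shape and the `transcript_for` helper are assumptions.

```ruby
# Hypothetical helper: render each turn as "voice: text", one per line,
# producing a plain transcript a summarization prompt could embed.
def transcript_for(turns)
  turns.map { |t| "#{t[:voice]}: #{t[:text]}" }.join("\n")
end

turns = [
  { voice: "critic", text: "This plan has risks." },
  { voice: "encourager", text: "We can handle them." }
]
```

Keeping the voice label on each line lets the summarizing model attribute positions to the personas defined in SYSTEM_PROMPT.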