Module: Legion::Extensions::Agentic::Social::Conscience::Runners::Conscience
- Includes:
- Helpers::Lex
- Included in:
- Client
- Defined in:
- lib/legion/extensions/agentic/social/conscience/runners/conscience.rb
Instance Method Summary
- #conscience_stats ⇒ Object
  Aggregate moral reasoning stats.
- #moral_dilemmas ⇒ Object
  List unresolved moral dilemmas (cases where foundations strongly disagreed).
- #moral_evaluate(action:, context: {}) ⇒ Object
  Full moral assessment of a proposed action.
- #moral_history(limit: 20) ⇒ Object
  Recent moral evaluation history.
- #moral_status ⇒ Object
  Current moral sensitivities and consistency score.
- #update_moral_outcome(action:, outcome:, verdict: nil) ⇒ Object
  Record whether the agent actually followed or overrode its moral verdict.
Instance Method Details
#conscience_stats ⇒ Object
Aggregate moral reasoning stats
# File 'lib/legion/extensions/agentic/social/conscience/runners/conscience.rb', line 84

def conscience_stats(**)
  stats = moral_store.aggregate_stats
  log.debug '[conscience] stats'
  stats.merge(
    verdict_distribution: verdict_distribution(stats[:verdict_counts]),
    foundation_weights: Helpers::Constants::MORAL_FOUNDATIONS.transform_values { |v| v[:weight] }
  )
end
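The verdict_distribution helper called above is not shown on this page. A minimal sketch of what it might do, assuming verdict_counts is a flat hash of verdict symbols to tallies (the helper name is real; this body is a guess, not the actual implementation):

```ruby
# Hypothetical sketch of verdict_distribution: convert raw tallies
# such as { permit: 8, caution: 3, forbid: 1 } into percentage shares.
def verdict_distribution(verdict_counts)
  total = verdict_counts.values.sum
  return {} if total.zero?

  verdict_counts.transform_values do |count|
    (count.to_f / total * 100).round(1) # share of all evaluations, in percent
  end
end
```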
#moral_dilemmas ⇒ Object
List unresolved moral dilemmas (cases where foundations strongly disagreed)
# File 'lib/legion/extensions/agentic/social/conscience/runners/conscience.rb', line 73

def moral_dilemmas(**)
  open = moral_store.open_dilemmas
  log.debug "[conscience] dilemmas: #{open.size} open"
  {
    dilemmas: open,
    count: open.size
  }
end
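The page does not show how the store decides a dilemma is open. One plausible reading of "foundations strongly disagreed" is a spread check over per-foundation scores; a hedged sketch of that idea (DILEMMA_SPREAD and dilemma? are illustrative names, not from the source):

```ruby
# Hypothetical sketch: foundations "strongly disagree" when their scores
# span more than a fixed threshold.
DILEMMA_SPREAD = 0.6 # assumed threshold, not taken from the actual store

def dilemma?(foundation_scores)
  return false if foundation_scores.empty?

  scores = foundation_scores.values
  (scores.max - scores.min) > DILEMMA_SPREAD
end
```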
#moral_evaluate(action:, context: {}) ⇒ Object
Full moral assessment of a proposed action.
action: string or symbol describing what is about to happen.
context: hash of moral context signals (harm_to_others, consent_present, etc.).
# File 'lib/legion/extensions/agentic/social/conscience/runners/conscience.rb', line 16

def moral_evaluate(action:, context: {}, **)
  result = moral_store.evaluator.evaluate(action: action, context: context)
  moral_store.record_evaluation(result)
  log.debug "[conscience] action=#{action} verdict=#{result[:verdict]} " \
            "score=#{result[:weighted_score]} dilemma=#{result[:dilemma]&.dig(:type)}"
  result
end
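The result's :weighted_score key suggests per-foundation scores are combined using the foundation weights surfaced in conscience_stats. A sketch under that assumption (weighted_score here is an illustrative name, not the evaluator's actual method):

```ruby
# Hypothetical sketch of a weighted moral score: each foundation contributes
# score * weight, normalised by the total weight so the result stays in the
# same range as the individual scores.
def weighted_score(scores, weights)
  total_weight = weights.values.sum.to_f
  return 0.0 if total_weight.zero?

  scores.sum { |foundation, score| score * weights.fetch(foundation, 0) } / total_weight
end
```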
#moral_history(limit: 20) ⇒ Object
Recent moral evaluation history
# File 'lib/legion/extensions/agentic/social/conscience/runners/conscience.rb', line 42

def moral_history(limit: 20, **)
  recent = moral_store.recent_evaluations(limit)
  log.debug "[conscience] history: #{recent.size} entries"
  {
    history: recent,
    total: moral_store.history.size,
    limit: limit
  }
end
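recent_evaluations is not defined on this page. Assuming the store keeps an append-only history array, a minimal sketch returning the newest entries first:

```ruby
# Hypothetical sketch: take the newest `limit` entries from an
# append-only history array, most recent first.
def recent_evaluations(history, limit)
  history.last(limit).reverse
end
```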
#moral_status ⇒ Object
Current moral sensitivities and consistency score
# File 'lib/legion/extensions/agentic/social/conscience/runners/conscience.rb', line 27

def moral_status(**)
  stats = moral_store.aggregate_stats
  sensitivities = moral_store.foundation_sensitivities
  log.debug "[conscience] consistency=#{stats[:consistency_score]} " \
            "evaluations=#{stats[:total_evaluations]}"
  {
    sensitivities: sensitivities,
    consistency: stats[:consistency_score],
    stats: stats
  }
end
#update_moral_outcome(action:, outcome:, verdict: nil) ⇒ Object
Record whether the agent actually followed or overrode its moral verdict.
outcome: :followed | :overridden
# File 'lib/legion/extensions/agentic/social/conscience/runners/conscience.rb', line 55

def update_moral_outcome(action:, outcome:, verdict: nil, **)
  effective_verdict = verdict || infer_last_verdict(action)
  moral_store.record_follow_through(effective_verdict, outcome)
  log.debug "[conscience] follow_through action=#{action} " \
            "verdict=#{effective_verdict} outcome=#{outcome} " \
            "consistency=#{moral_store.consistency_score}"
  {
    action: action,
    verdict: effective_verdict,
    outcome: outcome,
    consistency: moral_store.consistency_score
  }
end
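The consistency_score the store reports is not defined here either. One simple definition consistent with the :followed | :overridden outcomes above is the fraction of recorded outcomes where the agent followed its own verdict; a sketch (illustrative, not the store's actual implementation):

```ruby
# Hypothetical sketch of a consistency score: the fraction of recorded
# follow-through entries whose outcome was :followed.
def consistency_score(follow_through)
  total = follow_through.size
  return 1.0 if total.zero? # no evidence of inconsistency yet

  followed = follow_through.count { |entry| entry[:outcome] == :followed }
  followed.to_f / total
end
```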