Class: RubricLLM::Metrics::ContextRecall
- Defined in:
- lib/rubric_llm/metrics/context_recall.rb
Constant Summary
- SYSTEM_PROMPT =
  <<~PROMPT
    You are an evaluation judge. Assess whether the provided contexts cover
    the information in the ground truth. Context recall measures if the
    retrieved documents contain enough information to construct the ground
    truth answer.
    Respond with JSON only:
    {
      "score": <float 0.0-1.0>,
      "covered_facts": [{"fact": "<from ground truth>", "covered": <true/false>, "source_context": <int or null>}],
      "reasoning": "<brief explanation>"
    }
  PROMPT
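To illustrate the response shape that SYSTEM_PROMPT requests, here is a hedged sketch (the sample values are invented, not from the gem) of a conforming judge reply parsed with Ruby's standard library:

```ruby
require "json"

# Hypothetical judge response matching the JSON schema in SYSTEM_PROMPT.
# The facts and score below are made-up example data.
raw = <<~JSON
  {
    "score": 0.5,
    "covered_facts": [
      {"fact": "Paris is the capital of France", "covered": true, "source_context": 1},
      {"fact": "Paris hosted the 2024 Olympics", "covered": false, "source_context": null}
    ],
    "reasoning": "One of two ground-truth facts is supported by the contexts."
  }
JSON

result = JSON.parse(raw)
covered = result["covered_facts"].count { |f| f["covered"] }
puts result["score"]                                      # 0.5
puts "#{covered}/#{result["covered_facts"].length} covered" # 1/2 covered
```

`source_context` is a 1-based index into the supplied contexts, or `null` when no context supports the fact.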
Instance Attribute Summary
Attributes inherited from Base
Instance Method Summary
Methods inherited from Base
Constructor Details
This class inherits a constructor from RubricLLM::Metrics::Base
Instance Method Details
#call(context: [], ground_truth: nil, **) ⇒ Object
# File 'lib/rubric_llm/metrics/context_recall.rb', line 18

def call(context: [], ground_truth: nil, **)
  return { score: nil, details: { error: "No ground truth provided" } } if ground_truth.nil?
  return { score: nil, details: { error: "No context provided" } } if Array(context).empty?

  user_prompt = <<~PROMPT
    Contexts:
    #{Array(context).each_with_index.map { |c, i| "#{i + 1}. #{c}" }.join("\n")}

    Ground Truth:
    #{ground_truth}

    Evaluate how well the contexts cover the facts in the ground truth.
  PROMPT

  result = judge_eval(system_prompt: SYSTEM_PROMPT, user_prompt:)
  normalize(result)
end
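As a self-contained sketch of the prompt-assembly step above (the helper name `build_user_prompt` is invented for illustration; the gem inlines this in #call), the heredoc interpolation works like this:

```ruby
# Illustrative only: mirrors the user_prompt heredoc inside #call,
# numbering each context and appending the ground truth.
def build_user_prompt(context, ground_truth)
  <<~PROMPT
    Contexts:
    #{Array(context).each_with_index.map { |c, i| "#{i + 1}. #{c}" }.join("\n")}

    Ground Truth:
    #{ground_truth}

    Evaluate how well the contexts cover the facts in the ground truth.
  PROMPT
end

prompt = build_user_prompt(
  ["Paris is the capital of France.", "France is in Europe."],
  "Paris, the capital of France, lies in Europe."
)
puts prompt
```

Note that `Array(context)` lets a single string be passed in place of an array, and the 1-based numbering matches the `source_context` indices the judge is asked to return.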