Class: RubricLLM::Metrics::ContextPrecision
- Defined in:
- lib/rubric_llm/metrics/context_precision.rb
Constant Summary
- SYSTEM_PROMPT =

```ruby
<<~PROMPT
  You are an evaluation judge. Assess whether the retrieved contexts are relevant to the question.
  Context precision measures if the retrieved documents are useful for answering the question.
  Respond with JSON only:
  {
    "score": <float 0.0-1.0>,
    "context_scores": [{"index": <int>, "relevant": <true/false>, "reason": "<brief>"}],
    "reasoning": "<brief explanation>"
  }
PROMPT
```
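The JSON schema the judge is asked to return can be exercised in isolation. A minimal sketch using only the stdlib `json` module; the reply below is a hypothetical example, not output from the gem:

```ruby
require "json"

# Hypothetical judge reply matching the schema in SYSTEM_PROMPT.
reply = <<~JSON
  {
    "score": 0.5,
    "context_scores": [
      {"index": 1, "relevant": true,  "reason": "answers the question"},
      {"index": 2, "relevant": false, "reason": "off-topic"}
    ],
    "reasoning": "One of two contexts is relevant."
  }
JSON

parsed   = JSON.parse(reply)
relevant = parsed["context_scores"].count { |c| c["relevant"] }
puts parsed["score"]
puts "#{relevant}/#{parsed["context_scores"].size} contexts relevant"
```

Note that `score` is a float in `0.0..1.0` and `context_scores` carries one entry per retrieved context, so a caller can recover both the aggregate and per-context verdicts.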
Instance Attribute Summary
Attributes inherited from Base
Instance Method Summary
Methods inherited from Base
Constructor Details
This class inherits a constructor from RubricLLM::Metrics::Base
Instance Method Details
#call(question:, context: []) ⇒ Object
```ruby
# File 'lib/rubric_llm/metrics/context_precision.rb', line 18

def call(question:, context: [], **)
  return { score: nil, details: { error: "No context provided" } } if Array(context).empty?

  user_prompt = <<~PROMPT
    Question: #{question}

    Contexts:
    #{Array(context).each_with_index.map { |c, i| "#{i + 1}. #{c}" }.join("\n")}

    Evaluate how relevant each context is to the question.
  PROMPT

  result = judge_eval(system_prompt: SYSTEM_PROMPT, user_prompt:)
  normalize(result)
end
```
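The prompt construction inside `#call` is deterministic and can be reproduced without the gem or an LLM call. A standalone sketch of the context-numbering step, with illustrative question and context values:

```ruby
# Example inputs (illustrative only).
question = "What is the capital of France?"
context  = ["Paris is the capital of France.", "The Eiffel Tower is 330 m tall."]

# Mirrors the interpolation in #call: contexts are numbered 1..n, one per line.
user_prompt = <<~PROMPT
  Question: #{question}

  Contexts:
  #{context.each_with_index.map { |c, i| "#{i + 1}. #{c}" }.join("\n")}

  Evaluate how relevant each context is to the question.
PROMPT

puts user_prompt
```

Because `#call` wraps its argument in `Array(context)`, passing a single string instead of an array also works, and an empty or nil context short-circuits to `{ score: nil, details: { error: "No context provided" } }` before any judge call is made.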