fix: rubrics based metrics (#1821)
- #1800

---------

Co-authored-by: ikka <[email protected]>
sahusiddharth and shahules786 authored Jan 8, 2025
1 parent 7bf1ecc commit c0dc689
Showing 2 changed files with 40 additions and 22 deletions.
30 changes: 20 additions & 10 deletions docs/concepts/metrics/available_metrics/general_purpose.md
@@ -69,28 +69,38 @@ await scorer.single_turn_ascore(sample)

## Rubrics based criteria scoring

The domain-specific evaluation metric is a rubric-based metric used to evaluate responses within a specific domain. The rubric consists of a description for each score, typically ranging from 1 to 5. The response is evaluated and scored by the LLM using the descriptions specified in the rubric. This metric also has reference-free and reference-based variations.

The rubric-based criteria scoring metric is used to evaluate responses against user-defined rubrics. Each rubric defines a detailed description for every score, typically ranging from 1 to 5. The LLM assesses and scores responses according to these descriptions, ensuring a consistent and objective evaluation.

!!! note
    When defining rubrics, keep the terminology consistent with the schema used in `SingleTurnSample` or `MultiTurnSample`. For instance, if the schema uses the term `reference`, make sure the rubrics use the same term rather than alternatives such as "ground truth".

#### Example
```python
from ragas.dataset_schema import SingleTurnSample
from ragas.metrics import RubricsScore

sample = SingleTurnSample(
user_input="Where is the Eiffel Tower located?",
response="The Eiffel Tower is located in Paris.",
reference="The Eiffel Tower is located in Paris.",
response="The Earth is flat and does not orbit the Sun.",
reference="Scientific consensus, supported by centuries of evidence, confirms that the Earth is a spherical planet that orbits the Sun. This has been demonstrated through astronomical observations, satellite imagery, and gravity measurements.",
)

rubrics = {
"score1_description": "The response is incorrect, irrelevant, or does not align with the ground truth.",
"score2_description": "The response partially matches the ground truth but includes significant errors, omissions, or irrelevant information.",
"score3_description": "The response generally aligns with the ground truth but may lack detail, clarity, or have minor inaccuracies.",
"score4_description": "The response is mostly accurate and aligns well with the ground truth, with only minor issues or missing details.",
"score5_description": "The response is fully accurate, aligns completely with the ground truth, and is clear and detailed.",
"score1_description": "The response is entirely incorrect and fails to address any aspect of the reference.",
"score2_description": "The response contains partial accuracy but includes major errors or significant omissions that affect its relevance to the reference.",
"score3_description": "The response is mostly accurate but lacks clarity, thoroughness, or minor details needed to fully address the reference.",
"score4_description": "The response is accurate and clear, with only minor omissions or slight inaccuracies in addressing the reference.",
"score5_description": "The response is completely accurate, clear, and thoroughly addresses the reference without any errors or omissions.",
}
scorer = RubricsScore(rubrics=rubrics, llm=evaluator_llm)


scorer = RubricsScore(rubrics=rubrics, llm=evaluator_llm)
await scorer.single_turn_ascore(sample)
```

Output
```
1
```
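
`RubricsScore` also works with multi-turn samples. Below is a minimal sketch, reusing the `rubrics` dictionary and `evaluator_llm` from the example above and assuming the message types exposed in `ragas.messages`:

```python
from ragas.dataset_schema import MultiTurnSample
from ragas.messages import AIMessage, HumanMessage
from ragas.metrics import RubricsScore

# A short conversation scored against the same 1-5 rubric descriptions.
sample = MultiTurnSample(
    user_input=[
        HumanMessage(content="Does the Earth orbit the Sun?"),
        AIMessage(content="No, the Sun orbits the Earth."),
    ],
    reference="The Earth is a spherical planet that orbits the Sun.",
)

scorer = RubricsScore(rubrics=rubrics, llm=evaluator_llm)
await scorer.multi_turn_ascore(sample)
```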

## Instance Specific rubrics criteria scoring

The instance-specific evaluation metric is a rubric-based metric used to evaluate responses on a per-instance basis, i.e., each instance to be evaluated is annotated with its own rubric-based evaluation criteria. The rubric consists of a description for each score, typically ranging from 1 to 5. The response is evaluated and scored by the LLM using the descriptions specified in the rubric. This metric also has reference-free and reference-based variations. This scoring method is useful when evaluating each instance in your dataset requires highly customized evaluation criteria, as in the sketch below.
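
As an illustration only, here is a minimal sketch of per-instance scoring. It assumes a ragas version that exposes an `InstanceRubrics` metric and a per-sample `rubrics` field on `SingleTurnSample` (neither is shown in this commit), and that `evaluator_llm` is already configured:

```python
from ragas.dataset_schema import SingleTurnSample
from ragas.metrics import InstanceRubrics  # assumed import; check your ragas version

# Each sample carries its own rubric, so different instances can be judged
# against different criteria. The rubric below is abbreviated; a full rubric
# would typically describe scores 1 through 5.
sample = SingleTurnSample(
    user_input="Summarise the store's return policy.",
    response="Items can be returned within 30 days with a receipt.",
    rubrics={
        "score1_description": "The response misstates the return policy.",
        "score3_description": "The response is partially correct but omits a key condition.",
        "score5_description": "The response states both the return window and the receipt requirement accurately.",
    },
)

scorer = InstanceRubrics(llm=evaluator_llm)
await scorer.single_turn_ascore(sample)
```
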
32 changes: 20 additions & 12 deletions src/ragas/metrics/_domain_specific_rubrics.py
@@ -24,19 +24,19 @@


DEFAULT_REFERENCE_FREE_RUBRICS = {
"score1_description": "The response is incorrect or does not answer the question.",
"score2_description": "The response is partially correct but may include errors or incomplete information.",
"score3_description": "The response is generally correct but lacks clarity or completeness.",
"score4_description": "The response is correct and clear, with minor issues or missing details.",
"score5_description": "The response is completely accurate, clear, and answers the question directly.",
"score1_description": "The response is entirely incorrect and fails to address any aspect of the user input.",
"score2_description": "The response contains partial accuracy but includes major errors or significant omissions that affect its relevance to the user input.",
"score3_description": "The response is mostly accurate but lacks clarity, thoroughness, or minor details needed to fully address the user input.",
"score4_description": "The response is accurate and clear, with only minor omissions or slight inaccuracies in addressing the user input.",
"score5_description": "The response is completely accurate, clear, and thoroughly addresses the user input without any errors or omissions.",
}

DEFAULT_WITH_REFERENCE_RUBRICS = {
"score1_description": "The response is incorrect, irrelevant, or does not align with the ground truth.",
"score2_description": "The response partially matches the ground truth but includes significant errors, omissions, or irrelevant information.",
"score3_description": "The response generally aligns with the ground truth but may lack detail, clarity, or have minor inaccuracies.",
"score4_description": "The response is mostly accurate and aligns well with the ground truth, with only minor issues or missing details.",
"score5_description": "The response is fully accurate, aligns completely with the ground truth, and is clear and detailed.",
"score1_description": "The response is entirely incorrect, irrelevant, or does not align with the reference in any meaningful way.",
"score2_description": "The response partially matches the reference but contains major errors, significant omissions, or irrelevant information.",
"score3_description": "The response aligns with the reference overall but lacks sufficient detail, clarity, or contains minor inaccuracies.",
"score4_description": "The response is mostly accurate, aligns closely with the reference, and contains only minor issues or omissions.",
"score5_description": "The response is fully accurate, completely aligns with the reference, and is clear, thorough, and detailed.",
}


@@ -71,13 +71,13 @@ class MultiTurnInputWithoutRubric(BaseModel):


class SingleTurnPrompt(PydanticPrompt[SingleTurnInputWithoutRubric, ScoreFeedback]):
instruction = "" # this will be set in the constructor
instruction = "Your task is to assign an appropriate score and provide feedback to the inputs based solely on the scoring criteria."
input_model = SingleTurnInputWithoutRubric
output_model = ScoreFeedback


class MultiTurnPrompt(PydanticPrompt[MultiTurnInputWithoutRubric, ScoreFeedback]):
instruction = "" # this will be set in the constructor
instruction = "Your task is to assign an appropriate score and provide feedback to the inputs based solely on the scoring criteria."
input_model = MultiTurnInputWithoutRubric
output_model = ScoreFeedback

@@ -111,6 +111,12 @@ def __init__(
"reference:optional",
},
}

# Add rubrics to the scoring prompts
rubrics_text = "\n".join(f"{key}: {value}" for key, value in self.rubrics.items())
self.single_turn_scoring_prompt.instruction = f"{self.single_turn_scoring_prompt.instruction}\n\nScoring Rubrics:\n{rubrics_text}\n"
self.multi_turn_scoring_prompt.instruction = f"{self.multi_turn_scoring_prompt.instruction}\n\nScoring Rubrics:\n{rubrics_text}\n"

super().__init__(
name=name,
llm=llm,
Expand Down Expand Up @@ -142,6 +148,7 @@ async def _ascore(self, row: t.Dict, callbacks: Callbacks) -> float:
            reference=reference,
            reference_contexts=reference_contexts,
        )

        output = await self.single_turn_scoring_prompt.generate(
            data=prompt_input,
            llm=self.llm,
Expand All @@ -158,6 +165,7 @@ async def _multi_turn_ascore(
        prompt_input = MultiTurnInputWithoutRubric(
            user_input=interaction,
        )

        output = await self.multi_turn_scoring_prompt.generate(
            data=prompt_input,
            llm=self.llm,
