
V0.2.7 Context Recall only returning 1 or 0 values #1745

Open
timelesshc opened this issue Dec 10, 2024 · 1 comment

Comments

@timelesshc

[ ] I checked the documentation and related resources and couldn't find an answer to my question.

Your Question
After upgrading to version 0.2.7, the Context Recall metric only returns values of 1 or 0. Is this intentional? The previous version returned values between 0 and 1.

Code Examples
This community speaks code. Share your code snippets to help us understand your question better.

Additional context
Anything else you want to share with us?

@timelesshc timelesshc added the question Further information is requested label Dec 10, 2024
@timelesshc timelesshc changed the title Context Recall only returning 1 or 0 values V0.2.7 Context Recall only returning 1 or 0 values Dec 10, 2024
@dosubot dosubot bot added the bug Something isn't working label Dec 10, 2024
@sahusiddharth (Collaborator)

Hi @timelesshc,

Are you still facing this issue?

I tested with a simple example and received fractional values, so this doesn't appear to be intentional in 0.2.7; the results you're seeing might be related to your data. I've included the code snippet I used below for reference:

from langchain_openai import ChatOpenAI
from ragas.dataset_schema import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import LLMContextRecall

# Any evaluator LLM works here; wiring one up via LangchainLLMWrapper is just
# one common setup (assumes an OpenAI API key in the environment).
evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))

sample = SingleTurnSample(
    user_input="Tell me about the Great Wall of China.",
    response="The Great Wall of China is in northern China and was built to protect against invasions.",
    reference="The Great Wall of China is in northern China, spans thousands of miles, and was built to protect against invasions.",
    retrieved_contexts=[
        "The Great Wall of China is a historic structure located in northern China.",
        "The wall was built over centuries to protect against invasions from nomadic tribes."
    ],
)

context_recall = LLMContextRecall(llm=evaluator_llm)
# Top-level await assumes an async context (e.g. a Jupyter notebook);
# in a plain script, wrap this call in asyncio.run(...).
await context_recall.single_turn_ascore(sample)

Output

0.6666666666666666

In the above example (the resulting arithmetic is sketched after this list):

  1. The claim "The Great Wall of China is in northern China" can be derived from the first retrieved context.
  2. The claim "The wall was built to protect against invasions" can be derived from the second retrieved context.
  3. The claim "spans thousands of miles" is not directly supported by the retrieved contexts.
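Conceptually, the metric decomposes the reference into claims, checks whether each claim can be attributed to the retrieved contexts, and averages the results. A minimal sketch of that arithmetic for this sample (the claim labels are hand-written here; the real metric extracts and judges claims with the LLM):

# Hand-labeled attributions for the three reference claims above:
# 1 = attributable to the retrieved contexts, 0 = not attributable.
claims_attributed = [
    1,  # "is in northern China"                -> supported by context 1
    1,  # "built to protect against invasions"  -> supported by context 2
    0,  # "spans thousands of miles"            -> unsupported
]

recall = sum(claims_attributed) / len(claims_attributed)
print(recall)  # 0.6666666666666666

Note that a reference which decomposes into a single claim can only ever score 0 or 1 under this definition, which is one way the pattern you're describing can arise from the data rather than from the metric itself.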

@sahusiddharth sahusiddharth added answered 🤖 The question has been answered. Will be closed automatically if no new comments module-metrics this is part of metrics module labels Jan 10, 2025
@sahusiddharth sahusiddharth self-assigned this Jan 11, 2025