
NO WORKING CODE FOR RAGAS 0.2 IN THE ENTIRE INTERNET #1867

Closed
GenAI-Rocky opened this issue Jan 22, 2025 · 4 comments

Labels: bug


GenAI-Rocky commented Jan 22, 2025

I have checked the documentation and related resources and couldn't resolve my bug.

Describe the bug
I am encountering the error "'list' object has no attribute 'get_sample_type'" while running the code below.

I have been searching the entire Internet but couldn't find working code for Ragas version 0.2.

Ragas version: 0.2.11
Python version: 3.12

Code to Reproduce

from ragas import evaluate
import os

'''
# Ensure the OpenAI API key is set in the environment
api_key = os.getenv("AZURE_OPENAI_API_KEY")
print(api_key)
if not api_key:
    raise ValueError("Please set the OPENAI_API_KEY environment variable.")
'''
# Chat history, user input, and assistant response related to Cricket
chat_history = ["Who won the last Cricket World Cup?", "England won the last Cricket World Cup."]
user_input = "Who was the captain of the winning team?"
assistant_response = "Eoin Morgan was the captain of the England team that won the last Cricket World Cup."

# Create the dataset
dataset = [
    {
        "chat_history": chat_history,
        "user_input": user_input,
        "assistant_response": assistant_response
    }
]

# Evaluate the response with chat history
try:
    score = evaluate(dataset)
    print(f"Score: {score}")
except Exception as e:
    print(f"Error during evaluation: {e}")

# Print chat history
print("Chat History:")
for message in chat_history:
    print(f"User: {message}")

Error trace:

Traceback (most recent call last):
  File "c:\Projects\Guided Experience_Local\Evaluation\RAGAS\Chat_History.py", line 37, in <module>
    score = evaluate(dataset)
            ^^^^^^^^^^^^^^^^^
  File "C:\Users\RMothukuru\AppData\Local\Programs\Python\Python312\Lib\site-packages\ragas\_analytics.py", line 227, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\RMothukuru\AppData\Local\Programs\Python\Python312\Lib\site-packages\ragas\evaluation.py", line 258, in evaluate
    sample_type = dataset.get_sample_type()
                  ^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'get_sample_type'

Expected behavior
The code should execute without errors and the metric values should be printed.

@GenAI-Rocky GenAI-Rocky added the bug Something isn't working label Jan 22, 2025

jjmachan commented Jan 23, 2025

hey there @GenAI-Rocky - I'm sorry to see that you're having a hard time, but there is a bug in the code you posted. The evaluate() function expects the data in the format of a Dataset or an EvaluationDataset.

you can read more about this format here: https://docs.ragas.io/en/stable/concepts/components/eval_dataset/

this is one conversion that might help

from datasets import Dataset  # Hugging Face Datasets
from ragas import evaluate

# Create the dataset in the correct format
data = {
    "chat_history": [chat_history],  # Wrap in list since we have one example
    "user_input": [user_input],
    "response": [assistant_response]
}

# Convert to Hugging Face Dataset format
dataset = Dataset.from_dict(data)

# Evaluate the response with chat history
try:
    score = evaluate(dataset)
    print(f"Score: {score}")
except Exception as e:
    print(f"Error during evaluation: {e}")

can you check if it works - feel free to post any further questions you might have 🙂
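
since your data is really a short conversation, the multi-turn format might be an even closer fit. here is a rough sketch, assuming ragas 0.2's MultiTurnSample and the message classes in ragas.messages (take it as a starting point rather than tested code):

from ragas import MultiTurnSample, EvaluationDataset
from ragas.messages import HumanMessage, AIMessage

# Represent the chat history plus the follow-up turn as one multi-turn sample
sample = MultiTurnSample(
    user_input=[
        HumanMessage(content="Who won the last Cricket World Cup?"),
        AIMessage(content="England won the last Cricket World Cup."),
        HumanMessage(content="Who was the captain of the winning team?"),
        AIMessage(content="Eoin Morgan was the captain of the England team that won the last Cricket World Cup."),
    ],
    reference="Eoin Morgan was the captain of the winning England team.",
)

dataset = EvaluationDataset(samples=[sample])

keep in mind that only some metrics support multi-turn samples, so the single-turn conversion above is the simpler place to start.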

PS: I'll also add a better error message here so users have an easier time debugging

jjmachan commented

@dosu can you help him out?

dosubot bot commented Jan 23, 2025

The error you're encountering is due to the evaluate function expecting an EvaluationDataset or a Dataset object, not a plain list. To resolve this, you need to format your data correctly as an EvaluationDataset. Here's how you can do it:

from ragas import SingleTurnSample, EvaluationDataset, evaluate

# Chat history, user input, and assistant response related to Cricket
chat_history = ["Who won the last Cricket World Cup?", "England won the last Cricket World Cup."]
user_input = "Who was the captain of the winning team?"
assistant_response = "Eoin Morgan was the captain of the England team that won the last Cricket World Cup."

# Create a SingleTurnSample
sample = SingleTurnSample(
    user_input=user_input,
    retrieved_contexts=chat_history,
    response=assistant_response,
    reference="Eoin Morgan"
)

# Create the EvaluationDataset
dataset = EvaluationDataset(samples=[sample])

# Evaluate the response with chat history
try:
    score = evaluate(dataset)
    print(f"Score: {score}")
except Exception as e:
    print(f"Error during evaluation: {e}")

# Print chat history
print("Chat History:")
for message in chat_history:
    print(f"User: {message}")

This code creates a SingleTurnSample with the necessary fields and then uses it to create an EvaluationDataset. This should resolve the error and allow you to evaluate the dataset correctly [1][2].
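
One more thing to watch: calling evaluate(dataset) with no metrics or llm argument makes Ragas fall back to its default metrics and a default OpenAI model, which typically expects an OpenAI API key. Since your commented-out code reads AZURE_OPENAI_API_KEY, you will likely want to pass the metrics and an evaluator LLM explicitly. A rough sketch, assuming Azure OpenAI via langchain-openai (the deployment names and API version below are placeholders you would replace with your own):

from ragas import evaluate
from ragas.metrics import Faithfulness, AnswerCorrectness
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings

# AzureChatOpenAI / AzureOpenAIEmbeddings read AZURE_OPENAI_API_KEY and
# AZURE_OPENAI_ENDPOINT from the environment; deployment names are placeholders
evaluator_llm = LangchainLLMWrapper(AzureChatOpenAI(
    azure_deployment="your-chat-deployment",
    api_version="2024-02-01",
))
evaluator_embeddings = LangchainEmbeddingsWrapper(AzureOpenAIEmbeddings(
    azure_deployment="your-embedding-deployment",
    api_version="2024-02-01",
))

# Pass metrics, LLM, and embeddings explicitly instead of relying on defaults
score = evaluate(
    dataset,
    metrics=[Faithfulness(), AnswerCorrectness()],
    llm=evaluator_llm,
    embeddings=evaluator_embeddings,
)
print(score)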


jjmachan commented

closing this now - feel free to reopen if needed
