
Prompt Error, TimeOut error #1693

Open
giambascientist86 opened this issue Nov 20, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@giambascientist86

2024-11-20 16:06:08,086 - ERROR - Prompt fix_output_format failed to parse output: The output parser failed to parse the output including retries.
2024-11-20 16:06:08,093 - ERROR - Prompt fix_output_format failed to parse output: The output parser failed to parse the output including retries.
2024-11-20 16:06:08,094 - ERROR - Prompt fix_output_format failed to parse output: The output parser failed to parse the output including retries.
2024-11-20 16:06:08,094 - ERROR - Prompt context_recall_classific…

[ ] I have checked the documentation and related resources and couldn't resolve my bug.

Describe the bug
These two errors keep appearing when evaluating HuggingFace models on HF Datasets.

Ragas version: latest
Python version: 3.12

Code to Reproduce
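No repro code was attached; as a minimal sketch of the kind of setup that triggers these errors (assumed, not taken from the report: the model id and sample data are hypothetical, and the columns follow the ragas 0.2 schema):

from langchain_huggingface import HuggingFacePipeline
from ragas import EvaluationDataset, evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import faithfulness, context_recall

# Hypothetical local HuggingFace model; any text-generation pipeline
# behaves the same way here.
hf_llm = HuggingFacePipeline.from_model_id(
    model_id="mistralai/Mistral-7B-Instruct-v0.2",
    task="text-generation",
)

# One toy sample (made-up content) in the ragas 0.2 column schema.
dataset = EvaluationDataset.from_list([
    {
        "user_input": "What does ragas evaluate?",
        "response": "ragas scores RAG pipelines on metrics such as faithfulness.",
        "retrieved_contexts": ["ragas is an evaluation framework for RAG pipelines."],
        "reference": "ragas evaluates retrieval-augmented generation pipelines.",
    }
])

result = evaluate(
    dataset,
    metrics=[faithfulness, context_recall],
    llm=LangchainLLMWrapper(hf_llm),
)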

Error trace

Expected behaviour:
I expect the evaluation to complete without output parser errors.

Additional context

@giambascientist86 giambascientist86 added the bug Something isn't working label Nov 20, 2024
@giambascientist86 (Author)

Hello, any update on this issue? I am still facing it and it is blocking my pipeline. :)

@NoamDetournay

Hi @giambascientist86, I encountered the same issue and managed to resolve it by using a callback to capture the prompt and the model's raw output. I came across a great solution in #1729, which worked really well.

Here's how I did it:

from langchain_core.callbacks import BaseCallbackHandler
from ragas import evaluate, RunConfig
from ragas.metrics import faithfulness

# Prints every prompt sent to the LLM and every raw response, so you can
# see exactly what the output parser is failing on.
class TestCallback(BaseCallbackHandler):

    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"********** Prompts **********:\n{prompts[0]}\n\n")

    def on_llm_end(self, response, **kwargs):
        print(f"********** Response **********:\n{response}\n\n")

# `dataset`, `llm`, and `azure_embeddings` are assumed to be defined earlier.
score = evaluate(
    dataset,
    metrics=[faithfulness],
    llm=llm,
    embeddings=azure_embeddings,
    raise_exceptions=True,
    callbacks=[TestCallback()],
    run_config=RunConfig(timeout=10, max_retries=1, max_wait=60, max_workers=1),
)
score.to_pandas()
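One caveat on the config above: timeout=10 gives each LLM call only ten seconds, which slower HuggingFace models can easily exceed, so the TimeOut errors may simply be the model running out of time. If they persist, a looser RunConfig (illustrative values, not from the thread) is worth trying:

run_config=RunConfig(timeout=180, max_retries=5, max_wait=90, max_workers=1)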

Hope it helps!

Also, I'm using ragas == 0.2.6.
