AttributeError('StringIO' object has no attribute 'classifications') #1688
Comments
Also having this same problem while evaluating ragas faithfulness through the giskard.rag.evaluate function.
Looks like it occurs because the fix_output_format_prompt object contains
@jjmachan @shahules786
I'm also facing the same issue.
I am also facing the same issue while evaluating ragas metrics through the giskard.rag.evaluate function.
I solved the particular problem I had while evaluating via giskard. When saving model results in an AgentAnswer object, I was consolidating all of the contexts into a single string, whereas they should be saved as a list of strings. Converting to a list of strings solved the problem for me. Here is a relevant extract from my model class - see comments in all caps.
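The fix described above can be sketched as follows. This is a minimal illustration of the pattern, not the commenter's actual model class: the function name and the placeholder answer are hypothetical, and only the list-vs-string handling of contexts comes from the comment.

```python
# Sketch of the workaround above: keep retrieved contexts as a LIST of
# strings instead of joining them into one string before handing them to
# the evaluator (e.g. when building giskard's AgentAnswer).
# Function name and answer text are illustrative placeholders.

def answer_fn(question: str, retrieved_docs: list[str]):
    answer_text = f"Answer to: {question}"  # stand-in for the real LLM call

    # WRONG (triggered the StringIO error for this commenter):
    # contexts = "\n".join(retrieved_docs)   # one concatenated string

    # RIGHT: keep the contexts as a list of strings.
    contexts = list(retrieved_docs)

    return answer_text, contexts

ans, ctx = answer_fn("What port denied the ship?", ["doc one", "doc two"])
print(isinstance(ctx, list))  # True
```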
hey folks - taking a look at this now
I tried this in my case and it still did not work (I am using giskard).
Issue is still occurring as of 2.10: #1831
I'm seeing issues specifically with . It would actually be great if there were more verbose output on what exactly goes wrong, as it becomes quite an impossible task to debug... EDIT: No wait, it seems to fail even with very simple
So after debugging this way too long, I managed to get around the issue for one of my applications by computing the
And of course, there is no need to use Tested with
NOTE: This is by no means a fix to the underlying problem but rather a temporary workaround/hack.
In my case I get "'StringIO' object has no attribute 'statements'", an error related to faithfulness. Unfortunately, as I feared, this workaround did not work for me.
Closed with fix: output parser bug by jjmachan · Pull Request #1864 · explodinggradients/ragas. It will be released with v0.2.12 🙂 I'm closing this for now, but if the issue is still persisting, please do let me know - really sorry about the delay.
👍 Hi @jjmachan, I also get this error. On my end, after digging into the problem, I realised that single quotes happened to be in the JSON output generated by the judge LLM, making JSON parsing fail. It happens in the Faithfulness metric because the judge LLM often cites extracted context in its reason field, with single quotes unfortunately. What I suggest is to slightly modify the Faithfulness prompt: #1874
thanks for the fix @michaelromagne - suggested a small change but let's get this merged in 🥳
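The prompt tweak proposed in #1874 could look roughly like this. The base prompt text and all identifiers below are illustrative placeholders, not the actual Ragas Faithfulness prompt or API; only the idea of appending an explicit double-quote instruction comes from the thread.

```python
# Illustrative sketch of the fix suggested in PR #1874: append an explicit
# instruction so the judge LLM emits only valid JSON with double quotes and
# no escaped single quotes. The base prompt is a placeholder, NOT the real
# Ragas Faithfulness prompt.

FAITHFULNESS_PROMPT = (
    "Given a context and statements, judge each statement and reply in JSON."
)

JSON_QUOTING_RULE = (
    " Output only valid JSON: use double quotes for all keys and string "
    "values, and never emit escaped single quotes (\\')."
)

def build_prompt(base: str) -> str:
    # Appends the quoting rule to whatever judge prompt is in use.
    return base + JSON_QUOTING_RULE

prompt = build_prompt(FAITHFULNESS_PROMPT)
print(JSON_QUOTING_RULE in prompt)  # True
```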
…1874) The error described in [this comment](#1688 (comment)) is not resolved when computing Faithfulness. After digging into the judge LLM's responses, there is a JSON parse error happening when parsing the output of the judge. For instance, it happens for the answer below:

```
{
    "statements": [
        {
            "statement": "The Norwegian Dawn cruise ship was denied access to Mauritius.",
            "reason": "The context explicitly states that local authorities denied permission for the Norwegian Dawn ship to access the Mauritius capital of Port Louis.",
            "verdict": 1
        },
        {
            "statement": "The denial of access was due to potential health risks.",
            "reason": "The context directly mentions that the ship was denied access \"citing \\\"potential health risks.\\\"\"",
            "verdict": 1
        },
        {
            "statement": "The specific health risk was a potential cholera outbreak on the Norwegian Dawn cruise ship.",
            "reason": "While the context mentions fears of a potential cholera outbreak in the title, it does not explicitly state that cholera was the specific health risk on the Norwegian Dawn. The context only mentions \'stomach-related illness\' without specifying cholera.",
            "verdict": 0
        }
    ]
}
```

The error is that the generated content has single quotes, as you can see in the last reason. This is not allowed in JSON, and it happens frequently with the current prompt for Faithfulness, as the judge often cites elements from the retrieved context to explain its verdict. I tried to change the `PydanticOutputParser` logic in Langchain Core, the one that Ragas uses to parse JSON. I tried to replace single quotes with double quotes with a simple string replace, but it did not work. Thus, an immediate solution that worked for me was to specifically ask the judge LLM to only output double quotes, not single quotes, and the error disappears.
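The failure mode described above can be reproduced with the standard library alone. Strictly speaking, the invalid part in the sample output is the escaped single quote `\'` (an escape sequence JSON does not define); a plain, unescaped single quote inside a string is actually legal JSON. The snippet below demonstrates both cases; the `"reason"` text is made up for illustration.

```python
import json

# A JSON document containing \' - an escape sequence that is invalid in
# JSON. Judge LLMs sometimes emit this when quoting context that itself
# contains single quotes, as in the PR example above.
bad = '{"reason": "The context mentions \\\'stomach-related illness\\\' only."}'

try:
    json.loads(bad)
    parse_failed = False
except json.JSONDecodeError:
    parse_failed = True

print(parse_failed)  # True: \' is not a legal JSON escape

# By contrast, a plain (unescaped) single quote inside a string is valid:
ok = json.loads('{"reason": "mentions \'stomach-related illness\'"}')
print(ok["reason"])  # mentions 'stomach-related illness'
```

This is why asking the judge to stick to double quotes (and avoid escaping single quotes) makes the parse error disappear.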
[ ] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug
I'm using the latest ragas version and have been encountering the `AttributeError('StringIO' object has no attribute 'classifications')` error message when evaluating metrics.
I'm using chatglm APIs and wonder if there is a compatibility issue.
Ragas version: 0.2.5
Python version: 3.12
Code to Reproduce
Error trace
Evaluating: 2%|█▍ | 14/792 [00:51<33:06, 2.55s/it]Exception raised in Job[10]: AttributeError('StringIO' object has no attribute 'classifications')
Evaluating: 5%|████▏ | 41/792 [02:50<1:32:51, 7.42s/it]Exception raised in Job[42]: AttributeError('StringIO' object has no attribute 'classifications')
Evaluating: 5%|████▎ | 42/792 [03:01<1:42:38, 8.21s/it]Exception raised in Job[46]: AttributeError('StringIO' object has no attribute 'classifications')
Evaluating: 7%|█████▋ | 54/792 [04:01<46:35, 3.79s/it]Exception raised in Job[54]: AttributeError('StringIO' object has no attribute 'classifications')
Evaluating: 7%|██████ | 58/792 [04:16<43:52, 3.59s/it]Exception raised in Job[50]: AttributeError('StringIO' object has no attribute 'classifications')
Evaluating: 8%|███████ | 67/792 [04:46<37:19, 3.09s/it]Exception raised in Job[62]: AttributeError('StringIO' object has no attribute 'classifications')
Expected behavior
Additional context
Add any other context about the problem here.