I just tried ragas to evaluate my GraphRAG app in Chinese and found that the answer relevancy metric is worse for every question. The cause is that the question generated from the answer is in English, so the embeddings of the original (Chinese) question and the generated (English) question are quite different. To work around this, I modified the function in ~/ragas/prompt/pydantic_prompt.py to instruct the LLM to output the generated question in Chinese, and it does work:
def _generate_output_signature(self, indent: int = 4) -> str:
    return (
        f"Please return the output in a JSON format that complies with the "
        f"following schema as specified in JSON Schema and the generated question in Chinese:\n"
        f"{self.output_model.model_json_schema()}"
    )
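For illustration, the same idea could be parameterized instead of hard-coding Chinese (the self.language attribute below is hypothetical, not an existing field of the prompt class):

def _generate_output_signature(self, indent: int = 4) -> str:
    # self.language is a hypothetical attribute for this sketch; it would
    # default to "English" and be set per metric or per evaluation run.
    return (
        f"Please return the output in a JSON format that complies with the "
        f"following schema as specified in JSON Schema, and write the "
        f"generated question in {self.language}:\n"
        f"{self.output_model.model_json_schema()}"
    )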
But I know this function is called for more than just this metric, so a solution that supports all languages is needed; that is why I am writing down this issue here.
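Newer ragas versions also seem to have a prompt-adaptation API; I have not checked whether it covers the question-generation prompt of this metric, but if it does, something like the following could avoid patching the source (the model name and LLM wrapper here are only for illustration):

import asyncio

from langchain_openai import ChatOpenAI
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import answer_relevancy


async def adapt_to_chinese() -> None:
    llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))
    # Translate the metric's internal prompts into the target language
    # and install the adapted versions on the metric instance.
    adapted = await answer_relevancy.adapt_prompts(language="chinese", llm=llm)
    answer_relevancy.set_prompts(**adapted)


asyncio.run(adapt_to_chinese())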
Best regards
Jean from China