Error while running AzureChatOpenAI() with the documented example code: all answer relevancy values are missing, along with some values of the other metrics #1349
Hey @koushik9121! I'm here to help you with any bugs, questions, or contributions. Let's work together to solve the issue you're facing. To resolve the TimeoutError exceptions and the missing metric values, try increasing the request timeout on both the chat model and the embeddings.
Here is the updated code with these considerations:

from langchain_openai.chat_models import AzureChatOpenAI
from langchain_openai.embeddings import AzureOpenAIEmbeddings
from ragas import evaluate
import os

# Ensure your Azure OpenAI key is set in the environment
os.environ["AZURE_OPENAI_API_KEY"] = "your-azure-openai-api-key"

# Azure OpenAI configurations
azure_configs = {
    "base_url": "https://<your-endpoint>.openai.azure.com/",
    "model_deployment": "your-deployment-name",
    "model_name": "your-model-name",
    "embedding_deployment": "your-deployment-name",
    "embedding_name": "text-embedding-ada-002",  # most likely
}

# Initialize the chat model with a higher timeout
azure_model = AzureChatOpenAI(
    openai_api_version="2023-05-15",
    azure_endpoint=azure_configs["base_url"],
    azure_deployment=azure_configs["model_deployment"],
    model=azure_configs["model_name"],
    validate_base_url=False,
    request_timeout=60,  # Increase the timeout to 60 seconds
)
# Initialize the embeddings with a higher timeout
azure_embeddings = AzureOpenAIEmbeddings(
    openai_api_version="2023-05-15",
    azure_endpoint=azure_configs["base_url"],
    azure_deployment=azure_configs["embedding_deployment"],
    model=azure_configs["embedding_name"],
    request_timeout=60,  # Increase the timeout to 60 seconds
)
# Load the dataset
from datasets import load_dataset

amnesty_qa = load_dataset("explodinggradients/amnesty_qa", "english_v2")

# Import metrics
from ragas.metrics import (
    context_precision,
    answer_relevancy,
    faithfulness,
    context_recall,
)
from ragas.metrics.critique import harmfulness

# List of metrics to use
metrics = [
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
    harmfulness,
]

# Evaluate the dataset
result = evaluate(
    amnesty_qa["eval"], metrics=metrics, llm=azure_model, embeddings=azure_embeddings
)

# Convert results to pandas DataFrame
df = result.to_pandas()
df.head()

By increasing the timeout values, the TimeoutError exceptions should become less frequent. Additionally, there are known issues or limitations with the answer_relevancy metric that can result in missing values. For additional logging and debugging, you can enable debug mode by setting the environment variable:

export RAGAS_DEBUG=true

You can also patch the logger for more detailed output:

from ragas.utils import patch_logger
import logging

patch_logger('your_module_name', logging.DEBUG)

This setup will help you capture detailed information about the execution and diagnose issues with missing metric values.
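If scores are still missing after that, it helps to pinpoint exactly which rows failed. A minimal sketch, assuming the df produced above and that the score column is named answer_relevancy (ragas names result columns after the metrics):

missing = df[df["answer_relevancy"].isna()]
print(f"{len(missing)} rows have no answer_relevancy score")
print(missing[["question", "answer"]].head())

Rows whose jobs raised TimeoutError or BadRequestError during evaluation show up as NaN in the corresponding metric columns.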
I have checked the keys and the names by making a test run using azure_model.invoke(), and the API was returning a correct answer. But what you need to understand is that I am trying to run the code provided by the RAGAS team itself, and the errors are still there. Help me with this issue.
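For reference, such a sanity check might look like the following (a sketch assuming the azure_model and azure_embeddings objects from the snippets above; the prompt is arbitrary):

# Confirm the chat deployment responds
response = azure_model.invoke("What is the capital of France?")
print(response.content)

# Confirm the embedding deployment responds
vector = azure_embeddings.embed_query("What is the capital of France?")
print(len(vector))  # 1536 dimensions for text-embedding-ada-002

If both calls succeed, credentials and deployment names are correct, which points toward rate limiting or timeouts under ragas' concurrent requests rather than a configuration problem.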
To resolve the missing answer_relevancy values, make sure each metric is explicitly initialized with the LLM and embeddings before running the evaluation.
Here is the updated code with the necessary changes:

from langchain_openai.chat_models import AzureChatOpenAI
from langchain_openai.embeddings import AzureOpenAIEmbeddings
from ragas import evaluate

# Configuration for Azure OpenAI
azure_configs = {
    "base_url": "https://<your-endpoint>.openai.azure.com/",
    "model_deployment": "your-deployment-name",
    "model_name": "your-model-name",
    "embedding_deployment": "your-deployment-name",
    "embedding_name": "text-embedding-ada-002",  # most likely
}

# Initialize the chat model with a timeout setting
azure_model = AzureChatOpenAI(
    openai_api_version="2023-05-15",
    azure_endpoint=azure_configs["base_url"],
    azure_deployment=azure_configs["model_deployment"],
    model=azure_configs["model_name"],
    validate_base_url=False,
    timeout=60,  # Set a timeout of 60 seconds
)

# Initialize the embeddings with a timeout setting
azure_embeddings = AzureOpenAIEmbeddings(
    openai_api_version="2023-05-15",
    azure_endpoint=azure_configs["base_url"],
    azure_deployment=azure_configs["embedding_deployment"],
    model=azure_configs["embedding_name"],
    timeout=60,  # Set a timeout of 60 seconds
)
# Load the dataset
from datasets import load_dataset

amnesty_qa = load_dataset("explodinggradients/amnesty_qa", "english_v2")

# Import metrics
from ragas.metrics import (
    context_precision,
    answer_relevancy,
    faithfulness,
    context_recall,
)
from ragas.metrics.critique import harmfulness

# List of metrics to use
metrics = [
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
    harmfulness,
]

# Initialize the metrics with LLMs and Embeddings
from ragas.run_config import RunConfig
from ragas.metrics.base import MetricWithLLM, MetricWithEmbeddings

def init_ragas_metrics(metrics, llm, embedding):
    for metric in metrics:
        if isinstance(metric, MetricWithLLM):
            metric.llm = llm
        if isinstance(metric, MetricWithEmbeddings):
            metric.embeddings = embedding
        run_config = RunConfig()
        metric.init(run_config)

# Wrappers for LLM and Embeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper

init_ragas_metrics(
    metrics,
    llm=LangchainLLMWrapper(azure_model),
    embedding=LangchainEmbeddingsWrapper(azure_embeddings),
)

# Evaluate the dataset
result = evaluate(
    amnesty_qa["eval"], metrics=metrics, llm=azure_model, embeddings=azure_embeddings
)

# Convert results to pandas DataFrame
df = result.to_pandas()
df.head()

Ensure that the metrics are initialized with the wrapped LLM and embeddings before calling evaluate().
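If timeouts persist even with the metrics initialized this way, another lever is to reduce concurrency and give each job a larger retry budget via RunConfig. A sketch, assuming this ragas version exposes the timeout, max_retries, max_wait, and max_workers fields:

from ragas.run_config import RunConfig

# Fewer parallel jobs lowers the chance of hitting Azure rate limits,
# and a longer per-job timeout absorbs slow responses
run_config = RunConfig(timeout=180, max_retries=10, max_wait=60, max_workers=4)

result = evaluate(
    amnesty_qa["eval"],
    metrics=metrics,
    llm=azure_model,
    embeddings=azure_embeddings,
    run_config=run_config,
)

Lowering max_workers trades evaluation speed for reliability, which is usually the right trade against rate-limited Azure deployments.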
Serializing the data might help.
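To make that suggestion concrete: one reading is that BadRequestError(Unsupported data type) means non-string values are reaching the API, so every field can be coerced to plain Python strings before evaluation. A hypothetical sketch, assuming the eval split uses the question, answer, contexts, and ground_truth column names:

serialized = amnesty_qa["eval"].map(
    lambda row: {
        "question": str(row["question"]),
        "answer": str(row["answer"]),
        "contexts": [str(c) for c in row["contexts"]],
        "ground_truth": str(row["ground_truth"]),
    }
)

result = evaluate(serialized, metrics=metrics, llm=azure_model, embeddings=azure_embeddings)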
- [ ] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug
Exception raised in Job[49]: TimeoutError() is one sort of error I am getting.
Exception raised in Job[29]: BadRequestError(Unsupported data type) is the other.
The exceptions are around 20 in total, each in one of these two formats.
All the values of answer_relevancy are missing, and some values from the other metrics are missing as well.
Ragas version: 0.1.19
Python version: 3.11
OpenAI version: 1.33.0
Code to Reproduce
from langchain_openai.chat_models import AzureChatOpenAI
from langchain_openai.embeddings import AzureOpenAIEmbeddings
from ragas import evaluate

# base_url, deployment names, and api_key redacted by the author
azure_model = AzureChatOpenAI(
    openai_api_version="2024-02-15-preview",
    azure_endpoint=base_url,
    azure_deployment="",
    model="gpt-4o",
    validate_base_url=False,
    api_key="",
)

azure_embeddings = AzureOpenAIEmbeddings(
    openai_api_version="2024-02-15-preview",
    azure_endpoint=base_url,
    azure_deployment="",
    model="textembeddingada002",
    api_key="",
)

from ragas.metrics import (
    context_precision,
    answer_relevancy,
    faithfulness,
    context_recall,
)

metrics = [
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
]

from datasets import load_dataset
from ragas import RunConfig

amnesty_qa = load_dataset("explodinggradients/amnesty_qa", "english_v2")
print(amnesty_qa)

run_config = RunConfig(timeout=300, log_tenacity=True)

result = evaluate(
    amnesty_qa["eval"],
    metrics=metrics,
    llm=azure_model,
    embeddings=azure_embeddings,
    run_config=run_config,
)
Error trace
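One way to capture a full traceback instead of the per-job summaries (sketched under the assumption that evaluate() in this ragas version accepts a raise_exceptions flag) is:

result = evaluate(
    amnesty_qa["eval"],
    metrics=metrics,
    llm=azure_model,
    embeddings=azure_embeddings,
    run_config=run_config,
    raise_exceptions=True,  # assumed flag: stop at the first failure with a full trace
)

With exceptions raised eagerly, the first failing job prints a complete stack trace, which makes it easier to tell the timeout failures apart from the BadRequestError ones.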
Expected behavior
I expect it to run fully, since this is the code provided by the team for testing. I want to test with new data, but even with the provided sample code ragas isn't working for me.
Additional context