token usage parser for llama_index #1248
Comments
Hey @tarunn2799! 👋 I'm here to help you with any bugs, questions, or contributions while you wait for a human maintainer. Let's tackle this together!

To use the token usage parser, here is the complete code snippet:

```python
from datasets import load_dataset

from ragas import evaluate
from ragas.cost import get_token_usage_for_openai
from ragas.metrics import (
    answer_relevancy,
    faithfulness,
    context_recall,
    context_precision,
)
from langchain_openai.chat_models import ChatOpenAI

# Load the dataset
amnesty_qa = load_dataset("explodinggradients/amnesty_qa", "english_v2")

# Initialize the LLM
gpt4o = ChatOpenAI(model="gpt-4o")

# Perform the evaluation with the token usage parser
result = evaluate(
    amnesty_qa["eval"],
    metrics=[
        context_precision,
        faithfulness,
        answer_relevancy,
        context_recall,
    ],
    llm=gpt4o,
    token_usage_parser=get_token_usage_for_openai,
)

# Get the total tokens used
total_tokens = result.total_tokens()

# Compute the total cost ($5 per 1M input tokens, $15 per 1M output tokens)
total_cost = result.total_cost(
    cost_per_input_token=5 / 1e6,
    cost_per_output_token=15 / 1e6,
)
```

This code will evaluate the dataset with the specified metrics and compute the token usage and cost using the token usage parser.
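The arithmetic behind `total_cost` is straightforward: tokens times per-token price, summed over input and output. A standalone sketch (the prices below mirror the snippet's illustrative $5/$15 per 1M figures, not necessarily current OpenAI rates):

```python
def compute_cost(input_tokens: int, output_tokens: int,
                 cost_per_input_token: float,
                 cost_per_output_token: float) -> float:
    """Total cost = input tokens * input price + output tokens * output price."""
    return (input_tokens * cost_per_input_token
            + output_tokens * cost_per_output_token)

# 1M input tokens at $5/1M plus 200k output tokens at $15/1M
cost = compute_cost(1_000_000, 200_000, 5 / 1e6, 15 / 1e6)
# 5.0 + 3.0 = 8.0 dollars
```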
That's just the general documentation.
@tarunn2799 currently llama_index does not support token_parser, sadly. I will take a look at it, but right now we are working on #1009 so this might take some time 🙁 If you want to take a stab at it I can help you out; would that be something you'd be interested in?
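For anyone wanting to attempt this, a hypothetical sketch of what such a parser might look like. `TokenUsage` here is a stand-in for `ragas.cost.TokenUsage`, and the response shape (an OpenAI-style `usage` dict hanging off a `raw` attribute) is an assumption for illustration, not llama_index's actual API:

```python
# Hypothetical: a get_token_usage_for_llama_index parser sketch.
from dataclasses import dataclass


@dataclass
class TokenUsage:
    # Mirrors the fields a ragas token usage parser is expected to return.
    input_tokens: int
    output_tokens: int
    model: str = ""


def get_token_usage_for_llama_index(response) -> TokenUsage:
    # Assumption: the provider payload is available under `response.raw`
    # and reports OpenAI-style prompt/completion token counts.
    usage = getattr(response, "raw", {}).get("usage", {})
    return TokenUsage(
        input_tokens=usage.get("prompt_tokens", 0),
        output_tokens=usage.get("completion_tokens", 0),
    )
```

The real work would be mapping whatever llama_index's response objects actually expose into this shape.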
Yes it would be! Let me know how I can contribute. Thanks :)
Awesome! This is my calendar, can you find a time there? We can go over how you might do this 🙂
- [x] I checked the documentation and related resources and couldn't find an answer to my question.
Your Question
How do I use the token usage parser when I'm using llama_index evaluate?
Code Examples
Additional context