
Issue with tokens that crewai generates #1915

Open
adharshctr opened this issue Jan 17, 2025 · 0 comments
Labels
bug Something isn't working

Comments

@adharshctr

Description

I have a crew application running and am collecting its usage metrics, which look like this:

[Screenshot: usage metrics]

Here prompt_tokens (input) is greater than completion_tokens (output).

But my input and output are as follows:

INPUT

[Screenshot: input prompt]

OUTPUT

[Screenshot: model output]

Steps to Reproduce

  1. Build a crewAI application.
  2. Collect the traces of the application and inspect the usage metrics.

Expected behavior

In my case the completion token count should be greater than the prompt token count, but the reported prompt token count is the larger one.
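One plausible explanation (an assumption about agentic loops in general, not confirmed crewAI internals): each LLM call in a multi-step agent re-sends the accumulated conversation history as the prompt, so prompt tokens grow with every iteration while completion tokens only grow by one reply per step. The toy simulation below (all numbers hypothetical) shows how prompt_tokens can dwarf completion_tokens even for a short final answer.

```python
# Hypothetical illustration, NOT crewAI source: models an agent loop where
# the full history is re-sent as the prompt on every call.

def simulate_agent_loop(system_tokens, output_tokens_per_step, steps):
    """Return cumulative (prompt_tokens, completion_tokens) over the loop."""
    history = system_tokens           # tokens re-sent on each call
    prompt_total = 0
    completion_total = 0
    for _ in range(steps):
        prompt_total += history       # the whole history is the prompt
        completion_total += output_tokens_per_step
        history += output_tokens_per_step  # the reply joins the history
    return prompt_total, completion_total

prompt, completion = simulate_agent_loop(200, 50, 5)
# prompts of 200, 250, 300, 350, 400 tokens sum to 1500;
# completions are 5 x 50 = 250 -- prompt_tokens >> completion_tokens
```

If the reported metrics are summed over every internal LLM call like this, a large prompt_tokens value would be expected rather than a bug.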

Screenshots/Code snippets

[Screenshot: code snippet]

Operating System

macOS Sonoma

Python Version

3.10

crewAI Version

0.83.0

crewAI Tools Version

0.17.0

Virtual Environment

Venv

Evidence

[Screenshot: evidence]

Possible Solution

  1. How are the prompt and completion tokens collected?
  2. Is there a way to get the prompt and completion token counts for each LLM used by an agent, rather than only the aggregate counts for the whole crew?
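For question 2, a minimal sketch of what a per-model breakdown could look like, assuming you can hook each LLM response and read a `usage` dict with `prompt_tokens`/`completion_tokens` (the shape OpenAI-style APIs return). The tracker class and the model names are hypothetical, not part of crewAI's API.

```python
# Hypothetical per-model usage aggregator -- not a crewAI feature, just a
# sketch of the breakdown requested in question 2.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Usage:
    prompt_tokens: int = 0
    completion_tokens: int = 0

class PerModelUsageTracker:
    """Accumulate token usage per model from OpenAI-style 'usage' dicts."""

    def __init__(self):
        self.by_model = defaultdict(Usage)

    def record(self, model, usage):
        entry = self.by_model[model]
        entry.prompt_tokens += usage.get("prompt_tokens", 0)
        entry.completion_tokens += usage.get("completion_tokens", 0)

# Feed it one entry per LLM call (numbers and model names are made up):
tracker = PerModelUsageTracker()
tracker.record("gpt-4o", {"prompt_tokens": 300, "completion_tokens": 40})
tracker.record("gpt-4o-mini", {"prompt_tokens": 120, "completion_tokens": 80})
tracker.record("gpt-4o", {"prompt_tokens": 350, "completion_tokens": 60})
```

Something like this could answer "which agent/LLM spent the tokens" even when the crew-level metrics only expose totals.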

Additional context

Nil

@adharshctr adharshctr added the bug Something isn't working label Jan 17, 2025