[R-304] support more providers for is_finished() logic #1548

Open
jjmachan opened this issue Oct 22, 2024 · 7 comments
Labels
bug Something isn't working

Comments

@jjmachan
Member

jjmachan commented Oct 22, 2024

Suggest improvements to the logic that figures out the model's finish reason, so that the default is_finished() parser covers more providers.

You can also define your own custom parser and use that instead (a sketch follows below).

R-304
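
A minimal sketch of what such a custom parser could look like, assuming ragas' LangchainLLMWrapper exposes an is_finished_parser hook (the keyword argument name and the set of accepted finish reasons below are assumptions; check your installed version):

from langchain_core.outputs import LLMResult
from langchain_openai import ChatOpenAI  # any LangChain chat model works here
from ragas.llms import LangchainLLMWrapper

# Hypothetical set of provider finish reasons treated as "generation completed".
FINISHED_REASONS = {"stop", "eos", "eos_token", "length"}

def my_is_finished_parser(response: LLMResult) -> bool:
    """Return True only when every generation reports an accepted finish reason."""
    for generations in response.generations:
        for generation in generations:
            info = generation.generation_info or {}
            if info.get("finish_reason") not in FINISHED_REASONS:
                return False
    return True

# The keyword argument name is an assumption; confirm it against your ragas version.
evaluator_llm = LangchainLLMWrapper(
    ChatOpenAI(model="gpt-4o-mini"),
    is_finished_parser=my_is_finished_parser,
)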

@jjmachan added the bug label on Oct 22, 2024
@jjmachan changed the title from "support more providers for is_finished() logic" to "[R-304] support more providers for is_finished() logic" on Oct 22, 2024
@ahgraber
Contributor

Llama models hosted on TogetherAI use {"finish_reason": "eos"}

{
  "content": "The letter 'r' appears twice in the word 'strawberry'.",
  "additional_kwargs": {
    "refusal": null
  },
  "response_metadata": {
    "token_usage": {
      "completion_tokens": 16,
      "prompt_tokens": 54,
      "total_tokens": 70,
      "completion_tokens_details": null,
      "prompt_tokens_details": null
    },
    "model_name": "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
    "system_fingerprint": null,
    "finish_reason": "eos",
    "logprobs": null
  },
  "type": "ai",
  "name": null,
  "id": "run-16ae060b-4bc2-4624-b07e-dce650033d6c-0",
  "example": false,
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 54,
    "output_tokens": 16,
    "total_tokens": 70,
    "input_token_details": {},
    "output_token_details": {}
  }
}
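
If you want to double-check what a provider actually reports before patching the default parser, a quick sketch (assuming the langchain-together package; the model name is taken from the metadata above):

from langchain_together import ChatTogether  # assumption: langchain-together is installed

llm = ChatTogether(model="meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo")
msg = llm.invoke("How many times does the letter 'r' appear in 'strawberry'?")
# TogetherAI-hosted Llama models report "eos" here instead of OpenAI's "stop".
print(msg.response_metadata.get("finish_reason"))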

@jjmachan
Member Author

hey @ahgraber, thanks for reporting this as always 🙂
I'll add it to the parser

@spackows

While you're at it, IBM watsonx LLMs give eos_token when generation finishes:

generation_info={'finish_reason': 'eos_token'}
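
For reference, a sketch of how that surfaces through langchain-ibm (the model id and credentials below are placeholders, not tested here):

from langchain_ibm import WatsonxLLM  # assumption: langchain-ibm is installed

llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",  # hypothetical model id
    url="https://us-south.ml.cloud.ibm.com",
    project_id="YOUR_PROJECT_ID",  # placeholder
)
result = llm.generate(["Hello"])
# watsonx reports 'eos_token' in generation_info when generation ends normally.
print(result.generations[0][0].generation_info)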

@malikaltakrori

@spackows not sure if you are still waiting for an update or if you already figured it out, but I found this solution and just added 'eos_token' to it

@jjmachan
Member Author

hey @malikaltakrori, would you be interested in adding that as a PR? I'd really appreciate it - if not, I'll make it 🙂

@malikaltakrori

Hi @jjmachan,
Sorry for the late reply. GitHub emails still go to my school's inbox.
I would actually love to do so! (if you haven't already).

@malikaltakrori

@jjmachan Done!
