
job chat: if the question isn't clear, seek clarification #178

Open
josephjclark opened this issue Feb 26, 2025 · 0 comments

Sometimes users will accidentally post a question before they've finished typing. Or maybe a team member will run a test.

And the model will treat this as a legitimate question, even though there isn't anything for it to say.

For example, yesterday someone wrote "latency". Were they testing the model's latency? Did they type into the wrong window? I don't know. But the model generated a 200-word response. That took time, energy, and money to generate. And there's no possible useful answer the model could have given, apart from "I'm sorry, I don't understand, can you rephrase the question pal?"
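A minimal sketch of one way this could go (nothing below exists in the codebase; `isLikelyUnclear`, `answerOrClarify`, and `CLARIFY_REPLY` are hypothetical names): gate obviously-fragmentary input with a cheap heuristic and return a canned clarification request instead of calling the model at all. A prompt-level instruction ("if the question is unclear, ask the user to rephrase") would be another option, but wouldn't save the model call.

```ts
// Hypothetical sketch only - none of this is in the repo.
// Idea: catch fragmentary input before the expensive model call and
// reply with a canned clarification request instead.

const CLARIFY_REPLY =
  "I'm sorry, I don't quite understand the question. Could you rephrase or add more detail?";

// Heuristic pre-check: a one- or two-word message with no question mark
// (like a lone "latency") is probably an accident or a test, not a question.
function isLikelyUnclear(question: string): boolean {
  const trimmed = question.trim();
  if (trimmed.length === 0) return true;
  const words = trimmed.split(/\s+/);
  return words.length < 3 && !trimmed.endsWith("?");
}

// `generate` stands in for whatever actually calls the model.
async function answerOrClarify(
  question: string,
  generate: (q: string) => Promise<string>
): Promise<string> {
  if (isLikelyUnclear(question)) {
    return CLARIFY_REPLY; // skip the model call entirely; no tokens spent
  }
  return generate(question);
}
```

A heuristic this crude will misfire on terse-but-real questions, so it could also be softened into a classification step by a small, cheap model rather than a hard string check.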
