
feat: Add max_tokens option to generate_content #5

Conversation

edenreich
Collaborator

Summary

This feature allows limiting the number of tokens the LLM generates per request, making it more efficient for quick tasks where you want the LLM to "think" less.
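
For context, a minimal sketch of how the new option might be used from a client. The crate name, client type, builder method, and provider/model identifiers below are illustrative assumptions, not confirmed by this PR:

```rust
// Hypothetical usage sketch; the crate, types, and method names here are
// assumptions for illustration, not the SDK's confirmed API.
use inference_gateway_sdk::{InferenceGatewayClient, Message};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = InferenceGatewayClient::new("http://localhost:8080");

    // Capping generation at 100 tokens keeps quick tasks cheap and fast:
    // the model stops early instead of producing a long answer.
    let response = client
        .with_max_tokens(100) // assumed builder-style setter for the new option
        .generate_content(
            "ollama", // provider (assumed identifier)
            "llama2", // model (assumed identifier)
            vec![Message {
                role: "user".to_string(),
                content: "Summarize this repository in one sentence.".to_string(),
            }],
        )?;

    println!("{:?}", response);
    Ok(())
}
```

Presumably, requests that omit max_tokens behave as before, leaving the limit to the provider's default.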

@edenreich edenreich changed the title docs: Add max_tokens parameter to OpenAPI specification feat: Add max_tokens option to generate_content Feb 10, 2025

🎉 This PR is included in version 0.9.0-rc.1 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

@edenreich edenreich merged commit fc21cf2 into main Feb 11, 2025
4 checks passed
github-actions bot pushed a commit that referenced this pull request Feb 11, 2025
## [0.9.0](0.8.0...0.9.0) (2025-02-11)

### ✨ Features

* Add max_tokens option to generate_content ([#5](#5)) ([fc21cf2](fc21cf2))

🎉 This PR is included in version 0.9.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

@edenreich edenreich deleted the feature/implement-max-tokens-limit-as-option-for-tokens-generations branch February 11, 2025 11:41