
Re-add demo to show Count tokens #296

Merged · 9 commits · Mar 14, 2024
58 changes: 58 additions & 0 deletions site/en/tutorials/python_quickstart.ipynb
@@ -1034,6 +1034,64 @@
" display(to_markdown(f'**{message.role}**: {message.parts[0].text}'))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AEgVOYu0pAr4"
},
"source": [
"## Count tokens\n",
"\n",
"Large language models have a context window, and the context length is often measured in terms of the **number of tokens**. With the Gemini API, you can determine the number of tokens per any `glm.Content` object. In the simplest case, you can pass a query string to the `GenerativeModel.count_tokens` method as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "eLjBmPCLpElk"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"total_tokens: 7"
]
}
],
"source": [
"model.count_tokens(\"What is the meaning of life?\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oM2_U8pmpHQA"
},
"source": [
"Similarly, you can check `token_count` for your `ChatSession`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "i0MUU4BZpG4_"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"total_tokens: 501"
]
}
],
"source": [
"model.count_tokens(chat.history)"
]
},
{
"cell_type": "markdown",
"metadata": {
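
---

For context, the two new cells assume `model` and `chat` were created earlier in the quickstart. A minimal standalone sketch of the same calls, assuming the `google-generativeai` package is installed and an API key is available in the `GOOGLE_API_KEY` environment variable:

```python
# Standalone sketch of the count_tokens demos added in this PR
# (setup details are assumptions, not part of the diff itself).
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-pro")

# Count tokens for a single query string.
print(model.count_tokens("What is the meaning of life?"))

# Count tokens across an entire chat history.
chat = model.start_chat()
chat.send_message("Hello, how are you?")
print(model.count_tokens(chat.history))
```

Each call returns a response whose `total_tokens` field matches the `total_tokens: ...` outputs shown in the notebook cells above.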