diff --git a/site/en/gemma/docs/get_started.ipynb b/site/en/gemma/docs/keras_inference.ipynb
similarity index 95%
rename from site/en/gemma/docs/get_started.ipynb
rename to site/en/gemma/docs/keras_inference.ipynb
index d1cfe88ac..f32ae96e1 100644
--- a/site/en/gemma/docs/get_started.ipynb
+++ b/site/en/gemma/docs/keras_inference.ipynb
@@ -39,16 +39,16 @@
"source": [
"
"
]
@@ -235,7 +235,7 @@
"id": "XrAWvsU6pI0E"
},
"source": [
- "`from_preset` instantiates the model from a preset architecture and weights. In the code above, the string `\"gemma_2b_en\"` specifies the preset architecture: a Gemma model with 2 billion parameters.\n"
+ "The `GemmaCausalLM.from_preset()` function instantiates the model from a preset architecture and weights. In the code above, the string `\"gemma_2b_en\"` specifies the preset the Gemma 2B model with 2 billion parameters. Gemma models with [7B, 9B, and 27B parameters](/gemma/docs/get_started#models-list) are also available. You can find the code strings for Gemma models in their **Model Variation** listings on [Kaggle](https://www.kaggle.com/models/google/gemma).\n"
]
},
{
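A minimal sketch of the call this cell describes, assuming `keras_nlp` is installed and Kaggle credentials (`KAGGLE_USERNAME`/`KAGGLE_KEY`) are already configured as set up earlier in the notebook:

```python
import keras_nlp

# Instantiate the 2B-parameter Gemma model from its preset name.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Generate text from a prompt; max_length caps the total token count.
print(gemma_lm.generate("What is the meaning of life?", max_length=64))
```
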
@@ -244,7 +244,7 @@
"id": "Ij73k0PfUhjE"
},
"source": [
- "Note: A Gemma model with 7 billion parameters is also available. To run the larger model in Colab, you need access to the premium GPUs available in paid plans. Alternatively, you can perform [distributed tuning on a Gemma 7B model](https://ai.google.dev/gemma/docs/distributed_tuning) on Kaggle or Google Cloud."
+ "Note: To run the larger models in Colab, you need access to the premium GPUs available in paid plans. Alternatively, you can perform inferences using Kaggle notebooks or Google Cloud projects.\n"
]
},
{
@@ -588,7 +588,7 @@
"metadata": {
"accelerator": "GPU",
"colab": {
- "name": "get_started.ipynb",
+ "name": "keras_inference.ipynb",
"toc_visible": true
},
"google": {