diff --git a/site/en/gemma/docs/distributed_tuning.ipynb b/site/en/gemma/docs/distributed_tuning.ipynb
index 30b8d99fc..36b961c72 100644
--- a/site/en/gemma/docs/distributed_tuning.ipynb
+++ b/site/en/gemma/docs/distributed_tuning.ipynb
@@ -40,6 +40,10 @@
 "\n",
 "  View on ai.google.dev\n",
 "\n",
+"\n",
+"  Run in Google Colab\n",
+"\n",
+"\n",
 "  Run in Kaggle\n",
@@ -89,8 +93,8 @@
 "### Notes on TPU environments\n",
 "\n",
 "Google has 3 products that provide TPUs:\n",
-"* [Colab](https://colab.sandbox.google.com/) provides TPU v2, which is not sufficient for this tutorial.\n",
-"* [Kaggle](https://www.kaggle.com/) offers TPU v3 for free and they work for this tutorial.\n",
+"* [Colab](https://colab.sandbox.google.com/) provides TPU v2 for free, which is sufficient for this tutorial.\n",
+"* [Kaggle](https://www.kaggle.com/) offers TPU v3 for free and they also work for this tutorial.\n",
 "* [Cloud TPU](https://cloud.google.com/tpu?hl=en) offers TPU v3 and newer generations. One way to set it up is:\n",
 "  1. Create a new [TPU VM](https://cloud.google.com/tpu/docs/managing-tpus-tpu-vm#tpu-vms)\n",
 "  2. Set up [SSH port forwarding](https://cloud.google.com/solutions/connecting-securely#port-forwarding-over-ssh) for your intended Jupyter server port\n",
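The Cloud TPU setup steps listed in the second hunk (create a TPU VM, then forward your Jupyter port over SSH) can be sketched with `gcloud` roughly as follows. This is a minimal sketch, not part of the patch: the VM name, zone, accelerator type, runtime version, and port 8888 are all placeholder assumptions you would replace with your own values.

```shell
# Step 1: create a new TPU VM (all values below are placeholders).
gcloud compute tpus tpu-vm create my-tpu-vm \
  --zone=us-central2-b \
  --accelerator-type=v4-8 \
  --version=tpu-ubuntu2204-base

# Step 2: SSH in with port forwarding, so a Jupyter server listening
# on port 8888 on the TPU VM is reachable at localhost:8888 locally.
# Flags after "--" are passed through to the underlying ssh client.
gcloud compute tpus tpu-vm ssh my-tpu-vm \
  --zone=us-central2-b \
  -- -L 8888:localhost:8888
```

With the tunnel open, pointing a local browser at `http://localhost:8888` reaches the remote Jupyter server.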