From 19be29e89e2fe15a68e225d8c44986b66a058b7e Mon Sep 17 00:00:00 2001
From: justheuristic
Date: Tue, 23 Jul 2024 22:46:24 +0300
Subject: [PATCH] note about llama 3.1 RoPE support

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 63449ae11..93d6766cf 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,9 @@

-Generate text with distributed **Llama 2** (70B), **Falcon** (40B+), **BLOOM** (176B) (or their derivatives), and fine‑tune them for your own tasks — right from your desktop computer or Google Colab:
+**Warning: Llama 3.1 support is still under construction!** The latest models require a custom RoPE configuration that we do not have in Petals yet; we will update the code to fix that within a day.
+
+Generate text with distributed **Llama (1-3)** (70B), **Falcon** (40B+), **BLOOM** (176B) (or their derivatives), and fine‑tune them for your own tasks — right from your desktop computer or Google Colab:
 
 ```python
 from transformers import AutoTokenizer
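
For context, the "custom RoPE configuration" the warning refers to is the new `"llama3"`-type `rope_scaling` entry that Llama 3.1 checkpoints declare in their Hugging Face config. Below is a minimal sketch (not part of the patch) of how one might inspect it with `transformers`; the repo id is illustrative and the checkpoint is gated, so access must be granted and a token configured first, and the exact keys/values may differ:

```python
from transformers import AutoConfig

# Illustrative repo id (assumption): a gated Llama 3.1 checkpoint on the Hub.
config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3.1-70B-Instruct")

# Llama 3.1 uses a "llama3"-type RoPE scaling scheme, e.g. (approximate values):
# {'rope_type': 'llama3', 'factor': 8.0, 'low_freq_factor': 1.0,
#  'high_freq_factor': 4.0, 'original_max_position_embeddings': 8192}
print(config.rope_scaling)
```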