From 9e221033f3adb9d2261aa84f40f5fe567564b96a Mon Sep 17 00:00:00 2001
From: postmasters
Date: Fri, 5 Jan 2024 07:25:38 -0800
Subject: [PATCH] gguf : add keys for kv sizes to spec (#676)

* Add keys for kv sizes to GGUF spec

* Fix types of key_length and value_length
---
 docs/gguf.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/gguf.md b/docs/gguf.md
index 1537170fc..bb63f4f0e 100644
--- a/docs/gguf.md
+++ b/docs/gguf.md
@@ -296,6 +296,8 @@ In the following, `[llm]` is used to fill in for the name of a specific LLM arch
 - `[llm].attention.clamp_kqv: float32`: Value (`C`) to clamp the values of the `Q`, `K`, and `V` tensors between (`[-C, C]`).
 - `[llm].attention.layer_norm_epsilon: float32`: Layer normalization epsilon.
 - `[llm].attention.layer_norm_rms_epsilon: float32`: Layer RMS normalization epsilon.
+- `[llm].attention.key_length: uint32`: The optional size of a key head, $d_k$. If not specified, it will be `n_embd / n_head`.
+- `[llm].attention.value_length: uint32`: The optional size of a value head, $d_v$. If not specified, it will be `n_embd / n_head`.
 
 #### RoPE
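The fallback rule in the two added lines can be sketched as follows. This is a minimal illustration, not part of the patch or of any reader library; the function name and the plain-dict view of GGUF metadata are assumptions, while `[llm].embedding_length` and `[llm].attention.head_count` are keys the GGUF spec already defines:

```python
def attention_head_sizes(metadata: dict, arch: str) -> tuple[int, int]:
    """Resolve per-head key/value sizes from GGUF metadata.

    Falls back to n_embd / n_head when the optional
    [llm].attention.key_length / [llm].attention.value_length
    keys are absent, as the spec text describes.
    """
    n_embd = metadata[f"{arch}.embedding_length"]      # n_embd
    n_head = metadata[f"{arch}.attention.head_count"]  # n_head
    default = n_embd // n_head
    d_k = metadata.get(f"{arch}.attention.key_length", default)
    d_v = metadata.get(f"{arch}.attention.value_length", default)
    return d_k, d_v
```

This lets models whose key and value heads differ in size from `n_embd / n_head` declare the sizes explicitly, while older files that omit the keys keep the original behavior.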