Update README.md (#3012)
Summary: Pull Request resolved: #3012

Reviewed By: mergennachin

Differential Revision: D56074130

Pulled By: jerryzh168

fbshipit-source-id: 53e8a1db6ef802789469f1e5ba6c79c03a16e5e1
jerryzh168 authored and facebook-github-bot committed Apr 12, 2024
1 parent 74eb8b3 commit 0f379ba
Showing 1 changed file with 1 addition and 1 deletion: examples/models/llama2/README.md
@@ -20,7 +20,7 @@ Please note that the models are subject to the [acceptable use policy](https://g
Since the 7B Llama2 model needs at least 4-bit quantization to fit even within some of the high-end phones, the results presented here correspond to a 4-bit groupwise post-training quantized model.

## Quantization:
- We employed 4-bit groupwise per-token dynamic quantization of all the linear layers of the model. Dynamic quantization refers to quantizing activations dynamically, such that quantization parameters for activations are calculated, from the min/max range, at runtime. Here we quantized activations with 8 bits (signed integer). Furthermore, weights are statically quantized. In our case weights were per-channel groupwise quantized with 4-bit signed integers. For more information refer to this [page](https://pytorch.org/tutorials/recipes/recipes/dynamic_quantization.html).
+ We employed 4-bit groupwise per-token dynamic quantization of all the linear layers of the model. Dynamic quantization refers to quantizing activations dynamically, such that quantization parameters for activations are calculated, from the min/max range, at runtime. Here we quantized activations with 8 bits (signed integer). Furthermore, weights are statically quantized. In our case weights were per-channel groupwise quantized with 4-bit signed integers. For more information refer to this [page](https://github.com/pytorch-labs/ao/).

We evaluated WikiText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness). Below are the results for two different group sizes.

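The scheme described in the changed paragraph — runtime per-token int8 quantization of activations from their min/max range, plus static 4-bit groupwise quantization of weights — can be sketched in a few lines of PyTorch. This is an illustrative sketch only, not ExecuTorch's actual kernels; the function names and the group size of 32 are assumptions for the example.

```python
import torch

def quantize_activations_per_token(x: torch.Tensor):
    """Symmetric 8-bit per-token dynamic quantization (illustrative sketch).

    The scale for each token is computed at runtime from that token's
    max-abs range, as the README paragraph describes.
    """
    # One scale per token; the last dimension holds the features.
    max_abs = x.abs().amax(dim=-1, keepdim=True)
    scale = max_abs.clamp(min=1e-8) / 127.0  # int8 signed range
    q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return q, scale

def quantize_weights_groupwise(w: torch.Tensor, group_size: int = 32):
    """Symmetric 4-bit groupwise weight quantization (illustrative sketch).

    Each output channel's weights are split into groups of `group_size`
    input features, and each group gets its own static scale.
    """
    out_ch, in_ch = w.shape
    g = w.reshape(out_ch, in_ch // group_size, group_size)
    max_abs = g.abs().amax(dim=-1, keepdim=True)
    scale = max_abs.clamp(min=1e-8) / 7.0  # 4-bit signed range: [-8, 7]
    q = torch.clamp(torch.round(g / scale), -8, 7).to(torch.int8)
    return q.reshape(out_ch, in_ch), scale.squeeze(-1)
```

Dequantizing with `q.float() * scale` recovers the original tensor up to half a quantization step per element, which is why the per-group scales must be stored alongside the packed weights.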
