Using LongLLaMA with the Mojo framework, applying 4-bit quantization; is FlashAttention-2 possible, and thoughts on speculative decoding for LLMs
#17 · Open · myname36 opened this issue on Oct 9, 2023 · 1 comment
I am interested in loading LongLLaMA with the Mojo framework, as in https://github.com/tairov/llama2.mojo, to increase inference speed while applying 4-bit quantization for model compression. Could you provide guidance or examples on how this can be achieved? In particular, I am curious how to maintain model quality while reducing the model size with 4-bit quantization. Is it also possible to use FlashAttention-2? And what do you think about using LongLLaMA 3B as a draft model alongside the code LongLLaMA checkpoint for speculative decoding, as described here: https://twitter.com/karpathy/status/1697318534555336961
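For reference, here is a minimal sketch of what the Hugging Face side of this could look like, assuming the `syzymon/long_llama_3b` and `syzymon/long_llama_code_7b` checkpoints, the bitsandbytes 4-bit path, and transformers' assisted generation. Whether LongLLaMA's custom attention code honors `attn_implementation="flash_attention_2"` is not confirmed here, and the Mojo/llama2.mojo route is a separate question.

```python
# Hypothetical sketch: 4-bit (NF4) loading of LongLLaMA plus assisted ("speculative")
# decoding with a smaller draft model. Checkpoint names and FlashAttention-2 support
# for LongLLaMA's custom attention are assumptions, not confirmed by this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# bitsandbytes 4-bit quantization: NF4 weights, bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

target_id = "syzymon/long_llama_code_7b"   # assumed "code LongLLaMA" checkpoint
draft_id = "syzymon/long_llama_3b"         # assumed smaller draft checkpoint

tokenizer = AutoTokenizer.from_pretrained(target_id)

target = AutoModelForCausalLM.from_pretrained(
    target_id,
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # needs flash-attn installed; may be ignored by custom modeling code
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,                   # LongLLaMA ships custom modeling code
)

draft = AutoModelForCausalLM.from_pretrained(
    draft_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Speculative / assisted decoding: the draft proposes several tokens, the target
# verifies them in one forward pass. This requires both models to share a tokenizer
# and vocabulary, which may not hold across the 3B and code checkpoints.
inputs = tokenizer("def quicksort(xs):", return_tensors="pt").to(target.device)
out = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Note that this path is independent of llama2.mojo, which implements its own inference loop in Mojo; quantization there would presumably have to be done inside the Mojo code rather than through transformers.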