Replies: 2 comments 1 reply
-
This build should work for you: https://github.com/Mobile-Artificial-Intelligence/maid/actions/runs/7268881043. It will also be in 1.1.6 when it's released.
-
Thanks for such a quick response. I checked it, and loading Phi-2 models no longer crashes the app. However, Phi-2 gives some strange responses in its output. I guessed something might be wrong with the model I downloaded (phi-2.Q5_K_M.gguf from TheBloke), but the same GGUF works properly with the newest LM Studio beta. Strange?
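
One way to narrow down where the strange output comes from is to run the same GGUF file outside the app. Below is a minimal sketch using llama-cpp-python; it is not from this thread, and the prompt template is an assumption taken from TheBloke's Phi-2 model card:

```python
# Minimal sketch: test the same GGUF file directly with llama-cpp-python
# to see whether the odd output comes from the model file itself or from
# the app's integration. Assumes `pip install llama-cpp-python` with a
# build recent enough to include Phi-2 support (llama.cpp PR #4490).
from llama_cpp import Llama

llm = Llama(
    model_path="phi-2.Q5_K_M.gguf",  # TheBloke's quantization, as in this thread
    n_ctx=2048,                      # Phi-2's context window
)

# Phi-2 is commonly prompted with the "Instruct:/Output:" template
# (an assumption based on TheBloke's model card, not on this thread).
prompt = "Instruct: Explain what a GGUF file is in one sentence.\nOutput:"
result = llm(prompt, max_tokens=128, stop=["Instruct:"])
print(result["choices"][0]["text"].strip())
```

If this also produces garbled text, the problem is likely in the model file or quantization; if it looks fine, the issue is more likely in the app's integration or prompt formatting.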
-
Hello. First of all, thanks for your great app - fantastic work. Is there any chance to add support for Microsoft's newest small LLM, Phi-2 (2.7B)? A GGUF model (quantized by TheBloke) is available on Hugging Face, but it isn't compatible with your app.
Maybe this will be helpful: it looks like Phi-2 support was only recently added to llama.cpp https://github.com/ggerganov/llama.cpp/pull/4490#pullrequestreview-1787346569
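
For anyone hitting the same incompatibility: a quick way to check what a GGUF file declares as its architecture (and therefore what the app's bundled llama.cpp build has to support) is the gguf Python package that ships with llama.cpp. A minimal sketch, assuming the file name from this thread:

```python
# Minimal sketch (assumption: `pip install gguf`, and the file below is
# TheBloke's Phi-2 quantization mentioned in this thread).
from gguf import GGUFReader

reader = GGUFReader("phi-2.Q5_K_M.gguf")

# "general.architecture" names the model architecture; the llama.cpp build
# embedded in the app must recognize this architecture to load the file.
field = reader.fields["general.architecture"]
arch = bytes(field.parts[-1]).decode("utf-8")  # string fields store the bytes last
print(arch)  # expected: "phi2" for this model
```

A build of the app that predates the llama.cpp PR linked above would not recognize the "phi2" architecture, which would explain the incompatibility.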