Despite ChatGLM4-9b being marked as supported, an attempt to convert the GLM-4-9B-Chat model (glm-4-9b-chat-hf, command above) to GGUF format fails with the following error:
INFO:hf-to-gguf:Loading model: glm-4-9b-chat-hf
ERROR:hf-to-gguf:Model GlmForCausalLM is not supported
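The failure mode can be illustrated with a minimal sketch (this is not llama.cpp's actual code): converters in the style of convert_hf_to_gguf.py dispatch on the `architectures` entry in the model's config.json, so a checkpoint that declares the newer HF-native class name `GlmForCausalLM` misses a registry keyed on the older ChatGLM class name. The registry contents and the `ChatGLMForConditionalGeneration` name below are assumptions for illustration only.

```python
# Minimal sketch (NOT llama.cpp's real converter): dispatch on config.json's
# "architectures" entry. Registry contents are assumed for illustration.
SUPPORTED_ARCHS = {
    # Older glm-4-9b-chat repos declare a ChatGLM-style class (name assumed here)
    "ChatGLMForConditionalGeneration": "chatglm",
}

def resolve_arch(arch: str) -> str:
    """Map a config.json architecture name to an internal model type."""
    if arch not in SUPPORTED_ARCHS:
        # Mirrors the reported failure: "Model GlmForCausalLM is not supported"
        raise NotImplementedError(f"Model {arch} is not supported")
    return SUPPORTED_ARCHS[arch]

print(resolve_arch("ChatGLMForConditionalGeneration"))  # prints: chatglm

# The glm-4-9b-chat-hf repo declares the HF-native "GlmForCausalLM", which misses:
try:
    resolve_arch("GlmForCausalLM")
except NotImplementedError as e:
    print(e)  # prints: Model GlmForCausalLM is not supported
```

This would explain why the same model is "supported" under its older repo but rejected in its -hf form: the weights are the same, but the architecture string the converter keys on has changed.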
Name and Version
.\llama-cli.exe --version
version: 4491 (c67cc98)
built with MSVC 19.39.33523.0 for x64
Operating systems
Windows
Which llama.cpp modules do you know to be affected?
Python/Bash scripts
Command line
Problem description & steps to reproduce
Despite ChatGLM4-9b being marked as supported, an attempt to convert the GLM-4-9B-Chat model (glm-4-9b-chat-hf, command above) to GGUF format fails with the error shown above.
First Bad Commit
No response
Relevant log output
INFO:hf-to-gguf:Loading model: glm-4-9b-chat-hf
ERROR:hf-to-gguf:Model GlmForCausalLM is not supported