
Can you update all your llama-cpp-python modules to 0.2.29? I'm getting Python loading errors with new models. #42

Open
NonaSuomy opened this issue Jan 19, 2024 · 2 comments

Comments

@NonaSuomy

https://huggingface.co/acon96/Home-3B-v2-GGUF/resolve/main/Home-3B-v2.q8_0.gguf

https://github.com/abetlen/llama-cpp-python/releases/tag/v0.2.29

AMD Vega 64, Ubuntu 22

On v1 it loaded fine; on v2 I get the error below. I asked the model maintainer, and they said it's caused by the old 0.2.26 version and that a newer version is needed.

```
Traceback (most recent call last):
  File "/home/nonasuomy/code/text-generation-webui/modules/ui_model_menu.py", line 213, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nonasuomy/code/text-generation-webui/modules/models.py", line 87, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nonasuomy/code/text-generation-webui/modules/models.py", line 250, in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nonasuomy/code/text-generation-webui/modules/llamacpp_model.py", line 101, in from_pretrained
    result.model = Llama(**params)
                   ^^^^^^^^^^^^^^^
  File "/home/nonasuomy/code/text-generation-webui/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/llama.py", line 962, in __init__
    self._n_vocab = self.n_vocab()
                    ^^^^^^^^^^^^^^
  File "/home/nonasuomy/code/text-generation-webui/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/llama.py", line 2274, in n_vocab
    return self._model.n_vocab()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/nonasuomy/code/text-generation-webui/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/llama.py", line 251, in n_vocab
    assert self.model is not None
           ^^^^^^^^^^^^^^^^^^^^^^
AssertionError
```
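For anyone hitting the same `AssertionError`: it fires because the underlying model pointer is `None`, i.e. llama.cpp silently failed to load the file. A quick way to check whether a file is a GGUF model at all, and which GGUF format version it uses, is to read the file header directly (GGUF files start with the 4-byte magic `GGUF` followed by a little-endian uint32 version). This is a minimal diagnostic sketch, not part of llama-cpp-python itself:

```python
import struct

def gguf_version(path):
    """Return the GGUF format version of a model file.

    GGUF files begin with the 4-byte magic b'GGUF' followed by a
    little-endian uint32 format version. An old llama-cpp-python build
    that predates the file's version will fail to load it.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        (version,) = struct.unpack("<I", f.read(4))
    return version
```

If the reported version is newer than what the installed llama-cpp-python supports, the load fails and you get exactly the `assert self.model is not None` failure above.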

Thank you.

@Limour-dev

Perhaps you could fork this repository, trigger the build actions manually, and then use the wheel URLs from the resulting releases?

@AmineDjeghri

AmineDjeghri commented Feb 17, 2024

@Limour-dev & @NonaSuomy
oobabooga updates the wheels from time to time:
https://github.com/oobabooga/llama-cpp-python-cuBLAS-wheels/releases/tag/wheels
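If you'd rather not wait for updated bundled wheels, here is a minimal sketch of upgrading the package yourself inside the webui's Python environment. This installs the plain CPU build from PyPI; it is an assumption that this fits your setup, since GPU users (CUDA/ROCm) should instead install the matching prebuilt wheel from the release pages linked above:

```shell
# Run inside text-generation-webui's conda/venv environment.
# Installs the CPU build of llama-cpp-python at the version the
# model maintainer says is required; GPU builds need the matching
# prebuilt wheel from the release links above instead.
pip install --upgrade llama-cpp-python==0.2.29
```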
