Where should I submit the PR to compile GPU support by default? Here or llama-cpp-python? #3094
Replies: 2 comments 4 replies
-
Make it optional in the one-click installers.
A failure to compile llama-cpp-python currently cancels the entire installation process in the one-click installer, so it should be optional.
-
I have a (Windows only?) workaround for this because it's been inconvenient for me too. You can edit
-
I believe it's inconvenient for GPU users to manually recompile llama-cpp-python for the webui every time there is a version bump. I've devised two potential solutions to this issue and written code for both.

The first involves modifying the `setup.py` file in llama-cpp-python to enable GPU support by default whenever the user has a GPU and no environment variable such as `CMAKE_ARGS="-DLLAMA_CUBLAS=on"` is set.

The second involves replacing the text-generation-webui `pip install -r requirements.txt` step with a `python install.py` command. That Python script would also run `pip install -r requirements.txt`, then check for GPU availability and install the GPU-supported build of llama-cpp-python if a GPU is detected.

There are a couple of potential issues to consider. The first solution might lead to unwanted consequences, because I'm uncertain about the implications of making GPU support the default behavior. The second represents a significant shift in the installation process for the sake of a single module, namely llama-cpp-python.
Given these considerations, I'm seeking advice on the preferable approach. Where should I submit a PR for this proposed solution?