"No GPU being used. Careful, inference might be very slow!" #202
Same issue. I have an NVIDIA GTX 2080 with 8 GB of RAM.
hmm strange, so two separate things
Hi, here it is in my case.
RTX 3070, same problem... P.S. This helped: #173
Same issue. You can check whether your installed torch build is compatible with your graphics card by running a quick check in a Python file (see the sketch below).
In my case it is compatible, so I suppose the problem comes from the default VRAM allocation, which is not settable.
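A minimal sketch of such a check, using only standard torch calls (nothing bark-specific):

```python
import torch

# True only if a CUDA-capable GPU is visible AND this torch build has CUDA support.
print("CUDA available:", torch.cuda.is_available())

# The CUDA version this torch build was compiled against (None for CPU-only wheels).
print("torch CUDA build:", torch.version.cuda)

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```

If `CUDA available` prints False even though a GPU is installed, the usual cause in this thread is a CPU-only torch wheel.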
Thank you for the help. It seems I had neither CUDA nor the torch version needed to use the GPU. I fixed both of those, but now I am getting the following: untyped_storage = torch.UntypedStorage(
Ok so you should probably enable the small models, by setting the SUNO_USE_SMALL_MODELS environment variable (mentioned again further down)
Also this interface is much better: https://github.com/C0untFloyd/bark-gui
I received the "No GPU being used" warning because I'm using Apple silicon. Anyone with this error just needs to enable MPS with an environment variable. For example: SUNO_ENABLE_MPS=True python3 speak.py Note to maintainers: this should probably just be automatic if MPS is available. Everyone else: more info on the macOS MPS backend for PyTorch is in the PyTorch documentation.
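For reference, a sketch of checking MPS from plain torch; only the SUNO_ENABLE_MPS variable is bark-specific, and bark appears to read it at import time, so it has to be set before the import:

```python
import os
import torch

# MPS is PyTorch's GPU backend for Apple silicon.
print("MPS available:", torch.backends.mps.is_available())
print("MPS built into this torch build:", torch.backends.mps.is_built())

# Equivalent to the inline env var above; set before importing bark.
os.environ["SUNO_ENABLE_MPS"] = "True"
```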
I also have this problem
Could you please tell us exactly what you did to solve your initial problem?
Okay, I think I solved it too. What I did, in the corresponding virtualenv for bark: `pip3 install -U torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117`. This should be enough to get GPU inference going.
CUDA download link: https://download.pytorch.org/whl/torchaudio/ If you're using Python 3.10 and torch 2.0, you can use the matching wheel from that index.
I was struggling with the out-of-memory error message, but when I typed it correctly in the running console environment, it worked: `set SUNO_USE_SMALL_MODELS=True` (see the sketch below for other ways to set it).
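Note that `set VAR=value` is Windows cmd syntax (on Linux/macOS it would be `export`). As a sketch, the same switch can also be flipped from Python, assuming bark reads the variable at import time as its README suggests:

```python
import os

# The small models trade some quality for a much smaller VRAM footprint.
# Must be set before bark is imported.
os.environ["SUNO_USE_SMALL_MODELS"] = "True"

from bark import generate_audio  # import only after the variable is set
```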
This has saved me a day. It works. Thank you. |
Hello! Is it possible to run it with an AMD 6900 XT? Because although I have an AMD Ryzen 9 5950X, CPU inference is slow.
right now amd is not supported and i can't test without an amd gpu. open to PRs from the community though if anyone can have a look |
I have AMD and it's working for me with GPU. |
looks like this is fixed for most people. also the topic of the issue is a bit nondescript since it just indicates the lack of a gpu
@gkucsko |
yeah, since we didn't really include a command line script i figured from within python is easier, but ya if you launch something it could be useful to do outside. although now we have a preload_models option as well for small models, which might be more straightforward for people anyways
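For anyone looking for that route, a sketch of preloading the small models from Python; the keyword names below match bark's preload_models signature as of this thread, but verify against your installed version:

```python
from bark import preload_models

# Load the small variant of each of the three model stages up front.
preload_models(
    text_use_small=True,
    coarse_use_small=True,
    fine_use_small=True,
)
```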
I also had to uninstall before reinstalling, as I had installed PyTorch before CUDA 11.8.
Before:
After:
It's not ROCm, but DirectML seems to be better than nothing for Windows AMD: https://github.com/JonathanFly/bark/tree/bark_amd_directml_test#-bark-amd-install-test-
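As a quick sanity check for that route, a sketch assuming the torch-directml package is installed (Microsoft's DirectML backend for torch; not part of stock bark, which is why the fork above is needed):

```python
import torch_directml  # assumes: pip install torch-directml

# DirectML exposes AMD (and Intel) GPUs on Windows as a torch device.
print("DirectML available:", torch_directml.is_available())
print("DirectML device:", torch_directml.device())
```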
I'm on Linux, and I'm seeing lots of memory and CPU use, but no GPU use at all (unless it suddenly uses it for a split second at the end).
This helps! thanks |
I am running the code right now, and the audio file is in process, but it is REALLY slow, taking almost an hour. Along with the process, I got this message: "No GPU being used. Careful, inference might be very slow!". The thing is, I do have an NVIDIA 1050 as the GPU in my laptop. How can I make the program use my GPU to run it faster?
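Pulling the thread's fixes together, a sketch of the full flow: install a CUDA-enabled torch wheel (e.g. the cu117 command earlier in the thread), optionally enable the small models for a low-VRAM card like a 1050, then run bark's standard README example; generate_audio uses the GPU automatically once a CUDA torch build is in place:

```python
import os

# Optional but advisable on a ~4 GB card; must be set before importing bark.
os.environ["SUNO_USE_SMALL_MODELS"] = "True"

from scipy.io.wavfile import write as write_wav
from bark import SAMPLE_RATE, generate_audio, preload_models

preload_models()  # downloads/loads the models; no GPU warning should appear now

audio_array = generate_audio("Hello, this should run on the GPU.")
write_wav("bark_generation.wav", SAMPLE_RATE, audio_array)
```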