
cpu resources costs #724

Open
4 tasks done
leocheung54 opened this issue Jan 15, 2025 · 4 comments
Labels
help wanted Extra attention is needed

Comments

@leocheung54

Checks

  • This template is only for usage issues encountered.
  • I have thoroughly reviewed the project documentation but couldn't find information to solve my problem.
  • I have searched for existing issues, including closed ones, and couldn't find a solution.
  • I confirm that I am using English to submit this report in order to facilitate communication.

Environment Details

a100-gpu cuda12.2

Steps to Reproduce

Why does GPU inference consume so much CPU, even though the device printed in load_checkpoint and load_vocoder is 'cuda'?

✔️ Expected Behavior

Load the model once, read wav_scp line by line, and run infer_process for each entry.
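The intended loop can be sketched as follows. infer_process is the project's function (its exact signature is an assumption here), and the wav_scp parsing assumes the usual Kaldi-style `utt_id path` line format:

```python
def run_all(wav_scp_path, infer_process, model):
    """Load the model once, then run inference per wav_scp entry.

    Sketch only: `infer_process(model, wav_path)` is a hypothetical
    signature standing in for the project's actual API.
    """
    results = {}
    with open(wav_scp_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # Kaldi-style scp line: "<utt_id> <path-to-wav>"
            utt_id, wav_path = line.split(maxsplit=1)
            results[utt_id] = infer_process(model, wav_path)
    return results
```

The point of this structure is that load_checkpoint / load_vocoder run once, outside the loop, while only infer_process runs per utterance.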

❌ Actual Behavior

A single inference thread consumes about 1000% CPU (roughly 10 cores fully utilized).

@leocheung54 leocheung54 added the help wanted Extra attention is needed label Jan 15, 2025
SWivid (Owner) commented Jan 15, 2025

How is the GPU usage?

ewwink commented Jan 15, 2025

Make sure the following installation step succeeded:

# NVIDIA GPU: install pytorch with your CUDA version, e.g.
pip install torch==2.3.0+cu118 torchaudio==2.3.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118

# AMD GPU: install pytorch with your ROCm version, e.g.
pip install torch==2.5.1+rocm6.2 torchaudio==2.5.1+rocm6.2 --extra-index-url https://download.pytorch.org/whl/rocm6.2

Then use the following command to check whether CUDA is detected:

python -c "import torch; print('is CUDA:', torch.cuda.is_available())"

leocheung54 (Author) commented:

> How is the GPU usage?

GPU memory usage is ~3 GB.

leocheung54 (Author) commented:

> Make sure the installation step succeeded … then check whether CUDA is detected.

Yes, I'm pretty sure: torch.cuda.is_available() is True.
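A common cause of very high CPU load during GPU inference is PyTorch's intra-op thread pools (OpenMP/MKL) spinning across many cores even when the heavy work runs on the GPU. As a hedged sketch (a general PyTorch mitigation, not confirmed as the cause in this issue), capping the thread pools often brings CPU usage down to one or two cores:

```python
import os

# Cap CPU thread pools *before* importing torch: these environment
# variables are read once at library load time.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

# After `import torch`, the pools can also be capped from Python:
#   torch.set_num_threads(1)          # intra-op parallelism
#   torch.set_num_interop_threads(1)  # inter-op parallelism
```

If CPU usage drops after this, the load was thread-pool spin rather than real CPU-side compute.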


3 participants