much longer training times with phaseshuffle on #94
Huh. That's odd. I don't remember this being the case back when I trained models w/ phase shuffle in the past, but it's definitely possible I overlooked it. What version of TF are you using? I wonder if more recent versions of TensorFlow rely on CPU operations for the padding?
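For reference, phase shuffle (as described in the WaveGAN paper) randomly shifts each layer's activations by up to `rad` timesteps, reflection-padding the exposed edge and slicing back to the original length. A minimal NumPy sketch of that behaviour, for illustration only (the function and argument names here are mine, not the repo's TF op):

```python
import numpy as np

def phase_shuffle(x, rad, rng):
    """Shift each example in x (batch, time, channels) by a random
    offset in [-rad, rad], reflection-padding the exposed edge."""
    b, t, c = x.shape
    out = np.empty_like(x)
    for i in range(b):
        shift = int(rng.integers(-rad, rad + 1))
        # Pad on the side the signal shifts away from...
        pad_l, pad_r = max(shift, 0), max(-shift, 0)
        padded = np.pad(x[i], ((pad_l, pad_r), (0, 0)), mode="reflect")
        # ...then slice back to the original length t.
        out[i] = padded[pad_r:pad_r + t]
    return out
```

With `rad=0` the op reduces to the identity, since no padding or shifting is applied.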
Thanks for responding. I'm using TF 1.14 in the env I set up for WaveGAN; maybe I should downgrade to 1.12?
You could try that, yeah, though I think I've trained on 1.14 in the past as well without issue. The only other thing I can think of is that CUDA/cuDNN versioning could have an impact.
Hi, I just wanted to confirm something. If I am attempting to turn phase shuffle off then, contrary to other posts I've seen in the issues here, I believe the correct parameter should be
Hi, I believe the correct parameter is the one you suggest. I've also experienced the same behaviour as you (the shuffle buffer still fills up when phase shuffle is supposedly turned off). In my experience, turning phase shuffle off works well with that setting. Hopefully Chris Donahue can shed more light :)
Hi,
Thanks for the fantastic model,
On my current setup (Titan RTX / Ryzen 2700X / 32GB RAM) it takes a LOT longer to train WaveGAN with phase shuffle on than off (the difference is huge, like 10x plus). Also GPU usage with phase shuffle on is much lower. Is this normal? I'm guessing that phase shuffle requires much more CPU intervention during training.
Best wishes,
Mark