Univnet Gradient Error #2304
francois-vz started this conversation in General
If you reinstall TTS or update the trainer, it should be fixed.
Hi there! First, thanks for creating such an awesome all-in-one TTS toolkit. This is my first TTS project, and I have been using Coqui's TTS for approximately three weeks. I am trying to fine-tune a Tacotron2 model on 4 hours of semi-clean Afrikaans data, as well as train a Univnet from scratch on the same data. I am experiencing some issues with the vocoder training, and I would like to discuss them with somebody more experienced :) . I use the following command when training the Univnet model (python3.8 in a coqui conda environment created from the git repo [origin/dev]):
where univnet.json is given by
I based my config file on the default LJSpeech Univnet config file, added some extra epochs, and pointed it at my own data [ Please ignore my poor checkpointing practice :) ]. I performed a few checks and I believe my data is healthy, with no empty or corrupted audio files and no weird sample rates. When executing the script, it fails during the first epoch of training with the following error:
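For reference, a minimal sketch of the kind of data sanity check described above, using only the standard library. It assumes 16-bit PCM WAV files in a flat directory and a 22050 Hz target rate; the function name and directory layout are illustrative, not from the original post:

```python
import wave
from pathlib import Path

def check_wavs(data_dir, expected_rate=22050):
    """Scan a directory of WAV files for empty audio or unexpected sample rates.

    Returns a list of (filename, problem description) tuples; an empty list
    means every file passed both checks.
    """
    problems = []
    for path in sorted(Path(data_dir).glob("*.wav")):
        with wave.open(str(path), "rb") as wav:
            if wav.getnframes() == 0:
                problems.append((path.name, "empty audio"))
            elif wav.getframerate() != expected_rate:
                problems.append((path.name, f"sample rate {wav.getframerate()}"))
    return problems
```

This only catches truncated files and sample-rate mismatches; corrupted-but-parseable audio would need listening or spectrogram inspection on top of it.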
I read up a bit about this error and found the following link. According to the discussion, the error is due to tensor variables created with
requires_grad=False
. I modified the script at /home/francois/miniconda3/envs/coqui/lib/python3.8/site-packages/trainer/trainer.py
and changed the line
to
in the following function in the
trainer.py
script at lines 1127-1154. It appears that the model is training correctly and the eval loss is decreasing as expected. Can somebody please explain the purpose of param.requires_grad = False? The function docstring is not as clear an explanation as I would like. Also, can I expect this change to degrade the performance of the vocoder?