Coqui and multithreading? #2898
-
Hello, I'm new to ML and working on a relatively complex piece of software that incorporates TTS, and I was wondering if it's possible to run it in parallel. This software often needs to perform hundreds of TTS operations, and while it runs extremely well, I'd like to squeeze as much performance out of it as possible. If it's simply not possible right now, I'm content with its single-process performance. Examples: simply running
results in:
I then tried doing something like this:
But this results in the program hanging with no errors or logging. Coqui will list the split sentences and then stop.
Replies: 2 comments
-
Having the same issue, using Tornado and async workers to execute inference via the TTS API. Hope either @erogol or @Edresson could help resolve it.
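For the async-server case specifically, a blocking inference call run directly in a coroutine will stall the event loop. The usual pattern is to offload it to an executor. Here is a minimal stdlib sketch of that shape; `blocking_inference` is a hypothetical placeholder for the real TTS call, and a process pool (with per-worker model loading) may be preferable to threads for heavy models.

```python
# Sketch: keep the event loop responsive by running the blocking
# TTS call in an executor. blocking_inference is a PLACEHOLDER.
import asyncio
from concurrent.futures import ThreadPoolExecutor


def blocking_inference(text: str) -> str:
    # Placeholder for a blocking TTS call, e.g. synthesis to a file.
    return text[::-1]


async def handle_request(text: str, executor) -> str:
    loop = asyncio.get_running_loop()
    # Runs the blocking call off the event-loop thread and awaits it.
    return await loop.run_in_executor(executor, blocking_inference, text)


async def main():
    with ThreadPoolExecutor(max_workers=2) as ex:
        # Two concurrent "requests"; gather preserves input order.
        return await asyncio.gather(
            handle_request("hello", ex),
            handle_request("world", ex),
        )
```

Tornado runs on the asyncio event loop in recent versions, so the same `run_in_executor` approach applies inside its handlers.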
-
Hi @FlorianEagox @VanDavv,
I think you should read the PyTorch multiprocessing docs: https://pytorch.org/docs/stable/notes/multiprocessing.html
TTS was built with PyTorch, so they also apply to the TTS library.
Best regards,
Edresson Casanova