Replies: 6 comments
- >>> lissyx
- >>> dkreutz
- >>> othiele
- >>> lazyguy
- >>> lazyguy
- >>> lazyguy
>>> lazyguy
[February 16, 2021, 11:56am]
Hi all,
I have lots of audio files to transcribe and I am using transcribe.py
for transcription. It works fine, but I want to speed up the process
with multiprocessing. I have successfully implemented multiprocessing
on CPU, which cut my transcription time by 54%, but when I switch over
to GPU, each transcription job claims the whole GPU's memory. I want
to avoid that by setting a limit on how much GPU memory each job can
take. Is there a way to do that? Has anyone faced a similar problem
where you used threading/multiprocessing/async to process more than
one audio file in parallel with transcribe.py?
Thanks!
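[For context: with a TensorFlow 1.x backend (the one DeepSpeech's transcribe.py uses), the usual knobs for this are `per_process_gpu_memory_fraction` and `allow_growth` on the session's `GPUOptions`. A minimal sketch, assuming you can thread a session config into the model code; the worker function, file list, and worker count are hypothetical stand-ins:]

```python
# Sketch: run several transcription workers on one GPU, each capped
# to a fraction of GPU memory. Assumes a TensorFlow 1.x-style API;
# transcribe_one() is a hypothetical stand-in for the real model call.
import multiprocessing as mp

NUM_WORKERS = 4

def per_worker_fraction(num_workers, headroom=0.9):
    """Fraction of GPU memory each worker may claim, leaving
    ~10% headroom for CUDA context overhead."""
    return headroom / num_workers

def transcribe_one(wav_path):
    # Import TensorFlow inside the worker so each process creates
    # its own CUDA context with the capped allocator.
    import tensorflow as tf
    config = tf.compat.v1.ConfigProto()
    # Hard cap: this process may allocate at most this fraction.
    config.gpu_options.per_process_gpu_memory_fraction = \
        per_worker_fraction(NUM_WORKERS)
    # Allocate lazily up to the cap instead of grabbing it all upfront.
    config.gpu_options.allow_growth = True
    with tf.compat.v1.Session(config=config) as sess:
        pass  # run the actual inference graph on wav_path here

if __name__ == "__main__":
    print(per_worker_fraction(NUM_WORKERS))
    # With TensorFlow and a GPU available, fan the files out:
    # wavs = ["a.wav", "b.wav", "c.wav", "d.wav"]
    # with mp.get_context("spawn").Pool(NUM_WORKERS) as pool:
    #     pool.map(transcribe_one, wavs)
```

[Using a `spawn` context matters here: forked workers can inherit a CUDA context from the parent and fail in odd ways, while spawned ones initialize CUDA fresh.]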
[This is an archived TTS discussion thread from discourse.mozilla.org/t/how-to-restrict-transcribe-py-from-consuming-whole-gpu-memory]