Replies: 7 comments
>>> ayoub
[January 5, 2021, 10:18pm]
Hello everyone. I'm new to DeepSpeech, and either I'm facing an issue here or I just don't know how to use it correctly.
I'm working on Windows 10 with the DeepSpeech Python package, and I want to use the prebuilt French models for DeepSpeech, which are available here.
I've set up two Python virtual environments with venv. In the first venv I've downloaded the French TensorFlow model, and in the second venv I've downloaded the French TFLite model.
The first environment, set up for the TensorFlow model, contains the following packages:
colorama 0.4.4
deepspeech 0.9.3
halo 0.0.31
log-symbols 0.0.14
numpy 1.14.5
pip 18.1
PyAudio 0.2.11
scipy 1.4.1
setuptools 40.6.2
six 1.15.0
spinners 0.0.24
termcolor 1.1.0
webrtcvad 2.0.10
And the second environment, set up for the TFLite model, contains the following packages:
absl-py 0.11.0
astunparse 1.6.3
cachetools 4.2.0
certifi 2020.12.5
chardet 4.0.0
colorama 0.4.4
deepspeech 0.8.0
deepspeech-tflite 0.8.0
gast 0.3.3
google-auth 1.24.0
google-auth-oauthlib 0.4.2
google-pasta 0.2.0
grpcio 1.34.0
h5py 2.10.0
halo 0.0.31
idna 2.10
importlib-metadata 3.3.0
Keras-Preprocessing 1.1.2
log-symbols 0.0.14
Markdown 3.3.3
numpy 1.14.4
oauthlib 3.1.0
opt-einsum 3.3.0
pip 18.1
protobuf 3.14.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
PyAudio 0.2.11
requests 2.25.1
requests-oauthlib 1.3.0
rsa 4.6
scipy 1.4.1
setuptools 40.6.2
six 1.15.0
spinners 0.0.24
tensorboard 2.4.0
tensorboard-plugin-wit 1.7.0
tensorflow-estimator 2.3.0
termcolor 1.1.0
typing-extensions 3.7.4.3
urllib3 1.26.2
webrtcvad 2.0.10
Werkzeug 1.0.1
wheel 0.36.2
wrapt 1.12.1
zipp 3.4.0
Now I want to run mic_vad_streaming in both virtual environments.
When I work with the first venv (TensorFlow model), I have no problems and DeepSpeech works flawlessly (I've encountered some lag/slow responses, but that's okay for now).
But when I try to use the second venv (TFLite model), I run into this issue:
Loading model from file output_graph.tflite
TensorFlow: v2.2.0-17-g0854bb5188
DeepSpeech: v0.8.0-0-gf56b07da
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2021-01-05 21:30:07.055669: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Data loss: Can't parse output_graph.tflite as binary proto
Traceback (most recent call last):
  File "C:\Program Files\Python36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Program Files\Python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\HP\Downloads\model_tflite_fr\Scripts\deepspeech.exe\__main__.py", line 9, in <module>
  File "c:\users\hp\downloads\model_tflite_fr\lib\site-packages\deepspeech\client.py", line 117, in main
    ds = Model(args.model)
  File "c:\users\hp\downloads\model_tflite_fr\lib\site-packages\deepspeech\__init__.py", line 38, in __init__
    raise RuntimeError("CreateModel failed with '{}' (0x{:X})".format(deepspeech.impl.ErrorCodeToErrorMessage(status), status))
RuntimeError: CreateModel failed with 'Error reading the proto buffer model file.' (0x3005)
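One thing the error message suggests is that the library is trying to parse the .tflite file as a protobuf. As a side check of my own (not from the DeepSpeech docs): a TFLite model is a FlatBuffer and carries the file identifier "TFL3" at byte offset 4, while a .pbmm/.pb model is a protobuf. A small stdlib-only sketch to sanity-check what kind of file was actually downloaded (the sniff_model helper name is mine):

```python
# Rough file-type sniff for model files (illustrative helper, not part of DeepSpeech).
# TFLite models are FlatBuffers carrying the identifier "TFL3" at byte offset 4.

def sniff_model(path):
    """Return 'tflite' if the file looks like a TFLite FlatBuffer, else 'other'."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) >= 8 and header[4:8] == b"TFL3":
        return "tflite"
    return "other"

if __name__ == "__main__":
    import sys
    for p in sys.argv[1:]:
        print(p, "->", sniff_model(p))
```

If this reports "tflite" for output_graph.tflite, the file itself is fine and the problem is on the loading side.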
Here's the output of the first venv (TensorFlow model), where everything works:
Initializing model...
INFO:root:ARGS.model: output_graph.pbmm
TensorFlow: v2.3.0-6-g23ad988fcd
DeepSpeech: v0.9.3-0-gf2e9c858
2021-01-05 21:19:58.129550: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
INFO:root:ARGS.scorer: kenlm.scorer
Listening (ctrl-C to exit)...
Recognized: bonjour
Recognized: bonsoir dont
Recognized: en
Recognized: on range en deux
Recognized: mais profond
Recognized: la pole
Recognized: mai coute moi bien
Recognized: la
Recognized: paul
Recognized: du point
The command I used for the first venv (TensorFlow model), which works:
python mic_vad_streaming.py -m output_graph.pbmm -s kenlm.scorer
The command I used for the second venv (TFLite model), which doesn't:
python mic_vad_streaming.py -m output_graph.tflite -s kenlm.scorer
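One thing I notice in my own package list above: the second venv has both deepspeech 0.8.0 and deepspeech-tflite 0.8.0 installed, and as far as I understand both distributions ship the same deepspeech Python module, so it may not be obvious which runtime actually gets imported. A small stdlib sketch (names are illustrative) to check which of the two distributions are present in the active environment:

```python
# List which of a set of pip distributions are installed in the current
# environment (illustrative diagnostic, standard library only; needs
# Python 3.8+ for importlib.metadata, or the importlib-metadata backport).
try:
    from importlib.metadata import version, PackageNotFoundError
except ImportError:
    from importlib_metadata import version, PackageNotFoundError

def installed_versions(names):
    """Map each distribution name to its installed version, or None if absent."""
    result = {}
    for name in names:
        try:
            result[name] = version(name)
        except PackageNotFoundError:
            result[name] = None
    return result

if __name__ == "__main__":
    print(installed_versions(["deepspeech", "deepspeech-tflite"]))
```

If both show up, uninstalling one of them might be worth trying, but I'm not sure whether they are meant to coexist.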
I've even tried running deepspeech directly in the second venv on a .wav file, but I get the same result:
(model_tflite_fr) C:\Users\Ayoub\Downloads\model_tflite_fr>deepspeech --model output_graph.tflite --scorer kenlm.scorer --audio outputs\savewav_2021-01-05_21-26-23_483447.wav
This fails with exactly the same output and traceback as shown above, ending in:
RuntimeError: CreateModel failed with 'Error reading the proto buffer model file.' (0x3005)
I think that's all. I'd appreciate any help, and thanks Mozilla for this awesome project.
[This is an archived TTS discussion thread from discourse.mozilla.org/t/runtimeerror-createmodel-failed-with-error-reading-the-proto-buffer-model-file-0x3005]