Hi, the repo stores its model with Git LFS, but `git lfs pull` fails with:

batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
error: failed to fetch some objects from 'https://github.com/HemantKArya/Melodfy.git/info/lfs'

My understanding is that the downloadModel function is meant as a workaround, but it doesn't work either:
/Users/tirane/PycharmProjects/Melodfy/venv/bin/python /Users/tirane/github/Melodfy/main.py
Model already present
Loading model...
Error found in moderl...
Model downloaded
Traceback (most recent call last):
File "/Users/tirane/github/Melodfy/main.py", line 49, in <module>
ort_session = InferenceSession(model_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tirane/PycharmProjects/Melodfy/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/Users/tirane/PycharmProjects/Melodfy/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 424, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /Users/tirane/github/Melodfy/models/model.onnx failed:/Users/runner/work/1/s/onnxruntime/core/graph/model.cc:134 onnxruntime::Model::Model(onnx::ModelProto &&, const onnxruntime::PathString &, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList *, const logging::Logger &, const onnxruntime::ModelOptions &) ModelProto does not have a graph.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/tirane/github/Melodfy/main.py", line 53, in <module>
ort_session = InferenceSession(model_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tirane/PycharmProjects/Melodfy/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/Users/tirane/PycharmProjects/Melodfy/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 424, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /Users/tirane/github/Melodfy/models/model.onnx failed:/Users/runner/work/1/s/onnxruntime/core/graph/model.cc:134 onnxruntime::Model::Model(onnx::ModelProto &&, const onnxruntime::PathString &, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList *, const logging::Logger &, const onnxruntime::ModelOptions &) ModelProto does not have a graph.
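The "ModelProto does not have a graph" error usually means the file at models/model.onnx is not a real ONNX model at all, e.g. a Git LFS pointer stub left behind by the failed `git lfs pull`, or an error page saved by the download fallback. A minimal stdlib sketch to diagnose what actually landed on disk (the helper name and the size threshold are my own; the checks are heuristics, not an ONNX validator):

```python
import os

# A Git LFS pointer is a tiny text file beginning with this header,
# while a real ONNX model is a binary protobuf.
LFS_POINTER_PREFIX = b"version https://git-lfs.github.com/spec"

def diagnose_model_file(path):
    """Return a short guess at why `path` is probably not a valid ONNX model."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        head = f.read(len(LFS_POINTER_PREFIX))
    if head.startswith(b"version"):
        return "git-lfs pointer"   # LFS smudge failed (e.g. quota exceeded)
    if head[:1] == b"<":
        return "html error page"   # the HTTP fallback saved an error body
    if size < 1024:
        return f"suspiciously small ({size} bytes)"
    return "looks binary; may still be truncated"
```

Running this on the failing file should tell you whether to blame the LFS quota (pointer stub) or the downloadModel fallback (error page / truncated download); in either case, deleting the bad file before re-downloading avoids the "Model already present" short-circuit shown in the log above.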