Traceback (most recent call last):
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/gradio/routes.py", line 393, in run_predict
output = await app.get_blocks().process_api(
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1108, in process_api
result = await self.call_function(
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/gradio/blocks.py", line 915, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run
result = context.run(func, *args)
File "/home/hpeadmin/www/craftgpt/code/web_demo.py", line 112, in predict
response, pixel_output = model.generate({
File "/home/hpeadmin/www/craftgpt/code/model/openllama.py", line 712, in generate
input_embeds, pixel_output = self.prepare_generation_embedding(inputs, web_demo)
File "/home/hpeadmin/www/craftgpt/code/model/openllama.py", line 658, in prepare_generation_embedding
feature_embeds, anomaly_map = self.extract_multimodal_feature(inputs, web_demo)
File "/home/hpeadmin/www/craftgpt/code/model/openllama.py", line 599, in extract_multimodal_feature
image_embeds, _, patch_tokens = self.encode_image_for_web_demo(inputs['image_paths'])
File "/home/hpeadmin/www/craftgpt/code/model/openllama.py", line 278, in encode_image_for_web_demo
embeddings = self.visual_encoder(inputs)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hpeadmin/www/craftgpt/code/model/ImageBind/models/imagebind_model.py", line 462, in forward
modality_value = self.modality_preprocessors[modality_key](
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hpeadmin/www/craftgpt/code/model/ImageBind/models/multimodal_preprocessors.py", line 278, in forward
vision_tokens = self.tokenize_input_and_cls_pos(
File "/home/hpeadmin/www/craftgpt/code/model/ImageBind/models/multimodal_preprocessors.py", line 257, in tokenize_input_and_cls_pos
tokens = stem(input)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hpeadmin/www/craftgpt/code/model/ImageBind/models/multimodal_preprocessors.py", line 152, in forward
x = self.proj(x)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/container.py", line 250, in forward
input = module(input)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 725, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 720, in _conv_forward
return F.conv3d(
NotImplementedError: Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::slow_conv3d_forward' is only available for these backends: [CPU, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
<img width="1558" alt="Image" src="https://github.com/user-attachments/assets/921452e3-b43e-4cff-b84a-e390e5ac936c" />
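For context (not part of the original report): `aten::slow_conv3d_forward` is the reference 3D-convolution kernel that only exists for the CPU backend, which is why it does not appear in the CUDA backend list in the error. Hitting it from a CUDA tensor inside ImageBind's `Conv3d` patch stem usually means the installed PyTorch build cannot use cuDNN (or cuDNN was disabled somewhere), so the dispatcher falls back to the slow path. Below is a minimal diagnostic sketch under that assumption; none of these calls come from the CraftGPT code.

```python
import torch

# Check whether this PyTorch build can actually use cuDNN; if it cannot,
# conv3d on CUDA tensors falls back to the CPU-only slow_conv3d kernel.
print(torch.__version__, torch.version.cuda)   # build version and the CUDA it was compiled for
print(torch.cuda.is_available())               # True if a CUDA device is visible
print(torch.backends.cudnn.is_available())     # False would explain the missing conv3d kernel
print(torch.backends.cudnn.version())          # None if cuDNN is not bundled/found
print(torch.backends.cudnn.enabled)            # should be True; some scripts disable it globally

# If cuDNN is present but was disabled somewhere, re-enabling it may avoid the fallback.
torch.backends.cudnn.enabled = True

# Isolated repro of the failing op: a 3D convolution on CUDA, analogous to the ImageBind stem.
conv = torch.nn.Conv3d(3, 8, kernel_size=2).cuda()
x = torch.randn(1, 3, 2, 224, 224, device="cuda")  # (N, C, D, H, W)
print(conv(x).shape)  # would raise the same NotImplementedError if cuDNN is unusable
```

If `torch.backends.cudnn.is_available()` returns False, reinstalling a PyTorch wheel that matches the machine's CUDA toolkit (so cuDNN ships with it) has resolved this in similar reports; running the visual encoder on CPU also sidesteps the missing kernel, at a large speed cost.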