
F.conv3d RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.HalfTensor) should be the same #113

Open
westpilgrim63 opened this issue Jan 7, 2025 · 4 comments

Comments

@westpilgrim63

When I run the web demo, the page loads successfully, but after I upload an image and a text question, the UI shows "error" and the error above appears in the terminal. What could be the cause?

@xjoj58822104

Move the weights onto the GPU before running: model = model.to('cuda')
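
For reference, a minimal self-contained sketch of the mismatch and the fix; the toy Conv3d below stands in for the repo's visual encoder and is purely illustrative:

```python
import torch
import torch.nn as nn

# Toy stand-in for the visual encoder: fp16 weights that were left on the CPU.
model = nn.Conv3d(3, 8, kernel_size=3).half()

# The web demo sends fp16 inputs that already live on the GPU.
x = torch.randn(1, 3, 8, 32, 32, dtype=torch.half, device='cuda')

# Calling model(x) at this point would raise:
#   RuntimeError: Input type (torch.cuda.HalfTensor) and weight type
#   (torch.HalfTensor) should be the same

model = model.to('cuda')  # move the weights onto the same device as the input
out = model(x)            # works: input and weights are both fp16 on CUDA
print(out.shape)          # torch.Size([1, 8, 6, 30, 30])
```

The error message names the two tensor types directly: torch.cuda.HalfTensor is the input on the GPU, torch.HalfTensor is the weight still on the CPU, so moving the model resolves the device mismatch.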

@westpilgrim63
Author

> Move the weights onto the GPU before running: model = model.to('cuda')

That worked, thanks!

@star562

star562 commented Jan 17, 2025

How should this problem be handled?
Traceback (most recent call last):
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/gradio/routes.py", line 393, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1108, in process_api
    result = await self.call_function(
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/gradio/blocks.py", line 915, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run
    result = context.run(func, *args)
  File "/home/hpeadmin/www/craftgpt/code/web_demo.py", line 112, in predict
    response, pixel_output = model.generate({
  File "/home/hpeadmin/www/craftgpt/code/model/openllama.py", line 712, in generate
    input_embeds, pixel_output = self.prepare_generation_embedding(inputs, web_demo)
  File "/home/hpeadmin/www/craftgpt/code/model/openllama.py", line 658, in prepare_generation_embedding
    feature_embeds, anomaly_map = self.extract_multimodal_feature(inputs, web_demo)
  File "/home/hpeadmin/www/craftgpt/code/model/openllama.py", line 599, in extract_multimodal_feature
    image_embeds, _, patch_tokens = self.encode_image_for_web_demo(inputs['image_paths'])
  File "/home/hpeadmin/www/craftgpt/code/model/openllama.py", line 278, in encode_image_for_web_demo
    embeddings = self.visual_encoder(inputs)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/hpeadmin/www/craftgpt/code/model/ImageBind/models/imagebind_model.py", line 462, in forward
    modality_value = self.modality_preprocessors[modality_key](
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/hpeadmin/www/craftgpt/code/model/ImageBind/models/multimodal_preprocessors.py", line 278, in forward
    vision_tokens = self.tokenize_input_and_cls_pos(
  File "/home/hpeadmin/www/craftgpt/code/model/ImageBind/models/multimodal_preprocessors.py", line 257, in tokenize_input_and_cls_pos
    tokens = stem(input)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/hpeadmin/www/craftgpt/code/model/ImageBind/models/multimodal_preprocessors.py", line 152, in forward
    x = self.proj(x)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/container.py", line 250, in forward
    input = module(input)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 725, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/hpeadmin/www/craftgpt/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 720, in _conv_forward
    return F.conv3d(
NotImplementedError: Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::slow_conv3d_forward' is only available for these backends: [CPU, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

@xjoj58822104

> How should this problem be handled? Traceback (most recent call last): ... NotImplementedError: Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDA' backend. (full traceback quoted verbatim above)

The installed PyTorch build and the CUDA version are incompatible.
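
If that is the cause, a diagnostic sketch like the one below can confirm it. 'aten::slow_conv3d_forward' is the CPU fallback kernel for Conv3d, so being dispatched to it for a CUDA tensor usually means cuDNN is unavailable or disabled in the installed build:

```python
import torch

# Diagnostic sketch: check whether the installed PyTorch build, the CUDA
# runtime it was compiled against, and cuDNN all line up. Conv3d on CUDA
# relies on cuDNN; without it, dispatch can fall through to the CPU-only
# 'slow_conv3d' kernel and fail exactly as in the traceback above.
print('torch version   :', torch.__version__)
print('built for CUDA  :', torch.version.cuda)          # None => CPU-only wheel
print('CUDA available  :', torch.cuda.is_available())
print('cuDNN available :', torch.backends.cudnn.is_available())
print('cuDNN version   :', torch.backends.cudnn.version())
print('cuDNN enabled   :', torch.backends.cudnn.enabled)
```

If the build-time CUDA version does not match what the local driver supports, reinstalling PyTorch with a wheel built for the local CUDA version is the usual fix.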
