Hello, I am looking for a way to use Faster-RCNN in an iOS mobile app, so I tried your tutorial from the beginning by running converter.py, after struggling for months to implement the missing functions (RoiAlign, nms, and so on) in C++ on my own. But the line m = torch.jit.load('./maskrcnn/model_freezed.pt') raises the following error:
WARNING:root:Torch version 1.12.1 has not been tested with coremltools. You may run into unexpected errors. Torch 1.10.2 is the most recent version that has been tested.
Traceback (most recent call last):
File "/Users/rgn12/Desktop/CoreML-MaskRCNN/converter/converter.py", line 27, in <module>
m = torch.jit.load('./maskrcnn/model_freezed.pt')
File "/Users/rgn12/Library/Python/3.8/lib/python/site-packages/torch/jit/_serialization.py", line 162, in load
cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
RuntimeError:
Unknown builtin op: _caffe2::GenerateProposals.
Could not find any similar ops to _caffe2::GenerateProposals. This op may not exist or may not be currently supported in TorchScript.
:
File "code/__torch__/detectron2/export/caffe2_modeling/___torch_mangle_546.py", line 81
scores0 = torch.detach(scores)
bbox_deltas0 = torch.detach(bbox_deltas)
rpn_rois, _1 = ops._caffe2.GenerateProposals(scores0, bbox_deltas0, im_info, CONSTANTS.c96, 0.0625, 100, 10, 0.69999999999999996, 0., True, -180, 180, 1., False)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
input54 = ops._caffe2.RoIAlign(input44, rpn_rois, "NCHW", 0.0625, 6, 6, 0, True)
input55 = torch._convolution(input54, CONSTANTS.c97, CONSTANTS.c98, [1, 1], [0, 0], [1, 1], False, [0, 0], 1, False, False, True, True)
So the problem appears while loading the jitted model, and the functions implemented in custom_ops and custom_mil_ops aren't used there, since their implementations only come into play during the conversion to CoreML. Do you know how to solve this? Thank you very much in advance!
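For reference, here is a minimal diagnostic sketch (using only torch itself, nothing from the repository) to check whether the _caffe2 operators are registered at all in the installed PyTorch build before calling torch.jit.load. torch.ops resolves operators lazily and raises an error for ones that are not registered, so this shows whether the build can even resolve the ops the frozen graph refers to:

```python
import torch

MODEL_PATH = "./maskrcnn/model_freezed.pt"

print("torch version:", torch.__version__)

# The frozen TorchScript graph calls operators from the `_caffe2` namespace
# (GenerateProposals, RoIAlign, ...). torch.jit.load can only deserialize the
# graph if those operators are registered in the running torch build.
try:
    # Accessing the op resolves it lazily; torch raises an error
    # ("No such operator ...") when the op is not registered.
    torch.ops._caffe2.GenerateProposals
    print("_caffe2::GenerateProposals is registered")
except (RuntimeError, AttributeError):
    print("_caffe2::GenerateProposals is NOT registered in this torch build")

# The line from converter.py that fails; it will raise the same
# "Unknown builtin op" error as long as the ops above are missing.
m = torch.jit.load(MODEL_PATH)
```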