
Are the models available? #1

Open
Maik13579 opened this issue Nov 9, 2023 · 5 comments

Comments

@Maik13579

Hey, I wanted to ask whether the three models mono3d_yolox_576_768.onnx, bisenetv1.onnx, and monodepth_res101_384_1280.onnx are publicly available, or if you know of any alternative models I could download to test your project?

@Owen-Liuyuxuan
Owner

All models can be downloaded from the tags page: https://github.com/Owen-Liuyuxuan/ros2_vision_inference/tags. I recently uploaded a DLA deformable-conv model for mono3d as well.

If you want other models, the basic interface here is https://github.com/Owen-Liuyuxuan/ros2_vision_inference#onnx-model-interface. All these models are trained and exported from https://github.com/Owen-Liuyuxuan/visionfactory/tree/develop.
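
If it helps, a minimal standalone check (outside the ROS2 node) that a downloaded model loads correctly with onnxruntime could look like the sketch below; the file name is just an example, and the actual input/output signature (e.g. whether mono3d also expects camera intrinsics) is the one described in the ONNX model interface section linked above:

```python
import onnxruntime as ort

# Load one of the released models (example path; use whichever file you downloaded).
sess = ort.InferenceSession(
    "mono3d_yolox_576_768.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Read the input/output names, shapes, and dtypes from the model itself
# instead of assuming them; they should match the documented interface.
for inp in sess.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)
```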

As far as I know, transplanting other image-based models into visionfactory is not difficult. If you have a trained model with fully open-source code, I could try exporting it for you too.

The details could differ from project to project (it is easy to slightly change the image format in code, but it is still not plug-and-play).

To the best of my knowledge, there is not much work that fully integrates ONNX model export for these tasks (segmentation has many ONNX models, though). You are encouraged to check out models from OpenMMLab (mmdetection3d/mmdeploy).

@Maik13579
Author

Thanks a lot, I will try them soon :)

@Owen-Liuyuxuan
Owner

Owen-Liuyuxuan commented Nov 9, 2023

@Maik13579
If it is OK, could you describe your target scenes? The models here were optimized for road scenes (especially a front-facing camera). This would help us consider how to improve the data we train on.

@Maik13579
Author

Thank you for your fast response. Our project primarily involves indoor scenes, and we are prepared to train our own model. My primary intention was to test the system's functionality to assess its suitability for our project. However, I am currently facing challenges in configuring the correct CUDA version in conjunction with ROS within a Docker image.

@Owen-Liuyuxuan
Owner

Owen-Liuyuxuan commented Nov 11, 2023

Regarding "configuring the correct CUDA version in conjunction with ROS within a Docker image": the CUDA and cuDNN versions required by each onnxruntime release are listed here: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
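
A quick way to check inside the container whether the CUDA execution provider actually loads (a sketch; the model path is just an example) is:

```python
import onnxruntime as ort

# If the CUDA/cuDNN versions in the image match what onnxruntime-gpu was built
# against, 'CUDAExecutionProvider' should appear in this list.
print(ort.get_available_providers())

# A session created with an explicit provider list falls back to CPU
# (with a warning in the log) when the CUDA provider cannot be initialized.
sess = ort.InferenceSession(
    "bisenetv1.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())
```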

Also note that the color map for segmentation in the repo is rather limited to outdoor scenes.
