
waiting for camera info #10

Open
Ragnar7982 opened this issue Jan 16, 2025 · 8 comments

@Ragnar7982

Ragnar7982 commented Jan 16, 2025

Hello, I'm using Ubuntu 22.04 and ROS2 Humble. May I ask what this means? It says "waiting for camera info", but I am already publishing the Intel D455 camera topics:

[screenshot: node log showing "waiting for camera info"]

I would like to ask why we need compressed_image_topic; usually we only need image, depth, and camera_info_topic, right?

[screenshot: node topic configuration]

Here is my ros2 topic list:

[screenshot: ros2 topic list output]

Also, I'm curious what compressed_image_topic is doing. Thanks.
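
A minimal rclpy probe can confirm that CameraInfo is actually arriving. The topic name below is the realsense2_camera default and is an assumption; replace it with whatever appears in your `ros2 topic list`.

```python
# Minimal probe: subscribe to the CameraInfo topic and log what arrives.
# The topic name is the realsense2_camera default (an assumption); change
# it to match the output of `ros2 topic list`.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CameraInfo

class CameraInfoProbe(Node):
    def __init__(self):
        super().__init__('camera_info_probe')
        self.create_subscription(CameraInfo, '/camera/color/camera_info',
                                 self.on_info, 10)

    def on_info(self, msg: CameraInfo):
        # k[0] is fx from the 3x3 intrinsic matrix K.
        self.get_logger().info(
            f'CameraInfo: {msg.width}x{msg.height}, fx={msg.k[0]:.1f}')

def main():
    rclpy.init()
    node = CameraInfoProbe()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```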

@Owen-Liuyuxuan
Owner

I believe you have already configured the code correctly, and the node has been running and outputting results.

  1. Yes, the node requires camera_info plus either image_raw or a compressed image to work. I believe you have correctly configured camera_info + image_raw.
  2. The compressed image topic is for users who want to provide the input as a compressed image. It is parallel to image_raw and triggers the same computation; we are just providing that API as well.
  3. If you are using image_raw + camera_info, please make sure the compressed image topic is a dummy/dangling one, as in the launch sketch below.
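
A sketch of what this can look like in a ROS 2 launch file, assuming hypothetical package, executable, and input topic names (the real ones are in this repo's launch files): remap the raw image and camera_info inputs to the RealSense topics, and point the compressed input at a topic nobody publishes.

```python
# Hypothetical launch sketch; the package/executable names and the node's
# input topic names are placeholders, not this repo's actual names.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='vision_node_pkg',        # placeholder
            executable='detection_node',      # placeholder
            remappings=[
                ('image_raw', '/camera/color/image_raw'),
                ('camera_info', '/camera/color/camera_info'),
                # Dangling topic: nothing publishes here, so the
                # compressed-image path never triggers.
                ('compressed_image', '/dummy/compressed'),
            ],
        ),
    ])
```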

@Ragnar7982
Author

Hi, thanks for the fast reply. Now I see this is for a mono camera; if I'm using an Intel RealSense D455 RGB-D camera, can it still work with this project?
Also, if I want to use a YOLACT or YOLOv8 model, do I need to convert it to an ONNX file? (YOLACT is usually a .pth file.)
Thank you!

@Owen-Liuyuxuan
Owner

https://github.com/luiszeni/yolact_onnx

I recommend exporting to an ONNX file. But note that YOLACT and YOLOv8 do not output 3D objects (only 2D objects), so you may need to modify the output formats.
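
For reference, the generic PyTorch-to-ONNX export pattern looks like the sketch below. It uses a stand-in module because YOLACT's default post-processing is not directly traceable (which is why the yolact_onnx fork above exists).

```python
# Generic .pth -> .onnx export pattern. The nn.Sequential is a stand-in;
# in practice you build YOLACT and load the checkpoint with
# model.load_state_dict(torch.load('yolact.pth')).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
model.eval()

dummy = torch.randn(1, 3, 550, 550)  # 550x550 is YOLACT's default input size
torch.onnx.export(model, dummy, 'model.onnx',
                  input_names=['image'],
                  output_names=['output'],
                  opset_version=11)
```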

@Ragnar7982
Author

Ragnar7982 commented Jan 18, 2025

Sorry, I don't understand very well. How do I know whether YOLACT is 2D or 3D?
And is this project only for vehicles, or for any scenario, as long as I have a trained model?
Thanks.

@Owen-Liuyuxuan
Owner

https://github.com/dbolya/yolact

I am not sure which models you are going to use, but YOLACT and YOLOv8, as reported in their papers and official code, are 2D detection models. YOLACT outputs instance segmentation masks, and YOLOv8 outputs 2D bounding boxes.

By design, the detection model in this repo outputs 3D bounding boxes directly from camera images.


The pre-trained models in this repo are only for self-driving-related scenarios (metric-3d is a foundation model and works in many scenarios). You can use different ONNX models trained on your own scenarios as long as the input and output formats are the same.


You can also re-write parts of the pre-processing and post-processing code to adapt to any custom ONNX models.
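
One quick way to check whether a custom ONNX model's outputs could match what the node expects is to inspect its input/output signatures with onnxruntime; a 2D detector typically emits boxes with 4 coordinates plus scores, while a 3D head carries extra dimensions for size, depth, and orientation.

```python
# Print an ONNX model's input/output names, shapes, and dtypes.
import onnxruntime as ort

sess = ort.InferenceSession('model.onnx', providers=['CPUExecutionProvider'])
for i in sess.get_inputs():
    print('input :', i.name, i.shape, i.type)
for o in sess.get_outputs():
    print('output:', o.name, o.shape, o.type)
```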

@Ragnar7982
Author

OK, so if I want to use other models, I have to check whether they are 3D, and I have to train three different models (3D detection, segmentation, and depth) for my scenarios?

@Owen-Liuyuxuan
Owner

For now, the repo uses three different models. You can enable/disable any of them if you only want part of the functionality.
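
As an illustration only, toggling parts of the pipeline could look like the hypothetical parameters below; the actual parameter names are whatever the repo's launch/config files define.

```python
# Hypothetical toggle parameters; names are illustrative, not the repo's.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='vision_node_pkg',      # placeholder, as above
            executable='detection_node',    # placeholder, as above
            parameters=[{
                'enable_detection3d': True,    # 3D bounding boxes
                'enable_segmentation': False,  # instance masks off
                'enable_depth': True,          # metric depth on
            }],
        ),
    ])
```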

@Ragnar7982
Author

Ok, I will try, thank you very much!
