Rotational Detection #432
@eonurk I am now trying to detect the same images with rotated bounding boxes, but no luck yet. First I have to prepare an annotation tool that allows rotated bounding boxes.
Actually, I think I found the reason. I was taking the RoIs without a precise model; some of them were halves of cars, some of them were just the bottom parts of cars. Yesterday I deleted those RoIs from my dataset and only kept the cars that are very obvious in the picture. That mostly solved my problem, though it is still not perfect! I am also writing MATLAB code that rotates the image and the bboxes by 45 degrees so that the ground truth images are good to go. I will share it if I finish it :)
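The commenter's rotation code was written in MATLAB and not shared here; as a hypothetical Python sketch of the same idea, the snippet below rotates a bbox's corners about the image center and returns the axis-aligned box enclosing them (it assumes the output canvas keeps the original image size):

```python
import math

def rotate_bbox(bbox, angle_deg, img_w, img_h):
    """Rotate an axis-aligned bbox (x1, y1, x2, y2) about the image
    center and return the axis-aligned box enclosing the rotated
    corners. Assumes the rotated image keeps the same canvas size."""
    x1, y1, x2, y2 = bbox
    cx, cy = img_w / 2.0, img_h / 2.0
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    corners = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    rotated = []
    for x, y in corners:
        dx, dy = x - cx, y - cy  # offset from the rotation center
        rotated.append((cx + dx * cos_t - dy * sin_t,
                        cy + dx * sin_t + dy * cos_t))
    xs = [p[0] for p in rotated]
    ys = [p[1] for p in rotated]
    return (min(xs), min(ys), max(xs), max(ys))
```

The pixels themselves would be rotated separately, e.g. with OpenCV's `cv2.getRotationMatrix2D` plus `cv2.warpAffine`; note that for non-90-degree angles the enclosing axis-aligned box is looser than the true rotated box.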
@eonurk Thanks. I would really appreciate it if you could, but I think it would be more efficient if we could change this annotation tool to support rotated BBoxes.
Extension - BBox Angle (not an issue)
@eonurk have you managed to add rotation classification?
I actually wrote code for changing the rotations of the images and their bounding boxes in 45-degree steps, and my colleagues wrote a simple GUI around it. So in the end, if you have 100 ground truth images, you end up with 800, all with bounding boxes. I also wrote a program that is compatible with the inria.py training example of Faster R-CNN. If I have some spare time, I will create a repo for this stuff.
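The 100-to-800 expansion above comes from emitting one copy per 45-degree step (8 angles per image). The commenter's tool was not shared; a minimal hypothetical sketch of the bookkeeping side, which just enumerates the (image, angle) pairs and leaves the pixel rotation to an image library, might look like:

```python
ANGLES = [45 * k for k in range(8)]  # 0, 45, 90, ..., 315 degrees

def expand_annotations(records):
    """records: list of (image_id, bboxes) pairs.
    Returns one annotation entry per (image, angle) combination,
    so N input images become 8 * N entries. Actually rotating the
    pixels and bbox coordinates would happen in a separate step."""
    out = []
    for image_id, bboxes in records:
        for angle in ANGLES:
            out.append({"image_id": image_id,
                        "angle": angle,
                        "bboxes": bboxes})
    return out
```

With 100 input records this yields 800 entries, matching the expansion described above.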
@eonurk I am trying to divide 360 degrees into 8 classes and add another loss function, besides the other two, for rotation prediction. Another way is to add the angle to the bounding-box ground truths as a fifth element, but I am not sure the second approach would work better than the first one. I have already asked some other people, but no response yet. As in #515, another way of predicting rotated bboxes is to predict the coordinates of the 4 corner points, but I think predicting center, width, and height along with the angle will be better; I am not sure. As I said, I am trying to add a new loss function, but I am getting errors.
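For the first approach (360 degrees split into 8 classes), the target for the extra classification loss is just a binning of the ground-truth angle. A minimal sketch, with the convention (my assumption, not from the thread) that bin 0 is centered on 0 degrees:

```python
def angle_to_class(angle_deg, num_classes=8):
    """Map an orientation angle in degrees to one of num_classes
    bins covering the full 360 degrees. Bin 0 is centered on
    0 degrees, so for 8 classes it spans [-22.5, 22.5)."""
    bin_size = 360.0 / num_classes          # 45 degrees for 8 classes
    shifted = (angle_deg + bin_size / 2.0) % 360.0  # recenter bin 0
    return int(shifted // bin_size)
```

The resulting class index would feed a standard cross-entropy loss alongside the existing classification and bbox-regression losses; the alternative 5-parameter regression would instead append the angle to the (cx, cy, w, h) targets.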
@smajida I think you can have a look at this paper: https://arxiv.org/pdf/1703.01086.pdf
I have modified a version of labelimg that can label rotated BBs.
Has anyone tried to apply the modified version from @cgvict to an object detection API such as the one from TensorFlow? Is it even possible to change the bounding box from the API so it would be rotated?
Hello,
Having prepared the dataset properly, I tested an example image and got the following result.
Is there any clever way to improve the detection probabilities of rotated cars?