-
Hi @Luffy2810, thanks for pointing this out. As commented in a previous discussion, there were issues with the annotations in the dataset we initially selected for this benchmark. Are these samples included in the updated dataset?
-
In some of the images, the entire crack is enclosed in a single bounding box, while in other images the same kind of crack is split across multiple bounding boxes, even though the cracks annotated in these two different ways look quite similar.


The scoring criterion for this task is inappropriate if there isn't a standard format for the annotations.
My model may score poorly on this metric simply because it detects a crack with multiple bounding boxes on an image where your ground truth is one large bounding box, or vice versa, due to these inconsistencies in the annotation.
For example, if my model detects a crack with 3 bounding boxes but your ground truth is a single bounding box covering the whole crack (or the other way round), my IoU will be less than 0.4 even though I detected the whole crack, as the sketch below illustrates.
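To make the failure mode concrete, here is a minimal sketch with hypothetical box coordinates (not taken from the dataset): each of three correct partial detections scores IoU ≈ 0.33 against a single whole-crack ground-truth box, below a 0.4 threshold, while a single box merging the same detections would score 1.0.

```python
# Minimal sketch (hypothetical coordinates) of how a crack annotated as one large
# ground-truth box yields IoU < 0.4 against each of several smaller, correct detections.

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# One ground-truth box spanning the whole crack (hypothetical numbers).
ground_truth = (0, 0, 300, 40)

# Three predicted boxes that together cover exactly the same crack.
predictions = [(0, 0, 100, 40), (100, 0, 200, 40), (200, 0, 300, 40)]

for p in predictions:
    print(f"IoU(pred, gt) = {iou(p, ground_truth):.2f}")  # 0.33 each, below 0.4

# Merging the three predictions into one box would match the ground truth perfectly.
merged = (min(p[0] for p in predictions), min(p[1] for p in predictions),
          max(p[2] for p in predictions), max(p[3] for p in predictions))
print(f"IoU(merged, gt) = {iou(merged, ground_truth):.2f}")  # 1.00
```

The same mismatch happens in reverse when the ground truth is split into several boxes and the model predicts one box over the whole crack.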
Please look into this issue.
Some screenshots of such ground-truth annotations are attached. The bounding boxes drawn on the images are the ground truths provided with the dataset.
These are just a few examples of the problem I mentioned.