[WIP] Add Panoptic Quality (PQ) #408
base: main
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
@NielsRogge thank you for working on this! I think it'll really make it easier to use the segmentation models as well. I'm guessing researchers will evaluate their models on public datasets such as COCO and ADE20K, and other users will use their custom datasets and will want to evaluate using a minimal setup. My proposal is as follows:
I'm in favor of changing this. cc @amyeroberts
A few remarks:

- I'm not sure it's possible to define optional features for the inputs of the metric.
- I would definitely avoid having a cv2 dependency, as this library is pretty painful to install and it creates an additional dependency.
- Ideally the same keys should be present in the ground truth and predicted `segments_info` (it's a bit weird to have different keys in both).
Yes, you can! The features can be a list of different formats, and the matching one will be selected based on the inputs.
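For concreteness, here's a minimal sketch of what that could look like in a metric's feature declaration (the keys and dtypes below are illustrative, not the final PQ schema):

```python
import datasets

# A sketch of declaring alternative input formats, assuming the metric's
# features can be given as a list of `datasets.Features` and the module
# matches incoming batches against each alternative in turn.
features = [
    datasets.Features(
        {
            # Format 1: segmentation maps as nested integer arrays
            "predictions": datasets.Sequence(datasets.Sequence(datasets.Value("int64"))),
            "references": datasets.Sequence(datasets.Sequence(datasets.Value("int64"))),
        }
    ),
    datasets.Features(
        {
            # Format 2: segmentation maps as paths to PNG files
            "predictions": datasets.Value("string"),
            "references": datasets.Value("string"),
        }
    ),
]
```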
Let me know if you need another review on this @NielsRogge.
The metric is actually in a ready state; only the final API needs to be decided (which keys need to be in the predicted vs. the ground truth annotations). cc @alaradirik
Sorry about that, I thought I replied to your remarks! I'm in favor of excluding keys that are not used for the metric computation: `iscrowd`/`was_fused`, `area`, `bbox`. Users can still include these in the ground truth annotations for the sake of convenience, and we can drop them during the actual computation. So the ground truth annotations would only need the `id` and `category_id` keys. What do you think? @NielsRogge @lvwerra @amyeroberts
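For concreteness, here's a sketch of what an annotation could then look like (the exact keys are my reading of the proposal, not a settled format):

```python
# Required keys only: `id` links a segment to its region in the
# segmentation map, `category_id` gives its semantic class.
predicted_segments_info = [
    {"id": 1, "category_id": 17},
    {"id": 2, "category_id": 25},
]

# Ground truth annotations may carry extra keys (`iscrowd`, `area`, `bbox`)
# for convenience; these would be dropped during the actual computation.
ground_truth_segments_info = [
    {"id": 1, "category_id": 17, "area": 3521, "iscrowd": 0},
    {"id": 2, "category_id": 25},
]
```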
This PR adds the panoptic quality (PQ) metric, based on the original implementation.
Unlike most metrics, which only require 2 things to be provided to the `add_batch` method (the `predictions` and the `references`), the panoptic quality metric requires 2 additional things to be provided, namely the predicted `segments_info` and the ground truth `segments_info` (which contain more information about the predicted and ground truth segmentation maps, respectively).

Refer to this notebook for evaluating Mask2Former on the COCO panoptic validation set using this metric.
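For illustration, here is a minimal usage sketch based on the description above (the module name passed to `evaluate.load` and the `*_segments_info` kwarg names are assumptions, not the finalized API):

```python
import numpy as np
import evaluate

# Hypothetical loading call; the module name is an assumption.
panoptic_quality = evaluate.load("panoptic_quality")

# Toy 2x2 panoptic segmentation maps: each pixel holds a segment id.
predicted_map = np.array([[1, 1], [2, 2]])
ground_truth_map = np.array([[1, 1], [2, 2]])

# Besides `predictions` and `references`, the metric needs per-segment
# metadata for both sides; the kwarg names here are illustrative.
panoptic_quality.add_batch(
    predictions=[predicted_map],
    references=[ground_truth_map],
    predicted_segments_info=[[{"id": 1, "category_id": 17}, {"id": 2, "category_id": 25}]],
    ground_truth_segments_info=[[{"id": 1, "category_id": 17}, {"id": 2, "category_id": 25}]],
)

results = panoptic_quality.compute()
print(results)  # e.g. PQ, SQ and RQ scores
```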
To do: