Replies: 5 comments 2 replies
-
Increasing the number of keypoints will lead to a performance drop. This is reasonable, because the model needs more capacity to learn more things.
-
@jin-s13 Yes, I tested on 3 different datasets with HRNet and HRNet+UDP. I am getting similar results in all cases: the inference speed is higher for 6 keypoints than for 3 keypoints. I used the following script to calculate the inference time (essentially a timing loop like the sketch below). How many times should I run this script, approximately, to get accurate results?
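A minimal sketch of the kind of timing loop I mean, not the exact code I ran; it assumes a PyTorch model on a single CUDA GPU, and the placeholder model/input below would be replaced by the real MMPose model and a real image batch:

```python
import time
import torch

# Hypothetical stand-ins; swap in the actual pose model and a preprocessed image batch.
model = torch.nn.Conv2d(3, 6, kernel_size=3, padding=1).cuda().eval()
dummy_input = torch.randn(1, 3, 256, 192).cuda()

# Warm-up so CUDA kernels are compiled/cached before timing starts.
with torch.no_grad():
    for _ in range(10):
        model(dummy_input)

torch.cuda.synchronize()
start = time.time()
n_iters = 100
with torch.no_grad():
    for _ in range(n_iters):
        model(dummy_input)
torch.cuda.synchronize()  # wait for all GPU work to finish before stopping the clock
elapsed = time.time() - start
print(f"{elapsed / n_iters * 1000:.2f} ms per forward pass, {n_iters / elapsed:.1f} FPS")
```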
-
What is the inference speed in your case? Could you provide some examples: 5 FPS, 10 FPS? As for how many times you should run the script to get accurate results, I am not sure. Maybe 3 or 5 times?
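Rather than a fixed count, you can repeat the measurement and check how stable the mean is. A rough, self-contained sketch (the model and helper below are hypothetical placeholders, not MMPose code):

```python
import statistics
import time
import torch

model = torch.nn.Conv2d(3, 6, kernel_size=3, padding=1).cuda().eval()  # placeholder model
x = torch.randn(1, 3, 256, 192).cuda()                                 # placeholder input

def measure_fps(n_iters=100):
    """One complete timing run; returns frames per second."""
    torch.cuda.synchronize()
    start = time.time()
    with torch.no_grad():
        for _ in range(n_iters):
            model(x)
    torch.cuda.synchronize()
    return n_iters / (time.time() - start)

runs = [measure_fps() for _ in range(5)]
print(f"{statistics.mean(runs):.1f} ± {statistics.stdev(runs):.1f} FPS over {len(runs)} runs")
# If the standard deviation is only a few percent of the mean, 3-5 runs is enough;
# otherwise keep adding runs until the mean stabilizes.
```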
-
The results for HRNet are as follows:
-
I am running my experiments on Google Colab. Could this be because Colab assigns a different GPU each time?
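A quick way to check is to log which GPU the session was given at the start of each run, so timing results from different sessions are comparable:

```python
import torch

# Print the GPU Colab assigned to this session (e.g. different Tesla models give different speeds).
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
else:
    print("No GPU assigned")
```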
-
I am training a custom dataset with HRNet, HRNet+UDP and ResNet50, and I am comparing two annotation schemes: 6 keypoints versus 3 keypoints. When I label my dataset with 6 keypoints, the model's performance (AP, AR) is worse but the inference speed is better than with 3 keypoints. However, intuitively, more keypoints should increase precision and decrease speed, right? Is there any reason why 3 keypoints give better AP and AR but lower speed than 6 keypoints? (A toy sketch of my understanding of the head is below.)
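For reference, my understanding is that the keypoint count only changes the number of output channels of the final layer of the heatmap head, so the compute difference between 3 and 6 keypoints should be tiny. A toy sketch (not the actual HRNet head) comparing the parameter counts of such a final layer:

```python
import torch.nn as nn

def head(num_keypoints, in_channels=32):
    # Toy stand-in for a heatmap head: a 1x1 conv producing one heatmap per keypoint.
    return nn.Conv2d(in_channels, num_keypoints, kernel_size=1)

for k in (3, 6):
    params = sum(p.numel() for p in head(k).parameters())
    print(f"{k} keypoints -> {params} parameters in the final layer")
# The difference is only a few dozen weights, so the backbone should dominate the
# inference cost regardless of whether 3 or 6 keypoints are predicted.
```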
Any help will be highly appreciated.