I am modifying the demo script `demo/top_down_pose_tracking_demo_with_mmdet.py` (top-down heatmap inference) in order to extract the keypoints for each frame for further processing. However, most of the predictions coming out of the model call (or the tracking call) are empty, even though the output video shows correct keypoints on every frame. I am at a loss as to what is happening: how do I get the keypoint predictions per frame?
I have looked through the code and am currently at `model.show_result`.
It doesn't make sense that predictions would be reused; I have advanced through the video frame by frame and there are clear changes in every frame.
number of predictions: 512
number of frames (and model calls): 5272
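One plausible cause (an assumption on my part, not confirmed from the demo source) is that the per-frame results are collected by reference while the pipeline reuses or mutates the same underlying list, so the stored entries end up stale or empty. A minimal, library-free sketch of that pitfall and the `copy.deepcopy` fix, with `fake_inference` standing in for the real inference call:

```python
import copy

def collect_results():
    # Simulate an inference function that reuses one internal
    # result buffer across calls (a common pattern in pipelines).
    buffer = []

    def fake_inference(frame_idx):
        buffer.clear()  # the "model" recycles its buffer each call
        buffer.append({"keypoints": [frame_idx, frame_idx + 1]})
        return buffer

    buggy, fixed = [], []
    for i in range(3):
        results = fake_inference(i)
        buggy.append(results)                  # stores a reference only
        fixed.append(copy.deepcopy(results))   # snapshots this frame
    return buggy, fixed

buggy, fixed = collect_results()
# Every entry in `buggy` aliases the same list, so they all show the
# last frame's contents; `fixed` keeps one distinct result per frame.
```

If the real demo loop behaves like this, deep-copying (or at least copying the keypoint arrays) right after each `inference_top_down_pose_model` call should preserve one result set per frame.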
mmpose v0.24.0
Thank you,