Replies: 2 comments 2 replies
-
Hello, I am also having some performance issues with a Mask R-CNN R50-FPN model trained on my own data. In my case, the prediction on one slice takes ~0.5 s. However, between slice predictions, up to 10 s (but often less) can elapse (between two calls to get_prediction() inside get_sliced_prediction() of sahi.predict). So for a complete picture cut into 81 slices, I have in total: I would be interested in speeding this up :)
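A minimal sketch of how that time can be broken down, assuming a recent SAHI release that exposes AutoDetectionModel (older versions use e.g. Detectron2DetectionModel from sahi.model instead); the paths, model_type, slice size and overlap ratios below are placeholders, not my exact setup:

```python
import time

from sahi import AutoDetectionModel
from sahi.predict import get_prediction, get_sliced_prediction

# Placeholder paths and settings -- adjust to your own model and framework.
detection_model = AutoDetectionModel.from_pretrained(
    model_type="detectron2",
    model_path="model_final.pth",
    config_path="mask_rcnn_R_50_FPN_3x.yaml",
    confidence_threshold=0.5,
    device="cuda:0",
)

# Per-slice latency: time one direct get_prediction() call on a slice-sized image.
t0 = time.perf_counter()
get_prediction("one_slice.jpg", detection_model)
print(f"single get_prediction: {time.perf_counter() - t0:.2f} s")

# Full sliced run: anything well beyond n_slices * per-slice time is spent
# between the get_prediction() calls (slicing, copying, postprocessing).
t0 = time.perf_counter()
result = get_sliced_prediction(
    "full_image.jpg",
    detection_model,
    slice_height=640,
    slice_width=640,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
print(f"get_sliced_prediction: {time.perf_counter() - t0:.2f} s, "
      f"{len(result.object_prediction_list)} detections")
```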
-
Hello, SAHI will then perform a prediction on each of the images (slices). For me, as seen in the image below, the whole prediction, which beforehand took ~300 s, now completed in 10.992 seconds at 8.34 it/s, resulting in an average prediction time of ~120 ms per slice. @ClemSc Could you maybe verify if you experience the same behavior?
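For context, the ~120 ms per-slice figure follows directly from the reported throughput; a quick sanity check using only the numbers quoted above:

```python
# Back-of-the-envelope check using the numbers reported above.
total_time_s = 10.992          # whole sliced prediction
throughput_it_s = 8.34         # slices (iterations) per second
per_slice_ms = 1000.0 / throughput_it_s
n_slices = total_time_s * throughput_it_s
print(f"average per-slice time: {per_slice_ms:.0f} ms")   # ~120 ms
print(f"slices processed:       {n_slices:.0f}")          # ~92
```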
-
Hello,
I am currently experiencing performance issues using a Mask R-CNN trained with detectron2 on custom data.
My setup is an R50-FPN model that has been custom-trained on a dataset with two categories ("a" and "b").
When running inference, I use SAHI to split a high-resolution image (6000x4000 px) into smaller 640x640 px slices.
This prediction takes ~120 seconds for one image divided into 120 slices, which results in an inference time of 1 s/img. According to the model zoo, around 0.04 s/img should be possible.
What I found out is that if I run the prediction with the model-zoo weights, without my custom training applied, the inference time is 11.10 seconds for the 120 slices, which is pretty much what I expected from the model.
Above is the configuration I use to train the model, in case that helps; below is an excerpt of the training dataset.

I would highly appreciate it if anyone could help me solve this performance issue.
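For reference, to isolate the model from SAHI, one can also time the bare predictor on a single 640x640 slice; below is a rough sketch, simplified from my actual setup (the weights path is a placeholder, the class count matches my two categories):

```python
import time

import numpy as np
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Build the custom-trained R50-FPN roughly the same way as for training
# (the weights path is a placeholder; adjust to your output directory).
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "output/model_final.pth"   # custom-trained weights
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2            # categories "a" and "b"
cfg.MODEL.DEVICE = "cuda"                      # check it is not silently "cpu"
predictor = DefaultPredictor(cfg)

# Time the bare predictor on a slice-sized input, outside of SAHI, to see
# whether the slowdown comes from the model itself or from the slicing loop.
dummy_slice = np.zeros((640, 640, 3), dtype=np.uint8)
predictor(dummy_slice)                         # warm-up (CUDA init, first call)
t0 = time.perf_counter()
for _ in range(20):
    predictor(dummy_slice)
print(f"bare model: {(time.perf_counter() - t0) / 20:.3f} s per 640x640 slice")
```

Swapping cfg.MODEL.WEIGHTS for model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") gives the model-zoo baseline for comparison.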