Dear Ícaro Oliveira de Oliveira and Team,
I hope this message finds you well. I am reaching out to inquire about the performance of the two-stream Convolutional Neural Network (CNN) model, as presented in your esteemed paper "Vehicle-Rear: A New Dataset to Explore Feature Fusion For Vehicle Identification Using Convolutional Neural Networks".
Specifically, I am interested in understanding how the model performs under varying lighting conditions, which can be quite challenging for visual recognition tasks. The dataset samples provided in the README.md showcase instances of severe lighting conditions and dark frames caused by the motion of large vehicles. However, I am keen to learn more about the robustness of the model in these scenarios.
Could you please provide additional insights or results that highlight the model's capability to maintain high precision and recall rates in poor lighting? Moreover, are there any pre-processing steps or augmentations applied to the dataset to mitigate such conditions?
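For concreteness, below is a minimal sketch of the kind of pre-processing and augmentation I have in mind; this is my own illustration and an assumption on my part, not anything taken from your paper or code: random brightness jitter as a training-time augmentation, and gamma correction to lift the dark frames caused by passing large vehicles.

```python
import random

def jitter_brightness(pixels, low=0.6, high=1.4, seed=None):
    """Randomly scale pixel intensities to simulate lighting changes.

    pixels: flat list of 0-255 intensity values.
    Returns a new list, clipped to [0, 255].
    """
    rng = random.Random(seed)
    factor = rng.uniform(low, high)
    return [min(255, max(0, int(p * factor))) for p in pixels]

def gamma_correct(pixels, gamma=0.5):
    """Brighten dark frames: gamma < 1 lifts shadows while
    largely preserving the highlights."""
    return [int(255 * (p / 255) ** gamma) for p in pixels]

# A dark frame (mostly low intensities) is lifted by gamma correction.
dark = [10, 30, 50, 200]
print(gamma_correct(dark, gamma=0.5))  # -> [50, 87, 112, 225]
```

Is anything along these lines (or histogram equalisation, or learned augmentation) applied in your training pipeline, or does the two-stream model handle these conditions without explicit lighting normalisation?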
I am considering utilising your dataset and model architecture for a project that involves vehicle re-identification across a network of cameras with significant variations in lighting. Any additional information you could provide would be greatly appreciated.
Thank you for your time and for sharing your research with the community. I look forward to your response.
Kind regards,
yihong1120