Very, very good research! About the trained model weights #18

goodluck110706112 opened this issue Dec 2, 2024 · 4 comments

@goodluck110706112

Thank you very much for your solid research!
May I ask whether the performance reported in the README refers to the trained models you provided (loaded via the torch.hub.load method)?

@amaralibey
Owner

Hello @goodluck110706112,

Thank you for your kind words and interest in our work!

The performance results mentioned in the README are indeed obtained using the provided weights on Torch Hub. These results are also consistent with those reported in the latest version of our paper on arXiv: https://arxiv.org/abs/2405.07364.

Initially, the paper focused on results using the ResNet-50 backbone only. However, as most recent works used the DINOv2 backbone, we have updated the arXiv version of the paper to include additional results using DINOv2.
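For reference, a minimal sketch of how the DINOv2-based weights might be loaded through the same Torch Hub entrypoint used for the ResNet-50 model below; the backbone_name value "dinov2" and the 12288-d output dimension are assumptions and may differ from the actual hubconf options.

```python
import torch

# Hypothetical load of the DINOv2-backbone BoQ model via Torch Hub.
# "dinov2" and output_dim=12288 are assumed values; check the repo's
# hubconf for the exact backbone names and supported output dimensions.
vpr_model = torch.hub.load(
    "amaralibey/bag-of-queries",
    "get_trained_boq",
    backbone_name="dinov2",
    output_dim=12288,
)
vpr_model.eval()  # inference mode for extracting place descriptors
```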

Please feel free to reach out if you have more questions or feedback.

Best,

@goodluck110706112
Author

@amaralibey

ResNet50 + BoQ:

```python
import torch

# dim of embedding is 16384
vpr_model = torch.hub.load(
    "amaralibey/bag-of-queries",
    "get_trained_boq",
    backbone_name="resnet50",
    output_dim=16384,
)
```
May I ask if there is a way to get 4096-dimensional embeddings instead of 16384?

@amaralibey
Owner

Hello @goodluck110706112,

Sorry for the delay. I will soon share the weights for the 4096-d and 2048-d models, for both ResNet and DINOv2.

@goodluck110706112
Author

@amaralibey Thank you very much for your reply. I look forward to the 4096-d and 2048-d weights for BoQ!
