Hello, thanks for the detailed BoQ repo and https://github.com/amaralibey/OpenVPRLab/tree/main
I have two 24 GB GPUs, and a batch size of 120 does not fit in a single GPU's memory. So I tried using both GPUs by adding `devices=[0, 1], strategy='ddp'` in run.py of OpenVPRLab, but I got an error.
I then trained with a batch size of 80 on one GPU and adjusted the warmup steps accordingly, but the BoQ results don't come close to what you report in the paper.
I didn't try multi-GPU training, as I don't have that kind of setup, sorry.
However, 24 GB of memory should be enough to train a DinoV2-BoQ model. You can use batches of size 160x4 and resize images to 224x224 during training and 322x322 at test time. I'm getting ~93.5 R@1 on MSLS-val. You can use OpenVPRLab to track and manage your training (https://github.com/amaralibey/OpenVPRLab).
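If you still want to try two GPUs, the sketch below shows the usual way to enable DDP with a PyTorch Lightning Trainer, plus the resize settings mentioned above. This is a generic, untested illustration, not the exact code in OpenVPRLab's run.py: the Trainer arguments assume a standard `pl.Trainer` and the transforms assume torchvision, so adapt the names to wherever the repo actually builds its Trainer and dataloaders.

```python
# Hypothetical sketch, assuming a standard PyTorch Lightning Trainer;
# argument names and their location in run.py may differ in OpenVPRLab.
import pytorch_lightning as pl
import torchvision.transforms as T

# With DDP, each optimizer step sees (per-GPU batch) x (num GPUs) samples,
# so a per-GPU batch of 80 on two GPUs gives an effective batch of 160.
# Set the warmup/schedule for that combined batch size.
trainer = pl.Trainer(
    accelerator="gpu",
    devices=[0, 1],        # both 24 GB cards
    strategy="ddp",        # one process per GPU
    sync_batchnorm=True,   # keep BatchNorm statistics consistent across GPUs
    precision="16-mixed",  # Lightning >= 2.0 syntax; mixed precision saves memory
)

# Resizes suggested above: 224x224 for training, 322x322 at test time.
train_transform = T.Compose([T.Resize((224, 224)), T.ToTensor()])
test_transform = T.Compose([T.Resize((322, 322)), T.ToTensor()])
```

If the DDP run still errors out, posting the full traceback here would help narrow down whether it comes from the Trainer configuration or from the dataloader/sampler setup.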