
How to use multi-gpu training? #12

Open

sumitmishra209 opened this issue Oct 2, 2024 · 1 comment

@sumitmishra209

Hello, thanks for the detailed repo of BoQ and https://github.com/amaralibey/OpenVPRLab/tree/main
I have two 24 GB GPUs, and a batch size of 120 doesn't fit in a single GPU's memory. So I wanted to use both GPUs: I tried adding `devices=[0, 1], strategy='ddp'` in run.py of OpenVPRLab, but got an error.

I also trained with a batch size of 80 on a single GPU and adjusted the warmup steps accordingly, but the BoQ results don't come close to what you report in the paper.
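For reference, a minimal sketch of what multi-GPU training with PyTorch Lightning's DDP strategy typically looks like (the `VPRModel` and `VPRDataModule` names here are placeholders, not the actual OpenVPRLab classes, and the exact Trainer arguments used in run.py may differ; note that with DDP the per-process batch size is replicated on each GPU, so the effective batch size and warmup schedule may need adjusting):

```python
import pytorch_lightning as pl

# Placeholder model and datamodule; substitute the actual OpenVPRLab classes.
model = VPRModel(...)
datamodule = VPRDataModule(batch_size=60, ...)  # 60 per GPU x 2 GPUs = 120 effective

trainer = pl.Trainer(
    accelerator="gpu",
    devices=[0, 1],    # use both 24 GB GPUs
    strategy="ddp",    # DistributedDataParallel: one process per GPU
    max_epochs=40,
)
trainer.fit(model, datamodule=datamodule)
```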

@amaralibey
Owner

Hello @sumitmishra209
Thank you for your interest.

I didn't try multi-GPU training as I don't have that kind of setup, sorry.
However, 24 GB of memory should be enough to train a DinoV2-BoQ model. You can use batches of size 160x4, and resize images to 224x224 during training and 322x322 during testing. I'm getting ~93.5 R@1 on MSLS-val with that configuration. You can use OpenVPRLab to track and manage your training (https://github.com/amaralibey/OpenVPRLab).
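A rough sketch of the resizing described above, using torchvision transforms (the normalization values are the standard ImageNet statistics commonly used with DINOv2 backbones; the actual transforms in OpenVPRLab may differ):

```python
from torchvision import transforms

IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

# 224x224 images during training, 322x322 at test time, as suggested above
# (both are multiples of the DINOv2 patch size of 14).
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
])

test_transform = transforms.Compose([
    transforms.Resize((322, 322)),
    transforms.ToTensor(),
    transforms.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
])
```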
