
Allow for >1 batch size in Splatfacto #3582

Open — wants to merge 4 commits into main
Conversation

akristoffersen (Contributor)

WIP. Preliminary testing makes it look like it's working, but I want to make sure.


data = {}
for key in all_keys:
    if key == "image":
akristoffersen (Contributor Author), inline review comment:

this will fail with masks / depth images
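A minimal sketch of what a fix could look like, assuming a hypothetical `collate_batch` helper (not from the PR): instead of special-casing only the `"image"` key, stack every tensor-valued key (masks, depth images, etc.) along a new leading batch dimension and keep non-tensor metadata as lists.

```python
import torch

def collate_batch(samples: list) -> dict:
    """Stack a list of per-image dicts into one batched dict.

    Tensor-valued keys ("image", "mask", "depth_image", ...) are stacked
    along a new leading batch dimension; everything else is kept as a list.
    Stacking all tensor keys, not only "image", is what keeps masks and
    depth images from being silently dropped.
    """
    batch = {}
    for key in samples[0]:
        values = [sample[key] for sample in samples]
        if isinstance(values[0], torch.Tensor):
            batch[key] = torch.stack(values, dim=0)
        else:
            batch[key] = values
    return batch
```

This assumes every image in the batch has the same resolution; ragged batches would need padding or per-image handling.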

kerrj (Collaborator) commented Jan 31, 2025

Hey Alex! This is super cool, especially with MCMC, which doesn't require gradient thresholds at all. #3216 might have mildly broken parts of this PR since it merged parallel dataloading, but it shouldn't be too bad; let us know if you want any help fixing conflicts!

hardikdava (Contributor) commented Feb 7, 2025

@akristoffersen I think you might want to modify step_post_backward to support batch processing, so that densification and pruning stay intact.

akristoffersen (Contributor Author)

Works with masks now.

As expected, I saw an almost 2x increase in rays/s with a batch size of two, and a very slight performance drop with a batch size of 1 compared to baseline (50.1 M rays/sec -> 48 M rays/sec).

akristoffersen (Contributor Author)

@hardikdava do you mean that the threshold tuning might be different? Yeah, I don't know exactly what to do there; maybe someone else has an opinion.

Some quick stats on the poster dataset.
Orange: Baseline (batch size of 1)
Blue: BS = 5
Red: BS = 10

[Screenshot 2025-02-08 at 9:42 PM]

So the splitting / densification outcomes are affected by batch size.

[Screenshot 2025-02-08 at 9:43 PM]

Similarly, train rays/sec do start higher due to the larger batch size, but go down as you'd expect with the higher number of gaussians.

[Screenshot 2025-02-08 at 9:43 PM]

Some good news: I do see the training loss reaching better values more quickly as the batch size increases.

hardikdava (Contributor)

@akristoffersen currently, densification, splitting, and culling are implemented inside the strategy, and their logic is keyed on the step count. Since batch training can skip some of these functions, you might want to modify the strategy, e.g. by looping over the batch size, so that all densification logic runs for every effective step; otherwise some of it will be skipped.

In simple words: suppose the batch size is 2 and the opacity reset should be applied every 3000 images. It should then fire every 1500 optimizer steps. With your current implementation it still fires at every 3000th step, which actually corresponds to 6000 images seen (batch size * step).
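The scaling described above can be sketched as a small helper. This is a hypothetical illustration, not code from the PR: the attribute names (`refine_every`, `reset_every`, etc.) are assumed strategy fields, and the idea is simply to divide every step-based interval by the batch size so schedules expressed in images-seen keep the same effective cadence.

```python
def scale_strategy_intervals(strategy, batch_size: int):
    """Divide step-based intervals by the batch size (hypothetical helper).

    With batch_size images consumed per optimizer step, a schedule meant as
    "every 3000 images" must fire every 3000 // batch_size optimizer steps.
    Attribute names here are assumptions about the strategy object.
    """
    for attr in ("refine_every", "reset_every", "refine_start_iter", "refine_stop_iter"):
        if hasattr(strategy, attr):
            # Clamp to at least 1 so large batch sizes never zero out an interval.
            setattr(strategy, attr, max(1, getattr(strategy, attr) // batch_size))
    return strategy
```

For example, with a batch size of 2, a 3000-step opacity reset interval becomes 1500 optimizer steps, matching the cadence described in the comment above.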
