Some question about the data #1

Open

denyingmxd opened this issue Jan 23, 2025 · 1 comment

@denyingmxd
Thanks for the great work and dataset. However, I ran into a few problems when trying to use it.

  1. The paper and the GitHub README state that the synthetic dataset has 21 scenes, yet only 20 are listed in download_spherecraft.py; 'rainbow' is missing.

  2. You mention that 5 synthetic scenes are reserved for testing, but you also provide per-scene train and test files of the form [keypoint detector]_train.txt. If those txt files define the split, why do we still need to divide the scenes into training and testing scenes?

  3. For classroom frame 00000000, the depth of the window seems to be wrong: it reports the depth of whatever is behind the window (a minimal way to inspect these depth values is sketched after this list).

[two screenshots of the classroom scene attached]
  4. For urbanCanyon frame 00000000, the depth of the buildings in this outdoor scene seems to be too small.

[two screenshots of the urbanCanyon scene attached]
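To make these observations easy to reproduce, here is a minimal inspection sketch. It assumes the depth maps are stored as per-frame .npy arrays; the path and pixel coordinates below are hypothetical, not the dataset's documented layout.

```python
# Minimal sketch for inspecting depth values in a frame.
# Assumption: depth maps are per-frame .npy arrays; the path below
# is hypothetical, not the dataset's documented layout.
import numpy as np

depth = np.load("classroom/depth/00000000.npy")
print("shape:", depth.shape, "dtype:", depth.dtype)

# Sample a small patch around a pixel of interest (e.g. on the window);
# the coordinates are made up for illustration.
y, x = 240, 320
patch = depth[y - 5:y + 5, x - 5:x + 5]
print("min / median / max:", patch.min(), np.median(patch), patch.max())
```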

@cgava25 (Collaborator) commented Jan 30, 2025

Hi denyingmxd,

  1. Thank you for pointing that out. It is fixed now.

  2. We provide train and test files for all synthetic scenes simply for the sake of completeness and flexibility. If someone wishes to train their model on our entire dataset and then evaluate on another dataset, all the files are readily available. Likewise, if they wish to use a different train-test split than the one we suggest, all the necessary files are there. Having train and test files for all synthetic scenes also ensures that different models are trained and tested on exactly the same data, so that they can be compared fairly. (A sketch of loading these split files follows this list.)

  3. I am not sure I understand your point. The depth should correspond to what is behind the window when the ray of light passes through the glass, which is what a downstream vision application (such as 3D reconstruction) is supposed to sense. The depth values in the classroom scene are consistent with that.

  4. In Table 2 of the paper we clearly marked Rainbow and Urban Canyon and stated: "Scales used for Rainbow and Urban Canyon in their original Blender projects are unfortunately over- and undersized, respectively." These two scenes are therefore known exceptions.
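For readers wondering how to use the split files in practice, here is a minimal sketch. It assumes each [keypoint detector]_train.txt lists one frame identifier per line; the detector name "sift" and the scene directory layout are illustrative assumptions, not the dataset's documented format.

```python
# Minimal sketch for reading the per-detector split files.
# Assumptions: one frame identifier per line; "sift_train.txt" and the
# scene directory layout are hypothetical examples.
from pathlib import Path

def load_split(scene_dir: str, split_file: str) -> list[str]:
    """Return the frame identifiers listed in a split file."""
    path = Path(scene_dir) / split_file
    return [ln.strip() for ln in path.read_text().splitlines() if ln.strip()]

train_frames = load_split("classroom", "sift_train.txt")
test_frames = load_split("classroom", "sift_test.txt")

# Sanity check: a fair comparison requires disjoint splits.
assert not set(train_frames) & set(test_frames), "splits should be disjoint"
print(len(train_frames), "train /", len(test_frames), "test frames")
```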
