Updates with potential bug! #1124

Closed
Wang-Chbo opened this issue Dec 30, 2024 · 7 comments

Comments

@Wang-Chbo

I suspect that some of the updates made during September 2024 broke reconstruction. #1121, #1114, and #1106 report the same problem, and you can reproduce the failure on the NeRF Synthetic scenes.

I trained on this dataset and every scene failed. Reconstruction of the "train" scene still succeeds, but it is slower than before.


@Wang-Chbo
Author

Check out the dev branch at commit a2a91d9.
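
(For reference, something along these lines should put a local clone at that exact commit; a2a91d9 is the hash from this comment.)

git checkout a2a91d9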

@hanzhangshen03

Hi, I still fail to train 3DGS on the NeRF Synthetic scenes even after switching to this commit. May I see your training script? Thanks!

@Wang-Chbo
Author

> Hi, I still fail to train 3DGS on the NeRF Synthetic scenes even after switching to this commit. May I see your training script? Thanks!

Just add blender_scenes to full_eval.py

#
# Copyright (C) 2023, Inria
# GRAPHDECO research group, https://team.inria.fr/graphdeco
# All rights reserved.
#
# This software is free for non-commercial, research and evaluation use 
# under the terms of the LICENSE.md file.
#
# For inquiries contact  [email protected]
#

import os
from argparse import ArgumentParser
import time
import json
import yaml

mipnerf360_outdoor_scenes = ["bicycle", "flowers", "garden", "stump", "treehill"]
mipnerf360_indoor_scenes = ["room", "counter", "kitchen", "bonsai"]
tanks_and_temples_scenes = ["truck", "train"]
deep_blending_scenes = ["drjohnson", "playroom"]
blender_scenes = ["chair", "drums", "ficus", "hotdog", "lego", "materials", "mic", "ship"]

parser = ArgumentParser(description="Full evaluation script parameters")
parser.add_argument("--skip_training", action="store_true")
parser.add_argument("--skip_rendering", action="store_true")
parser.add_argument("--skip_metrics", action="store_true")
parser.add_argument("--output_path", default="./data/model")
parser.add_argument("--result_path", default="./data/test_result")
args, _ = parser.parse_known_args()

all_scenes = []
all_scenes.extend(mipnerf360_outdoor_scenes)
all_scenes.extend(mipnerf360_indoor_scenes)
all_scenes.extend(tanks_and_temples_scenes)
all_scenes.extend(deep_blending_scenes)
all_scenes.extend(blender_scenes)

if not args.skip_training or not args.skip_rendering:   # dataset source paths are only required when training or rendering
    parser.add_argument('--mipnerf360', "-m360", required=True, type=str)
    parser.add_argument("--tanksandtemples", "-tat", required=True, type=str)
    parser.add_argument("--deepblending", "-db", required=True, type=str)
    parser.add_argument("--blender", "-bl", required=True, type=str)
    args = parser.parse_args()

if not args.skip_training:
    common_args = " --quiet --eval --test_iterations -1 "

    start_time = time.time()
    for scene in mipnerf360_outdoor_scenes:
        source = args.mipnerf360 + "/" + scene
        os.system("python train.py -s " + source + " -i images_4 -m " + args.output_path + "/" + scene + common_args)
    for scene in mipnerf360_indoor_scenes:
        source = args.mipnerf360 + "/" + scene
        os.system("python train.py -s " + source + " -i images_2 -m " + args.output_path + "/" + scene + common_args)
    m360_timing = (time.time() - start_time)/60.0

    start_time = time.time()
    for scene in tanks_and_temples_scenes:
        source = args.tanksandtemples + "/" + scene
        os.system("python train.py -s " + source + " -m " + args.output_path + "/" + scene + common_args)
    tandt_timing = (time.time() - start_time)/60.0

    start_time = time.time()
    for scene in deep_blending_scenes:
        source = args.deepblending + "/" + scene
        os.system("python train.py -s " + source + " -m " + args.output_path + "/" + scene + common_args)
    db_timing = (time.time() - start_time)/60.0

    start_time = time.time()
    for scene in blender_scenes:
        source = args.blender + "/" + scene
        # start_time_str = time.strftime("%m-%d_%H-%M", time.localtime(time.time()))
        command_i = "python train.py -s " + source + " -m " + args.output_path + "/" + scene + common_args
        os.system(command_i)
    blender_timing = (time.time() - start_time)/60.0

    # Only written when training ran; the *_timing variables are undefined otherwise.
    with open(os.path.join(args.output_path, "timing.txt"), 'w') as file:
        file.write(f"m360: {m360_timing} minutes \n tandt: {tandt_timing} minutes \n db: {db_timing} minutes \n blender: {blender_timing} minutes")

if not args.skip_rendering:
    all_sources = []
    for scene in mipnerf360_outdoor_scenes:
        all_sources.append(args.mipnerf360 + "/" + scene)
    for scene in mipnerf360_indoor_scenes:
        all_sources.append(args.mipnerf360 + "/" + scene)
    for scene in tanks_and_temples_scenes:
        all_sources.append(args.tanksandtemples + "/" + scene)
    for scene in deep_blending_scenes:
        all_sources.append(args.deepblending + "/" + scene)
    for scene in blender_scenes:
        all_sources.append(args.blender + "/" + scene)

    common_args = " --quiet --eval --skip_train"
    for scene, source in zip(all_scenes, all_sources):
        # os.system("python render.py --iteration 7000 -s " + source + " -m " + args.output_path + "/" + scene + common_args)
        os.system("python render.py --iteration 30000 -s " + source + " -m " + args.output_path + "/" + scene + common_args)

if not args.skip_metrics:
    scenes_string = ""
    for scene in all_scenes:
        scenes_string += "\"" + args.output_path + "/" + scene + "\" "

    os.system("python metrics.py -m " + scenes_string)

@Wang-Chbo Wang-Chbo reopened this Jan 3, 2025
@hanzhangshen03

Thanks a lot! I just figured out that in the latest version of the code, the dataloader fails to apply the white background to the ground-truth (gt) images, and there is also a problem with the alpha_mask. I got correct results after fixing both.

@Wang-Chbo
Author

> Hi, I still fail to train 3DGS on the NeRF Synthetic scenes even after switching to this commit. May I see your training script? Thanks!

Thanks for sharing

@BaiYeBuTingXuan

> Thanks a lot! I just figured out that in the latest version of the code, the dataloader fails to apply the white background to the ground-truth (gt) images, and there is also a problem with the alpha_mask. I got correct results after fixing both.

Oh? How? Could you share your modified code? This problem has been bothering me for days.

@hanzhangshen03

> Thanks a lot! I just figured out that in the latest version of the code, the dataloader fails to apply the white background to the ground-truth (gt) images, and there is also a problem with the alpha_mask. I got correct results after fixing both.

> Oh? How? Could you share your modified code? This problem has been bothering me for days.

For the latest commit 54c035f, in dataset_readers.py, the function readCamerasFromTransforms applies the white/black background to the image:

arr = norm_data[:,:,:3] * norm_data[:, :, 3:4] + bg * (1 - norm_data[:, :, 3:4])

But it does not save the modified image. Later when this CameraInfo is loaded, the image is read again from the disk:

image = Image.open(cam_info.image_path)

So the white background is not applied to the gt image. I added another field called image to the class CameraInfo, and in readCamerasFromTransforms, I saved the modified image in this field. Later when this camera info is loaded in loadCam, I just read cam_info.image instead of reading the image again from the disk.
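
A minimal sketch of that change, assuming CameraInfo is a NamedTuple and loadCam receives it unchanged; the helper name composite_on_background and the trimmed field list are mine, for illustration only:

# dataset_readers.py (sketch, not the repo's exact code): keep the composited
# image on the CameraInfo record so the loader never reopens the raw RGBA file.
from typing import NamedTuple
import numpy as np
from PIL import Image

class CameraInfo(NamedTuple):
    # ...the existing fields (R, T, FovX, FovY, ...) stay as they are...
    image_path: str
    image_name: str
    image: Image.Image        # new field: image with the background already applied
    width: int
    height: int

def composite_on_background(image_path: str, white_background: bool) -> Image.Image:
    # Same blending that readCamerasFromTransforms already performs.
    bg = np.array([1.0, 1.0, 1.0]) if white_background else np.array([0.0, 0.0, 0.0])
    norm_data = np.array(Image.open(image_path).convert("RGBA")) / 255.0
    arr = norm_data[:, :, :3] * norm_data[:, :, 3:4] + bg * (1 - norm_data[:, :, 3:4])
    return Image.fromarray((arr * 255.0).astype(np.uint8), "RGB")

# In readCamerasFromTransforms: store the composited image in the new field, e.g.
#   image = composite_on_background(image_path, white_background)
#   cam_infos.append(CameraInfo(..., image=image, ...))
# In loadCam: use the stored image instead of reopening the file, e.g.
#   image = cam_info.image    # was: image = Image.open(cam_info.image_path)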

Another modification is mentioned in #1038. After removing the block of code discussed there, I added self.original_image *= self.alpha_mask after this line:

self.original_image = gt_image.clamp(0.0, 1.0).to(self.data_device)
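
A minimal sketch of that second change, written as a standalone helper rather than the in-place edit to Camera.__init__ described above (the function name build_original_image is mine; it assumes alpha_mask is a tensor broadcastable against the 3xHxW image):

# Sketch (not the repo's exact code): clamp the ground-truth image and then
# multiply it by the alpha mask, which is what the edit above does in place.
from typing import Optional
import torch

def build_original_image(gt_image: torch.Tensor,
                         alpha_mask: Optional[torch.Tensor],
                         data_device: str = "cuda") -> torch.Tensor:
    original_image = gt_image.clamp(0.0, 1.0).to(data_device)
    if alpha_mask is not None:
        # Equivalent to: self.original_image *= self.alpha_mask
        original_image = original_image * alpha_mask.to(data_device)
    return original_image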

Hope this helps.
