config_3D.yml #178
Hello. To run the pipeline with a 3D model, one would use the 3D inference config, the 2.5D preprocessing config, and slightly modify one of the training configs to fit their use case (changing the model class name and the depths to suit their specifications). We have shown that the 2.5D architecture is both more efficient and more effective than the 3D architecture for virtual staining tasks. Could you elaborate on your planned use case?
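For illustration, a minimal sketch of the training-config edits described above, written in the same Python-dict style used for the configs later in this thread; the key names and values are assumptions based on those later examples, not a verified microDL config:

# Hypothetical training-config edits for a 3D model (sketch, not a verified config).
train_config["network"]["class"] = "UNet3D"    # swap in the 3D model class name
train_config["network"]["depth"] = 32          # assumed: 3D U-Nets need a much larger input z-depth than 2.5D (see discussion below)
train_config["network"]["num_filters_per_block"] = [16, 32, 64, 128, 256]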
Thank you very much for your reply! I just want to reproduce the results that appear in the paper.
I want to run the 3D model, so I used the 2.5D preprocessing config, changed the tile "depths" so the output channel depth is 5 (i.e., depths: [5, 5]), and added an attribute "mask_depth" = 5. Next I used the 2.5D training config and only changed the model class name to "UNet3D", but I get the error "AssertionError: network depth is incompatible with input depth" in unet3d.py. I don't understand the line "feature_depth_at_last_block = depth // (2 ** self.num_down_blocks)" in unet3d.py. Does it mean I should set the depth to a number much larger than 5?
Hi @stonedada, the 3D U-Net in unet3d.py requires different config parameters. However, we have stopped using 3D U-Nets in favor of the 2.5D U-Net, as @Christianfoley mentions. Which specific result are you trying to reproduce from our paper (https://elifesciences.org/articles/55502)? Our reasons for using the 2.5D U-Net are summarized there. If you are new to the 2.5D U-Net and the microDL repository, you should read and try the DL-MBL notebook from release 1.0.0.
I want to reproduce the results of Figure 3 in your paper, which includes the 3D predicted F-actin.
I would also like to know why "Depth must be uneven" in adjust_slice_margins(slice_ids, depth) of aux_utils.py.
The 3D U-Net downsamples in 3 dimensions, meaning that in a depth-5 network (5 convolutional blocks + downsamples in the encoding path), the bottleneck feature map will have a z-depth of 1/(2**5) of the input depth. This means that to use a 3D U-Net, your input data must be very large in Z. Previously we accomplished this by upsampling/resizing our data (see resize.py), but one of the benefits of the 2.5D U-Net is that this is not necessary.
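The arithmetic behind the assertion can be sketched as follows. Only the feature_depth_at_last_block line is quoted from unet3d.py above; the surrounding helper and the exact assertion condition are assumptions for illustration:

# Sketch (assumed, not microDL code) of the depth check described above.
def check_input_depth(depth, num_down_blocks=5):
    # Each downsampling block halves the z-dimension, so the bottleneck
    # feature map has z-depth = input depth // 2 ** num_down_blocks.
    feature_depth_at_last_block = depth // (2 ** num_down_blocks)
    # Presumably at least one z-slice must survive at the bottleneck,
    # which with 5 downsampling blocks requires an input depth >= 32.
    assert feature_depth_at_last_block >= 1, \
        "network depth is incompatible with input depth"
    return feature_depth_at_last_block

check_input_depth(32)    # bottleneck z-depth 1, passes
# check_input_depth(5)   # would raise AssertionError, as reported above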
When translating 3D label-free volumes to 2D fluorescent predictions, we take a z-stack of label-free slices and use them to predict the fluorescent target corresponding to the center slice of the stack. This "center slice" can only be the center of a stack with an uneven (odd) depth.
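To make the center-slice constraint concrete, here is a small illustrative helper (hypothetical, not microDL code) showing why only an odd-depth stack has a single center slice:

# Hypothetical sketch: only an odd (uneven) stack depth has a unique center slice.
def center_slice_index(depth):
    if depth % 2 == 0:
        raise ValueError("Depth must be uneven")  # an even stack has no single center
    return depth // 2

print(center_slice_index(5))   # 2: slices [0, 1, 2, 3, 4] predict the target at slice 2
# center_slice_index(4) would raise: slices [0, 1, 2, 3] have no center slice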
Hello, I read your slides and paper, and I also want to run the 3D model. I chose retardance and nuclei images, positions 150-153, slices 0-44. I also made some changes in the configs, shown below:

# preprocess.yml
preproc_config['channel_ids'] = [0, 1] # 0 -> nuclei, 1 -> retardance
preproc_config['normalize']['normalize_channels'] = [True, True]
preproc_config['tile']['depths'] = [1, 45] # depths
preproc_config['pos_ids'] = [150, 151, 152, 153] # position
preproc_config["slice_ids"] = list(range(45)) # slice
# Set the channels used for generating masks
preproc_config['masks']['channels'] = 0
preproc_config['masks']['mask_type'] = "otsu"

# train.yml
train_config['dataset']['input_channels'] = [1]
train_config['dataset']['target_channels'] = [0]
train_config["dataset"]["mask_channels"] = [2]
train_config["dataset"]["split_ratio"] = {"test": 0.25, "train":0.50, "val": 0.25}
train_config["network"]["class"] = "UNet3D"
train_config["network"]["depth"] = 45
train_config["network"]['num_filters_per_block'] = [16, 32, 64, 128, 256]
train_config["trainer"]["metrics"] = "pearson_corr"
train_config['trainer']['loss'] = "mae_loss" # TODO 3: your choice of loss function here.
... However, it raises an error.
Could you please help me solve this problem? Thanks.
Hi @yingmuzhi . Could you please post the entire error traceback? |
Hi @Christianfoley, thanks for your reply, and I am sorry for my late reply since my final examinations are around the corner. I tried to run the 3D U-Net preprocessing according to config_preprocess_resize.yml. However, it raises an error:

Traceback (most recent call last):
File "/root/anaconda3/envs/env_cp36_ymz/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/root/anaconda3/envs/env_cp36_ymz/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/root/.vscode-server/extensions/ms-python.python-2022.6.0/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module>
cli.main()
File "/root/.vscode-server/extensions/ms-python.python-2022.6.0/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 444, in main
run()
File "/root/.vscode-server/extensions/ms-python.python-2022.6.0/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 285, in run_file
runpy.run_path(target_as_str, run_name=compat.force_str("__main__"))
File "/root/anaconda3/envs/env_cp36_ymz/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/root/anaconda3/envs/env_cp36_ymz/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/root/anaconda3/envs/env_cp36_ymz/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/microDL/micro_dl/cli/preprocess_script.py", line 428, in <module>
pp_config, runtime = pre_process(pp_config, base_config)
File "/home/microDL/micro_dl/cli/preprocess_script.py", line 327, in pre_process
mask_ext)
File "/home/microDL/micro_dl/cli/preprocess_script.py", line 155, in generate_masks
mask_ext=mask_ext
File "/home/microDL/micro_dl/preprocessing/generate_masks.py", line 73, in __init__
uniform_structure=uniform_struct
File "/home/microDL/micro_dl/utils/aux_utils.py", line 236, in validate_metadata_indices
'Indices for {} not available'.format(col_name)
AssertionError: Indices for slice_idx not available

I guess the problem is that resize_image.py returns slice_ids of [2, 11], but in resized_images/frames_meta.csv the slice_ids are [2, 10]; they are not the same.
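As a debugging aid, a hypothetical sanity check (not part of microDL) could compare the requested slice indices against those actually present in the resized frames_meta.csv; the column name slice_idx comes from the assertion message above, and the requested range is taken from the numbers reported in this comment:

# Hypothetical sanity check for the slice-index mismatch described above.
import pandas as pd

frames_meta = pd.read_csv("resized_images/frames_meta.csv")
available = sorted(frames_meta["slice_idx"].unique())
requested = list(range(2, 12))  # slices 2..11, as reported by resize_image.py

missing = [s for s in requested if s not in available]
print("available slice_idx:", available)
print("missing from frames_meta.csv:", missing)  # non-empty -> the AssertionError above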
Can you share the config_3D.yml for the preprocess, train, and inference scripts?