Multi-body SE(3) Equivariance for Unsupervised Rigid Segmentation and Motion Estimation (NeurIPS 2023)
If our work has been helpful in your research, please consider citing it as follows:
@inproceedings{zhong2023multi,
  title={Multi-body SE(3) Equivariance for Unsupervised Rigid Segmentation and Motion Estimation},
  author={Zhong, Jia-Xing and Cheng, Ta-Ying and He, Yuhang and Lu, Kai and Zhou, Kaichen and Markham, Andrew and Trigoni, Niki},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023}
}
(1) PyTorch Installation
Ensure you have a GPU-enabled build of PyTorch that is compatible with your system. We have confirmed compatibility with PyTorch 1.9.0.
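For example, a CUDA 11.1 build of PyTorch 1.9.0 can be installed as follows (the CUDA version here is an assumption; match it to your driver and adjust the wheel tag accordingly):
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html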
(2) Additional Libraries
Install the PointNet2 library and other required Python packages:
cd pointnet2
python setup.py install
cd ..
pip install -r requirements.txt
(3) EPN Dependencies
Follow the specific instructions in ./EPN_PointCloud/README.md to set up EPN:
cd EPN_PointCloud
pip install -r requirements.txt
cd vgtk
python setup.py install
(4) [Optional] Open3D Installation
For visualizing point cloud segmentation:
pip install open3d
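A minimal sketch of how a segmentation result could be colored and viewed with Open3D is shown below; the input file names and array shapes are placeholders for illustration, not files produced by this repo:

import numpy as np
import open3d as o3d

# Hypothetical inputs: an (N, 3) point array and an (N,) integer segment-label array.
points = np.load('points.npy')
labels = np.load('labels.npy')

# Assign each segment a random color and render the colored point cloud.
palette = np.random.rand(labels.max() + 1, 3)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.colors = o3d.utility.Vector3dVector(palette[labels])
o3d.visualization.draw_geometries([pcd])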
SAPIEN Dataset (Provided by MBS)
Download the necessary data from the following links and place them in your specified ${SAPIEN} directory:
- Training + Validation Set (mbs-shapepart): Google Drive
- Test Set (mbs-sapien): Google Drive
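After downloading and extracting, the layout is assumed to look like the following (the sub-directory names follow the archive names above; adjust the path to your setup):
${SAPIEN}/mbs-shapepart   # training + validation set
${SAPIEN}/mbs-sapien      # test set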
Download the checkpoint for the self-supervised scene flow network from OGC:
- Checkpoint (sapien_unsup): Dropbox
In our experiments, we directly use the same trained checkpoint as OGC's scene flow network for a fair comparison. If needed, the scene flow network can also be trained and tested as follows:
Train the model using the provided configuration:
python train_flow.py config/flow/sapien/sapien_unsup.yaml
Evaluate and save the scene flow estimations with:
python test_flow.py config/flow/sapien/sapien_unsup.yaml --split ${SPLIT} --save
Replace ${SPLIT} with train, val, or test as required.
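For example, the estimations for all three splits can be exported in one pass:
for SPLIT in train val test; do
    python test_flow.py config/flow/sapien/sapien_unsup.yaml --split ${SPLIT} --save
done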
Train the segmentation network using full annotations:
# Two 12GB GPUs are required.
CUDA_VISIBLE_DEVICES=0,1 python eq_train_2head_sup.py config/seg/sapien/eq_sapien_2head_sup_sapien.yaml
Evaluate the segmentation results with:
python eq_test_2head_seg.py config/seg/sapien/eq_sapien_2head_sup_sapien.yaml --split test
Train the segmentation network without annotations:
# Two 12GB GPUs are required.
CUDA_VISIBLE_DEVICES=0,1 python eq_train_2head_unsup.py config/seg/sapien/sapien_unsup_woinv.yaml
Evaluate the segmentation and scene flow results:
# Segmentation
python eq_test_2head_seg.py config/seg/sapien/eq_sapien_2head_unsup_woinv.yaml --split test --round 0
# Scene Flow: Two 12GB GPUs are required.
CUDA_VISIBLE_DEVICES=0,1 python eq_test_2head_oa_icp.py config/seg/sapien/eq_sapien_2head_unsup_woinv.yaml --split test