Major Bug Fix - start repo while using these fixes #62

Open · wants to merge 7 commits into base: master
3 changes: 3 additions & 0 deletions .gitignore
@@ -70,3 +70,6 @@ dataset/scannetv2/test
dataset/scannetv2/val_gt


scans/*
test.ipynb
model/pointgroup/pointgroup.pth
110 changes: 78 additions & 32 deletions README.md
@@ -1,24 +1,29 @@
# PointGroup

## PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation (CVPR2020)

![overview](https://github.com/llijiang/PointGroup/blob/master/doc/overview.png)

Code for the paper **PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation**, CVPR 2020 (Oral).

**Authors**: Li Jiang, Hengshuang Zhao, Shaoshuai Shi, Shu Liu, Chi-Wing Fu, Jiaya Jia

[[arxiv]](https://arxiv.org/abs/2004.01658) [[video]](https://youtu.be/HMetye3gmAs)

## Introduction

Instance segmentation is an important task for scene understanding. Compared with its fully developed 2D counterpart, 3D instance segmentation for point clouds still has much room for improvement. In this paper, we present PointGroup, a new end-to-end bottom-up architecture, specifically focused on better grouping the points by exploring the void space between objects. We design a two-branch network to extract point features and predict semantic labels and offsets, for shifting each point towards its respective instance centroid. A clustering component then utilizes both the original and offset-shifted point coordinate sets, taking advantage of their complementary strength. Further, we formulate ScoreNet to evaluate the candidate instances, followed by Non-Maximum Suppression (NMS) to remove duplicates.
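
To make the dual-set idea concrete, here is a minimal, illustrative sketch in Python (not the repository's implementation; the clustering radius, thresholds, and stand-in data below are invented for the example):

```python
import numpy as np

def cluster(coords, semantic_labels, radius=0.1, min_points=50):
    """Greedy BFS grouping: points of the same semantic class lying within
    `radius` of one another end up in the same cluster."""
    n = len(coords)
    visited = np.zeros(n, dtype=bool)
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        queue, members = [seed], []
        visited[seed] = True
        while queue:
            i = queue.pop()
            members.append(i)
            same = semantic_labels == semantic_labels[i]
            near = np.linalg.norm(coords - coords[i], axis=1) < radius
            for j in np.nonzero(same & near & ~visited)[0]:
                visited[j] = True
                queue.append(j)
        if len(members) >= min_points:  # drop tiny fragments
            clusters.append(np.array(members))
    return clusters

# Dual-set clustering: original coordinates separate nearby objects of
# different classes; offset-shifted coordinates pull each instance's points
# toward its centroid, separating nearby objects of the same class.
coords = np.random.rand(1000, 3).astype(np.float32)   # stand-in point cloud
offsets = np.zeros_like(coords)                       # stand-in predicted offsets
labels = np.random.randint(0, 3, size=1000)           # stand-in semantic labels
proposals = cluster(coords, labels) + cluster(coords + offsets, labels)
# In the real pipeline, ScoreNet then scores each proposal and NMS removes duplicates.
```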

## Installation

### Requirements
- Python 3.7.0
- Pytorch 1.1.0
- CUDA 9.0

### Virtual Environment

```
conda create -n pointgroup python==3.7
source activate pointgroup
```

@@ -27,66 +32,90 @@
### Install `PointGroup`

(1) Clone the PointGroup repository.

```
git clone https://github.com/llijiang/PointGroup.git --recursive
cd PointGroup
```

(2) Install the dependent libraries.

```
pip install -r requirements.txt
conda install -c bioconda google-sparsehash
```

(3) For the SparseConv, we use the implementation of [spconv](https://github.com/traveller59/spconv). The repository is recursively downloaded at step (1). We use version 1.0 of spconv.

**Note:** We further modify `spconv/spconv/functional.py` to make `grad_output` contiguous. Make sure you use our modified `spconv`.
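
The kind of change this refers to is sketched below as a self-contained toy (the actual class and operator in `spconv` differ; this only illustrates forcing `grad_output` into contiguous memory inside a custom backward):

```python
import torch

class ExampleFunction(torch.autograd.Function):
    """Toy stand-in for an op backed by a native CUDA kernel."""

    @staticmethod
    def forward(ctx, features):
        return features * 2  # placeholder for the real sparse-conv kernel

    @staticmethod
    def backward(ctx, grad_output):
        # Native kernels typically assume contiguous memory, so force
        # grad_output into a contiguous layout before using it.
        grad_output = grad_output.contiguous()
        return grad_output * 2
```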

- To compile `spconv`, first install the dependent libraries.

```bash
conda install libboost
conda install -c daleydeng gcc-5 # need gcc-5.4 for sparseconv
sudo apt-get update && sudo apt-get install -y libboost-all-dev libsparsehash-dev
```

Add the `$INCLUDE_PATH$` that contains `boost` in `lib/spconv/CMakeLists.txt`. (Not necessary if it could be found.)

```
include_directories($INCLUDE_PATH$)
```

- Compile the `spconv` library.

  1. Open `lib/spconv/src/spconv/all.cc` and, on line 20, replace `torch::jit::RegisterOperators` with `torch::RegisterOperators`.
  2. Build the wheel:

     ```bash
     cd lib/spconv
     python setup.py bdist_wheel
     ```

  3. Alternatively, skip steps 1-2 and install the prebuilt wheel directly:

     ```bash
     pip install spconv-1.0-cp39-cp39-linux_x86_64.whl
     ```

- Run `cd dist` and use pip to install the generated `.whl` file.

(4) Compile the `pointgroup_ops` library.

```
cd lib/pointgroup_ops
python setup.py develop
```

If any header files could not be found, run the following commands.

```
python setup.py build_ext --include-dirs=$INCLUDE_PATH$
python setup.py develop
```

`$INCLUDE_PATH$` is the path to the folder containing the header files that could not be found.

## Data Preparation

(1) Download the [ScanNet](http://www.scan-net.org/) v2 dataset. You can use the provided script:

```
python download_scannetv2.py --0 <directory to download data>
```

(2) Put the data in the corresponding folders.

- Copy the files `[scene_id]_vh_clean_2.ply`, `[scene_id]_vh_clean_2.labels.ply`, `[scene_id]_vh_clean_2.0.010000.segs.json` and `[scene_id].aggregation.json` into the `dataset/scannetv2/train` and `dataset/scannetv2/val` folders according to the ScanNet v2 train/val [split](https://github.com/ScanNet/ScanNet/tree/master/Tasks/Benchmark).

- Copy the files `[scene_id]_vh_clean_2.ply` into the `dataset/scannetv2/test` folder according to the ScanNet v2 test [split](https://github.com/ScanNet/ScanNet/tree/master/Tasks/Benchmark).

- Put the file `scannetv2-labels.combined.tsv` in the `dataset/scannetv2` folder.

The dataset files are organized as follows.

```
PointGroup
├── dataset
Expand All @@ -96,11 +125,12 @@ PointGroup
│ │ ├── val
│ │ │ ├── [scene_id]_vh_clean_2.ply & [scene_id]_vh_clean_2.labels.ply & [scene_id]_vh_clean_2.0.010000.segs.json & [scene_id].aggregation.json
│ │ ├── test
│ │ │ ├── [scene_id]_vh_clean_2.ply
│ │ ├── scannetv2-labels.combined.tsv
```
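
Optionally, a small script (not part of the repo) can verify the layout above before preprocessing; the suffixes below are taken from the table:

```python
import glob
import os

REQUIRED_SUFFIXES = [
    "_vh_clean_2.ply",
    "_vh_clean_2.labels.ply",
    "_vh_clean_2.0.010000.segs.json",
    ".aggregation.json",
]

def check_split(split_dir):
    """Report scenes in split_dir that are missing any required file."""
    scene_ids = {os.path.basename(p).split("_vh_clean_2.ply")[0]
                 for p in glob.glob(os.path.join(split_dir, "*_vh_clean_2.ply"))}
    for scene_id in sorted(scene_ids):
        for suffix in REQUIRED_SUFFIXES:
            path = os.path.join(split_dir, scene_id + suffix)
            if not os.path.exists(path):
                print("missing:", path)

for split in ("train", "val"):
    check_split(os.path.join("dataset", "scannetv2", split))
```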

(3) Generate input files `[scene_id]_inst_nostuff.pth` for instance segmentation.

```
cd dataset/scannetv2
python prepare_data_inst.py --data_split train
python prepare_data_inst.py --data_split val
python prepare_data_inst.py --data_split test
```
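
To sanity-check one generated file (the scene id below is an example; the tuple layout is assumed from the preprocessing scripts and may differ in your version):

```python
import torch

# Expect something like (xyz, rgb, semantic_labels, instance_labels).
data = torch.load("dataset/scannetv2/train/scene0000_00_inst_nostuff.pth")
for item in data:
    print(getattr(item, "shape", type(item)))
```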

## Training

```
CUDA_VISIBLE_DEVICES=0 python train.py --config config/pointgroup_run1_scannet.yaml
```

You can start a TensorBoard session by

```
tensorboard --logdir=./exp --port=6666
```

## Inference and Evaluation

(1) If you want to evaluate on the validation set, prepare the `.txt` instance ground-truth files as follows.

```
cd dataset/scannetv2
python prepare_data_inst_gttxt.py
```

Make sure that you have prepared the `[scene_id]_inst_nostuff.pth` files before.

(2) Test and evaluate.

a. To evaluate on the validation set, set `split` and `eval` in the config file to `val` and `True`. Then run
```
CUDA_VISIBLE_DEVICES=0 python test.py --config config/pointgroup_run1_scannet.yaml
```
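
If you would rather not edit the YAML by hand, a small helper like the following can write a modified copy (it assumes `split` and `eval` are top-level keys; the real config may nest them under a section):

```python
import yaml  # requires pyyaml

with open("config/pointgroup_run1_scannet.yaml") as f:
    cfg = yaml.safe_load(f)

# Flip the evaluation fields described above.
cfg["split"] = "val"
cfg["eval"] = True

with open("config/pointgroup_val_eval.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```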

An alternative evaluation method is to set `save_instance` to `True`, and evaluate with the ScanNet official [evaluation script](https://github.com/ScanNet/ScanNet/blob/master/BenchmarkScripts/3d_evaluation/evaluate_semantic_instance.py).

b. To run on the test set, set (`split`, `eval`, `save_instance`) as (`test`, `False`, `True`). Then run

```
CUDA_VISIBLE_DEVICES=0 python test.py --config config/pointgroup_run1_scannet.yaml
```

c. To test with a pretrained model, run

```
CUDA_VISIBLE_DEVICES=0 python test.py --config config/pointgroup_default_scannet.yaml --pretrain $PATH_TO_PRETRAIN_MODEL$
```

## Pretrained Model
We provide a pretrained model trained on the ScanNet v2 dataset. Download it [here](https://drive.google.com/file/d/1wGolvj73i-vNtvsHhg_KXonNH2eB_6-w/view?usp=sharing). Its performance on the ScanNet v2 validation set is 35.2/57.1/71.4 in terms of mAP/mAP50/mAP25.

## Visualize

To visualize the point cloud, first install [mayavi](https://docs.enthought.com/mayavi/mayavi/installation.html). Then you can visualize by running

```
cd util
python visualize.py --data_root $DATA_ROOT$ --result_root $RESULT_ROOT$ --room_name $ROOM_NAME$ --room_split $ROOM_SPLIT$ --task $TASK$
```

The visualization task can be `input`, `instance_gt`, `instance_pred`, `semantic_pred` or `semantic_gt`.

## Results on ScanNet Benchmark

Quantitative results on the ScanNet test set at submission time.
![scannet_result](https://github.com/llijiang/PointGroup/blob/master/doc/scannet_benchmark.png)

## TODO List

- [ ] Distributed multi-GPU training

## Citation

If you find this work useful in your research, please cite:

```
@article{jiang2020pointgroup,
title={PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation},
  author={Jiang, Li and Zhao, Hengshuang and Shi, Shaoshuai and Liu, Shu and Fu, Chi-Wing and Jia, Jiaya},
  journal={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}
```

## Acknowledgement

This repo is built upon several repos, e.g., [SparseConvNet](https://github.com/facebookresearch/SparseConvNet), [spconv](https://github.com/traveller59/spconv) and [ScanNet](https://github.com/ScanNet/ScanNet).

## Contact

If you have any questions or suggestions about this repo, please feel free to contact me ([email protected]).