A simple and concise PyTorch implementation of R-FCN.
This project runs with PyTorch 1.7.
1. Train on voc2007 without OHEM
2. Train on voc07+12 with OHEM
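OHEM (Online Hard Example Mining) keeps only the ROIs with the highest training loss when computing gradients. A minimal, framework-free sketch of the selection step (the function name and top-k parameter are illustrative, not taken from this repo's code):

```python
def ohem_select(roi_losses, num_hard):
    """Online Hard Example Mining: return the indices of the `num_hard`
    ROIs with the highest loss; only these contribute to the gradient."""
    # Rank ROI indices by loss, highest first.
    ranked = sorted(range(len(roi_losses)), key=lambda i: roi_losses[i], reverse=True)
    # Keep the hardest `num_hard` examples (returned in index order).
    return sorted(ranked[:num_hard])
```

In the real training loop, the losses of the selected ROIs are summed (or averaged) and backpropagated, while the remaining easy examples are ignored for that step.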
| | Train on voc2007 | Train on voc07+12 |
|---|---|---|
| From scratch without OHEM | 71.6% | / |
| From scratch with OHEM | 72.5% | 76.8% |
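The percentages above are VOC-style mAP scores, i.e. the mean over classes of the per-class average precision. A minimal sketch of the classic VOC 2007 11-point interpolated AP, computed from recall/precision pairs (this is the standard metric definition, not code from this repo):

```python
def voc07_ap(recalls, precisions):
    """VOC 2007 11-point interpolated average precision.

    For each recall threshold t in {0.0, 0.1, ..., 1.0}, take the maximum
    precision over all points whose recall is >= t, then average the 11 values.
    """
    ap = 0.0
    for t in [i / 10 for i in range(11)]:
        # Best precision achievable at recall >= t (0 if never reached).
        p = max((prec for rec, prec in zip(recalls, precisions) if rec >= t),
                default=0.0)
        ap += p / 11
    return ap
```

mAP is then the mean of this value over the 20 VOC object classes.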
- The results are comparable with those reported in the R-FCN paper.
- Very low CUDA memory usage (about 3 GB for training and 1.7 GB for testing with ResNet101).
- Runs as pure Python code; no compilation or build step is required.
matplotlib==3.2.2
tqdm==4.47.0
numpy==1.18.5
visdom==0.1.8.9
fire==0.3.1
torchnet==0.0.4
opencv_contrib_python==4.5.1.48
scikit_image==0.16.2
torchvision==0.8.1
torch==1.7.0
cupy==8.4.0
Pillow==8.1.0
cd [RFCN-pytorch root_dir]
Train:
python -m visdom.server
python train.py RFCN_train
Open http://localhost:8097/ to view the loss and mAP curves in real time.
Eval:
python train.py RFCN_eval --load_path='checkPoints/rfcn_voc07_0.725_ohem.pth' --test_num=5000
Predict:
Place the images to be predicted in the predict/imgs folder.
Run the following command in a terminal:
python predict.py predict --load_path='checkPoints/rfcn_voc07_0.725_ohem.pth'
You can download the ResNet101 weights and place them in the weights folder.
You can download the pretrained model from Google Drive or Baidu Netdisk (password: 9o15) and place it in the checkPoints folder.
This project was written by elbert-xiao; thanks to chenyuntc for the simple-faster-rcnn-pytorch project!
If you have any questions, please feel free to open an issue.