This repository hosts the source code of our paper: DFSI.
Challenges and Motivation:

The network structure:

News:
- [25/02/2025] 📣 We submitted our paper to Information Fusion!
- [23/02/2025] 📣 We released the code.
Run the following in the root directory of the project (we refer to this directory as $ROOT below):

```shell
pip install -r requirements.txt
```
```
data
├── CUHK-SYSU
├── PRW
```
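One way to produce this layout without copying the datasets is to symlink existing copies into $ROOT/data; a minimal sketch, where the /path/to/... locations are placeholders for wherever you downloaded CUHK-SYSU and PRW:

```shell
# Create the expected data layout by symlinking existing dataset copies.
# The /path/to/... targets are placeholders, not paths from this repository.
mkdir -p data
ln -sfn /path/to/CUHK-SYSU data/CUHK-SYSU
ln -sfn /path/to/PRW data/PRW
```

`ln -sfn` replaces an existing link, so the snippet can be re-run safely after moving the datasets.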
- Following the link in the above table, download our pretrained model to anywhere you like, e.g., $ROOT/exp_cuhk.
Performance profile:
Please see the Demo photo:
Note: At present, our script only supports single-GPU training; distributed training will also be supported in the future. By default, the batch size and the learning rate during training are set to 3 and 0.003 respectively, which requires about 28 GB of GPU memory. If your GPU cannot provide the required memory, try a smaller batch size and learning rate (performance may degrade). Specifically, your setting should follow the Linear Scaling Rule: when the minibatch size is multiplied by k, multiply the learning rate by k. For example:
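The rule above is simple arithmetic; a minimal sketch (the helper function is ours for illustration, not part of this repository):

```python
# Linear Scaling Rule: when the minibatch size is multiplied by k,
# the learning rate is multiplied by k as well.
BASE_BATCH_SIZE = 3   # repository default
BASE_LR = 0.003       # repository default

def scaled_lr(batch_size: int) -> float:
    """Return the learning rate scaled linearly with the batch size."""
    k = batch_size / BASE_BATCH_SIZE
    return BASE_LR * k

print(round(scaled_lr(1), 6))  # batch size 1 -> lr 0.001
print(round(scaled_lr(6), 6))  # batch size 6 -> lr 0.006
```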
CUHK:

```shell
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/cuhk_sysu_resnet.yaml
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/cuhk_sysu_convnext.yaml
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/cuhk_sysu_solider.yaml
```

PRW:

```shell
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/prw_resnet.yaml
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/prw_convnext.yaml
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/prw_solider.yaml
```
If you run out of memory, reduce BATCH_SIZE in the corresponding config file, e.g. in ./configs/cuhk_sysu_convnext.yaml:

```yaml
BATCH_SIZE: 3 # 5
```
Before running, you need to modify the dataset paths in these two files so that they point to the directory where your data is located:

- ./configs/_path_cuhk_sysu.yaml
- ./configs/_path_prw.yaml
Tip: If the training process stops unexpectedly, you can resume from a specified checkpoint:

```shell
python train.py --cfg configs/cuhk_sysu.yaml --resume --ckpt /path/to/your/checkpoint
```
Note: You need to modify the base_dir address in ./configs/_path_solider_weights.yaml so that it points to your SOLIDER weights.
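The original snippet illustrating this edit is missing here; a minimal sketch, assuming base_dir simply holds the directory containing the downloaded SOLIDER weights (the path is a placeholder):

```yaml
base_dir: /path/to/solider_weights
```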
Remember that when you test other code, you still need to set it to 100!

Thanks to the authors of the following repos for their code, which was integral to this project:
Pull requests are welcome! Before submitting a PR, do not forget to run ./dev/linter.sh, which performs syntax checking and code style optimization.
If you find this code useful for your research, please cite our papers:
```bibtex
@article{zhang2024learning,
  title={Learning adaptive shift and task decoupling for discriminative one-step person search},
  author={Zhang, Qixian and Miao, Duoqian and Zhang, Qi and Wang, Changwei and Li, Yanping and Zhang, Hongyun and Zhao, Cairong},
  journal={Knowledge-Based Systems},
  volume={304},
  pages={112483},
  year={2024},
  publisher={Elsevier}
}

@article{zhang2024attentive,
  title={Attentive multi-granularity perception network for person search},
  author={Zhang, Qixian and Wu, Jun and Miao, Duoqian and Zhao, Cairong and Zhang, Qi},
  journal={Information Sciences},
  volume={681},
  pages={121191},
  year={2024},
  publisher={Elsevier}
}

@inproceedings{li2021sequential,
  title={Sequential End-to-end Network for Efficient Person Search},
  author={Li, Zhengjia and Miao, Duoqian},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={3},
  pages={2011--2019},
  year={2021}
}
```
If you have any questions, please feel free to contact us. E-mail: [email protected]