- Install the CUDA Toolkit and cuDNN:
  - https://developer.nvidia.com/cuda-toolkit-archive
  - https://developer.nvidia.com/cudnn
- Install the Anaconda Python package manager:
  - https://www.anaconda.com/products/individual
- Create an environment in Anaconda with Python 3.7.
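  A minimal sketch of this step, assuming an environment named `behavior` (any name works):

  ```bash
  conda create -n behavior python=3.7
  conda activate behavior
  ```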
- Install PyTorch. For a Windows 10 conda environment with Python 3.7 and CUDA 10.2, PyTorch 1.8.1 can be installed as follows; otherwise, download the appropriate PyTorch version from https://pytorch.org.

  ```bash
  conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
  ```

  To check that PyTorch is properly installed, run the following commands in a Python instance. If PyTorch is installed correctly, `torch.cuda.is_available()` should return `True` and `torch.cuda.get_device_name()` should return the name of your GPU card.

  ```python
  import torch
  torch.cuda.is_available()
  torch.cuda.get_device_name()
  ```
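  If the GPU check fails, the following optional sketch (assuming the PyTorch 1.8.1 / CUDA 10.2 build above) prints the installed PyTorch and CUDA versions so you can confirm they match:

  ```python
  import torch

  # Report the installed builds; with the command above these should be
  # 1.8.1 and 10.2 respectively.
  print(torch.__version__)
  print(torch.version.cuda)
  ```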
- Clone this repository:

  ```bash
  git clone https://github.com/cepdnaclk/e15-4yp-human-behavior-prediction-using-cctv.git
  ```
- Download the weight files (YOLO, ReID, OpenPose, Action) from the following link:
  https://drive.google.com/drive/folders/13IzO-skjxj-kRSScbuYjoRYKTW-P3eaB?usp=sharing
  Move the 'ModelFiles' folder into the 'code' folder.
  (If you download the weight files one by one, create the 'ModelFiles' folder inside 'code' and put the weight files in it.)
- Download the demo video files from the following link:
  https://drive.google.com/drive/folders/1bP9OHtpQ9oY0C3mLkViN8HfQ8jQ2DteV?usp=sharing
  Move the 'Demo' folder into the 'code' folder.
  (If you download the demo files one by one, create the 'Demo' folder inside 'code' and put the demo files in it.)
- Download the image database files from the following link:
  https://drive.google.com/drive/folders/1sZnGMVUAc1gHMI94iBfFhcNTxbGgbXKC?usp=sharing
  Move the 'ImageDatabase' folder into the 'code' folder.
  (If you download the files one by one, create the 'ImageDatabase' folder inside 'code' and put the files in it.)
- The final folder structure should be as follows:

  ```
  e15-4yp-human-behavior-prediction-using-cctv
  ├── README.md
  ├── docs
  └── code
      ├── BehaviorExtraction
      ├── BehaviorPrediction
      ├── Demo
      │   ├── CCTV_Low.mp4
      │   └── Entrance_2.mp4
      ├── ImageDatabase
      │   ├── Faces
      │   │   ├── 0000
      │   │   ├── 0001
      │   │   ├── ...
      │   │   └── faces.pt
      │   └── Human
      │       └── 2021-04-15
      │           ├── 0000
      │           ├── 0001
      │           ├── ...
      │           └── today
      ├── ModelFiles
      │   ├── action
      │   │   └── model.pickle
      │   ├── openpose
      │   │   └── openpose_model.pth
      │   ├── reid
      │   │   └── reid_model.pth
      │   └── yolo
      │       ├── yolov3.cfg
      │       └── yolov3.weights
      ├── PersonIdentification
      ├── CCTVObserver.py
      ├── FaceObserver.py
      ├── face.png
      └── face_2.png
  ```
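  Optionally, a short sketch like the following, run from inside the 'code' folder, can confirm that the downloaded weight and demo files ended up in the expected places (paths taken from the structure above):

  ```python
  from pathlib import Path

  # Expected files, as listed in the folder structure above.
  required = [
      "ModelFiles/yolo/yolov3.cfg",
      "ModelFiles/yolo/yolov3.weights",
      "ModelFiles/reid/reid_model.pth",
      "ModelFiles/openpose/openpose_model.pth",
      "ModelFiles/action/model.pickle",
      "Demo/CCTV_Low.mp4",
      "Demo/Entrance_2.mp4",
  ]

  missing = [p for p in required if not Path(p).exists()]
  print("All expected files found." if not missing else f"Missing: {missing}")
  ```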
- cd into the 'e15-4yp-human-behavior-prediction-using-cctv' folder, activate the conda environment, and run the following command:

  ```bash
  pip install -r Requirements.txt
  ```
- Execute the following steps with the command prompt in the 'code' folder and the conda environment active.
  - Generate features for the faces:

    ```bash
    python PersonIdentification/FaceGenerate.py
    ```

    (If you want to try a separate video, create another folder inside ImageDatabase/Faces following the naming convention and put more than 10 face images of the person you want to recognize inside it.)
  - Run the face observer:

    ```bash
    python FaceObserver.py
    ```
  - Generate features for re-identification:

    (If you are running the demo files, copy the folders under the date '2021-04-15' in ImageDatabase/Human and paste them under a folder named after the current date. This folder contains more images, which improves performance considerably; a sketch of this copy step is given below the command.)

    ```bash
    python BehaviorExtraction/HumanGenerate.py
    ```
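    A minimal sketch of that copy step (run from the 'code' folder before HumanGenerate.py; it assumes the script looks for a folder named after the current date, as the note above describes):

    ```python
    import shutil
    from datetime import date
    from pathlib import Path

    # Copy the provided demo images under 2021-04-15 into a folder named
    # after today's date (same YYYY-MM-DD convention as the existing folder).
    src = Path("ImageDatabase/Human/2021-04-15")
    dst = Path("ImageDatabase/Human") / date.today().isoformat()

    if not dst.exists():
        shutil.copytree(src, dst)
        print(f"Copied {src} -> {dst}")
    else:
        print(f"{dst} already exists; nothing copied.")
    ```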
  - Run the human detector:

    ```bash
    python CCTVObserver.py
    ```