(Sample input face images from https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data)
Shape: (1, 1, 64, 64)
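The model takes a single 64x64 grayscale face as an NCHW tensor. A minimal preprocessing sketch (the resize and dtype handling here are illustrative assumptions, not taken from ferplus.py):

```python
import cv2
import numpy as np

def preprocess(image_path):
    # Load the face image as grayscale and resize it to the 64x64 input size.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (64, 64))
    # Add batch and channel axes so the shape becomes (1, 1, 64, 64).
    return img.astype(np.float32)[np.newaxis, np.newaxis, :, :]
```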
- Estimating emotion
### Estimating emotion ###
emotion: happiness
(Sample output images, one per estimated emotion: happiness, surprise, sadness, anger, disgust, fear, contempt)
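The network produces one score per FER+ emotion class, and the printed label is the class with the highest score. A minimal post-processing sketch, assuming the eight FER+ classes in their usual order (the exact output layout of the ONNX model is not verified here):

```python
import numpy as np

# The eight FER+ emotion classes; this ordering is an assumption.
EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]

def decode_emotion(scores):
    # Softmax over the raw scores, then pick the most likely class.
    scores = np.asarray(scores, dtype=np.float64).flatten()
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    idx = int(probs.argmax())
    return EMOTIONS[idx], float(probs[idx])
```

Printing the first element of the returned tuple reproduces output like `emotion: happiness` above.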
The script automatically downloads the onnx and prototxt files on the first run. An internet connection is required while downloading.
For the sample image,
$ python3 ferplus.py
If you want to specify the input image, put the image path after the --input option.
$ python3 ferplus.py --input IMAGE_PATH
If you want to perform face detection in preprocessing, use the --detection option.
$ python3 ferplus.py --input IMAGE_PATH --detection
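As a rough idea of what face-detection preprocessing does, the sketch below crops the largest detected face before classification, using OpenCV's bundled Haar cascade purely as a stand-in for whatever detector ferplus.py actually uses:

```python
import cv2

def crop_largest_face(image_path):
    # Detect faces and return the largest one as a grayscale crop.
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return gray  # fall back to the whole image
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return gray[y:y + h, x:x + w]
```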
By adding the --video option, you can use video as input.
If you pass 0 as VIDEO_PATH, the webcam is used instead of a video file.
You can use the --savepath option to specify the output file to save.
$ python3 ferplus.py --video VIDEO_PATH --savepath SAVE_VIDEO_PATH
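For reference, the video mode boils down to a capture loop like the one below; this is a sketch assuming OpenCV for capture, not the actual ferplus.py implementation:

```python
import cv2
import numpy as np

def run_on_video(source=0):
    # source=0 opens the default webcam; a file path plays a video file.
    capture = cv2.VideoCapture(source)
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        face = cv2.resize(gray, (64, 64)).astype(np.float32)[None, None, :, :]
        # ... run the FER+ model on `face` and overlay the label on `frame` ...
    capture.release()
```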
By adding the --model_name option, you can specify the model name, selected from "majority", "probability", "crossentropy", and "multi_target" (default: majority).
$ python3 ferplus.py --model_name majority
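The four names correspond to the FER+ training schemes. Judging from the prototxt files listed below, the downloaded weights presumably follow the same naming; the mapping here is an assumption, not taken from ferplus.py:

```python
# Assumed mapping from --model_name to the downloaded model files.
MODEL_FILES = {
    name: (f"VGG13_{name}.onnx", f"VGG13_{name}.onnx.prototxt")
    for name in ("majority", "probability", "crossentropy", "multi_target")
}
```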
Framework: MS Cognitive Toolkit
ONNX opset = 9
Model files:
- VGG13_majority.onnx.prototxt
- VGG13_probability.onnx.prototxt
- VGG13_crossentropy.onnx.prototxt
- VGG13_multi_target.onnx.prototxt