This project implements a facial expression detection system using machine learning. It identifies facial expressions in images and in real-time video streams.
- Detect faces in images and video streams
- Classify facial expressions into categories (e.g., happy, sad, angry, surprised)
- Real-time processing capability
- Easy-to-use command-line interface
- Pre-trained model included
- Clone this repository:

  ```bash
  git clone https://github.com/harsh6045/facial-emotion-detection.git
  cd facial-emotion-detection
  ```

- Create a virtual environment (optional but recommended):

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- To detect expressions in an image:

  ```bash
  python realtimedetection.py --image path/to/your/image.jpg
  ```

- To run real-time detection using your webcam:

  ```bash
  python realtimedetection.py --webcam
  ```

- For additional options:

  ```bash
  python realtimedetection.py --help
  ```
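The command-line interface above could be wired up with `argparse` roughly as follows. This is a hypothetical reconstruction based only on the flags shown; the actual `realtimedetection.py` may define its options differently:

```python
import argparse

def build_parser():
    # Sketch of the CLI shown above; the real realtimedetection.py
    # may differ in flag names and behavior.
    parser = argparse.ArgumentParser(
        description="Detect facial expressions in an image or a webcam stream."
    )
    # Exactly one input source must be chosen.
    source = parser.add_mutually_exclusive_group(required=True)
    source.add_argument("--image", metavar="PATH",
                        help="path to an image file to analyse")
    source.add_argument("--webcam", action="store_true",
                        help="run real-time detection on the default webcam")
    return parser
```

With a parser like this, `--help` is generated automatically by `argparse`.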
We used the FER2013 dataset for training and evaluation. This dataset contains 48x48 pixel grayscale images of faces, categorized into 7 emotions:
- Angry
- Disgust
- Fear
- Happy
- Sad
- Surprise
- Neutral
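In the standard `fer2013.csv` release, each face is stored as a single string of 2304 space-separated grayscale values (48x48, row-major), with the emotion label given as an index in the order listed above. A minimal helper to decode one row, written in plain Python for illustration:

```python
# Label order used by FER2013 (index 0..6), matching the list above.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def decode_pixels(pixel_string, size=48):
    # Each FER2013 row stores the face as size*size space-separated
    # grayscale values in row-major order.
    values = [int(v) for v in pixel_string.split()]
    if len(values) != size * size:
        raise ValueError(f"expected {size * size} pixels, got {len(values)}")
    # Reshape the flat list into a size x size grid (list of rows).
    return [values[r * size:(r + 1) * size] for r in range(size)]
```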
To train the model on your own dataset or fine-tune the existing model:
- Prepare your dataset (48x48 grayscale face images labeled with one of the seven emotion categories, as in FER2013)
- Run the training script
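If your dataset follows the FER2013 CSV layout, each row already carries a `Usage` tag (`Training`, `PublicTest`, or `PrivateTest`), which gives you the train/validation/test split for free. A sketch of the split step, assuming that layout:

```python
import csv
import io

def split_by_usage(csv_text):
    # FER2013 tags each row as Training, PublicTest (often used for
    # validation), or PrivateTest (final test set).
    splits = {"Training": [], "PublicTest": [], "PrivateTest": []}
    for row in csv.DictReader(io.StringIO(csv_text)):
        splits[row["Usage"]].append((int(row["emotion"]), row["pixels"]))
    return splits
```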
Model performance metrics on the test set:
- Accuracy: 52.2%
- F1-Score: 0.63
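For reference, accuracy and macro-averaged F1 (one plausible reading of the F1 figure above; the averaging method actually used is not stated) can be computed from predictions like this:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the ground-truth label.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, labels):
    # Per-class F1, averaged with equal weight per class.
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)
```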
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.