The goal of this project is to generate an image or video of a person as they would look at a different point in time (age progression or regression). This can be used for purposes such as forensic investigations, personal curiosity, or creating visual effects in film and TV.
- Python 3.7.9
- TensorFlow v2.11
- PyTorch 1.0
UTKFace (access via the GitHub link)
You may use any dataset with age and gender labels. In this demo, we use the UTKFace dataset; aligned and cropped faces work best. Please download UTKFace.tar.gz and unzip it into the `data` folder.
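UTKFace encodes its labels in each filename as `[age]_[gender]_[race]_[date&time].jpg`, so no separate label file is needed. Below is a minimal loading sketch; the directory path and target image size are assumptions, not fixed by this project:

```python
import os
import numpy as np
from PIL import Image

def load_utkface(data_dir="data/UTKFace", size=(128, 128)):
    """Load UTKFace images, parsing age/gender labels from filenames."""
    images, ages, genders = [], [], []
    for fname in os.listdir(data_dir):
        parts = fname.split("_")
        if len(parts) < 4:  # skip files that don't follow the naming scheme
            continue
        age, gender = int(parts[0]), int(parts[1])  # gender: 0 = male, 1 = female
        img = Image.open(os.path.join(data_dir, fname)).convert("RGB").resize(size)
        images.append(np.asarray(img, dtype=np.float32))
        ages.append(age)
        genders.append(gender)
    return np.stack(images), np.array(ages), np.array(genders)
```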
- Resizing & Normalization - We resize the images to a uniform size and normalize the pixel values to a range of between o and 1 to help converge the model better
- Data Augmentation - We generate additional training data by applying random transformations to the original images (rotation range, zoom range, horizontal flip, fill mode) so the model generalizes better.
- Noise Reduction - Smoothing out noise makes the underlying patterns easier for the model to learn and helps reduce overfitting.
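A sketch of these three steps using OpenCV and Keras utilities. The specific parameter values, and the use of a Gaussian blur for noise reduction, are assumptions; the `ImageDataGenerator` arguments mirror the transformations listed above:

```python
import cv2
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def preprocess(img, size=(128, 128)):
    """Resize, denoise, and normalize a single image to [0, 1]."""
    img = cv2.resize(img, size)             # uniform size
    img = cv2.GaussianBlur(img, (3, 3), 0)  # light noise reduction (assumed method)
    return img.astype(np.float32) / 255.0   # normalize pixel values to [0, 1]

# Augmentation: random transforms applied on the fly during training
augmenter = ImageDataGenerator(
    rotation_range=20,     # random rotation in degrees
    zoom_range=0.15,       # random zoom
    horizontal_flip=True,  # random left-right flip
    fill_mode="nearest",   # how to fill pixels exposed by the transforms
)
```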
Based on our visual evaluation criteria, the StyleGAN model performed best, so we deployed it using Streamlit. Users upload a photo and move a slider to choose the age they want to view; the model then renders the face aged or rejuvenated accordingly. This is the link to the deployed model.
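A minimal sketch of such a Streamlit front end. Here `generate_aged_face` is a hypothetical wrapper standing in for the deployed StyleGAN inference call; the real entry point depends on how the model was exported:

```python
import streamlit as st
from PIL import Image

# Hypothetical helper wrapping the trained StyleGAN generator;
# the actual inference call depends on the exported model.
from model import generate_aged_face

st.title("Face Age Progression / Regression")

uploaded = st.file_uploader("Upload a face photo", type=["jpg", "jpeg", "png"])
target_age = st.slider("Target age", min_value=1, max_value=100, value=30)

if uploaded is not None:
    source = Image.open(uploaded).convert("RGB")
    st.image(source, caption="Original")
    result = generate_aged_face(source, target_age)  # returns a PIL image
    st.image(result, caption=f"At age {target_age}")
```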
- Zhifei Zhang, Yang Song, and Hairong Qi. "Age Progression/Regression by Conditional Adversarial Autoencoder." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
- A. Elmahmudi and H. Ugail. "A Framework for Facial Age Progression and Regression Using Exemplar Face Templates." The Visual Computer 37, 2023–2038, 2021.
- Qi Li, Yunfan Liu, and Zhenan Sun. "Age Progression and Regression with Spatial Attention Modules." AAAI, 2020.
- J. Liu, R. Liu, H. Li, and S. Liu. "Face Aging GAN: Age Progression/Regression of Face Images with Identity Preserved." arXiv preprint arXiv:2102.02754, 2021.
GNU General Public License v3.0