A real-time home mapping app for blind people.
Case: If something falls on the ground, they can’t find it easily. They may also forget where they left their belongings (their keys, for example).
A custom image classification model trained with AutoML Vision Edge will identify most of the objects commonly found in a house.
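As a rough illustration of what happens after the classifier runs, the sketch below picks the most confident labels from a model's output scores. The label names, scores, and threshold are made up for the example; the real label map would come from the trained AutoML model.

```java
import java.util.*;

public class TopLabels {
    // Return the k labels with the highest confidence scores,
    // keeping only those at or above a minimum threshold.
    static List<String> topK(String[] labels, float[] scores, int k, float minScore) {
        Integer[] idx = new Integer[scores.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort indices by descending confidence.
        Arrays.sort(idx, (a, b) -> Float.compare(scores[b], scores[a]));
        List<String> out = new ArrayList<>();
        for (int i = 0; i < idx.length && out.size() < k; i++) {
            if (scores[idx[i]] >= minScore) out.add(labels[idx[i]]);
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical labels and scores, not from the actual model.
        String[] labels = {"keys", "mug", "remote", "phone"};
        float[] scores  = {0.72f, 0.05f, 0.61f, 0.10f};
        System.out.println(topK(labels, scores, 2, 0.5f)); // [keys, remote]
    }
}
```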
Case: When a visually impaired person enters a room, they don’t know who is inside and have to wait for the other people to greet them and identify themselves, which is really frustrating.
A sighted user provides photos to the app, along with a name to associate with each object/face, and a deep learning system will identify those faces each time they appear again.
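A minimal sketch of the matching step this implies, assuming the deep learning system reduces each face photo to an embedding vector (as models like FaceNet do) and identification is nearest-neighbor search over the enrolled embeddings. The names, vectors, and similarity threshold below are illustrative only.

```java
import java.util.*;

public class FaceMatcher {
    // Enrolled people: name -> face embedding from a sighted user's photo.
    private final Map<String, float[]> known = new HashMap<>();

    void enroll(String name, float[] embedding) { known.put(name, embedding); }

    // Cosine similarity between two embedding vectors.
    static float cosine(float[] a, float[] b) {
        float dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
        }
        return dot / (float) (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the enrolled name most similar to the query embedding,
    // or null if nothing clears the threshold (unknown person).
    String identify(float[] query, float threshold) {
        String best = null;
        float bestSim = threshold;
        for (Map.Entry<String, float[]> e : known.entrySet()) {
            float sim = cosine(query, e.getValue());
            if (sim >= bestSim) { bestSim = sim; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        FaceMatcher m = new FaceMatcher();
        // Toy 2-D embeddings; real ones would have 128+ dimensions.
        m.enroll("Alice", new float[]{1f, 0f});
        m.enroll("Bob", new float[]{0f, 1f});
        System.out.println(m.identify(new float[]{0.9f, 0.1f}, 0.8f)); // Alice
    }
}
```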
The app will be invoked via speech through Google Assistant to make its functionality easier for blind users to access.
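One way Assistant can launch an app feature by voice is Google's App Actions. The fragment below is purely illustrative: it uses the real built-in intent `actions.intent.OPEN_APP_FEATURE`, but the `homenav://` deep-link scheme and parameter names are hypothetical and not from this repo.

```xml
<!-- actions.xml (illustrative sketch, not the project's actual config) -->
<actions>
    <action intentName="actions.intent.OPEN_APP_FEATURE">
        <fulfillment urlTemplate="homenav://open{?feature}">
            <parameter-mapping
                intentParameter="feature"
                urlParameter="feature" />
        </fulfillment>
    </action>
</actions>
```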
We are currently in the development phase. So far we have implemented object recognition in the house: the app maps objects in real time and speaks about them. The model used is for testing purposes only.
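Since the model runs on a live camera feed, the same object would otherwise be spoken on every frame. The sketch below shows one way to debounce announcements; the cooldown value and labels are assumptions, and on Android each returned label would be handed to `TextToSpeech.speak()`.

```java
import java.util.*;

public class Announcer {
    // Remember when each object was last announced so we don't
    // repeat the same label on every camera frame.
    private final Map<String, Long> lastSpoken = new HashMap<>();
    private final long cooldownMs;

    Announcer(long cooldownMs) { this.cooldownMs = cooldownMs; }

    // Return the labels that should be spoken now; labels seen again
    // within the cooldown window are filtered out.
    List<String> filter(List<String> detected, long nowMs) {
        List<String> toSpeak = new ArrayList<>();
        for (String label : detected) {
            Long last = lastSpoken.get(label);
            if (last == null || nowMs - last >= cooldownMs) {
                lastSpoken.put(label, nowMs);
                toSpeak.add(label);
            }
        }
        return toSpeak;
    }

    public static void main(String[] args) {
        Announcer a = new Announcer(5000); // 5s cooldown, an assumed value
        System.out.println(a.filter(Arrays.asList("keys", "mug"), 0));  // [keys, mug]
        System.out.println(a.filter(Arrays.asList("keys"), 1000));      // []
        System.out.println(a.filter(Arrays.asList("keys"), 6000));      // [keys]
    }
}
```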
- Clone/download this repo
- Open the App directory
- Or alternatively see this link (https://github.com/brianzhou139/HomeNavigation)
- Implement object recognition using AutoML Vision Edge
- Create UI/UX prototype.
- Soft release for friends and family.
- Enhance recognition flow performance and accuracy
- Implement Face Identification
- Finish Android Native User Interface.
- Support a hybrid version for iOS.
- Testing and Deployment
- Soft releases in local communities, followed by patches
- Release at Google I/O
- Help us squeeze every bit of performance out of on-device object detection
- Access to Cloud TPUs (https://cloud.google.com/tpu/) to train models
- Help us build an ML pipeline for training the person recognition model
My name is Brian Zhou. I'm a passionate Android developer and I love coding challenges. I recently participated in the IEEEmadC Mobile App Development Contest, and this was my submission: https://ieeemadc.github.io/IEEEmadC-wiki/BookShoot. I have worked on various Android projects (https://github.com/brianzhou139).