- Monitoring
- Automation
- Analytics
The architecture diagram is available in the /architecture folder.
- Raspberry Pi (e.g., https://www.amazon.com/s?k=raspberry+pi)
- PIR motion sensor (e.g., https://www.amazon.com/Aukru-Pyroelectricity-Raspberry-Microcontrollers-Electronic/dp/B019SX734A)
- Raspberry Pi Camera Module (e.g., https://www.amazon.com/Raspberry-Pi-Camera-Module-Megapixel/dp/B01ER2SKFS)
- USB Microphone
- External Speaker
To run the workflow with real devices, set up your Raspberry Pi with these devices:
- Attach the motion sensor to the Raspberry Pi.
- Attach the camera by following the instructions in the Raspberry Pi camera board documentation.
- Download and install the drivers from /device-drivers/ThingsGraphPrototypeDevices/, following the instructions in the README.
- Download and run the Play Music Python app, following the instructions in its README.
Now you've configured your Raspberry Pi for the smart assistant.
Open the AWS IoT console and create two things: one thing for your motion sensor and the other for the camera.
For instructions on how to create things in the registry, see https://docs.aws.amazon.com/iot/latest/developerguide/what-is-aws-iot.html. Be sure to create and activate certificates for each thing.
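As a sketch, the two things and their certificates can also be created from the AWS CLI. The thing names and the certificate ARN below are placeholders, not values required by this project:

```shell
# Create one thing for the motion sensor and one for the camera (names are examples).
aws iot create-thing --thing-name motionSensor1
aws iot create-thing --thing-name camera1

# Create and activate a certificate, saving the key material locally.
aws iot create-keys-and-certificate --set-as-active \
    --certificate-pem-outfile motionSensor1.cert.pem \
    --public-key-outfile motionSensor1.public.key \
    --private-key-outfile motionSensor1.private.key

# Attach the certificate to the thing, using the certificateArn
# returned by the previous command (placeholder shown here).
aws iot attach-thing-principal --thing-name motionSensor1 \
    --principal arn:aws:iot:us-east-1:123456789012:cert/EXAMPLE
```

Repeat the certificate steps for the camera thing.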
The flow for this project uses five custom device models to call the Lambda functions. The TDM definitions for the device models are under iot/tg-models. Replace the Region and account ID in all files with your own values.
The AWS IoT Things Graph Data Model (TDM) code that contains the flow definition is under iot/tg-flow. Replace the Region and account ID with appropriate values.
Follow these instructions to create and deploy the flow through the AWS CLI: https://docs.aws.amazon.com/thingsgraph/latest/ug/iot-tg-workflows-gs-cloud-cli.html
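Assuming the TDM files follow this repo's layout (the .gql file names and the instance ID below are placeholders), the upload-and-deploy sequence looks roughly like this:

```shell
# Upload the custom device model definitions (repeat per model file).
aws iotthingsgraph upload-entity-definitions \
    --document language=GRAPHQL,text="$(cat iot/tg-models/motion-sensor.gql)"

# Create the flow template from the TDM flow definition.
aws iotthingsgraph create-flow-template \
    --definition language=GRAPHQL,text="$(cat iot/tg-flow/flow.gql)"

# Create a cloud-hosted system instance that runs the flow, then deploy it.
aws iotthingsgraph create-system-instance --target CLOUD \
    --definition language=GRAPHQL,text="$(cat iot/tg-flow/system-instance.gql)"
aws iotthingsgraph deploy-system-instance --id <system-instance-id>
```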
Amazon S3 is used to store the video recordings of the baby's sleep events. After creating the S3 bucket, note its name; you will provide it as an argument when running the device driver on the Raspberry Pi.
Here are the instructions: https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html
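Alternatively, a one-line sketch from the CLI (the bucket name is an example; S3 bucket names must be globally unique):

```shell
aws s3 mb s3://my-sleep-assistant-recordings --region us-east-1
```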
Amazon Timestream is used to store the baby's sleep events. This project requires a database named "eventsDB" and a table named "sleepEvents".
The AWS CLI commands for database and table creation are in database/create-timestream-db-cli.txt.
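The creation commands look roughly like the following sketch (the retention periods here are example values, not this project's required settings; defer to database/create-timestream-db-cli.txt):

```shell
aws timestream-write create-database --database-name eventsDB

aws timestream-write create-table \
    --database-name eventsDB \
    --table-name sleepEvents \
    --retention-properties MemoryStoreRetentionPeriodInHours=24,MagneticStoreRetentionPeriodInDays=365
```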
This project creates 3 DynamoDB tables:
- one for tracking the sleep event escalations/alerts
- one for showing event history with video recordings
- one for storing the sleep assistant configurations
The AWS CLI command for the first table is in database/create-dynamodb-table-cli.txt.
The other two are created through AWS Amplify.
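As an illustrative sketch, the escalations table might be created like this. The table name, key name, and billing mode are assumptions; the authoritative definition is in database/create-dynamodb-table-cli.txt:

```shell
# Single-key table with on-demand capacity (assumed schema, for illustration only).
aws dynamodb create-table \
    --table-name sleepEventEscalations \
    --attribute-definitions AttributeName=eventId,AttributeType=S \
    --key-schema AttributeName=eventId,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST
```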
This project uses 7 Lambda functions to perform API-based actions and interact with Step Functions. All the Lambda functions are under the lambda-functions/ folder.
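A hedged sketch of deploying one of the functions from the CLI; the folder name, function name, handler, runtime, and role ARN are all placeholders:

```shell
# Package one function (assumes a single-file handler named lambda_function.py).
cd lambda-functions/<function-folder>
zip function.zip lambda_function.py

# Create the function; the execution role must allow Lambda to write logs
# and access the services the function touches (DynamoDB, S3, etc.).
aws lambda create-function \
    --function-name sleep-assistant-<function-name> \
    --runtime python3.9 \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/<lambda-execution-role>
```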
AWS Step Functions is used to stop the sleep assistant after a configurable period by tracking the session and invoking the relevant Lambda functions.
The Step Functions state machine definition is in /step-functions.
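The state machine can be registered from the CLI roughly as follows; the state machine name, definition file name, and role ARN are placeholders, and the role needs permission to invoke the target Lambda functions:

```shell
aws stepfunctions create-state-machine \
    --name sleep-assistant-timer \
    --definition file://step-functions/state-machine.json \
    --role-arn arn:aws:iam::123456789012:role/<step-functions-role>
```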
The cry sound detection is done with a custom audio classification model built with the Amazon SageMaker PyTorch framework.
Details of the model are here: https://github.com/aws-samples/amazon-sagemaker-audio-classification-pytorch
AWS Amplify can be used to quickly build and deploy secure, scalable, full-stack web and mobile applications. For this prototype, I used AWS Amplify to build an app with React.js as the front end, AWS AppSync (a fully managed GraphQL service) as the backend, and DynamoDB as the database.
The app has 2 features: one for setting the assistant configuration and one for viewing the sleep history.
The code is in /mobile-app.
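Assuming the Amplify CLI is installed and configured, the typical command sequence for an app like this is sketched below; the exact prompt answers depend on the project, but they follow the stack described above:

```shell
# Run from the app folder.
amplify init        # initialize the Amplify project
amplify add api     # add a GraphQL (AppSync) API backed by DynamoDB tables
amplify push        # provision the backend resources in your account
amplify publish     # build and deploy the React front end
```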
For this prototype, Amazon QuickSight is used to generate reports and dashboards on the baby's sleep patterns. QuickSight can pull data directly from the Amazon Timestream database.
- Documentation on how to connect QuickSight to Timestream: https://docs.aws.amazon.com/timestream/latest/developerguide/Quicksight.html
- Create an analysis and dashboard with QuickSight: https://docs.aws.amazon.com/quicksight/latest/user/example-analysis.html