This project builds a schema for analyzing the song-play dataset from the startup Sparkify. We build an ETL pipeline that extracts the datasets from S3, stages them in Redshift, and transforms them into a set of dimensional tables for the analytics team to continue their analysis.
We work with two datasets that reside in S3, at the following links:
1. Song Data: s3://udacity-dend/song_data
2. Log Data: s3://udacity-dend/log_data

The JSON path file for the log data is s3://udacity-dend/log_json_path.json.
The song dataset is a subset of real data from the Million Song Dataset. Each file is in JSON format and contains metadata about a song and the artist of that song.
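For illustration, a single song file parses into a record like the sketch below. The field names follow the Million Song Dataset subset format used by this project, but every value shown is invented:

```python
import json

# Hypothetical song record: field names match the song_data files,
# but all values below are made up for illustration.
song_file_contents = """{"num_songs": 1, "artist_id": "ARXXXXX1187B9XXXXX",
 "artist_latitude": null, "artist_longitude": null, "artist_location": "",
 "artist_name": "Example Artist", "song_id": "SOXXXXX12AB018XXXX",
 "title": "Example Song", "duration": 152.3, "year": 2004}"""

song = json.loads(song_file_contents)
print(song["song_id"], song["title"])
```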
The log dataset consists of log files in JSON format generated by an event simulator based on the songs in the first dataset.
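Each line of a log file is one JSON event. A hypothetical event might look like the following (field names follow the simulator's output; values are invented); note the 'page' field, which the fact-table filter described below keys on:

```python
import json

# Hypothetical log event: all values are made up for illustration.
event = json.loads("""{"artist": "Example Artist", "auth": "Logged In",
 "firstName": "Jane", "gender": "F", "itemInSession": 0, "lastName": "Doe",
 "length": 152.3, "level": "free", "location": "Example City",
 "method": "PUT", "page": "NextSong", "registration": 1541000000000.0,
 "sessionId": 100, "song": "Example Song", "status": 200,
 "ts": 1541105830796, "userAgent": "Mozilla/5.0", "userId": "42"}""")

print(event["page"], event["song"])
```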
On AWS Redshift, we use a star schema for the database model. A star schema consists of one fact table and multiple dimension tables (four dimension tables in this project).
The fact table, SONGPLAY, is built from the log data. We keep only the records whose 'page' value matches 'NextSong', because only those events represent actual song plays.
The dimension tables are USERS, ARTISTS, SONG, and TIMES; they all connect to the SONGPLAY fact table.
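As a rough illustration of the schema, the DDL might look like the sketch below, written as query strings the way etl.py would consume them. Column names and types are assumptions inferred from the dataset fields above, not the project's exact definitions:

```python
# Sketch of the star-schema DDL (Redshift dialect). Table names follow the
# description above; column names/types are assumptions, not the exact code.
SONGPLAY_TABLE_CREATE = """
CREATE TABLE IF NOT EXISTS songplay (
    songplay_id INT IDENTITY(0,1) PRIMARY KEY,  -- surrogate key
    start_time  TIMESTAMP NOT NULL,             -- references TIMES
    user_id     INT       NOT NULL,             -- references USERS
    level       VARCHAR,
    song_id     VARCHAR,                        -- references SONG
    artist_id   VARCHAR,                        -- references ARTISTS
    session_id  INT,
    location    VARCHAR,
    user_agent  VARCHAR
);
"""

USERS_TABLE_CREATE = """
CREATE TABLE IF NOT EXISTS users (
    user_id    INT PRIMARY KEY,
    first_name VARCHAR,
    last_name  VARCHAR,
    gender     VARCHAR,
    level      VARCHAR
);
"""
# SONG, ARTISTS, and TIMES follow the same pattern.
```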
Data is extracted from both datasets and loaded into the STAGE_EVENTS and STAGE_SONGS tables using the load_staging_tables function in etl.py.
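A minimal sketch of that step, assuming the usual pattern of looping over Redshift COPY statements (the IAM role ARN is a placeholder, and the exact statements live in the project code):

```python
# Hypothetical COPY statement for the events staging table; the stage_songs
# COPY is analogous but would use JSON 'auto' since it needs no jsonpaths file.
STAGE_EVENTS_COPY = """
COPY stage_events
FROM 's3://udacity-dend/log_data'
IAM_ROLE '<your-redshift-iam-role-arn>'
JSON 's3://udacity-dend/log_json_path.json'
REGION 'us-west-2';
"""

def load_staging_tables(cur, conn):
    """Extract the raw JSON from S3 into the staging tables via COPY."""
    for query in [STAGE_EVENTS_COPY]:  # the stage_songs COPY would join this list
        cur.execute(query)
        conn.commit()
```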
After the staging tables are loaded, data is inserted into the fact and dimension tables using the insert_tables function in etl.py.
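A sketch of that step, again assuming a loop over INSERT ... SELECT statements. The songplay query below shows where the 'NextSong' filter from the fact-table description is applied; column names are assumptions based on the datasets above:

```python
# Hypothetical fact-table load: join the two staging tables, keep only
# NextSong events, and convert the epoch-millisecond ts to a TIMESTAMP.
SONGPLAY_TABLE_INSERT = """
INSERT INTO songplay (start_time, user_id, level, song_id, artist_id,
                      session_id, location, user_agent)
SELECT TIMESTAMP 'epoch' + e.ts / 1000 * INTERVAL '1 second',
       e.userId::INT, e.level, s.song_id, s.artist_id,
       e.sessionId, e.location, e.userAgent
FROM stage_events e
LEFT JOIN stage_songs s
       ON e.song = s.title AND e.artist = s.artist_name
WHERE e.page = 'NextSong';
"""

def insert_tables(cur, conn):
    """Populate the fact and dimension tables from the staging tables."""
    for query in [SONGPLAY_TABLE_INSERT]:  # plus the four dimension inserts
        cur.execute(query)
        conn.commit()
```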