SDC-Batman/atelier-reviews-api

Atelier Reviews Service

The Atelier Reviews service supports the Reviews section of the Atelier product page.

Summary

Atelier's reviews service is powered by four Amazon Web Services (AWS) EC2 instances:

  • Nginx Load Balancing Server
  • Host Server 1
  • Host Server 2
  • Mongo Database Server

The service supports 500 client requests per second with an average response time of 4 to 10 milliseconds.

Why MongoDB?

Both Mongo (NoSQL) and Postgres (SQL) databases were considered for the service before implementation.

MongoDB was selected over Postgres because the service required the following:

  • complex, nested data types in responses (e.g., JSON arrays and objects)

  • aggregations including counts and averages of ratings, recommendations, and characteristics

MongoDB handles both requirements natively: documents store nested arrays and objects directly, and the aggregation framework computes counts and averages in a single query, enabling fast, simple reads.

Schema Design

There are two primary collections in the MongoDB database that powers the service:

  • reviews
  • reviews_meta

reviews_meta contains metadata including counts and averages of ratings, characteristics, and recommendations from the reviews collection.

The reviews collection schema includes two fields, characteristics and photos, that are embedded documents for improved performance and efficiency.
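To make the embedded-document design concrete, here is a sketch of what a reviews document might look like. Only the characteristics and photos fields are described above; every other field name and value is an illustrative assumption, not taken from the repo.

```javascript
// Illustrative shape of a document in the reviews collection.
// Field names other than characteristics and photos are assumptions.
const sampleReview = {
  product_id: 40344,
  rating: 5,
  summary: "Great fit",
  body: "True to size and very comfortable.",
  recommend: true,
  date: new Date("2024-01-15"),
  // Embedded documents: the full review can be returned in one read,
  // with no join or second query needed to attach these.
  characteristics: [
    { name: "Fit", value: 4 },
    { name: "Comfort", value: 5 },
  ],
  photos: [{ id: 1, url: "https://example.com/photo1.jpg" }],
};

console.log(sampleReview.characteristics.length); // 2
```

Embedding trades some write-side duplication for read performance: a single document fetch assembles the entire nested response.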

The reviews_meta collection is created via MongoDB aggregation pipelines after the reviews collection has been created.
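A pipeline of roughly this shape could produce reviews_meta; the field names (product_id, rating, recommend) are assumptions, and the plain-JS function below just mimics the $group stage's logic for illustration — it is not the service's code.

```javascript
// Sketch of an aggregation pipeline that builds reviews_meta from reviews.
// $group computes per-product counts and averages; $merge writes the
// results into the reviews_meta collection.
const metaPipeline = [
  {
    $group: {
      _id: "$product_id",
      review_count: { $sum: 1 },
      avg_rating: { $avg: "$rating" },
      recommend_count: { $sum: { $cond: ["$recommend", 1, 0] } },
    },
  },
  { $merge: { into: "reviews_meta" } },
];
// Usage (Node mongodb driver, hypothetical):
//   await db.collection("reviews").aggregate(metaPipeline).toArray();

// The same grouping logic in plain JS, for illustration only:
function buildMeta(reviews) {
  const meta = {};
  for (const r of reviews) {
    const m = (meta[r.product_id] ??= {
      review_count: 0,
      rating_sum: 0,
      recommend_count: 0,
    });
    m.review_count += 1;
    m.rating_sum += r.rating;
    if (r.recommend) m.recommend_count += 1;
  }
  for (const m of Object.values(meta)) {
    m.avg_rating = m.rating_sum / m.review_count;
    delete m.rating_sum;
  }
  return meta;
}

const meta = buildMeta([
  { product_id: 1, rating: 5, recommend: true },
  { product_id: 1, rating: 3, recommend: false },
]);
console.log(meta[1].avg_rating); // 4
```

Precomputing these aggregates into reviews_meta means the metadata endpoint can serve a single document lookup instead of re-aggregating on every request.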

Performance Optimization & Tuning

Local Testing:

Before adding any indexes to the reviews collection, individual testing in Postman showed response times in the 2000 to 3000 millisecond range.

Local, randomized load testing with K6 of the product_id parameter at the reviews endpoint showed an average response time of 3.8 seconds at 10 client requests per second (rps). At 100 rps, average response time increased to 31 seconds.
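A randomized k6 test of this kind might look like the sketch below. The endpoint path, port, and product-id range are assumptions; the k6-specific parts are shown in comments (they require the k6 runtime, run via `k6 run script.js`), while the id-randomizing helper is plain JavaScript.

```javascript
// Hypothetical k6 load-test sketch; endpoint and id range are assumptions.
//
//   import http from 'k6/http';
//   export const options = { vus: 10, duration: '30s' };
//   export default function () {
//     const id = randomProductId(1, 1000000);
//     http.get(`http://localhost:3000/reviews?product_id=${id}`);
//   }

// Pure helper: uniform random integer in [min, max], so each virtual
// user hits a different product_id and cache effects are minimized.
function randomProductId(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

const id = randomProductId(1, 1000000);
console.log(id >= 1 && id <= 1000000); // true
```

Randomizing product_id matters here: hitting one hot id would measure cache performance rather than the collection scan the index is meant to eliminate.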

After indexing on product_id, average response time at 100 rps fell to just 8 milliseconds, a 99.8% reduction in average response time!

With indexing, K6 testing showed that the service could handle 1,000 rps with an average response time of just 50 milliseconds (ms)!


Deployment:

After deploying the database and server to AWS EC2 instances, stress testing with loader.io demonstrated that the service could handle throughput of 400 rps with 31 ms average response time.

Load Balancing:
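The Nginx server distributes traffic across the two host servers. A minimal upstream configuration for this topology could look like the sketch below; the IP addresses and port are placeholders, not the deployment's real values.

```nginx
# Hypothetical nginx config: placeholder addresses, not the real deployment.
upstream reviews_api {
    server 10.0.0.1:3000;  # Host Server 1
    server 10.0.0.2:3000;  # Host Server 2
}

server {
    listen 80;

    location / {
        proxy_pass http://reviews_api;
    }
}
```

With no balancing method specified, nginx defaults to round-robin across the upstream servers.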

Further Optimizations:

  • after load balancer

    • 1 server

    • 2 servers

    • 3 servers

  • after additional indexing

Further Documentation
