Udagram is a simple cloud application developed alongside the Udacity Cloud Engineering Nanodegree. The app allows users to register and log into a web client, post photos to the feed, and process photos using an image-filtering microservice.
The project is composed of three pieces:
- The Simple Frontend, an Ionic client web application which consumes the RestAPI Backend.
- The RestAPI Feed Backend, a Node-Express feed microservice.
- The RestAPI User Backend, a Node-Express user microservice.
The aim of the exercise is to build, test, and deploy the microservices using container technology and a CI/CD workflow.
Ensure relevant dependencies are in place.
docker --version
aws --version
eksctl version
kubectl version --short --client
aws-iam-authenticator version
Export the following environment variables, replacing the placeholder values with your own.
export POSTGRESS_USERNAME=your postgress username;
export POSTGRESS_PASSWORD=your postgress password;
export POSTGRESS_DB=your postgress database;
export POSTGRESS_HOST=your postgress host;
export AWS_REGION=your aws region;
export AWS_PROFILE=your aws profile;
export AWS_BUCKET=your aws bucket name;
export JWT_SECRET=your jwt secret;
Build local docker images for the project.
docker-compose -f ./udacity-c3-deployment/docker/docker-compose-build.yaml build --parallel
Spin up the docker containers with the following command.
docker-compose -f ./udacity-c3-deployment/docker/docker-compose.yaml up
Make sure users can sign up, log in, post messages, and attach pictures. The testing URL is http://localhost:8100.
For distribution over the internet, we publish the container images to Docker Hub.
docker-compose -f ./udacity-c3-deployment/docker/docker-compose-build.yaml push
For CI/CD, we use Travis CI in this exercise. A new commit to the GitHub master branch triggers a new build.
- Sign up for Travis CI and connect the GitHub repo to Travis CI.
- Add .travis.yml to the root of the application.
language: minimal
services:
  - docker
env:
  - DOCKER_COMPOSE_VERSION=1.23.2
before_install:
  - docker -v && docker-compose -v
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
  - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  - chmod +x ./kubectl
  - sudo mv ./kubectl /usr/local/bin/kubectl
install:
  - docker-compose -f udacity-c3-deployment/docker/docker-compose-build.yaml build --parallel
  - docker-compose -f udacity-c3-deployment/docker/docker-compose-build.yaml push
- Add the microservice-specific environment variables in the Travis CI settings, along with the Docker Hub credentials needed to publish the built images.
- Commit and push the changes to GitHub to trigger a build.
To provision a k8s cluster on AWS, use the eksctl command with the script below.
eksctl create cluster \
--name "udagram" \
--version 1.14 \
--region ap-northeast-1 \
--nodegroup-name udagram-workers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--node-ami auto
- Encode the AWS credentials in aws_secret.yaml. The pod needs the AWS credentials to access the S3 bucket where user-uploaded images are stored.
cat ~/.aws/credentials | base64
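The secret manifest wraps that base64 output. A minimal sketch of aws_secret.yaml — the secret name aws-secret and the key credentials are assumptions here, and must match whatever the deployment manifests reference:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret   # assumed name; match the reference in the deployment manifests
type: Opaque
data:
  credentials: <paste the base64 output of ~/.aws/credentials here>
```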
- Save microservice-specific configuration in env-configmap.yaml, e.g. the AWS S3 bucket name, database name, database host, JWT secret, etc.
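A minimal sketch of env-configmap.yaml; the key names mirror the environment variables exported earlier, the configmap name env-config is an assumption, and the values shown are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config   # assumed name; match the configMapRef in the deployment manifests
data:
  AWS_BUCKET: your-aws-bucket-name
  AWS_REGION: ap-northeast-1
  AWS_PROFILE: your-aws-profile
  POSTGRESS_DB: your-postgress-database
  POSTGRESS_HOST: your-postgress-host
  JWT_SECRET: your-jwt-secret
```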
- Encode the Postgres database username and password in env-secret.yaml. The pod needs the RDS credentials to save user posts to the database.
echo -n 'POSTGRESS_USERNAME' | base64
echo -n 'POSTGRESS_PASSWORD' | base64
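A minimal sketch of env-secret.yaml, with the base64 output of the two commands above pasted in (the secret name env-secret is an assumption; match whatever the deployments reference):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: env-secret   # assumed name
type: Opaque
data:
  POSTGRESS_USERNAME: <base64-encoded username>
  POSTGRESS_PASSWORD: <base64-encoded password>
```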
Deploy k8s configmaps and secrets.
kubectl apply -f aws-secret.yaml
kubectl apply -f env-secret.yaml
kubectl apply -f env-configmap.yaml
Deploy the microservices. Specifically, we deploy separate services for the frontend web app, the NGINX reverse proxy, and the two sets of backend APIs.
kubectl apply -f backend-feed-deployment.yaml
kubectl apply -f backend-feed-service.yaml
kubectl apply -f backend-user-deployment.yaml
kubectl apply -f backend-user-service.yaml
kubectl apply -f frontend-deployment.yaml
kubectl apply -f frontend-service.yaml
kubectl apply -f reverseproxy-deployment.yaml
kubectl apply -f reverseproxy-service.yaml
We then check the status of the pods, replica sets, and services.
kubectl get pods
kubectl get rs
kubectl get svc
kubectl get all
Finally, we set up port forwarding to test-drive the microservices.
kubectl port-forward service/reverseproxy 8080:8080
kubectl port-forward service/frontend 8100:8100
The application is monitored by Amazon CloudWatch. In the AWS EKS control plane, we can optionally activate various types of logging for debugging purposes.
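Control-plane logging can also be declared in an eksctl cluster config file and applied to the cluster. A sketch under the cluster settings used above — the log types listed are examples, not a requirement:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: udagram
  region: ap-northeast-1
cloudWatch:
  clusterLogging:
    # example selection; other valid types include controllerManager and scheduler
    enableTypes: ["api", "audit", "authenticator"]
```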
For future feature releases, we can choose between a rolling update and a blue/green deployment.
e.g. roll out the updated version in place
kubectl apply -f frontend-deployment.yaml
e.g. deploy two versions at the same time
kubectl apply -f frontend-deployment-alpha.yaml
kubectl apply -f frontend-deployment-beta.yaml
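For blue/green, the two deployments would carry distinct pod labels, and the frontend service selects one of them; re-applying the service with the other selector cuts all traffic over at once. A sketch — the version label key and the alpha/beta values are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
    version: alpha   # change to beta and re-apply to shift all traffic
  ports:
    - port: 8100
      targetPort: 8100
```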
To cope with fluctuations in incoming traffic, we can scale the number of pod replicas behind each microservice up and down.
e.g. scale up
kubectl scale deployment/backend-user --replicas=4
e.g. scale down
kubectl scale deployment/backend-user --replicas=2
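Instead of scaling manually, a HorizontalPodAutoscaler can adjust the replica count from observed CPU load. A sketch, assuming metrics-server is installed in the cluster and the deployment is named backend-user:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: backend-user
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend-user   # assumed deployment name
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
```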