- AWS CLI installed
- AWS CLI configured (aws configure)
- Pulumi installed
- Pulumi logged in
- Helm installed
- npm installed
- kubectl installed
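A quick way to confirm each prerequisite is on your PATH before you start:

```bash
# Each of these should print a version if the tool is installed.
aws --version
pulumi version
helm version
npm --version
kubectl version --client
```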
- Clone repository
git clone https://github.com/ludesdeveloper/pulumi-managed-nodes-autoscale-eks.git
- Change directory
cd pulumi-managed-nodes-autoscale-eks
- Install dependencies
npm install
- Initialize the Pulumi stack (you will be prompted for a stack name; dev or any other name you prefer)
pulumi stack init
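If you prefer to skip the prompt, you can pass the stack name directly:

```bash
# Non-interactive: create the stack with an explicit name (dev is just an example).
pulumi stack init dev
```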
- Set the AWS region for Pulumi
pulumi config set aws:region ap-southeast-1
- Provision EKS with Pulumi
pulumi up --yes
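If you want to review the planned resources before creating anything, you can run an optional preview first:

```bash
# Show what Pulumi will create without applying any changes.
pulumi preview
```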
- Copy the kubeconfig into place (warning: this will overwrite your existing kubeconfig file)
pulumi stack output kubeConfig > ~/.kube/config
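If you would rather keep your existing kubeconfig intact, a sketch of an alternative (kubeconfig-eks.yaml is just an example filename):

```bash
# Write the kubeconfig to a separate file and point kubectl at it.
pulumi stack output kubeConfig > kubeconfig-eks.yaml
export KUBECONFIG=$PWD/kubeconfig-eks.yaml
kubectl get nodes
```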
- Make sure your cluster is ready
kubectl get nodes
- Run the script below to deploy the Cluster Autoscaler, or manually copy and paste the commands from the init-autoscale-cluster.sh script
./init-autoscale-cluster.sh
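The script is not reproduced here, but deploying the Cluster Autoscaler on EKS typically looks something like the following; the script in this repo may differ in details such as the manifest it applies or the cluster name it configures:

```bash
# Deploy the Cluster Autoscaler from the upstream auto-discovery example manifest.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

# Keep the autoscaler from evicting its own pod during scale-down.
kubectl -n kube-system annotate deployment.apps/cluster-autoscaler \
  cluster-autoscaler.kubernetes.io/safe-to-evict="false"

# You would also edit the deployment so --node-group-auto-discovery matches your cluster name.
```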
- When testing, I usually open three terminal tabs, one per check. First, follow the Cluster Autoscaler logs
kubectl -n kube-system logs -f deployment.apps/cluster-autoscaler
- Second, watch the nodes
watch kubectl get nodes
- Third, watch the pods
watch kubectl get pods -o wide
- Apply nginx-deployment
kubectl apply -f nginx-deployment.yaml
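For reference, a minimal deployment along these lines would work; this is a hypothetical approximation, not the repo's actual nginx-deployment.yaml. The CPU/memory requests are what let Pending pods add up to a capacity shortfall the autoscaler can act on:

```bash
# Hypothetical stand-in for nginx-deployment.yaml; the repo's manifest may differ.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 200m       # requests, not actual usage, drive the autoscaler's math
            memory: 256Mi
EOF
```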
- Scale up the nginx deployment
kubectl scale deployment/nginx-deployment --replicas=20
Some pods will sit in Pending, which triggers the Cluster Autoscaler to scale up the node group; wait a few minutes. You can also watch the EC2 dashboard in the AWS console, where new instances will appear as they are launched.
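To see exactly which pods are waiting for capacity, filter on the Pending phase:

```bash
# List only the pods the scheduler could not place yet.
kubectl get pods --field-selector=status.phase=Pending
```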
- Scale down the nginx deployment
kubectl scale deployment/nginx-deployment --replicas=3
Scaling down takes around 10 minutes before the Cluster Autoscaler removes the unneeded nodes.
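While you wait, you can inspect the autoscaler's view of the cluster; by default it publishes a status ConfigMap in kube-system:

```bash
# Shows node group health and scale-down candidates.
kubectl -n kube-system describe configmap cluster-autoscaler-status
```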
- Destroy the Pulumi resources
pulumi destroy --yes
- Remove the Pulumi stack
pulumi stack rm dev
Confirm by typing the name of your stack when prompted