We can define AWS (Amazon Web Services) as a secure cloud services platform that offers compute power, database storage, content delivery, and many other functionalities. In short, it is a large bundle of cloud-based services.
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
Ansible is a simple way to automate applications and IT infrastructure: application deployment, configuration management, and continuous delivery.
I am going to configure a K8s cluster on AWS EC2 instances with Ansible, an automation tool, using a dynamic inventory.
Why a dynamic inventory? By default, AWS assigns a dynamic public IP, so after every restart an instance gets a new one. Also, suppose we launch instances in the ap-south-1 region and need to use all of them, say 100 servers, as database hosts. Instead of adding each one to the inventory manually, we can use a Python script that works as a dynamic inventory and groups the instances by tags, region, subnets, and so on.
In tasks/main.yml we are going to launch EC2 instances from an Amazon Linux AMI: one master and two slaves.
Here, the most important part is the tags, i.e. k8s_master and k8s_slave, because we are going to use them in the dynamic inventory to separate the instances according to their roles.
The access and secret keys here were pre-generated by me and are assigned to an IAM user; you can create your own as needed.
Instead of providing aws_access_key and aws_secret_key in the playbook, we can configure the AWS CLI, and Ansible will by default use the credentials we provided while configuring it.
The security group, named "k8s cluster", was created with all inbound and outbound traffic allowed. That is not good security practice, but it keeps this project simple.
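The launch tasks might look roughly like this: a minimal sketch using the classic `ec2` module, where the key pair name, AMI ID, and instance type are placeholders for your own values, and the `db` tag key is an assumption inferred from the group names the dynamic inventory produces later:

```yaml
# tasks/main.yml (sketch; key_name, image, and instance_type are placeholders)
- name: Launch the K8s master node
  ec2:
    key_name: mykey                 # assumed key pair name
    instance_type: t2.micro
    image: ami-xxxxxxxx             # your Amazon Linux AMI ID (region-specific)
    region: ap-south-1
    group: k8s cluster              # the all-traffic security group created above
    wait: yes
    count: 1
    instance_tags:
      db: k8s_master                # this tag drives the dynamic inventory grouping
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"

- name: Launch the K8s slave nodes
  ec2:
    key_name: mykey
    instance_type: t2.micro
    image: ami-xxxxxxxx
    region: ap-south-1
    group: k8s cluster
    wait: yes
    count: 2
    instance_tags:
      db: k8s_slave
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
```

Passing the keys through variables keeps them out of the playbook text; as noted below, configuring the AWS CLI avoids hard-coding them entirely.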
The dynamic inventory will separate the instances according to region, tags, public and private IPs, and more.
- ec2.py: https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py
- ec2.ini: https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini
The prerequisite for these scripts is installing the boto and boto3 libraries on the system where you run them.
To install the boto modules:
pip3 install boto
pip3 install boto3
To successfully make an API call to AWS, you need to configure Boto (the Python interface to AWS). There are a variety of methods available, but the simplest is to export two environment variables:
export AWS_ACCESS_KEY_ID='your access key'
export AWS_SECRET_ACCESS_KEY='your secret key'
The second option is to copy the script to /etc/ansible/hosts and chmod +x it. You will also need to copy the ec2.ini file to /etc/ansible/ec2.ini. Then you can run Ansible as you would normally.
The script separates the instances according to their tags into the host groups "tag_db_k8s_master" and "tag_db_k8s_slave", so we can target each group in the playbook.
ansible-galaxy init k8s-cluster
- Installing Docker and iproute-tc
- Configuring the yum repo for Kubernetes
- Installing the kubeadm, kubelet, and kubectl programs
- Enabling the Docker and kubelet services
- Pulling the config images
- Configuring the Docker daemon.json file
- Restarting the Docker service
- Configuring iptables and refreshing sysctl
- Initializing the cluster with kubeadm
- Creating the .kube directory
- Copying the kubeconfig file into it
- Installing add-ons, e.g. Flannel
- Creating the join token
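A few of the master-side steps above might be sketched as follows. This is a partial sketch, not the full role: the preflight-error flags, pod CIDR, file paths, and the Flannel manifest URL are assumptions based on a common kubeadm setup on small (t2.micro-class) instances:

```yaml
# roles/k8s-cluster/tasks/main.yml (partial sketch; flags and paths are assumptions)
- name: Initialize the control plane with kubeadm
  command: >
    kubeadm init --pod-network-cidr=10.244.0.0/16
    --ignore-preflight-errors=NumCPU
    --ignore-preflight-errors=Mem

- name: Create the .kube directory
  file:
    path: /root/.kube
    state: directory

- name: Copy the admin kubeconfig into place
  copy:
    src: /etc/kubernetes/admin.conf
    dest: /root/.kube/config
    remote_src: yes

- name: Install the Flannel network add-on
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Create the join token
  command: kubeadm token create --print-join-command
  register: join_command
```

The registered `join_command` output is what the slave nodes run to join the cluster.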
ansible-galaxy init k8s-slave
- Installing Docker and iproute-tc
- Configuring the yum repo for Kubernetes
- Installing the kubeadm, kubelet, and kubectl programs
- Enabling the Docker and kubelet services
- Pulling the config images
- Configuring the Docker daemon.json file
- Restarting the Docker service
- Configuring iptables and refreshing sysctl
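The daemon.json and iptables/sysctl steps, which appear in both roles, might be sketched like this (the file contents are the usual values for a kubeadm node, but treat the exact paths and settings as assumptions):

```yaml
# roles/k8s-slave/tasks/main.yml (partial sketch; settings are assumptions)
- name: Set the Docker cgroup driver to systemd
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }

- name: Let iptables see bridged traffic
  copy:
    dest: /etc/sysctl.d/k8s.conf
    content: |
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1

- name: Refresh sysctl
  command: sysctl --system
```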
The file created for running the role (k8s-slave):
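The overall playbook might look roughly like this, whether kept as one file or split per role. The group names are the ones ec2.py generates from the tags; the role names come from the `ansible-galaxy init` commands above, and the filename is a placeholder:

```yaml
# setup.yml (sketch; filename is a placeholder)
- hosts: tag_db_k8s_master
  roles:
    - k8s-cluster

- hosts: tag_db_k8s_slave
  roles:
    - k8s-slave
```

It would be run against the dynamic inventory, e.g. `ansible-playbook -i ec2.py setup.yml`.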
kubectl create deployment success --image=vimal13/apache-webserver-php
kubectl expose deployments success --type=NodePort --port=80