
K8s_cluster-AWS-Dynamic_inventory-ANSIBLE

Creating a K8s cluster | Ansible on provisioned AWS instances | Dynamic Inventory

Amazon Web Services:

We can define AWS (Amazon Web Services) as a secure cloud services platform that offers compute power, database storage, content delivery, and various other functionalities. More specifically, it is a large bundle of cloud-based services.

Kubernetes:

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.

Ansible:

Ansible is the simplest way to automate apps and IT infrastructure. Application Deployment + Configuration Management + Continuous Delivery.

START

I am going to configure a K8s cluster on AWS EC2 instances with an automation tool called Ansible, using a dynamic inventory.

Why dynamic inventory?

For example, AWS assigns a dynamic public IP by default, so after every restart an instance gets a new IP. Or suppose we launch new OS instances in the ap-south-1 region and need to use all of them as database servers: if there are 100 servers, instead of adding them to the inventory manually, a Python script works as a dynamic inventory and groups the instances by tags, region, subnets, etc.

First, let's start with provisioning EC2 instances with Ansible.

Creating an Ansible role called provision-ec2


In the role, create a vars file provision-ec2/vars/main.yml.

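A minimal sketch of what such a vars file might contain; every value below is a placeholder for illustration, not the author's actual configuration:

  # provision-ec2/vars/main.yml (sketch with placeholder values)
  instance_type: t2.micro        # free-tier instance size
  image_id: ami-xxxxxxxxxxxxx    # an Amazon Linux AMI ID for your region
  region_name: ap-south-1        # region used in this walkthrough
  key_name: mykey                # an existing EC2 key pair
  sg_name: k8s cluster           # the security group described below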

In tasks/main.yml we are going to launch EC2 instances from an Amazon Linux AMI: one master and two slaves.


Here, the most important part is the tags, i.e. k8s_master and k8s_slave, because we are going to use them in the dynamic inventory to separate the instances according to their roles.

Here the access and secret keys were pre-generated by me; you can create your own as per your need. They are assigned to an IAM user.


Instead of providing the aws_access_key and aws_secret_key in the playbook, we can configure the AWS CLI, and Ansible will by default use the credentials we provided while configuring the AWS CLI.

The security group, named k8s cluster, was created with all inbound and outbound traffic allowed. That is not good for security, but it is set this way just for this project.
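A hedged sketch of the launch task in provision-ec2/tasks/main.yml, using the classic ec2 module; the variable names follow the vars sketch above, and the tag key db is an assumption inferred from the inventory group names (tag_db_k8s_master / tag_db_k8s_slave) that appear later:

  # provision-ec2/tasks/main.yml (sketch: one master, two slaves)
  - name: Launch EC2 instances for the K8s cluster
    ec2:
      key_name: "{{ key_name }}"
      instance_type: "{{ instance_type }}"
      image: "{{ image_id }}"
      region: "{{ region_name }}"
      group: "{{ sg_name }}"
      count: "{{ item.count }}"
      wait: yes
      instance_tags:
        db: "{{ item.tag }}"    # ec2.py turns this into tag_db_k8s_master / tag_db_k8s_slave
    loop:
      - { tag: "k8s_master", count: 1 }
      - { tag: "k8s_slave", count: 2 }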


Running the role, all the tasks complete successfully.


Now one instance for the master and two for the slaves are ready to be configured for the K8s cluster.

Now let's move on to the dynamic inventory part.

The dynamic inventory will separate the instances according to region, tags, public and private IPs, and more.

We use two files for the dynamic inventory: the ec2.py script and its ec2.ini configuration file.

ec2.py: https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py

ec2.ini: https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini
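To fetch both files and make the script executable (plain wget and chmod, nothing project-specific):

  wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py
  wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini
  chmod +x ec2.py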

The prerequisite for these scripts is having the boto and boto3 modules installed on the system where you run them.

To install the boto modules:

pip3 install boto

pip3 install boto3

To successfully make an API call to AWS, you will need to configure Boto (the Python interface to AWS). There are a variety of methods available, but the simplest is just to export two environment variables:

export AWS_ACCESS_KEY_ID='your access key'

export AWS_SECRET_ACCESS_KEY='your secret key'

Alternatively, you can copy the script to /etc/ansible/hosts and chmod +x it. You will also need to copy the ec2.ini file to /etc/ansible/ec2.ini. Then you can run Ansible as you would normally.


We only have to run ec2.py to get the dynamic inventory.

The script separates the instances according to their tags into the host groups "tag_db_k8s_master" and "tag_db_k8s_slave", so we can use them in the playbooks.
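As a quick check, you can list the inventory and ping one of the generated groups directly (assuming the credentials exported above):

  ./ec2.py --list
  ansible -i ec2.py tag_db_k8s_master -m ping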

Our dynamic inventory is ready to use.

Let's create a role for configuring the K8s master.

ansible-galaxy init k8s-cluster

In the k8s-cluster/vars/main.yml file


Steps in the playbook /k8s-cluster/tasks/main.yml (a sketch of a few of these tasks follows the list):

  1. Installing docker and iproute-tc
  2. Configuring the yum repo for Kubernetes
  3. Installing the kubeadm, kubelet and kubectl programs
  4. Enabling the docker and kubelet services
  5. Pulling the config images
  6. Configuring the docker daemon.json file
  7. Restarting the docker service
  8. Configuring the IP tables and refreshing sysctl
  9. Initializing the cluster with kubeadm
  10. Creating the .kube directory
  11. Copying the config file
  12. Installing add-ons, e.g. Flannel
  13. Creating the join token
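The full task file is shown in the repository; below is only a hedged sketch of steps 4, 9 and 13 to illustrate the shape of the tasks. The modules are standard Ansible, but the exact options are my assumptions, not the author's file (10.244.0.0/16 is Flannel's default pod network):

  # k8s-cluster/tasks/main.yml (partial sketch)
  - name: Enable and start docker and kubelet          # step 4
    service:
      name: "{{ item }}"
      state: started
      enabled: yes
    loop:
      - docker
      - kubelet

  - name: Initialize the cluster with kubeadm          # step 9
    command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU,Mem

  - name: Create the join token for the slaves         # step 13
    command: kubeadm token create --print-join-command
    register: join_command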


The playbook file created for running the role (k8s-cluster):

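A sketch of what that playbook likely looks like, targeting the host group generated by ec2.py (the file name setup_master.yml is my assumption):

  # setup_master.yml (sketch)
  - hosts: tag_db_k8s_master
    roles:
      - k8s-cluster

It would be run against the dynamic inventory, e.g. ansible-playbook -i ec2.py setup_master.yml.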

Let's configure the slaves on the other two EC2 instances.

Create a role k8s-slave

ansible-galaxy init k8s-slave

In /k8s-slave/vars/main.yml


Steps in the playbook /k8s-slave/tasks/main.yml:

  1. Installing docker and iproute-tc
  2. Configuring the yum repo for Kubernetes
  3. Installing the kubeadm, kubelet and kubectl programs
  4. Enabling the docker and kubelet services
  5. Pulling the config images
  6. Configuring the docker daemon.json file
  7. Restarting the docker service
  8. Configuring the IP tables and refreshing sysctl

The playbook file created for running the role (k8s-slave):

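The equivalent sketch for the slaves (again, the file name is my assumption):

  # setup_slave.yml (sketch)
  - hosts: tag_db_k8s_slave
    roles:
      - k8s-slave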

Checking whether the cluster is working:

Using the token created above on the master, join the slaves to the cluster.

Run the join command with this token on both slaves.
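The join command printed on the master has this general shape; all values are placeholders, so use the token and hash your own kubeadm produced:

  kubeadm join <master-private-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>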


After running "kubectl get nodes" on the master:


Let's launch a pod on this cluster.

kubectl create deployment success --image=vimal13/apache-webserver-php

Exposing the pod:

kubectl expose deployments success --type=NodePort --port=80

kubectl get pods -o wide

The pod is launched on the first slave.
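To reach the webserver from outside the cluster, look up the NodePort that was assigned and hit any node's public IP on it (the port below is a placeholder):

  kubectl get svc success    # note the port mapped to 80, e.g. 80:3XXXX/TCP
  curl http://<node-public-ip>:<nodeport>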


HAPPY LEARNING ⭐⭐⭐
