Get me to the Cluster Lab - NGINX+ Ingress Controllers + NGINX+ Edge Servers

This is a lab environment to demonstrate the configuration and setup of a Kubernetes Cluster, Calico CNI, NGINX+ L4 Edge Servers, and NGINX+ Ingress Controllers. This lab environment is loosely based on the whitepaper - https://www.nginx.com/resources/library/get-me-to-the-cluster/.

Conceptual Infrastructure

Infrastructure Lab Details

  • 10.1.1.4 - nginxedge01.f5.local

  • 10.1.1.5 - nginxedge02.f5.local

  • 10.1.1.6 - nginxedge03.f5.local

  • 10.1.1.7 - k8scontrol01.f5.local

  • 10.1.1.8 - k8sworker01.f5.local

  • 10.1.1.9 - k8sworker02.f5.local

  • 10.1.1.10 - k8sworker03.f5.local

  • user: user01

  • pass: f5agility!

Client Desktop

  • 10.1.1.11 - client.f5.local
  • user: ubuntu
  • pass: f5agility!

The UDF blueprint includes prerequisite software such as containerd and the kubectl/kubeadm/kubelet packages. This lab also uses a local hosts file for DNS resolution. The UDF blueprint is listed as "NGINX Ingress Controller - Calico CNI - Get me to the Cluster"

Additional changes that were made from the white paper:

  • Ubuntu 22.04 LTS
  • FRR routing package was used instead of Quagga
  • 10.1.1.0/24 is the underlay network
  • 172.16.1.0/24 will be the POD network to be advertised via iBGP
  • Additional DNS resiliency was added by advertising a single /32 route for the kube-dns service instead of using the individual kube-dns pod IPs in the resolv.conf files.
  • In this lab, we will not be utilizing the IP Pools functionality described in the whitepaper.

Table of Contents

  • K8s_Installation
  • Calico_Installation
  • Worker_Nodes_Initialization
  • Calico_iBGP_configuration
  • NGINX+_Ingress_Controller_deployment
  • NGINX+_Edge_installation
  • FRR_installation
  • FRR_iBGP_configuration
  • NGINX+_Edge_DNS_resolution
  • Deploy_an_App
  • Expose_an_App_with_NGINX+_Ingress_Controller
  • NGINX+_Edge_L4_configuration
  • NGINX+_HA_configurations
  • NGINX_Management_Suite
  • OPTIONAL - Multiple Ingresses

K8s_Installation

In this section, you will initialize the Kubernetes cluster on k8scontrol01.f5.local. Use the UDF portal to log in via the web shell.

  1. Log into k8scontrol01.f5.local
  2. Initialize kubernetes cluster as root user
echo 1 > /proc/sys/net/ipv4/ip_forward
kubeadm config images pull
kubeadm init --control-plane-endpoint=k8scontrol01.f5.local --pod-network-cidr=172.16.1.0/24
  1. Note the output of the K8s initialization. Copy the text output to a separate file; it shows how to add additional worker nodes to the K8s cluster and will be required in a later step of the lab. If you misplace it, the join command can be regenerated as shown below.
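If the join command output is misplaced, it can be regenerated at any time on the control node as the root user (a convenience command, not part of the original lab flow):
kubeadm token create --print-join-command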
  2. Switch to user01 (Password: f5agility!)
su - user01
  1. Create the Kubernetes config directory in the user01 home folder
mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
  1. Confirm the cluster installation. NOTE: the status of the control node will be listed as NotReady because the cluster does not yet have a CNI installed. You will install Calico in the next section of the lab. Illustrative output is shown below.
kubectl cluster-info
kubectl get nodes
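Illustrative output only (node name, age, and version will differ in your environment); the STATUS remains NotReady until the Calico CNI is installed:

NAME           STATUS     ROLES           AGE   VERSION
k8scontrol01   NotReady   control-plane   2m    v1.2x.x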

Calico_Installation

In this section, you will install Calico as the K8s CNI. You will also install Calicoctl, a command-line tool used to manage Calico resources and perform administrative functions. Additional documentation can be found here:

  1. Download the Calico manifests to the k8scontrol01 node.
curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/tigera-operator.yaml -O

curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/custom-resources.yaml -O
  1. Edit the custom-resources manifest to configure 172.16.1.0/24 CIDR for pod network.
nano custom-resources.yaml
  1. Below is an example of custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 172.16.1.0/24
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer 
metadata: 
  name: default 
spec: {}
  1. Deploy Calico CNI
kubectl create -f tigera-operator.yaml

kubectl create -f custom-resources.yaml
  1. Confirm the Calico deployment and that all of the Calico pods are in a Running state. Once all pods are Running, press <CTRL+C> to break out of the watch and move to the next step.
watch kubectl get pods -n calico-system
  1. Install Calicoctl into the /usr/local/bin/
cd /usr/local/bin/

sudo curl -L https://github.com/projectcalico/calico/releases/download/v3.24.1/calicoctl-linux-amd64 -o calicoctl

sudo chmod +x ./calicoctl
  1. Confirm the Calicoctl installation and that the Calico process is running. NOTE: no BGP peers will be found yet, as you will configure BGP in a later portion of the lab.
sudo calicoctl node status
  1. Switch back to home directory
cd /home/user01

Worker_Nodes_Initialization

In this section, you will add the worker nodes to the K8s cluster.

  1. Log into k8sworker01.f5.local via UDF web shell
  2. As the root user, add the worker node to the cluster using the join command from the saved kubeadm init output.
  3. Update IP forward setting
echo 1 > /proc/sys/net/ipv4/ip_forward
  1. Sample command - This will NOT be the same command that you will enter
kubeadm join k8scontrol01.f5.local:6443 --token 4fpx9j.rum6ldoc63t3p0gy \
        --discovery-token-ca-cert-hash sha256:5990a4cb02eea640c88b3c764bd452b932d1228380f22368bc48eff439cd7469 
  1. Repeat this process on the remaining worker nodes, k8sworker02.f5.local and k8sworker03.f5.local
  2. As user01 on the k8scontrol01.f5.local command line, confirm the K8s cluster status. NOTE: the K8s nodes will now show a status of Ready. Calico node pods should be deployed on both the worker and control nodes, and the CoreDNS pods should also be running now.
kubectl get nodes -o wide

kubectl get pods --all-namespaces -o wide

Calico_iBGP_configuration

In this section, you will configure BGP for Calico.

  1. On k8scontrol01.f5.local, create a bgpConfiguration.yaml file to define the initial BGP configuration.
nano bgpConfiguration.yaml
  1. Edit the contents of the yaml file to match your environment details. In this lab we are utilizing Calico's default ASN - 64512
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 64512
  1. Deploy the BGP configuration manifest to the Calico CNI.
calicoctl create -f bgpConfiguration.yaml
  1. Next, you will create the definitions of the BGP peers for the lab environment. Create a bgppeers.yaml file.
nano bgppeers.yaml
  1. Edit the contents of the yaml file to define your BGP peers. This file will point to each K8s node and each NGINX+ Edge instance.
apiVersion: projectcalico.org/v3 
kind: BGPPeer
metadata:
  name: bgppeer-global-k8scontrol01 
spec:
  peerIP: 10.1.1.7
  asNumber: 64512 
---
apiVersion: projectcalico.org/v3 
kind: BGPPeer
metadata:
  name: bgppeer-global-k8sworker01 
spec:
  peerIP: 10.1.1.8
  asNumber: 64512 
---
apiVersion: projectcalico.org/v3 
kind: BGPPeer
metadata:
  name: bgppeer-global-k8sworker02
spec:
  peerIP: 10.1.1.9
  asNumber: 64512 
---
apiVersion: projectcalico.org/v3 
kind: BGPPeer
metadata:
  name: bgppeer-global-k8sworker03
spec:
  peerIP: 10.1.1.10
  asNumber: 64512 
---
apiVersion: projectcalico.org/v3 
kind: BGPPeer
metadata:
  name: bgppeer-global-nginxedge01 
spec:
  peerIP: 10.1.1.4
  asNumber: 64512
---
apiVersion: projectcalico.org/v3 
kind: BGPPeer
metadata:
  name: bgppeer-global-nginxedge02 
spec:
  peerIP: 10.1.1.5
  asNumber: 64512
---
apiVersion: projectcalico.org/v3 
kind: BGPPeer
metadata:
  name: bgppeer-global-nginxedge03
spec:
  peerIP: 10.1.1.6
  asNumber: 64512
  1. Deploy the BGP peers manifest to the Calico CNI.
calicoctl create -f bgppeers.yaml
  1. Get the BGP configuration and peer status. (NOTE: the NGINX+ Edge servers will be in a Connection Refused state because they have not yet been configured with BGP. You will configure BGP on the Edge servers later in the lab. An illustrative node status is shown below.)
calicoctl get bgpConfiguration

calicoctl get bgpPeer

sudo calicoctl node status
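Abbreviated, illustrative node status output (table borders trimmed; timestamps and row order will vary). Expect the K8s peers to be Established and the NGINX+ Edge peers (10.1.1.4-6) to remain down with a connection refused message until FRR is configured:

Calico process is running.

IPv4 BGP status
| PEER ADDRESS | PEER TYPE | STATE |  SINCE   | INFO                               |
| 10.1.1.8     | global    | up    | 16:05:10 | Established                        |
| 10.1.1.4     | global    | start | 16:05:10 | Connect Socket: Connection refused |
...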

NGINX+_Ingress_Controller_deployment

In this section, you will deploy the NGINX+ Ingress Controller as a Deployment via manifests, using a JWT token to pull the image. To get your JWT token, log into your myF5 portal and download your JWT token entitlement. Additional documentation can be found here:

  1. On the k8scontrol01.f5.local, clone the NGINX+ Ingress controller repo.
git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v2.3.1
  1. Configure RBAC
kubectl apply -f kubernetes-ingress/deployments/common/ns-and-sa.yaml

kubectl apply -f kubernetes-ingress/deployments/rbac/rbac.yaml 
  1. Modify the downloaded manifest to make the NGINX+ Ingress Controller the default ingress class
nano kubernetes-ingress/deployments/common/ingress-class.yaml
  1. Uncomment the annotation ingressclass.kubernetes.io/is-default-class. With this annotation set to true, all new Ingresses without an ingressClassName field specified will be assigned this IngressClass.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations: 
    ingressclass.kubernetes.io/is-default-class: "true" 
spec:
  controller: nginx.org/ingress-controller
  1. Create Common Resources
kubectl apply -f kubernetes-ingress/deployments/common/default-server-secret.yaml

kubectl apply -f kubernetes-ingress/deployments/common/nginx-config.yaml

kubectl apply -f kubernetes-ingress/deployments/common/ingress-class.yaml
  1. Create Custom Resources
kubectl apply -f kubernetes-ingress/deployments/common/crds/k8s.nginx.org_virtualservers.yaml

kubectl apply -f kubernetes-ingress/deployments/common/crds/k8s.nginx.org_virtualserverroutes.yaml

kubectl apply -f kubernetes-ingress/deployments/common/crds/k8s.nginx.org_transportservers.yaml

kubectl apply -f kubernetes-ingress/deployments/common/crds/k8s.nginx.org_policies.yaml

kubectl apply -f kubernetes-ingress/deployments/common/crds/k8s.nginx.org_globalconfigurations.yaml
  1. Create docker-registry secret on the cluster using the JWT token from your myF5 account. (NOTE: Replace the < JWT Token > with the JWT token information from your myF5.com portal. Your entry could resemble --docker-username=a93hfganasd3h4BSkaj)
kubectl create secret docker-registry regcred --docker-server=private-registry.nginx.com --docker-username=<JWT Token> --docker-password=none -n nginx-ingress
  1. Confirm the details of the secret
kubectl get secret regcred --output=yaml -n nginx-ingress
  1. Modify the NGINX+ Ingress Controller manifest
nano kubernetes-ingress/deployments/deployment/nginx-plus-ingress.yaml
  1. Update the contents of the yaml file:
  • Increase the number of NGINX+ replicas to 3
  • Utilize the docker-registry secret created in the previous step
  • Update the container location to pull from private-registry.nginx.com
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
     #annotations:
       #prometheus.io/scrape: "true"
       #prometheus.io/port: "9113"
       #prometheus.io/scheme: http
    spec:
      serviceAccountName: nginx-ingress
      automountServiceAccountToken: true
      imagePullSecrets:
      - name: regcred
      containers:
      - image: private-registry.nginx.com/nginx-ic/nginx-plus-ingress:2.3.1
        imagePullPolicy: IfNotPresent
        name: nginx-plus-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: readiness-port
          containerPort: 8081
        - name: prometheus
          containerPort: 9113
        readinessProbe:
          httpGet:
            path: /nginx-ready
            port: readiness-port
          periodSeconds: 1
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
         #limits:
         #  cpu: "1"
         #  memory: "1Gi"
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          runAsNonRoot: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-plus
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
         #- -enable-cert-manager
         #- -enable-external-dns
         #- -enable-app-protect
         #- -enable-app-protect-dos
         #- -v=3 # Enables extensive logging. Useful for troubleshooting.
         #- -report-ingress-status
         #- -external-service=nginx-ingress
         #- -enable-prometheus-metrics
         #- -global-configuration=$(POD_NAMESPACE)/nginx-config
  1. Run NGINX+ Ingress Controller
kubectl apply -f kubernetes-ingress/deployments/deployment/nginx-plus-ingress.yaml
  1. Confirm NGINX+ Ingress Controller pods are running.
kubectl get pods --namespace=nginx-ingress
  1. Create a service for the Ingress Controller pods.
nano nginx-ingress-svc.yaml
  1. Define nginx-ingress-svc.yaml as a headless service utilizing ports 80 and 443 inside the nginx-ingress namespace
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-svc
  namespace: nginx-ingress
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress
  1. Deploy the service via manifest
kubectl apply -f nginx-ingress-svc.yaml 
  1. Confirm that the NGINX+ ingress service is present. NOTE: the kube-dns service is also running. Abbreviated, illustrative output is shown below.
kubectl get services --all-namespaces -o wide
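Abbreviated, illustrative output (ages and the wide-output columns omitted); note the CLUSTER-IP of None for nginx-ingress-svc, which marks it as headless, and the kube-dns ClusterIP that will be advertised later in the lab:

NAMESPACE       NAME                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)
kube-system     kube-dns            ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP
nginx-ingress   nginx-ingress-svc   ClusterIP   None         <none>        80/TCP,443/TCP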

NGINX+_Edge_installation

In this section, you will install NGINX+ on the edge servers outside of the K8s cluster. These edge servers will be responsible for L4 load balancing to the K8s cluster. To complete this section, log into your myF5.com account and download the cert and key for NGINX+. Additional documentation can be found here:

  1. Log into nginxedge01.f5.local as user01
su - user01
  1. Create the /etc/ssl/nginx directory
sudo mkdir /etc/ssl/nginx

cd /etc/ssl/nginx
  1. Create nginx-repo.crt
sudo nano nginx-repo.crt
  1. Paste the contents of your nginx-repo.crt downloaded from the myF5 portal.
  2. Create nginx-repo.key
sudo nano nginx-repo.key
  1. Paste the contents of your nginx-repo.key downloaded from the myF5 portal.
  2. Switch back to home directory
cd /home/user01
  1. Install the prerequisite packages.
sudo apt-get install apt-transport-https lsb-release ca-certificates wget gnupg2 ubuntu-keyring
  1. Download and add NGINX signing key and App Protect security updates signing key:
wget -qO - https://cs.nginx.com/static/keys/nginx_signing.key | gpg --dearmor | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null

wget -qO - https://cs.nginx.com/static/keys/app-protect-security-updates.key | gpg --dearmor | sudo tee /usr/share/keyrings/app-protect-security-updates.gpg >/dev/null
  1. Add the NGINX Plus repository.
printf "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] https://pkgs.nginx.com/plus/ubuntu `lsb_release -cs` nginx-plus\n" | sudo tee /etc/apt/sources.list.d/nginx-plus.list
  1. Download the nginx-plus apt configuration to /etc/apt/apt.conf.d:
sudo wget -P /etc/apt/apt.conf.d https://cs.nginx.com/static/files/90pkgs-nginx
  1. Update the repository information:
sudo apt-get update
  1. Install the nginx-plus package.
sudo apt-get install -y nginx-plus
  1. Check the nginx binary version to ensure that you have NGINX Plus installed correctly:
nginx -v
  1. Repeat NGINX+ installation on nginxedge02.f5.local
  2. Repeat NGINX+ installation on nginxedge03.f5.local

FRR_installation

In this section, you will install the FRR package to enable BGP functionality on the NGINX+ Edge servers. Additional documentation can be found here:

  1. Install dependencies
sudo apt-get install \
   git autoconf automake libtool make libreadline-dev texinfo \
   pkg-config libpam0g-dev libjson-c-dev bison flex \
   libc-ares-dev python3-dev python3-sphinx \
   install-info build-essential libsnmp-dev perl \
   libcap-dev python2 libelf-dev libunwind-dev
  1. Install FRR package
sudo apt-get install frr -y
  1. Enable the BGP process by editing the FRR daemons file.
sudo systemctl stop frr

sudo nano /etc/frr/daemons
  1. Edit the file to enable bgpd
bgpd=yes
  1. Enable and confirm that the FRR service is active and running.
sudo systemctl enable frr.service

sudo systemctl restart frr.service

sudo systemctl status frr
  1. Repeat the FRR installation and BGP daemon configuration on nginxedge02.f5.local
  2. Repeat the FRR installation and BGP daemon configuration on nginxedge03.f5.local

FRR_iBGP_configuration

In this section, you will configure iBGP on the NGINX+ Edge servers and build a mesh with the K8s Calico CNI.

  1. Log into the FRR routing shell
sudo vtysh
  1. Configure BGP network and neighbors.
config t
router bgp 64512
bgp router-id 10.1.1.4   
network 10.1.1.0/24
neighbor calico peer-group
neighbor calico remote-as 64512
neighbor calico capability dynamic
neighbor 10.1.1.5 peer-group calico
neighbor 10.1.1.5 description nginxedge02
neighbor 10.1.1.6 peer-group calico 
neighbor 10.1.1.6 description nginxedge03
neighbor 10.1.1.7 peer-group calico 
neighbor 10.1.1.7 description k8scontrol01
neighbor 10.1.1.8 peer-group calico
neighbor 10.1.1.8 description k8sworker01
neighbor 10.1.1.9 peer-group calico
neighbor 10.1.1.9 description k8sworker02
neighbor 10.1.1.10 peer-group calico
neighbor 10.1.1.10 description k8sworker03
exit
exit 
write
  1. Confirm configurations
show running-config

show ip bgp summary

show bgp neighbors

show ip route
  1. Exit vtysh shell
exit
  1. Repeat the BGP configuration steps on nginxedge02.f5.local. Be sure to change the bgp router-id to 10.1.1.5 and include nginxedge01 and nginxedge03 as peers.
  2. Repeat the BGP configuration steps on nginxedge03.f5.local. Be sure to change the bgp router-id to 10.1.1.6 and include nginxedge01 and nginxedge02 as peers.
  3. Upon completion of this section, the BGP status should show all of the NGINX+ Edge and K8s nodes connected via BGP, as in the illustrative summary below.
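For reference, an abbreviated and illustrative show ip bgp summary from nginxedge01 (message counts, timers, and prefix counts will differ); each neighbor should reach the Established state, indicated by a numeric value in the State/PfxRcd column:

BGP router identifier 10.1.1.4, local AS number 64512
Neighbor        V    AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd
10.1.1.5        4 64512        15        17        0    0    0 00:05:12            1
...
10.1.1.10       4 64512        42        40        0    0    0 00:09:47            3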

NGINX+_Edge_DNS_resolution

In this section, you will set up DNS resolution on the NGINX+ Edge servers to utilize the cluster's internal CoreDNS service. As part of this lab, we will advertise the service IP rather than the individual pod endpoint IPs shown in the whitepaper. Additional documentation can be found here:

  1. Log into k8scontrol01.f5.local as user01.
  2. Describe the kube-dns service to get its IP address information. NOTE the Service IP address and the pod endpoint IP addresses. In this lab example, we will be utilizing the Service IP address, 10.96.0.10. An abbreviated example of the output is shown below.
kubectl describe svc kube-dns -n kube-system
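Abbreviated, illustrative output; in this lab the Service IP is 10.96.0.10 and the Endpoints are CoreDNS pod IPs from the 172.16.1.0/24 pod network (your endpoint addresses will differ):

Name:              kube-dns
Namespace:         kube-system
Type:              ClusterIP
IP:                10.96.0.10
Port:              dns  53/UDP
Endpoints:         172.16.1.x:53,172.16.1.y:53
...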
  1. Update the existing bgpConfiguration.yaml manifest to add advertisement of the ServiceClusterIP.
nano bgpConfiguration.yaml
  1. Below is an example of how to add a service IP address of 10.96.0.10/32.
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 64512
  serviceClusterIPs:
  - cidr: 10.96.0.10/32
  1. Apply the configuration
calicoctl apply -f bgpConfiguration.yaml 
  1. Log into nginxedge01.f5.local as user01 and confirm BGP routes for the kube-DNS IP address. Exit the shell.
sudo vtysh

show ip route

exit
  1. On nginxedge01.f5.local as user01, edit the localhost DNS configuration to add the kube-dns service as a DNS resolver.
sudo nano /etc/resolv.conf
  1. Add the kube-dns service IP address and the search domains. Add the following:
nameserver 10.96.0.10   #kube-dns service IP address
search . cluster.local svc.cluster.local
  1. Execute multiple ping tests from the NGINX Edge server and confirm that the name resolves to different pod IPs of the NGINX+ Ingress Controllers (see the illustrative result below). You have now established DNS connectivity from the external NGINX+ Edge servers into the K8s cluster's private kube-dns environment.
ping nginx-ingress-svc.nginx-ingress.svc.cluster.local -c 2
ping nginx-ingress-svc.nginx-ingress.svc.cluster.local -c 2
ping nginx-ingress-svc.nginx-ingress.svc.cluster.local -c 2
ping nginx-ingress-svc.nginx-ingress.svc.cluster.local -c 2
ping nginx-ingress-svc.nginx-ingress.svc.cluster.local -c 2
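Illustrative result - because nginx-ingress-svc is headless, repeated lookups can resolve to different Ingress Controller pod IPs from the 172.16.1.0/24 pod network:

PING nginx-ingress-svc.nginx-ingress.svc.cluster.local (172.16.1.x) 56(84) bytes of data.
...
PING nginx-ingress-svc.nginx-ingress.svc.cluster.local (172.16.1.y) 56(84) bytes of data.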
  1. Repeat steps 6-9 on nginxedge02.f5.local and nginxedge03.f5.local.

Deploy_an_App

In this section of the lab, you will deploy an example modern application, the Arcadia Finance application. This application is composed of 4 services. Additional information can be found here:

  1. Log into k8scontrol01.f5.local as user01.
  2. Create arcadia-deployment.yaml to deploy the application via a manifest
nano arcadia-deployment.yaml
  1. Edit the contents of the manifest file. This manifest defines a new namespace for the arcadia app and deploys the main, backend, app2, and app3 portions of the arcadia application, each with 3 replicas.
##################################################################################################
# CREATE NAMESPACE - ARCADIA
##################################################################################################
---
kind: Namespace
apiVersion: v1
metadata:
  name: arcadia
---
##################################################################################################
# FILES - BACKEND
##################################################################################################
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: arcadia
  labels:
    app: backend
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
      version: v1
  template:
    metadata:
      labels:
        app: backend
        version: v1
    spec:
      containers:
      - env:
        - name: service_name
          value: backend
        image: registry.gitlab.com/arcadia-application/back-end/backend:latest
        imagePullPolicy: IfNotPresent
        name: backend
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: "100Mi"
          requests:
            cpu: "0.01"
            memory: "20Mi"
---
##################################################################################################
# MAIN
##################################################################################################
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main
  namespace: arcadia
  labels:
    app: main
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: main
      version: v1
  template:
    metadata:
      labels:
        app: main
        version: v1
    spec:
      containers:
      - env:
        - name: service_name
          value: main
        image: registry.gitlab.com/arcadia-application/main-app/mainapp:latest
        imagePullPolicy: IfNotPresent
        name: main
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: "100Mi"
          requests:
            cpu: "0.01"
            memory: "20Mi"
---
##################################################################################################
# APP2
##################################################################################################
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
  namespace: arcadia
  labels:
    app: app2
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app2
      version: v1
  template:
    metadata:
      labels:
        app: app2
        version: v1
    spec:
      containers:
      - env:
        - name: service_name
          value: app2
        image: registry.gitlab.com/arcadia-application/app2/app2:latest
        imagePullPolicy: IfNotPresent
        name: app2
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: "100Mi"
          requests:
            cpu: "0.01"
            memory: "20Mi"
---
##################################################################################################
# APP3
##################################################################################################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app3
  namespace: arcadia
  labels:
    app: app3
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app3
      version: v1
  template:
    metadata:
      labels:
        app: app3
        version: v1
    spec:
      containers:
      - env:
        - name: service_name
          value: app3
        image: registry.gitlab.com/arcadia-application/app3/app3:latest
        imagePullPolicy: IfNotPresent
        name: app3
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: "100Mi"
          requests:
            cpu: "0.01"
            memory: "20Mi"
---
  1. Deploy and verify that the pods are running. Move on to the next step once the pods are running.
kubectl create -f arcadia-deployment.yaml

kubectl get pods -n arcadia
  1. Create arcadia-service.yaml to create a service for each portion of the arcadia application.
nano arcadia-service.yaml
  1. Edit the contents of arcadia-service.yaml to create a ClusterIP type service listening on port 80 for each of backend, main, app2, and app3.
##################################################################################################
# FILES - BACKEND
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: arcadia
  labels:
    app: backend
    service: backend
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: backend-80
  selector:
    app: backend
---
##################################################################################################
# MAIN
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: main
  namespace: arcadia
  labels:
    app: main
    service: main
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: main-80
  selector:
    app: main
---
##################################################################################################
# APP2
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: app2
  namespace: arcadia
  labels:
    app: app2
    service: app2
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: app2-80
  selector:
    app: app2
---
##################################################################################################
# APP3
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: app3
  namespace: arcadia
  labels:
    app: app3
    service: app3
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: app3-80
  selector:
    app: app3
---
  1. Deploy the services and verify that they are running. Illustrative output is shown below.
kubectl create -f arcadia-service.yaml
kubectl get services --namespace=arcadia
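Illustrative output (the ClusterIP values are assigned from the cluster service CIDR and will differ in your environment):

NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
app2      ClusterIP   10.x.x.x     <none>        80/TCP    30s
app3      ClusterIP   10.x.x.x     <none>        80/TCP    30s
backend   ClusterIP   10.x.x.x     <none>        80/TCP    30s
main      ClusterIP   10.x.x.x     <none>        80/TCP    30s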

Expose_an_App_with_NGINX+_Ingress_Controller

In the Deploy an App section, you deployed the Arcadia Finance app with an internal ClusterIP service type. You will now expose the application through the NGINX+ Ingress Controller by creating a manifest that uses the VirtualServer and VirtualServerRoute resources.

  1. Create arcadia-virtualserver.yaml manifest file.
nano arcadia-virtualserver.yaml
  1. Edit the contents of the arcadia-virtualserver.yaml manifest. This manifest routes URI paths to specific services of the arcadia application.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: arcadia
  namespace: arcadia
spec:
  host: arcadia-finance.f5.local
  upstreams:
    - name: main
      service: main
      port: 80
    - name: backend
      service: backend
      port: 80
    - name: app2
      service: app2
      port: 80
    - name: app3
      service: app3
      port: 80
  routes:
    - path: /
      action:
        pass: main
    - path: /files
      action:
        pass: backend
    - path: /api
      action:
        pass: app2
    - path: /app3
      action:
        pass: app3
  1. Deploy and confirm the VirtualServer and VirtualServerRoute resources. Illustrative output is shown below.
kubectl create -f arcadia-virtualserver.yaml

kubectl get virtualserver arcadia -n arcadia
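Illustrative output (column layout can vary with the NGINX Ingress Controller version); a STATE of Valid indicates the Ingress Controller accepted the VirtualServer configuration:

NAME      STATE   HOST                       IP    PORTS   AGE
arcadia   Valid   arcadia-finance.f5.local                 15s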

NGINX+_Edge_L4_configuration

In this part of the lab, we will first enable NGINX+ Live Activity Monitoring with the dashboard. We will then configure a separate conf file for the stream configurations required to allow the NGINX Edge server to use the DNS resolution feature. In this section, configurations are deployed only on NGINX Edge 01; HA configuration sync is covered in a later part of the lab. Additional documentation can be found here:

  1. Log into nginxedge01.f5.local as user01
  2. Modify the default.conf file to enable the api and dashboard
sudo nano /etc/nginx/conf.d/default.conf
  1. Modify the following:
  • Change the default server port from 80 to 8080 (this frees port 80 for L4 load balancing to the NGINX+ Ingress Controller)
  • Uncomment location /api/ block (Make sure to remove the two lines for IP restrictions)
  • Uncomment location = /dashboard.html block
server {
    listen       8080 default_server;
    server_name  localhost;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}

    # enable /api/ location with appropriate access control in order
    # to make use of NGINX Plus API
    #
    location /api/ {
        api write=on;
    }

    # enable NGINX Plus Dashboard; requires /api/ location to be
    # enabled and appropriate access control for remote access
    #
    location = /dashboard.html {
        root /usr/share/nginx/html;
    }
}
  1. Test the configuration file for syntactic validity and reload NGINX+. The expected test output is shown below.
sudo nginx -t && sudo nginx -s reload
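Expected result of the syntax test:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful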
  1. Test from a web browser. Launch the UDF Desktop via XRDP.
  • Be sure to log into the remote desktop with the ubuntu user. You may have to modify your remote desktop client settings to prompt for a user upon login.
  • Open the Firefox browser and browse to http://10.1.1.4:8080/dashboard.html
  • Confirm access to the dashboard and that you are not seeing any errors.
  1. Next, you will create the stream configuration for the L4 LB in its own folder on NGINX Edge 01
sudo mkdir /etc/nginx/stream.d
  1. Create a configuration file for these dedicated NGINX+ Edge stream settings.
sudo nano /etc/nginx/stream.d/nginxedge.conf
  1. Edit the configuration file to handle traffic on ports 80 and 443 with these configuration blocks:
  • The resolver directive tells NGINX which DNS server to query and how often (every 10 seconds here). The DNS resolver metrics will be captured in the live activity dashboard
  • The zone directive collects TCP stats, which will also be displayed in the TCP/UDP Upstreams tab on the live activity dashboard
  • The resolve parameter tells NGINX+ to re-query kube-dns for the current list of IP addresses for the FQDN.
# NGINX edge server Layer 4 configuration file
# Use kube-dns ClusterIP address advertised through Calico for the NGINX Plus resolver 
# DNS query interval is 10 seconds

stream {
    log_format stream '$time_local $remote_addr - $server_addr - $upstream_addr';
    access_log /var/log/nginx/stream.log stream;

    # Sample configuration for TCP load balancing 
    upstream nginx-ingress-80 {
    # use the kube-dns service IP address advertised through Calico for the NGINX Plus resolver
        resolver 10.96.0.10 valid=10s status_zone=kube-dns; 
        zone nginx_kic_80 256k;

        server nginx-ingress-svc.nginx-ingress.svc.cluster.local:80 resolve;
    }

    upstream nginx-ingress-443 {
    # use the kube-dns service IP address advertised through Calico for the NGINX Plus resolver
        resolver 10.96.0.10 valid=10s status_zone=kube-dns; 
        zone nginx_kic_443 256k;
        
        server nginx-ingress-svc.nginx-ingress.svc.cluster.local:443 resolve; 
    }

    server {
        listen 80;
        status_zone tcp_server_80; 
        proxy_pass nginx-ingress-80;
    }

    server {
        listen 443;
        status_zone tcp_server_443; 
        proxy_pass nginx-ingress-443;
    } 
}
  1. Next you will need to update the nginx.conf to include the stream configuration
sudo nano /etc/nginx/nginx.conf
  1. Edit the nginx.conf file to include all configuration files in the stream.d folder
user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}

include /etc/nginx/stream.d/*.conf;

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}


# TCP/UDP proxy and load balancing block
#
#stream {
    # Example configuration for TCP load balancing

    #upstream stream_backend {
    #    zone tcp_servers 64k;
    #    server backend1.example.com:12345;
    #    server backend2.example.com:12345;
    #}

    #server {
    #    listen 12345;
    #    status_zone tcp_server;
    #    proxy_pass stream_backend;
    #}
#}
  1. Test the configuration file for syntactic validity and reload NGINX+
sudo nginx -t && sudo nginx -s reload
  1. Test application access from NGINX Edge 01. An abbreviated example response is shown below.
curl -v http://localhost/ --header 'Host:arcadia-finance.f5.local'
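An abbreviated, illustrative response; an HTTP 200 confirms the full path from the edge L4 stream listener through the Ingress Controller to the Arcadia main pods (headers and body will vary):

> GET / HTTP/1.1
> Host:arcadia-finance.f5.local
...
< HTTP/1.1 200 OK
...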
  1. Refresh the webpage of the NGINX+ dashboard on the client desktop.
  • Review TCP/UDP Zones
  • Review TCP/UDP Upstreams. The IPs should match the pod IPs of the nginx-ingress controllers (on k8scontrol01.f5.local, run kubectl get pods --all-namespaces -o wide to compare)
  • Review Resolvers
  1. Test browser access from the UDF Desktop via XRDP.

NGINX+_HA_configurations

WORK IN PROGRESS - NOT COMPLETED. Additional documentation can be found here:

Configure Config Sync

In this section, you will be using NGINX+ Edge 01 server as the primary node and pushing the configuration files to the additional NGINX+ Edge 02 and 03 nodes. Additional documentation can be found here:

  1. Install nginx-sync package on NGINX+ Edge 01
sudo apt-get install nginx-sync
  1. Grant the primary machine ssh access as root to the peer machines. On the primary node, generate an SSH authentication key pair for root and view the public part of the key:
sudo ssh-keygen -t rsa -b 2048

sudo cat /root/.ssh/id_rsa.pub
  1. Copy the output as you will need it in the next step:
ssh-rsa AAAAB3Nz4rFgt...vgaD root@nginxedge01
  1. Launch new web shell sessions to NGINX Edge 02 and 03 and log in as root. Append the public key to root's authorized_keys file as the root user. The from=10.1.1.4 prefix restricts access to only the IP address of the primary node. Replace the AAAA string with your output
echo 'from="10.1.1.4" ssh-rsa AAAAB3Nz4rFgt...vgaD root@node1' >> /root/.ssh/authorized_keys
  1. Modify /etc/ssh/sshd_config as root user:
nano /etc/ssh/sshd_config
  1. Add the following line to /etc/ssh/sshd_config as root user
PermitRootLogin without-password
  1. Reload sshd on each peer (but not the primary) by logging into NGINX Edge 02 and NGINX Edge 03 as root user
service ssh reload
  1. On the primary node, create file /etc/nginx-sync.conf
sudo nano /etc/nginx-sync.conf 
  1. Add the following contents to define the peer nodes and the configuration paths to synchronize
NODES="nginxedge02.f5.local nginxedge03.f5.local"
CONFPATHS="/etc/nginx/nginx.conf /etc/nginx/conf.d /etc/nginx/stream.d"
  1. Run the following command as root user to synchronize configuration and reload NGINX+ on the peers
nginx-sync.sh
  1. To confirm a successful sync, ssh to nginxedge02 and nginxedge03 and verify that the .conf files match. You can also browse to the NGINX+ dashboard pages from the client browser:

High Availability based on keepalived and VRRP

NGINX_Management_Suite

WORK IN PROGRESS.
In this section of the lab, you will configure NGINX Management Suite (NMS). First you will install NMS 2.x. Then you will install the NMS agent on the NGINX+ Edge instances and register them with the NMS server. Finally, you will integrate the NGINX+ Ingress Controller with NMS.

Install NGINX+

The NGINX Management Suite platform uses NGINX as a frontend proxy and for managing user access. Refer to the earlier section of this lab on steps to install NGINX+.

  1. Confirm NGINX+ installation.
nginx -v

Install ClickHouse

NGINX Management Suite uses ClickHouse as a datastore for configuration settings and analytics information such as metrics, events, and alerts. Additional documentation can be found here:

  1. Deploy ClickHouse as a self-managed installation using the LTS package.
sudo apt-get install -y apt-transport-https ca-certificates dirmngr

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754

echo "deb https://packages.clickhouse.com/deb lts main" | sudo tee \
    /etc/apt/sources.list.d/clickhouse.list

sudo apt-get update

sudo apt-get install -y clickhouse-server clickhouse-client
  1. Verify ClickHouse is running.
sudo service clickhouse-server status
  1. Enable ClickHouse so that it starts automatically if the server is restarted.
sudo systemctl enable clickhouse-server

Add NGINX Management Suite Repo to Apt

To install NGINX Management Suite, you need to add the official NGINX Management Suite repo to pull the packages from. To access the repo, you'll need to add the appropriate cert and key files to the /etc/ssl/nginx folder.

  1. Log in to MyF5, or follow the link in the trial activation email, and download the NMS repo .crt and .key files.
  2. Edit nginx-repo.key
sudo nano /etc/ssl/nginx/nginx-repo.key
  1. Replace the contents with the NMS repo key contents.
  2. Edit nginx-repo.crt
sudo nano /etc/ssl/nginx/nginx-repo.crt
  1. Replace the contents with the NMS repo cert contents.
  2. Add the NGINX Management Suite Apt repository.
printf "deb https://pkgs.nginx.com/nms/ubuntu `lsb_release -cs` nginx-plus\n" | sudo tee /etc/apt/sources.list.d/nms.list

sudo wget -q -O /etc/apt/apt.conf.d/90pkgs-nginx https://cs.nginx.com/static/files/90pkgs-nginx
  1. Add the NGINX Signing Key to Apt repository.
wget -O /tmp/nginx_signing.key https://cs.nginx.com/static/keys/nginx_signing.key

sudo apt-key add /tmp/nginx_signing.key

Install Instance Manager

Important: The Instance Manager's administrator username and generated password are displayed in the terminal during the installation. You should make a note of the password and store it securely.

  1. Install the latest version of the Instance Manager module.
sudo apt-get update

sudo apt-get install -y nms-instance-manager
  1. NOTE the output for the administrative credentials. Below is a sample output:
# Start NGINX Management Suite services
sudo systemctl start nms

Admin username: admin

Admin password: Vm8asdfjk3e9r52j23khqgfakaG
  1. Enable the NGINX Management Suite services
sudo systemctl enable nms
sudo systemctl enable nms-core
sudo systemctl enable nms-dpm
sudo systemctl enable nms-ingestion
sudo systemctl enable nms-integrations
  1. Start the NGINX Management Suite Services
sudo systemctl start nms
  1. Restart NGINX+ web server
sudo systemctl restart nginx
  1. Confirm access to NGINX Management Suite from the client's RDP web browser. Log in with the admin credentials noted during installation.

License Instance Manager - Add a License

  1. Access MyF5 Customer Portal from client's RDP web browser and download the NGINX Management Suite .lic file. The default save location will be /home/ubuntu/Downloads/
  2. Login to NGINX Management Suite
  1. Click on the Settings gear icon on the left hand side.
  2. On the Settings sidebar, select Licenses and Upload License.
  3. Locate the .lic file that you downloaded to your system, then select Upload.

Install Agent on NGINX Edge Servers

NMS requires an agent on the NGINX instances to report telemetry back to the NMS environment. The agent can be installed over an insecure connection or a secure connection.

  1. Install the agent from the command line on nginxedge01.f5.local via an insecure connection.
curl -k https://10.1.1.12/install/nginx-agent | sudo sh
  1. Start the nginx agent software.
sudo systemctl start nginx-agent
  1. Enable the NGINX Agent to start on boot
sudo systemctl enable nginx-agent
  1. Verify the NGINX Agent is running and registered. Run the following from the nginxedge01.f5.local command line.
curl -k -u admin:Vm8asdfjk3e9r52j23khqgfakaG https://10.1.1.12/api/platform/v1/systems | jq
  1. You can also launch a new web browser from the RDP client and browse to the following page:

OPTIONAL - Multiple Ingresses

  1. Deploy another application - hipster online store
git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
  1. Modify the manifest to remove the frontend-external LoadBalancer service - we will use NGINX+ instead
nano microservices-demo/release/kubernetes-manifests.yaml
  1. Delete the following lines:
apiVersion: v1
kind: Service
metadata: 
  name: frontend-external
spec:  
  type: LoadBalancer  
  selector:    
    app: frontend  
  ports:  
  - name: http    
    port: 80    
    targetPort: 8080
---
  1. Create Namespace for new app
kubectl create ns online-boutique-app
  1. Deploy app
kubectl -n online-boutique-app apply -f microservices-demo/release/kubernetes-manifests.yaml
  1. Get status of pods
kubectl get pods -n online-boutique-app
  1. Get services
kubectl get services -n online-boutique-app
