Sunday, July 12, 2020

AWS-EKS-TASK





EKS (Elastic Kubernetes Service) is one of the great services offered by AWS; it helps in running Kubernetes (K8s) on the AWS Cloud. It is fully managed by Amazon, and it automatically detects and restarts instances that have gone down or been terminated.


Let's get started:

Creating a K8s cluster using AWS EKS:

We will be creating the cluster using an eksctl config file, because it is the best way: it creates or destroys the cluster in one command, and since it is a file we can share it and reuse it across multiple systems.
First of all, create a file named clusterdep with the extension yml, so the file name should look like clusterdep.yml, and then write the code given below.

The eksctl configuration used is given below:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lwcluster
  region: us-east-1

nodeGroups:
  - name: ng1
    desiredCapacity: 1
    instanceType: t2.micro
    ssh:
        publicKeyName: myredhatkey

  - name: ng2
    desiredCapacity: 1
    instanceType: t2.small
    ssh:
        publicKeyName: myredhatkey   

Now let's deploy our eksctl configuration using the following command:

eksctl create cluster -f clusterdep.yml



Here we can see that our cluster is ready and in the active state.

 

Now that we have created our cluster, let's make some more changes to it and deploy WordPress on it; it's going to be very interesting!



Let's modify the eksctl config file made earlier, i.e. clusterdep.yml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lwcluster
  region: us-east-1

nodeGroups:
  - name: ng1
    desiredCapacity: 1
    instanceType: t2.micro
    ssh:
        publicKeyName: myredhatkey

  - name: ng2
    desiredCapacity: 1
    instanceType: t2.small
    ssh:
        publicKeyName: myredhatkey
 
  - name: ngmixed
    minSize: 2
    maxSize: 3
    instanceDistribution:
      maxPrice: 0.017
      instanceTypes: ["t3.small","t3.medium"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
        publicKeyName: myredhatkey

Now let's create a web-st-pvc.yml file:

apiVersion: v1
kind: PersistentVolumeClaim

metadata:
  name: lwpvc1

spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
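
To show how this claim gets consumed, here is a minimal Deployment sketch that mounts lwpvc1 into a WordPress container. This is an illustration only: the image tag, labels, and mount path are assumptions, not taken from the original post.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:5.4-apache   # illustrative image tag
          ports:
            - containerPort: 80
          volumeMounts:
            - name: wordpress-data
              mountPath: /var/www/html
      volumes:
        - name: wordpress-data
          persistentVolumeClaim:
            claimName: lwpvc1           # the PVC defined above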

Once our WordPress service is up, we will fetch its DNS name and paste it in the browser.
It is worth mentioning that our website is exposed through a Service of type LoadBalancer, so traffic stays distributed and the load on the servers is reduced; we are also using a PVC, so our data is persistent.
We will use a kustomization file with secret keys.

kubectl create -k ./
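
The kustomization file mentioned above could look like the following minimal sketch, run from the directory that holds it. The secret name mysql-pass and the resource file names are assumptions modelled on the standard Kubernetes WordPress example, not taken from the post.

# kustomization.yaml - hypothetical sketch
secretGenerator:
  - name: mysql-pass
    literals:
      - password=YOUR_PASSWORD
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml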

Now we can launch our WordPress site.





Here we can see that our WordPress is configured.

Now let's use the Helm package manager to launch Grafana and Prometheus:

HELM: It is a package manager (chart manager) for Kubernetes. Helm Hub gives us K8s-ready applications. The server-side component used by Helm (v2) is Tiller.

For initialising Helm we will give the command :

helm init

Then we are required to run the following commands:

helm repo add stable https://kubernetes-charts.storage.googleapis.com/

helm repo list

helm repo update

Now, proceeding further, we will create Tiller; to do that we will run the following commands:

kubectl -n kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

helm init --service-account tiller

kubectl get pods --namespace kube-system

helm init --service-account tiller --upgrade
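
The service account and cluster role binding created imperatively above can equivalently be written as a declarative manifest. This is a sketch; the names match the commands above.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system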

Now we will install Prometheus with the help of Helm, for doing that we will run the following commands:

kubectl create namespace prometheus

helm install stable/prometheus --namespace prometheus --set alertmanager.persistentVolume.storageClass="gp2" --set server.persistentVolume.storageClass="gp2"
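
As an alternative to the --set flags, the same values can be kept in a file and passed to helm install with -f. The file name values-prometheus.yaml here is a hypothetical choice.

# values-prometheus.yaml - same settings as the --set flags above
alertmanager:
  persistentVolume:
    storageClass: gp2
server:
  persistentVolume:
    storageClass: gp2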

kubectl get svc -n prometheus

kubectl -n prometheus port-forward svc/whimsical-buffalo-prometheus-server 8888:80

 
We can see that Prometheus is successfully configured.

Now let's move further and install Grafana; for doing that we will need to run some specific commands as follows:

kubectl create namespace grafana

helm install stable/grafana --namespace grafana --set persistence.storageClassName="gp2" --set adminPassword='GrafanaAdmin' --set datasources."datasources\.yaml".apiVersion=1 --set datasources."datasources\.yaml".datasources[0].name=Prometheus --set datasources."datasources\.yaml".datasources[0].type=prometheus --set datasources."datasources\.yaml".datasources[0].url=http://prometheus-server.prometheus.svc.cluster.local --set datasources."datasources\.yaml".datasources[0].access=proxy --set datasources."datasources\.yaml".datasources[0].isDefault=true --set service.type="LoadBalancer"
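
Those long --set flags are easier to read as a values file passed with -f; the sketch below mirrors them. The file name values-grafana.yaml is a hypothetical choice.

# values-grafana.yaml - same settings as the --set flags above
persistence:
  storageClassName: gp2
adminPassword: GrafanaAdmin
service:
  type: LoadBalancer
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        isDefault: true
        url: http://prometheus-server.prometheus.svc.cluster.local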

kubectl get secret fair-numbat-grafana --namespace grafana -o yaml

 

We can see that Grafana has been configured successfully.

Now let's configure the Fargate cluster:

For creating a Fargate cluster we just need to write a simple config file.
Let the name of the file be farcluster.yml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: fargate-lwcluster
  region: ap-southeast-1

fargateProfiles:
  - name: fargate-default
    selectors:
      - namespace: kube-system
      - namespace: default

So, as we have created the file, now it's time to run it. As usual, we will run the commands given below: first we create the cluster, then we verify it and check which pods are scheduled on Fargate.

eksctl create cluster -f farcluster.yml

eksctl get cluster --region ap-southeast-1

kubectl get ns

kubectl get pods -n kube-system -o wide

After running the above commands our cluster is created, yay!!

You can find all the scripts here:
