Getting Started with Kubernetes

Taufiq Ibrahim
10 min read · Mar 17, 2020


So, what is this all about? Another hype? Another trending piece of tech? Nope, this is just a simple article to show you how to get started with Kubernetes and walk through some basic operations on it. Another reason: I found it difficult to find a written article covering an end-to-end Kubernetes deployment.

The Equipment: Kubernetes

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

The Stage: Amazon EKS

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service.

The Show

Creating Kubernetes Cluster on Amazon EKS

Let’s start by writing a simple YAML file and storing it as eks.yaml. As simple as this:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: ap-southeast-1
managedNodeGroups:
  - name: job-manager
    instanceType: t3a.micro
    desiredCapacity: 2
    minSize: 2
    maxSize: 10
    tags:
      'Project': 'KubernetesNoob'

We will use a cool tool called eksctl, a simple CLI for creating Kubernetes clusters on EKS. Make sure you have already run aws configure to set up your credentials.

$ eksctl create cluster -f eks.yaml

The above command basically does the following:

  • Creates a new cluster using the configuration file eks.yaml.
  • The cluster will be named my-cluster and created in the Singapore region (ap-southeast-1).
  • It will have 2 t3a.micro EC2 instances, tagged with Project: KubernetesNoob.

It will take a while to finish, around 10 to 15 minutes. While waiting for the cluster to be ready, let’s create a small Flask application.

Creating Simple Flask Webserver

Create a new requirements.txt file and populate it with this.

aniso8601==8.0.0
attrs==19.3.0
click==7.1.1
Flask==1.1.1
flask-restplus==0.13.0
importlib-metadata==1.5.0
itsdangerous==1.1.0
Jinja2==2.11.1
jsonschema==3.2.0
MarkupSafe==1.1.1
pyrsistent==0.15.7
pytz==2019.3
six==1.14.0
Werkzeug==0.16.1
zipp==3.1.0

Create a new Python 3.7 virtual environment, activate it, and install the dependencies.

$ virtualenv venv --python=python3.7
$ . venv/bin/activate
(venv)$ pip install -r requirements.txt

Create a directory called app and populate it with a file called main.py with the contents below.

import platform

from flask import Flask
from flask_restplus import Resource, Api

app = Flask(__name__)
api = Api(app)

@api.route('/uname')
class HostName(Resource):
    def get(self):
        return {'uname': platform.uname()}

if __name__ == '__main__':
    app.run(host="0.0.0.0", debug=True)

Run the webserver using:

(venv)$ python app/main.py

Check it on http://localhost:5000 .

my-flask service running locally using Python
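As a side note on the response payload: platform.uname() returns a named tuple, so flask-restplus will serialize it as a plain JSON array. If you prefer named keys, a minimal sketch (the field selection here is my own, not part of the original app):

```python
import json
import platform

# platform.uname() returns a named tuple; picking fields by name
# yields a self-describing JSON object instead of a bare array.
info = platform.uname()
payload = {"system": info.system, "node": info.node, "machine": info.machine}
print(json.dumps(payload))
```
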

Cool, now let’s Dockerize our new app. Create a file called Dockerfile.

FROM python:3.7-alpine
RUN mkdir /app
WORKDIR /app
ADD ./app /app/
ADD ./requirements.txt /app/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/app/main.py"]

That file is a set of instructions Docker will use to build the image. For this simple application, Docker is going to:

  1. Get the official Python Base Image for version 3.7 from Docker Hub.
  2. In the image, create a directory named app.
  3. Set the working directory to that new app directory.
  4. Copy the contents of the local app directory and requirements.txt into that new directory in the image.
  5. Run the pip installer (just like we did earlier) to pull the requirements into the image.
  6. Inform Docker the container listens on port 5000.
  7. Configure the starting command to use when the container starts.

Now, we must build the container image.

$ docker build -f Dockerfile -t my-flask-image:latest .

After the image build process finishes, we need to check whether the image is listed in our local Docker.

$ docker images
REPOSITORY       TAG      IMAGE ID       CREATED          SIZE
my-flask-image   latest   0dc76af1b731   26 seconds ago   123MB

Our Docker image is successfully listed and the size is 123MB. Let’s try to run this image locally before moving further.

$ docker run -p 5001:5000 my-flask-image

Test it on http://localhost:5001, since we mapped the Docker container's port 5000 to host port 5001.

my-flask service running locally on Docker
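If you'd rather sanity-check the port mapping from a script than a browser, here is a minimal sketch (the port_open helper is my own, not part of Docker):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    # Attempt a TCP connection; success means something is listening there.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the container running: port_open("localhost", 5001) should be True.
```
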

Now, let’s check our EKS cluster. It should be ready by now.

Upload Docker Image to Amazon ECR

Currently our image, my-flask-image, is only available locally. To make it available to our cluster, it needs to be pushed to a registry on the internet. Since we’re using AWS services, we will use another service called Amazon ECR.

Create a new repository called my-flask.

Click the View push commands button and follow the instructions to upload your Docker image.

Depending on your internet speed, the upload might take a while.

Get into The World of Kubernetes

Now, we have reached our new world…Kubernetes. Let’s recap first:

  • We have a running Kubernetes cluster on AWS EKS
  • We already have our app image in AWS ECR

To interact with the Kubernetes cluster, we need a tool called kubectl. Please follow the installation guide here.

Let’s see what we can do with this tool. You can find the cheatsheet provided by Kubernetes at https://kubernetes.io/docs/reference/kubectl/cheatsheet/.

First, let’s check all the services.

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   58m

Let’s see all available nodes (EC2 instances).

$ kubectl get nodes
NAME                                                STATUS   ROLES    AGE   VERSION
ip-192-168-45-235.ap-southeast-1.compute.internal   Ready    <none>   57m   v1.14.9-eks-1f0ca9
ip-192-168-95-3.ap-southeast-1.compute.internal     Ready    <none>   57m   v1.14.9-eks-1f0ca9

I know that you’re really excited now and come up with question, “Show us how to deploy the app?”

Okay, calm down. Before we can deploy the app to the cluster, we need to create one more YAML file; we will call it deployment.yaml.

apiVersion: v1
kind: Service
metadata:
  name: my-flask-service
spec:
  selector:
    app: my-flask
  ports:
    - protocol: "TCP"
      port: 6000
      targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-flask
spec:
  selector:
    matchLabels:
      app: my-flask
  replicas: 2
  template:
    metadata:
      labels:
        app: my-flask
    spec:
      containers:
        - name: my-flask
          image: <IMAGE_URI_FROM_ECR>
          ports:
            - containerPort: 5000

The YAML above tells the Kubernetes cluster what we want:

  • A load-balanced Service exposing port 6000
  • Two replicas of the my-flask container, running the Docker image hosted on AWS ECR
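Note that deployment.yaml is a multi-document YAML file: the Service and the Deployment are separated by a --- line. A naive sketch of how that separator splits the file into documents (real tooling should use a proper YAML parser such as PyYAML):

```python
# Minimal illustration: split a multi-document manifest on the
# "---" separator and pull out each document's kind.
manifest = """\
apiVersion: v1
kind: Service
---
apiVersion: apps/v1
kind: Deployment
"""

docs = [d.strip() for d in manifest.split("---") if d.strip()]
kinds = [line.split(":", 1)[1].strip()
         for d in docs
         for line in d.splitlines()
         if line.startswith("kind:")]
print(kinds)  # ['Service', 'Deployment']
```
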

Use kubectl to send the YAML file to Kubernetes by running the following command:

$ kubectl apply -f deployment.yaml
service/my-flask-service created
deployment.apps/my-flask created

Now, the question is “How do I access those running containers?”.

Enter kubectl proxy. Run this command:

$ kubectl proxy --port 8080

Your API will be accessible at:

http://localhost:8080/api/v1/namespaces/default/services/my-flask-service/proxy/uname
my-flask service running on EKS Kubernetes cluster
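The path follows kubectl proxy's predictable scheme: /api/v1/namespaces/<namespace>/services/<service>/proxy/<path>. A small helper to build such URLs (the proxy_url function is my own sketch, not a kubectl feature):

```python
def proxy_url(service: str, path: str, namespace: str = "default",
              port: int = 8080) -> str:
    # kubectl proxy exposes every Service under this path scheme.
    return (f"http://localhost:{port}/api/v1/namespaces/{namespace}"
            f"/services/{service}/proxy/{path}")

print(proxy_url("my-flask-service", "uname"))
# http://localhost:8080/api/v1/namespaces/default/services/my-flask-service/proxy/uname
```
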

Deploy the Kubernetes Web UI Dashboard

This part follows https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html.

Do below steps locally:

  • Copy and paste the commands below into your terminal window and press Enter to execute them. These commands download the latest release, extract it, and apply the version 1.8+ manifests to your cluster.
DOWNLOAD_URL=$(curl -Ls "https://api.github.com/repos/kubernetes-sigs/metrics-server/releases/latest" | jq -r .tarball_url)
DOWNLOAD_VERSION=$(grep -o '[^/v]*$' <<< $DOWNLOAD_URL)
curl -Ls $DOWNLOAD_URL -o metrics-server-$DOWNLOAD_VERSION.tar.gz
mkdir metrics-server-$DOWNLOAD_VERSION
tar -xzf metrics-server-$DOWNLOAD_VERSION.tar.gz --directory metrics-server-$DOWNLOAD_VERSION --strip-components 1
kubectl apply -f metrics-server-$DOWNLOAD_VERSION/deploy/1.8+/
  • Verify that the metrics-server deployment is running the desired number of pods with the following command.
$ kubectl get deployment metrics-server -n kube-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   0/1     1            0           27s
  • Use the following command to deploy the Kubernetes dashboard.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

Create an eks-admin Service Account and Cluster Role Binding

  • Create a file called eks-admin-service-account.yaml with the text below. This manifest defines a service account and cluster role binding called eks-admin.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system
  • Apply the service account and cluster role binding to your cluster.
$ kubectl apply -f eks-admin-service-account.yaml

Connect to Kubernetes dashboard

To connect to the Kubernetes dashboard, we need to do below steps:

  • Retrieve an authentication token for the eks-admin service account. Copy the <authentication_token> value from the output. You use this token to connect to the dashboard.
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
  • Start the kubectl proxy if it is not already running
$ kubectl proxy --port 8080
What???

Let’s explain what just happened. The URL gave us the error message no endpoints available for service “kubernetes-dashboard”.

We need to see the detail by issuing this command:

$ kubectl get all --namespace kubernetes-dashboard

Please note that we added the --namespace kubernetes-dashboard argument. The reason is that the dashboard services are deployed in their own namespace. More on namespaces here. So there are 2 pods currently in the Pending state. Let’s dig into more detail for pod/dashboard-metrics-scraper-69fcc6d9df-fbkt7 by describing it.

$ kubectl describe pod/dashboard-metrics-scraper-69fcc6d9df-fbkt7 --namespace kubernetes-dashboard

Now we’ve found the reason: the pods are in the Pending state due to insufficient resources. Previously we only deployed 2 EC2 instances.

$ kubectl get nodes
NAME                                                STATUS   ROLES    AGE    VERSION
ip-192-168-45-235.ap-southeast-1.compute.internal   Ready    <none>   102m   v1.14.9-eks-1f0ca9
ip-192-168-95-3.ap-southeast-1.compute.internal     Ready    <none>   102m   v1.14.9-eks-1f0ca9

So, we need to spin up more resources.

First, let’s get back to eks.yaml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: ap-southeast-1
managedNodeGroups:
  - name: job-manager
    instanceType: t3a.micro
    desiredCapacity: 2
    minSize: 2
    maxSize: 10
    tags:
      'Project': 'KubernetesNoob'

Previously, we told EKS to spin up a cluster with 2 instances, as stated in the desiredCapacity value.

Use this command to list the node groups:

$ eksctl get nodegroup --cluster=my-cluster

Now, we want to scale it up. Note that the new value is capped by maxSize, which is currently set to 10. Let’s scale to a desired value of 6.

$ eksctl scale nodegroup --name=job-manager --cluster=my-cluster --nodes=6
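The bound described above can be sketched as a simple clamp (clamp_desired is a hypothetical helper for illustration, not part of eksctl):

```python
def clamp_desired(desired: int, min_size: int, max_size: int) -> int:
    # A managed node group keeps its node count within [minSize, maxSize].
    return max(min_size, min(desired, max_size))

print(clamp_desired(6, 2, 10))   # 6  -- our requested value fits the bounds
print(clamp_desired(20, 2, 10))  # 10 -- capped by maxSize
```
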

While waiting, you can go to your EC2 dashboard and see new instances being provisioned.

Once the scaling has finished, we can verify using the same command.

$ eksctl get nodegroup --cluster=my-cluster
The nodegroup has been resized

Let’s check what happened with our kubernetes-dashboard services.

$ kubectl get all --namespace kubernetes-dashboard

Hooray, the state has changed to Running.

Let’s try to access the URL: http://localhost:8080/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#!/login

Enter your token and click Sign in.

And…..

That’s your dashboard running!

This is the Workload view containing our my-flask service running using 2 pods.

Load Testing

The world is not enough without performing some load testing. Let’s use Locust to test our my-flask app.

Install Locust using this command:

$ pip install locustio

Create a new file called locustfile.py and put the code below in it.

from locust import HttpLocust, TaskSet, task, between

class MyFlaskTasks(TaskSet):
    @task
    def about(self):
        self.client.get("/api/v1/namespaces/default/services/my-flask-service/proxy/uname")

class WebsiteUser(HttpLocust):
    task_set = MyFlaskTasks
    wait_time = between(5, 15)

Run Locust test using this command:

$ locust --host=http://localhost:8080

Open your browser and go to http://localhost:8089

Set Number of total users to simulate to 20000 and Hatch rate to 2000.
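A rough back-of-the-envelope for the load those settings generate, assuming each simulated user issues about one request per average wait interval (the between(5, 15) in our locustfile averages 10 seconds):

```python
def approx_rps(users: int, min_wait: float, max_wait: float) -> float:
    # Each simulated user issues roughly one request per average wait interval.
    avg_wait = (min_wait + max_wait) / 2
    return users / avg_wait

print(approx_rps(20000, 5, 15))  # 2000.0 requests/second at steady state
```
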

You will see the load reflected in the Kubernetes Dashboard.

Please note that this is not intended to demonstrate a heavy load testing use case. Instead, it shows that we can also use the Kubernetes Dashboard to see what happens when the cluster is under load.

Cleaning Up

Now, we can clean up everything we’ve done with a single command.

$ eksctl delete cluster --region=ap-southeast-1 --name=my-cluster

The above command takes care of the entire clean-up process, including:

  • Cleaning up the CloudFormation stacks
  • Deleting the EC2 instances
  • Deleting the EKS cluster

Note that you will have to delete the ECR Docker image manually.

Recap

So, as a recap:

  • We created a simple Kubernetes cluster on Amazon EKS using configuration files
  • We created a simple Python Flask webserver and containerized it using Docker
  • We pushed the containerized Flask app to Amazon ECR
  • We deployed the containerized Flask app on Kubernetes
  • We explored how to use the Kubernetes Dashboard
  • As a bonus, we even did some load testing using Locust

I hope this article can be useful.
