Wednesday 5 August 2020

The Way Kubernetes Works

Figure 1: The ‘three+one’ cluster architecture

Developers and administrators in today's IT-transformed world are well aware that container orchestration plays a vital role in day-to-day deployment and management scenarios. There are plenty of orchestration tools, Kubernetes being one of the most popular. In this article, we take a quick look at the architecture of Kubernetes and at how a service account can be created in it.

A container management tool is capable of managing containers running on a single host, but it is not intelligent enough to manage containers across multiple hosts, providing high availability, replication and load balancing for the pods that host an application. To overcome these limitations, we can use Kubernetes as an orchestration tool running on top of container management systems like Docker, Podman or rkt.

Architecture of Kubernetes

As seen in Figure 1, Kubernetes here has a 'three plus one' cluster architecture, in which three hosts are nodes (slaves) and one is the master server (manager), which takes care of the deployment and management of an application. The application, in the form of a pod, gets deployed on the nodes. Each node runs a kubelet, which talks to the REST API of the master and to the container runtime on that node. On each node, including the master, a network plugin runs for the overlay network. In addition to these, etcd, the scheduler and the controller manager run on the master.
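If kubectl access to such a cluster is available, this layout can be verified quickly; a minimal check (assuming kubectl is already configured against the cluster) is:

$ kubectl get nodes -o wide
$ kubectl get pods -n kube-system

The first command lists the master and the worker nodes, while the second shows the control-plane pods (etcd, scheduler, controller manager) and the network plugin running in the kube-system namespace.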

Isolation of the pod and service layer in Kubernetes

By default, Kubernetes uses internal cluster networking for communication between nodes and pods, pods and nodes, services and pods, and pods and services. In Kubernetes, a Service is an object whose endpoints are the IP addresses of the backing pod(s). A Service can be of type ClusterIP, NodePort or LoadBalancer. By default, the cluster IP is not reachable from the outside world.

A Service is thus an abstraction layer for security in Kubernetes, as it prevents public requests from hitting a pod directly. We can use an Ingress, a NodePort or a LoadBalancer to expose the application to the public world.
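For example, a NodePort Service exposes an application on a port of every node. The following is a minimal sketch; the Service name, label selector and port numbers are assumptions for illustration only:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-svc            # hypothetical name
spec:
  type: NodePort            # reachable on every node's IP at the nodePort
  selector:
    app: demo               # hypothetical pod label
  ports:
  - port: 80                # port on the cluster IP
    targetPort: 8080        # port the container listens on
    nodePort: 30080         # node port (must be in the 30000-32767 range)
EOF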

The security context in Kubernetes

Any pod running on Kubernetes can be secured by adding a security context to it. A security context defines the privileges and access control settings for a pod or container.

The following is an example of a pod using a security context.

pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: myvol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: myvol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false

In the above example, any process in the container runs with UID (user ID) 1000 and GID (group ID) 3000. Files created in the volume mounted at /data/demo are owned by GID 2000, the default being 0 (root). The allowPrivilegeEscalation: false setting further prevents processes from gaining more privileges than their parent. This way, one can override the default configuration of a pod and achieve the next level of security.
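Assuming the pod above has been created, this behaviour can be verified by running a couple of commands inside it:

$ kubectl exec security-context-demo -- id                  # should report uid=1000 and gid=3000
$ kubectl exec security-context-demo -- ls -ld /data/demo   # the mounted volume should be group-owned by GID 2000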

ServiceAccount in Kubernetes

Kubernetes uses RBAC (role based access control) to grant access to cluster resources, through either a ClusterRole or a Role bound to a user or service account. Let us look at how a service account can be created using YAML.

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user1
  namespace: kube-system
EOF
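On its own, a service account has no permissions; it has to be bound to a Role or ClusterRole. The following is a minimal sketch that binds the account created above to the built-in, read-only 'view' ClusterRole (the binding name is an assumption):

cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: user1-view                     # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                           # default read-only ClusterRole shipped with Kubernetes
subjects:
- kind: ServiceAccount
  name: user1
  namespace: kube-system
EOF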

Popular Performance Monitoring Tools for Container Technology

Containers, as a microservice-oriented technology, play a significant role in virtualising cloud applications. This article takes a quick look at the most popular performance monitoring tools for container technology, which can ease the task of IT admins.

Containerisation is a technology that virtualises applications in a lightweight way, consuming fewer resources and less time. This has led to the development of various container technologies like LXC, Docker and RKT. All these technologies work on the same principle: the applications share their host OS kernel and contain only the appropriate binaries and libraries, which makes them smaller in size compared to a virtual machine.


Figure 1: Container monitoring with the Docker stats command

Figure 2: Installing the cAdvisor container monitoring tool

Why monitoring of containers is important

The monitoring of containers plays a significant role for developers because it gives a quick overview of the running applications and ensures that containers are meeting their expected goals. This helps to catch problems early and resolve issues quickly.

Popular tools for performance monitoring of containers

As containers run in their own namespaces, traditional Linux monitoring tools such as top, ps, tcpdump and lsof, from the host system, do not help to monitor the activity happening within a container. A proper understanding of the following tools will help researchers, practitioners and developers efficiently monitor the applications deployed on the container virtualisation platform. So let’s look at these popular container monitoring tools and their installation steps.

Docker Stats

To monitor the resource usage of Docker containers, the simplest solution is the docker stats command. It is built into the Docker daemon's open source API and provides the resource usage statistics of a running Docker container in terms of CPU, RAM, network and block I/O usage. The following command can be used to get the performance metrics of a running Docker container, as depicted in Figure 1:

$ docker stats <container name>
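The output can also be narrowed down and taken as a one-shot snapshot using the command's built-in --no-stream and --format options, for example:

$ docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"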

Figure 3: Dashboard for the cAdvisor monitoring tool

Figure 4: Installing the Prometheus container monitoring tool

cAdvisor

cAdvisor stands for 'container advisor' and was created by Google in 2014. It is also an open source tool to monitor Docker containers. However, unlike Docker Stats, which is based on the command line interface (CLI), cAdvisor also provides a GUI for viewing the API information. In cAdvisor, the isolation of the shared resources used by multiple container applications is based on lmctfy's API. To set up and run cAdvisor, execute the following command, as shown in Figure 2:

$ docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest

cAdvisor can now be accessed at http://localhost:8080, as shown in Figure 3.
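Besides the web UI, cAdvisor also exposes its metrics over HTTP at the /metrics endpoint, which is what tools like Prometheus (discussed next) typically scrape. A quick check from the host could look like this:

$ curl -s http://localhost:8080/metrics | head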

Prometheus

Prometheus is another GUI based open source tool for monitoring Docker containers, originally developed at SoundCloud. Going beyond what Docker Stats and cAdvisor offer, Prometheus also provides an alerting mechanism based on user-defined rules. It makes use of exporters to capture container metrics and stores them in its own time-series database. This container monitoring tool can be installed by executing the following commands.

Step 1: Download the latest release of Prometheus, and then extract it:

$ tar xvfz prometheus-*.tar.gz
$ cd prometheus-*

Step 2: From the extracted Prometheus directory, run the following command, as depicted in Figure 4:

$ ./prometheus --config.file=prometheus.yml

Prometheus can be accessed at http://localhost:9090, as shown in Figure 5.
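To actually collect container metrics, Prometheus needs a scrape target. The following is a minimal sketch of a prometheus.yml that scrapes the cAdvisor container started earlier; the job name and scrape interval are assumptions and can be tuned as needed:

cat > prometheus.yml <<EOF
global:
  scrape_interval: 15s               # assumed interval
scrape_configs:
  - job_name: cadvisor               # hypothetical job name
    static_configs:
      - targets: ['localhost:8080']  # the cAdvisor endpoint started earlier
EOF

Restart Prometheus with the same --config.file flag for the change to take effect.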


Figure 5: Dashboard for the Prometheus monitoring tool

Sensu

Sensu is a container monitoring tool that supports all three container technologies, namely LXC, Docker and RKT. Sensu is self-hosted and offers a centralised metrics service. However, in a production environment, the deployment of this tool depends upon various complementary services such as the Sensu API and Sensu Core, as shown in Figure 6. The Sensu monitoring tool can be installed using the following commands.

Step 1: Install the Sensu backend, as follows. Add the Sensu repository:

$ curl -s https://packagecloud.io/install/repositories/sensu/stable/script.deb.sh | sudo bash

Next, install the sensu-go-backend package:

$ sudo apt-get install sensu-go-backend

Step 2: Install sensuctl. Add the Sensu repository:

$ curl -s https://packagecloud.io/install/repositories/sensu/stable/script.deb.sh | sudo bash

Next, install the sensu-go-cli package:

$ sudo apt-get install sensu-go-cli

Step 3: Install Sensu agents. Add the Sensu repository:

$ curl -s https://packagecloud.io/install/repositories/sensu/stable/script.deb.sh | sudo bash

Next, install the sensu-go-agent package:

$ sudo apt-get install sensu-go-agent

Start sensu-agent using a service manager:

$ service sensu-agent start
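The agent reports to a running backend, so the backend service has to be started and sensuctl pointed at it. A minimal sketch, assuming a single-host setup with the default API port and placeholder credentials, is:

$ sudo service sensu-backend start
$ sensuctl configure -n --url http://127.0.0.1:8080 --username admin --password 'P@ssw0rd!'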

Figure 6: Installing the Sensu container monitoring tool

Figure 7: Dashboard of the Sysdig container monitoring tool

Sysdig

Sysdig is one of the most widely adopted container monitoring tools, providing support for alerts, data aggregation and visualisation. It is easy to deploy and provides a simple interface, where users can see information about CPU, memory and network usage. Thus, Sysdig is a good choice for monitoring the performance of running containers. It can be installed and launched by executing the following commands:

$ curl -s https://s3.amazonaws.com/download.draios.com/stable/install-sysdig | bash
$ csysdig -vcontainers

A screenshot of the dashboard of this container monitoring tool is shown in Figure 7.
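In addition to the interactive csysdig view, sysdig ships with chisels (small scripts) for quick, non-interactive summaries; for instance, the following (run as root) lists the containers consuming the most CPU:

$ sudo sysdig -c topcontainers_cpu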

In the landscape of container technology, making an informed choice from the variety of container monitoring tools is the need of the hour. This article should help IT admins make an informed choice about container performance monitoring tools that can be deployed easily, with features such as an alert mechanism, support for different data types, and dashboard visualisation.