The name Kubernetes originates from Greek, meaning helmsman or pilot, and is the root of governor and cybernetic. K8s is an abbreviation derived by replacing the 8 letters “ubernete” with “8”.
Kubernetes provides the following benefits to users:
Deploying applications easily
Scaling applications on the fly
Rolling out new features seamlessly
Limiting hardware usage to required resources only
Although IPS can also be deployed on other platforms such as AWS ECS, it is primarily focused on Kubernetes (K8s) (and its derivatives such as OpenShift). A basic understanding of K8s architecture is an added advantage if you are:
deploying a new IPS installation on top of an existing K8s cluster,
customizing or hacking around with an existing IPS installation on K8s, or
debugging or troubleshooting an IPS installation.
The official K8s documentation is a valuable resource for understanding K8s in detail.
While K8s does not come with production support (although it is production-ready in most aspects), many third-party vendors provide production support for either K8s or its derivatives. We recommend Red Hat OpenShift, a well-established K8s derivative with a comprehensive documentation space.
For our purpose, K8s can be simplified to the following viewpoints:
Master and node service components
Networking, discovery and container provisioning
Pods, replication controllers, deployments and services
K8s architecture can be depicted by the following diagram.
In a nutshell, K8s consists of a master node (multiple masters are also possible) controlling a set of processes (pods) running on a set of nodes (minions). This orchestration is made possible by a collection of system services running on the master and each of the minions, communicating via a shared network:
kube-apiserver: provisions the K8s API used by other infra-level components as well as external users
kube-controller-manager: manages pod replications ([replication controllers], [deployments], [pet sets] etc.) that provide high availability for work units (pods) deployed inside K8s
kube-scheduler: manages scheduling of pods onto nodes based on resource availability and various other customizable constraints
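The scheduler's core decision can be pictured as a resource-fit check: place the pod on a node with enough free capacity. The following is an illustrative sketch only (the real kube-scheduler also weighs affinity, taints, and many other constraints); all names and numbers here are invented:

```python
# Toy model of scheduling: pick the first node whose free CPU and memory
# can accommodate the pod's request. Not the real kube-scheduler algorithm.

def schedule(pod_request, nodes):
    """Return the name of the first node that can fit the pod, or None."""
    for node in nodes:
        free_cpu = node["cpu_capacity"] - node["cpu_used"]
        free_mem = node["mem_capacity"] - node["mem_used"]
        if free_cpu >= pod_request["cpu"] and free_mem >= pod_request["mem"]:
            return node["name"]
    return None  # no node can host the pod; it stays pending

nodes = [
    {"name": "minion-1", "cpu_capacity": 4, "cpu_used": 4, "mem_capacity": 8, "mem_used": 2},
    {"name": "minion-2", "cpu_capacity": 4, "cpu_used": 1, "mem_capacity": 8, "mem_used": 3},
]
print(schedule({"cpu": 2, "mem": 4}, nodes))  # minion-2 (minion-1 has no free CPU)
```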
etcd is a distributed key-value store that provides a reliable way to store data across a cluster of machines. In Kubernetes, clustered etcd replicates the storage to all master instances in a cluster. This guarantees the high availability of the system.
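The availability guarantee rests on quorum writes: etcd considers a write committed once a majority of the cluster has acknowledged it, so losing a minority of masters loses no data. The toy model below illustrates only the quorum arithmetic, not the actual Raft protocol etcd implements:

```python
# Illustrative quorum arithmetic for a replicated key-value store.

def quorum(cluster_size):
    """Smallest majority of the cluster."""
    return cluster_size // 2 + 1

def write_committed(acks, cluster_size):
    """A write is durable once a majority of replicas acknowledged it."""
    return acks >= quorum(cluster_size)

# A 3-master cluster needs 2 acks, so it tolerates one master failure:
print(quorum(3))                              # 2
print(write_committed(acks=2, cluster_size=3))  # True
print(write_committed(acks=1, cluster_size=3))  # False
```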
In Kubernetes, the servers that execute work are referred to as nodes (historically, minions). Node servers run the following components to communicate with the master components.
Each individual node runs a Docker service, which is used to execute containers in an isolated environment. Docker must be configured to use flannel; otherwise containers would not be placed on the dedicated subnet that flannel allocates to each server.
Each node's main point of contact with the cluster is the kubelet service. The kubelet communicates with the master to identify the commands to be executed and the work to be carried out. It is also responsible for maintaining the state of work on its node (minion).
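Maintaining the state of work can be pictured as a reconciliation loop: the kubelet diffs the pods the master assigned to its node against what is actually running, then starts and stops work to converge. A minimal sketch with invented names:

```python
# Toy reconciliation loop in the spirit of the kubelet: compare desired
# state (what the master assigned to this node) with observed state
# (what is running), and compute the actions needed to converge.

def reconcile(desired_pods, running_pods):
    """Return (to_start, to_stop) so the node matches the master's view."""
    to_start = sorted(set(desired_pods) - set(running_pods))
    to_stop = sorted(set(running_pods) - set(desired_pods))
    return to_start, to_stop

start, stop = reconcile(desired_pods={"web-1", "web-2"},
                        running_pods={"web-2", "old-job"})
print(start)  # ['web-1']  -> missing pod must be started
print(stop)   # ['old-job'] -> unassigned work must be stopped
```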
A pod is the smallest unit in the Kubernetes object model that can be created or deployed. In simple terms, a pod represents a running process on a cluster.
A pod consists of one or more application containers, storage resources, a network IP and options that govern how the containers should execute. A pod constitutes a unit of deployment, which may consist of either a single container or a small number of tightly coupled containers.
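A minimal single-container pod definition can be sketched as follows. The manifest is written here as a Python dict mirroring the YAML that K8s actually consumes; the pod name, labels and image are placeholders:

```python
# Skeleton of a Pod manifest, expressed as a Python dict. The "nginx"
# image and all names are placeholder values for illustration.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "example-pod", "labels": {"app": "example"}},
    "spec": {
        "containers": [
            {
                "name": "app",
                "image": "nginx:latest",          # placeholder image
                "ports": [{"containerPort": 80}],  # port the container listens on
            }
        ]
    },
}
print(json.dumps(pod, indent=2))
```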
A ReplicationController guarantees that a specified number of pod replicas are running at any given time. ReplicationController ensures that a pod or a homogeneous set of pods is always available.
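The guarantee works via a control loop: the controller diffs the desired replica count against the pods it currently observes and creates or deletes pods until they match. A hypothetical sketch, with invented names:

```python
# Toy version of the ReplicationController's control loop: given a desired
# replica count and the observed pods, decide what to do next.

def reconcile_replicas(desired, observed_pods):
    """Return the actions needed to reach `desired` running copies."""
    if len(observed_pods) < desired:
        return [("create", desired - len(observed_pods))]
    if len(observed_pods) > desired:
        return [("delete", len(observed_pods) - desired)]
    return []  # already converged

print(reconcile_replicas(3, ["pod-a", "pod-b"]))         # [('create', 1)]
print(reconcile_replicas(3, ["a", "b", "c", "d"]))       # [('delete', 1)]
print(reconcile_replicas(3, ["a", "b", "c"]))            # []
```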
A Deployment is a higher-level abstraction that manages ReplicationControllers and provides declarative updates to pods, along with other useful features. However, IPS uses ReplicationControllers, as we need to provide custom update orchestration.
The Deployment controller handles these declarative updates for Pods, moving the cluster’s actual state towards the desired state at a controlled rate.
Kubernetes pods are not permanent units: ReplicationControllers create and destroy Pods dynamically. Each Pod gets its own IP address, and those IP addresses cannot be expected to be stable over time, since they are allocated dynamically. This leads to a problem: if some set of Pods provides functionality to other Pods inside the cluster, the consuming Pods have no reliable way to find and keep track of the providers.
A K8s Service defines a logical set of Pods and a policy by which to access them. The set of Pods targeted by a Service is usually determined by a label selector. The Service abstraction enables this decoupling.
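Label selection itself is simple: a Pod backs the Service when every key/value pair in the selector appears among the Pod's labels. A sketch of that matching rule, with made-up Pod IPs:

```python
# Toy label-selector matching: a pod matches when all selector entries
# appear in its labels. Pod IPs and labels below are invented examples.

def select(pods, selector):
    """Return the IPs of pods whose labels satisfy the selector."""
    return [
        p["ip"] for p in pods
        if all(p["labels"].get(k) == v for k, v in selector.items())
    ]

pods = [
    {"ip": "10.1.0.4", "labels": {"app": "web", "tier": "frontend"}},
    {"ip": "10.1.1.7", "labels": {"app": "web", "tier": "frontend"}},
    {"ip": "10.1.2.9", "labels": {"app": "db"}},
]
print(select(pods, {"app": "web"}))  # ['10.1.0.4', '10.1.1.7']
print(select(pods, {"app": "db"}))   # ['10.1.2.9']
```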
Kubernetes approaches networking differently from Docker. K8s solves the following problems:
Tightly coupled container-to-container communications
Kubernetes assumes that pods can communicate with other pods, regardless of which host they are placed on. Every pod is given its own IP address, so explicit links between pods are not required; this also removes the burden of mapping container ports to host ports.
This contrasts with the Docker model. By default, Docker uses host-private networking: it creates a virtual bridge and allocates a subnet from one of the private address blocks. Hence, Docker containers can talk to other containers only if they are on the same machine; containers on different machines cannot reach each other, because they might end up with exactly the same network ranges and IP addresses. Kubernetes takes a different approach.
Kubernetes applies IP addresses at the pod level. Containers within a pod share a network namespace, so they can communicate with each other using localhost. This implies that containers within a pod must coordinate port usage, but this is no different from processes on a virtual machine.
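The shared-namespace behaviour can be imitated in a single process: two threads (standing in for two containers of one pod) talk over 127.0.0.1. This is only an analogy for illustration, not actual container networking:

```python
# Two "containers" of one pod, modelled as two threads sharing a network
# namespace (here: one process). The sidecar listens on localhost and the
# app reaches it via 127.0.0.1, just as co-located containers would.
import socket
import threading

def sidecar(server_sock):
    conn, _ = server_sock.accept()
    conn.sendall(b"pong")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=sidecar, args=(server,), daemon=True).start()

# The "app container" connects over localhost, no host port mapping needed.
client = socket.create_connection(("127.0.0.1", port))
reply = client.recv(4).decode()
print(reply)  # pong
client.close()
server.close()
```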