
6. Kubernetes deployments

There are two ways to deploy Kubernetes:

  1. On a single node

  2. On a network of several servers (a cluster)

6.1. 1. Single node deployment

6.1.1. 0. Prerequisites

  1. An SSH key pair on your local Linux/Mac OS/BSD machine. If you haven’t used SSH keys before, you can learn how to set them up by following this explanation of how to set up SSH keys on your local machine.

  2. One server running Ubuntu 16.04 with at least 2GB of RAM and Python 3 installed. You should be able to SSH into this server as the root user with your SSH key pair.

  3. Ansible installed on your local machine. If you’re running Ubuntu 16.04 as your OS, follow the “Step 1 - Installing Ansible” section in How to Install and Configure Ansible on Ubuntu 16.04 to install Ansible. For installation instructions on other platforms like Mac OS X or CentOS, follow the official Ansible installation documentation.

  4. Familiarity with Ansible playbooks. For review, check out Configuration Management 101: Writing Ansible Playbooks.

6.1.2. 1. Preparing the inventory

The hosts file will be your inventory file. You’ve added one Ansible group (masters) to it, specifying the logical structure of your cluster. In the masters group, there is a server entry named “master” that lists the master node’s IP (master_ip) and specifies that Ansible should run remote commands as the root user. The last line of the file tells Ansible to use the remote server’s Python 3 interpreter for its management operations.
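For reference, a minimal sketch of what such a hosts file could look like, using Ansible’s INI inventory format (master_ip stands for your server’s actual IP address):

[masters]
master ansible_host=master_ip ansible_user=root

[all:vars]
ansible_python_interpreter=/usr/bin/python3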

6.1.3. 2. Creating a Non-Root User on All Remote Servers

In this section you will create a non-root user (shimmercat) with sudo privileges on the server so that you can SSH into it manually as an unprivileged user. This can be useful if, for example, you would like to see system information, view a list of running containers, or change configuration files owned by root. These operations are routinely performed during the maintenance of a cluster, and using a non-root user for such tasks minimizes the risk of modifying or deleting important files or unintentionally performing other dangerous operations. Here’s a breakdown of what this playbook does:

  • Creates the non-root user shimmercat.

  • Configures the sudoers file to allow the shimmercat user to run sudo commands without a password prompt.

  • Adds the public key from your local machine (usually ~/.ssh/id_rsa.pub) to the remote shimmercat user’s authorized key list. This will allow you to SSH into the server as the shimmercat user.

  • Disables swap, since the Kubernetes setup will fail if swap is enabled.

Then, execute the playbook by locally running:

$ ansible-playbook -i hosts ~/kube-cluster/initial.yml
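The playbook itself is not reproduced in this guide, but a minimal sketch of what an initial.yml along these lines could contain is shown below. The exact module arguments, the ~/.ssh/id_rsa.pub key path, and the fstab handling are assumptions, not the guide’s exact playbook:

- hosts: all
  become: yes
  tasks:
    - name: create the shimmercat user
      user: name=shimmercat state=present createhome=yes shell=/bin/bash

    - name: allow shimmercat to run sudo without a password
      lineinfile:
        dest: /etc/sudoers
        line: 'shimmercat ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: add your local public key to the shimmercat authorized keys
      authorized_key:
        user: shimmercat
        key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"

    - name: disable swap for the current boot
      command: swapoff -a

    - name: keep swap disabled after reboots
      lineinfile:
        dest: /etc/fstab
        regexp: 'swap'
        state: absent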

Now that the preliminary setup is complete, you can move on to installing Kubernetes-specific dependencies.

6.1.4. 3. Installing Kubernetes’ Dependencies

In this section, you will install the operating system level packages required by Kubernetes with Ubuntu’s package manager. These packages are:

  • Docker - a container runtime. It is the component that runs your containers.

  • kubeadm - a CLI tool that will install and configure the various components of a cluster in a standard way.

  • kubelet - a system service/program that runs on all nodes and handles node-level operations.

  • kubectl - a CLI tool used for issuing commands to the cluster through its API Server.

In the end we make sure that the cgroup driver used by kubelet is the same as the one used by Docker.

Then, execute the playbook by locally running:

$ ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml
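As a point of reference, a kube-dependencies.yml playbook along these lines could look roughly like the sketch below, assuming the Kubernetes apt repository has already been configured on the servers (the cgroup driver alignment mentioned above is omitted from the sketch):

- hosts: all
  become: yes
  tasks:
    - name: install Docker
      apt: name=docker.io state=present update_cache=true

    - name: install kubelet
      apt: name=kubelet state=present

    - name: install kubeadm
      apt: name=kubeadm state=present

- hosts: masters
  become: yes
  tasks:
    - name: install kubectl
      apt: name=kubectl state=present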

All system dependencies are now installed. Let’s set up the master node and initialize the cluster.

6.1.5. 4. Setting Up the Master Node

In this section, you will set up the master node. Before creating any playbooks, however, it’s worth covering a few concepts such as Pods and Pod Network Plugins, since your cluster will include both.

A pod is an atomic unit that runs one or more containers. These containers share resources such as file volumes and network interfaces in common. Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.

Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod’s IP. Containers on a single node can communicate easily through a local interface. Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.

This functionality is provided by pod network plugins. For this cluster, you will use Project Calico, a stable and performant option.

Here’s a breakdown of this play:

  • The first task initializes the cluster by running kubeadm init. Passing the argument --pod-network-cidr=192.168.0.0/16 specifies the private subnet that the pod IPs will be assigned from. Calico uses the above subnet by default; we’re telling kubeadm to use the same subnet.

  • The second task creates a .kube directory at /home/shimmercat. This directory will hold configuration information such as the admin key files, which are required to connect to the cluster, and the cluster’s API address.

  • The third task copies the /etc/kubernetes/admin.conf file that was generated from kubeadm init to your non-root user’s home directory. This will allow you to use kubectl to access the newly-created cluster.

  • The fourth and fifth tasks run kubectl apply to install Calico. kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file. The rbac-kdd.yaml and calico.yaml files contain the descriptions of the objects required for setting up Calico in the cluster.

  • The next task enables scheduling pods on the master node. By default, your cluster will not schedule pods on the master for security reasons.

  • The last task restarts kubelet.

Then, execute the playbook by locally running:

$ ansible-playbook -i hosts ~/kube-cluster/master.yml
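As a point of reference, a master.yml playbook matching the breakdown above might look roughly like the following sketch. The shell-based tasks, the cluster_initialized.txt guard file, and the way the rbac-kdd.yaml and calico.yaml manifests are obtained are assumptions, not the guide’s exact playbook:

- hosts: masters
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=192.168.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create the .kube directory for the shimmercat user
      become_user: shimmercat
      file:
        path: /home/shimmercat/.kube
        state: directory
        mode: '0755'

    - name: copy admin.conf to the shimmercat user's kubeconfig
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/shimmercat/.kube/config
        remote_src: yes
        owner: shimmercat

    - name: install Calico RBAC resources (rbac-kdd.yaml, obtained from the Calico documentation)
      become_user: shimmercat
      shell: kubectl apply -f rbac-kdd.yaml

    - name: install the Calico pod network (calico.yaml, obtained from the Calico documentation)
      become_user: shimmercat
      shell: kubectl apply -f calico.yaml

    - name: allow pods to be scheduled on the master
      become_user: shimmercat
      shell: kubectl taint nodes --all node-role.kubernetes.io/master-

    - name: restart kubelet
      service: name=kubelet state=restarted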

To check the status of the master node, SSH into it with the following command:

$ ssh shimmercat@master_ip

Once inside the master node, execute:

$ kubectl get nodes

You will now see the following output:

NAME     STATUS    ROLES     AGE       VERSION
master   Ready     master    1m        v1.10.3

The output states that the master node has completed all initialization tasks and is in a Ready state from which it can start accepting worker nodes and executing tasks sent to the API Server. You can now add the workers from your local machine.

6.2. 2. Server network deployment (cluster)

Your cluster will include the following physical resources:

  • One master node

The master node (a node in Kubernetes refers to a server) is responsible for managing the state of the cluster. It runs etcd, which stores the cluster data, alongside the components that schedule workloads onto the worker nodes.

  • Two worker nodes

Worker nodes are the servers where your workloads (i.e. containerized applications and services) will run. A worker will continue to run your workload once it has been assigned to it, even if the master goes down after scheduling is complete. A cluster’s capacity can be increased by adding workers.

After completing this guide, you will have a cluster ready to run containerized applications, provided that the servers in the cluster have sufficient CPU and RAM resources for your applications to consume. Almost any traditional Unix application, including web applications, databases, daemons, and command line tools, can be containerized and made to run on the cluster. The cluster itself will consume around 300-500MB of memory and 10% of CPU on each node.

6.2.1. 0. Prerequisites

  1. An SSH key pair on your local Linux/Mac OS/BSD machine. If you haven’t used SSH keys before, you can learn how to set them up by following this explanation of how to set up SSH keys on your local machine.

  2. Three servers running Ubuntu 16.04 with at least 2GB of RAM and Python 3 installed. You should be able to SSH into these servers as the root user with your SSH key pair.

  3. Ansible installed on your local machine. If you’re running Ubuntu 16.04 as your OS, follow the “Step 1 - Installing Ansible” section in How to Install and Configure Ansible on Ubuntu 16.04 to install Ansible. For installation instructions on other platforms like Mac OS X or CentOS, follow the official Ansible installation documentation.

  4. Familiarity with Ansible playbooks. For review, check out Configuration Management 101: Writing Ansible Playbooks.

6.2.2. 1. Preparing the inventory

The hosts file will be your inventory file. You’ve added two Ansible groups (masters and workers) to it, specifying the logical structure of your cluster.

In the masters group, there is a server entry named “master” that lists the master node’s IP (master_ip) and specifies that Ansible should run remote commands as the root user.

Similarly, in the workers group, there are two entries for the worker servers (worker_1_ip and worker_2_ip, though you can add as many nodes as you want) that also specify the ansible_user as root.

The last line of the file tells Ansible to use the remote servers’ Python 3 interpreters for its management operations.
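For reference, a minimal sketch of what such a hosts file could look like, using Ansible’s INI inventory format (master_ip, worker_1_ip, and worker_2_ip stand for your servers’ actual IP addresses):

[masters]
master ansible_host=master_ip ansible_user=root

[workers]
worker1 ansible_host=worker_1_ip ansible_user=root
worker2 ansible_host=worker_2_ip ansible_user=root

[all:vars]
ansible_python_interpreter=/usr/bin/python3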

6.2.3. 2. Creating a Non-Root User on All Remote Servers

In this section you will create a non-root user with sudo privileges on all servers so that you can SSH into them manually as an unprivileged user. This can be useful if, for example, you would like to see system information, view a list of running containers, or change configuration files owned by root. These operations are routinely performed during the maintenance of a cluster, and using a non-root user for such tasks minimizes the risk of modifying or deleting important files or unintentionally performing other dangerous operations. Here’s a breakdown of what this playbook does:

  • Creates the non-root user shimmercat.

  • Configures the sudoers file to allow the shimmercat user to run sudo commands without a password prompt.

  • Adds the public key from your local machine (usually ~/.ssh/id_rsa.pub) to the remote shimmercat user’s authorized key list. This will allow you to SSH into each server as the shimmercat user.

  • Disables swap, since the Kubernetes setup will fail if swap is enabled.

Then, execute the playbook by locally running:

$ ansible-playbook -i hosts ~/kube-cluster/initial.yml

Now that the preliminary setup is complete, you can move on to installing Kubernetes-specific dependencies.

6.2.4. 3. Installing Kubernetes’ Dependencies

In this section, you will install the operating system level packages required by Kubernetes with Ubuntu’s package manager. These packages are:

  • Docker - a container runtime. It is the component that runs your containers.

  • kubeadm - a CLI tool that will install and configure the various components of a cluster in a standard way.

  • kubelet - a system service/program that runs on all nodes and handles node-level operations.

  • kubectl - a CLI tool used for issuing commands to the cluster through its API Server.

In the end we make sure that the cgroup driver used by kubelet is the same as the one used by Docker.

Then, execute the playbook by locally running:

	$ ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml

All system dependencies are now installed. Let’s set up the master node and initialize the cluster.

6.2.5. 4. Setting Up the Master Node

In this section, you will set up the master node. Before creating any playbooks, however, it’s worth covering a few concepts such as Pods and Pod Network Plugins, since your cluster will include both.

A pod is an atomic unit that runs one or more containers. These containers share resources such as file volumes and network interfaces in common. Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.

Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod’s IP. Containers on a single node can communicate easily through a local interface. Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.

This functionality is provided by pod network plugins. For this cluster, you will use Project Calico, a stable and performant option.

Here’s a breakdown of this play:

  • The first task initializes the cluster by running kubeadm init. Passing the argument --pod-network-cidr=192.168.0.0/16 specifies the private subnet that the pod IPs will be assigned from. Calico uses the above subnet by default; we’re telling kubeadm to use the same subnet.

  • The second task creates a .kube directory at /home/shimmercat. This directory will hold configuration information such as the admin key files, which are required to connect to the cluster, and the cluster’s API address.

  • The third task copies the /etc/kubernetes/admin.conf file that was generated from kubeadm init to your non-root user’s home directory. This will allow you to use kubectl to access the newly-created cluster.

  • The fourth and fifth tasks run kubectl apply to install Calico. kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file. The rbac-kdd.yaml and calico.yaml files contain the descriptions of the objects required for setting up Calico in the cluster.

  • The last task restarts kubelet.

Then, execute the playbook by locally running:

$ ansible-playbook -i hosts ~/kube-cluster/master.yml

To check the status of the master node, SSH into it with the following command:

$ ssh shimmercat@master_ip

Once inside the master node, execute:

$ kubectl get nodes

You will now see the following output:

NAME     STATUS    ROLES     AGE       VERSION
master   Ready     master    1m        v1.10.3

The output states that the master node has completed all initialization tasks and is in a Ready state from which it can start accepting worker nodes and executing tasks sent to the API Server. You can now add the workers from your local machine.

6.2.6. 5. Setting Up the Worker Nodes

Adding workers to the cluster involves executing a single command on each. This command includes the necessary cluster information, such as the IP address and port of the master’s API Server, and a secure token. Only nodes that pass in the secure token will be able to join the cluster.

Here’s what the playbook does:

  • The first play gets the join command that needs to be run on the worker nodes. This command will be in the following format: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>. Once it gets the actual command with the proper token and hash values, the task sets it as a fact so that the next play will be able to access that info.

  • The second play has a single task that runs the join command on all worker nodes. On completion of this task, the two worker nodes will be part of the cluster.

Then, execute the playbook by locally running:

	$ ansible-playbook -i hosts ~/kube-cluster/workers.yml
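A workers.yml playbook implementing the two plays described above could look roughly like this sketch. Using kubeadm token create --print-join-command to obtain the join command, and the node_joined.txt guard file, are assumptions rather than the guide’s exact playbook:

- hosts: masters
  become: yes
  gather_facts: false
  tasks:
    - name: get the join command from the master
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: set the join command as a fact for the next play
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"

- hosts: workers
  become: yes
  tasks:
    - name: join each worker to the cluster
      shell: "{{ hostvars['master'].join_command }} >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt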

With the addition of the worker nodes, your cluster is now fully set up and functional, with workers ready to run workloads. Before scheduling applications, let’s verify that the cluster is working as intended.

6.2.7. 6. Verifying the Cluster

A cluster can sometimes fail during setup because a node is down or network connectivity between the master and worker is not working correctly. Let’s verify the cluster and ensure that the nodes are operating correctly.

You will need to check the current state of the cluster from the master node to ensure that the nodes are ready. If you disconnected from the master node, you can SSH back into it with the following command:

	$ ssh shimmercat@master_ip

Then execute the following command to get the status of the cluster:

	$ kubectl get nodes

You will see output similar to the following:

NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    1m        v1.10.1
worker1   Ready     <none>    1m        v1.10.1
worker2   Ready     <none>    1m        v1.10.1

If all of your nodes have the value Ready for STATUS, it means that they’re part of the cluster and ready to run workloads.

If, however, a few of the nodes have NotReady as the STATUS, it could mean that the worker nodes haven’t finished their setup yet. Wait for around five to ten minutes before re-running kubectl get nodes and inspecting the new output. If a few nodes still have NotReady as the status, you might have to verify and re-run the commands in the previous steps.