Building a BareOS Cluster on VMs

Platform9 BareOS Cluster on Virtual Machines

Platform9 supports building clusters on bare operating systems running on both virtual machines and physical servers. Building a BareOS cluster using VMs is easy, but the hypervisor and the choice of single- vs multi-master architecture make a difference. The issues relate to hypervisors that block egress traffic from a VM when the source IP is not one of the VM's assigned IP addresses, and to the use of VRRP: a multi-master architecture requires VRRP to load balance master node traffic.

Single Master

Single-master deployments are the easiest and the recommended option. No special networking is required, and no reserved IP ranges are needed.

Multi-Master

Multi-master architectures require planning and a solid knowledge of the physical and virtual network. If you are building a multi-master virtual Kubernetes cluster, you will need to reserve an IP for the Multi-Master Virtual IP and ensure that no port security rules or security groups block its traffic.

Multi-Master Virtual IP

The Virtual IP provides redundancy and load balancing across the master nodes.
We use the Virtual Router Redundancy Protocol (VRRP) with Keepalived to provide a virtual IP (VIP) that fronts the active master node in a multi-master Kubernetes cluster. At any point in time, VRRP associates one of the master nodes with the virtual IP to which clients (kubelet, users) connect. Let's call this the active master node.
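
Platform9 configures Keepalived on the master nodes for you, but for context, a minimal VRRP configuration looks like the sketch below. The interface name, router ID, priority, and VIP are illustrative values, not the ones Platform9 generates.

# /etc/keepalived/keepalived.conf (illustrative sketch only)
vrrp_instance K8S_MASTER_VIP {
    state BACKUP             # nodes negotiate; the highest priority becomes active
    interface eth0           # NIC that carries master node traffic
    virtual_router_id 51     # must match across all master nodes
    priority 100             # highest priority wins the VIP
    advert_int 1             # advertisement interval in seconds
    virtual_ipaddress {
        192.168.10.250       # the reserved Multi-Master Virtual IP
    }
}

Whichever node holds the VIP answers ARP for it; on failover, Keepalived moves the VIP to a surviving master and sends a gratuitous ARP so clients reconnect transparently.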

Reserved IP Range

To ensure the Virtual IP isn't assigned to another VM, the IP must be in an addressable space that isn't part of the range available to new or existing virtual machines.
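
Before committing to a VIP, it is worth confirming that nothing on the subnet already answers for it. A quick sketch, assuming a candidate VIP of 192.168.10.250 on interface eth0 (both values are illustrative):

# Expect no replies if the address is truly free
ping -c 3 192.168.10.250
sudo arping -c 3 -I eth0 192.168.10.250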

Hypervisor Recommendations

Below is a summary of recommendations for common hypervisors.

VMware

By default, VMware does not block traffic from a VM whose source IP isn't one of the VM's assigned IPs, which makes building a multi-master cluster fairly straightforward. We do recommend removing the Virtual IP from the assignable pool of IPs.

OpenStack

We recommend using a Provider Network to avoid issues with Floating IPs.
Also, for a multi-node VM cluster on OpenStack, we recommend creating a dedicated subnet.
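
Because Neutron port security drops packets whose source IP is not on the port, each master node's port must be told about the VIP. A sketch using the standard OpenStack CLI, with the VIP and port ID as placeholders:

# Add the VIP as an allowed address pair on each master node's port
openstack port set --allowed-address ip-address=192.168.10.250 <master-port-id>

# Alternatively (less secure), strip security groups and disable port security entirely
openstack port set --no-security-group --disable-port-security <master-port-id>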

VirtualBox

Stick with a single-node cluster. Unless you have 32 GB of RAM and more than 8 vCPUs, your Kubernetes experience will suffer from slow performance.
Further, we recommend a single-master architecture.

Node Minimum Specs

OS Support

We support Ubuntu 16.04 and 18.04, as well as CentOS 7.6, 7.7, and 7.8.
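
To confirm a VM is running a supported OS before connecting it, check its release information:

cat /etc/os-release    # look for Ubuntu 16.04/18.04 or CentOS 7.6-7.8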

Single-Node Cluster

  • CPU: 4 CPUs
  • RAM: 16 GB
  • HDD: 30 GB

Multi-Node Master

  • CPU: 4 CPUs
  • RAM: 16 GB
  • HDD: 30 GB

Multi-Node Worker

  • CPU: 4 CPUs
  • RAM: 32 GB
  • HDD: 30 GB
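
To verify that a VM meets these minimums, a quick check from the shell:

nproc       # CPU count
free -g     # total RAM in GB
df -h /     # disk available on the root volume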

Building a Cluster using the Web App

Make sure you have the required number of VMs running a supported OS. Log in to your Platform9 account, navigate to the Infrastructure Dashboard, and click + Cluster.

In the Create Cluster wizard, select BareOS and click Deploy with BareOS.
The first screen checks whether any nodes are available. If none of your nodes are connected to Platform9, connecting them is your first step; if you have nodes connected, enter a name for the cluster and click Next.

Connecting Nodes to Platform9

SSH to each VM and install the Platform9 CLI:

bash <(curl -sL http://pf9.io/get_cli)

The installation will ask for your account details, which are available on Step 1 of the BareOS Cluster wizard.
Once the CLI is installed, run the following command:

pf9ctl cluster prep-node

Once prep-node is complete, return to the web app.

Selecting Nodes and Configuring a Cluster

The remainder of the wizard will help you select your master and worker nodes and configure various aspects of the cluster.
On the second step, ensure that Privileged Containers is enabled, as this cannot be changed once the cluster is running.

Cluster Networking

If you are building a multi-master cluster, ensure that the Virtual IP is reserved and that no network settings block egress traffic from the master nodes.
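
One way to confirm the network is not dropping VRRP between masters is to watch for advertisements on the relevant interface (the interface name is illustrative):

# VRRP advertisements are IP protocol 112 sent to multicast 224.0.0.18
sudo tcpdump -i eth0 -n 'ip proto 112'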

CNI Selection

For a quick and easy CNI setup, choose Flannel. If you require network policy support, BGP, or control over NAT and encapsulation, choose Calico.
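
Network policy support matters if you plan to restrict pod-to-pod traffic. As a sketch, here is a minimal deny-all-ingress NetworkPolicy, which Calico enforces and Flannel ignores:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
    - Ingress          # no ingress rules listed, so all ingress is denied
EOF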

Review and Finish

The final step is to review the settings and build the cluster. When you click Finish, cluster creation begins and you are taken to the cluster's Node Health page, where you can observe the progress.

Create a Single-Node Kubernetes Cluster using the CLI

  • Log in to the Ubuntu server you just created in VirtualBox.

  • Download and install the PMK CLI by running the following command in your Ubuntu terminal.

bash <(curl -sL http://pf9.io/get_cli)

  • The CLI installer will ask for your PMK credentials.

Specify your PMK account URL, the email address you use to sign in to PMK, and your password.

The account URL will be in the format https://pmkft-<numeric value>.platform9.io

  • Once the CLI install finishes, you can run the pf9ctl CLI.

pf9ctl cluster --help

  • The cluster bootstrap command lets you easily create a single node cluster. Specify the name for your cluster, and the CLI will use reasonable defaults for all the other parameters to create the cluster for you.

pf9ctl cluster bootstrap MyTestCluster

Read more about the bootstrap command here.

This will take ~5-10 minutes. Behind the scenes, the CLI creates a Kubernetes cluster by making this node both the master and the worker node for the cluster. It installs the required Kubernetes packages and configures the cluster.

That's it! Your single-node PMK cluster is now ready. You can access the cluster via kubectl, or use the PMK UI to access it and deploy workloads on it.
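
For example, once the node registers you can confirm the cluster from the same terminal, assuming kubectl is configured with the new cluster's kubeconfig:

kubectl get nodes      # the single node should report Ready
kubectl get pods -A    # system pods should be Running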