RHEL nodes unhealthy

Hi,

I added my 4 RHEL VM nodes by installing the Platform9 Host Agent. Then I authenticated all the nodes, and now they all show up as unhealthy in the list.
I still cannot see my existing cluster, though. I suppose I’ll have to tear it down first and create a new one with Platform9.
Do you think resetting my cluster with kubeadm reset will help resolve the issue? Or what else could be causing the unhealthy state?

BR,
Kai

Had to remove a block device from /etc/fstab, use it for the Docker volume, and remove the Kubernetes CNI.
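The /etc/fstab part isn’t in the commands below; it was basically just unmounting the device and deleting its line from fstab, something like this (assuming /dev/sdc1, as in my case; your entry might reference a UUID instead):

sudo umount /dev/sdc1                      # free the device so Docker can take it over
sudo cp /etc/fstab /etc/fstab.bak          # keep a backup before editing
sudo sed -i '\|/dev/sdc1|d' /etc/fstab     # drop the line that mounts the device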

# Platform9 helper script that turns the block device into an LVM thin pool for Docker
wget https://raw.githubusercontent.com/platform9/support-locker/master/pmk/bd2tp.sh
chmod 700 bd2tp.sh
sudo ./bd2tp.sh /dev/sdc1 docker-vg        # arguments: block device, target volume group
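If you want to sanity-check the result, the script should leave an LVM volume group with a thin pool behind (assuming bd2tp.sh does what its name suggests, block device to thin pool):

sudo vgs docker-vg    # the volume group passed to the script
sudo lvs docker-vg    # should show the thin pool backing Docker's storage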

sudo rm -rf /etc/cni/net.d/        # drop the old CNI configuration
sudo yum remove kubernetes-cni     # remove the package that conflicts with the Host Agent
sudo shutdown -r now               # reboot so everything comes up clean
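After the reboot, both of these should confirm the conflict is gone:

rpm -q kubernetes-cni    # expect "package kubernetes-cni is not installed"
ls /etc/cni/net.d        # expect "No such file or directory"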

Now 50% of the nodes are healthy, and the other 50% show up as disconnected. Pretty strange, though.

Br,
Kai

Reinstalled the agent on those, now 100% are healthy :slight_smile:
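For anyone running into the same thing: before reinstalling, it’s probably worth checking whether the agent service is just stopped. On my nodes the systemd unit is called pf9-hostagent (double-check the name on yours):

sudo systemctl status pf9-hostagent     # see whether the agent is running at all
sudo systemctl restart pf9-hostagent    # often enough to bring a node back to healthy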

@kai-graf
Just catching up on this.
Are you importing an existing Kubernetes cluster, or are you building a new cluster on RHEL? If so, which version?

I couldn’t add my existing cluster, didn’t really know how. So I created a new one, and it’s working fine apart from some minor issues with PVCs :slight_smile:

@kai-graf When you tried importing the existing cluster, did you see any error in the UI? It would be great if you could post a screenshot. Thanks!

I don’t have a screenshot. I basically just installed the Host Agent on the machines already running a cluster, but the nodes stayed unhealthy in the dashboard. Later I figured out this was because the Platform9 agent conflicts with the kubernetes-cni package present on the system, and Platform9 also needs to overwrite the Docker daemon’s configuration file.
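If you want to check a node for the same conflicts before installing the Host Agent, roughly (I don’t have the exact Docker config the agent writes, so just keep a backup of yours):

rpm -q kubernetes-cni                    # anything other than "not installed" means a likely conflict
ls /etc/cni/net.d 2>/dev/null            # leftover CNI configs from the old cluster
sudo cp /etc/docker/daemon.json /etc/docker/daemon.json.bak 2>/dev/null    # backup before the agent overwrites it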