Unable to scale masters / attach new node to cluster

I set up a cluster with 3 masters, but had to remove one of the nodes due to some issues. When I try to add that node back into the cluster from the web UI, it just gets stuck in the loading state. When I try to attach it via the CLI in verbose mode, I get the following unhelpful logs:

2022-04-14T13:38:44.78Z	DEBUG	Trying to attach-node to cluster
2022-04-14T13:39:15.3859Z	DEBUG	Trying to attach-node to cluster
2022-04-14T13:39:45.9642Z	DEBUG	Trying to attach-node to cluster
2022-04-14T13:40:16.5402Z	DEBUG	Trying to attach-node to cluster
2022-04-14T13:40:47.1543Z	DEBUG	Trying to attach-node to cluster
2022-04-14T13:41:17.7677Z	DEBUG	Trying to attach-node to cluster
2022-04-14T13:41:18.4119Z	ERROR	Error occured while converting response body to string
2022-04-14T13:41:18.412Z	DEBUG
2022-04-14T13:41:18.412Z	DEBUG	OVF Service not present
2022-04-14T13:41:18.412Z	DEBUG	Sending Segment Event: Attaching-node
2022-04-14T13:41:18.412Z	INFO	Encountered an error while attaching master node to a Kubernetes cluster :

I would appreciate some help debugging this. I've tried completely wiping the node and adding it again, decommissioning the node first, etc., but I get the same issue every time. Note that I was able to add a fourth worker node without any issues.

We resolved this issue over Community Slack. Instructions from the thread:

Open Developer Tools → Network in your browser before using the Scale Masters option in the pf9 UI. If you see a failure message that says "unhealthy etcd cluster", follow these steps to resolve the issue:

1 - Log in to a functional master node and ensure you have a recent etcd backup.
2 - Switch to the root user.
3 - Run $ docker cp etcd:/usr/local/bin/etcdctl /opt/pf9/pf9-kube/bin
4 - Run $ export PATH=$PATH:/opt/pf9/pf9-kube/bin
5 - Run $ etcdctl member list --write-out=table and note the member ID of the node which is no longer a part of the cluster.
6 - Set the variable ENDPOINTS=<CLIENT_ADDR1>,<CLIENT_ADDR2>,<CLIENT_ADDR3>. These are the client addresses taken from the output in step 5.
7 - Run $ etcdctl --cert /etc/pf9/kube.d/certs/etcdctl/etcd/request.crt --key /etc/pf9/kube.d/certs/etcdctl/etcd/request.key --cacert /etc/pf9/kube.d/certs/etcdctl/etcd/ca.crt endpoint health --endpoints=$ENDPOINTS --write-out=table
8 - Ensure the member we are about to remove is marked as unhealthy.
9 - Run $ etcdctl --cert /etc/pf9/kube.d/certs/etcdctl/etcd/request.crt --key /etc/pf9/kube.d/certs/etcdctl/etcd/request.key --cacert /etc/pf9/kube.d/certs/etcdctl/etcd/ca.crt member remove <member_id> --endpoints=$ENDPOINTS, where <member_id> is the ID noted in step 5.
10 - Run step 5 again to ensure the member has been removed.
11 - Ensure the node being added back as a master has been cleaned of previous etcd data. The location on the node is usually /var/opt/pf9/kube/etcd/data.
12 - Add the new master node from the UI using the Scale Masters option.
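
For reference, here is the same sequence (steps 3–10) strung together as one shell session on the healthy master. This is only a sketch: CERT_DIR is just a convenience variable introduced here, and the <CLIENT_ADDR…> and <member_id> placeholders must be replaced with the values from your own member list output.

# Steps 3-4: copy etcdctl out of the etcd container and put it on the PATH (run as root)
docker cp etcd:/usr/local/bin/etcdctl /opt/pf9/pf9-kube/bin
export PATH=$PATH:/opt/pf9/pf9-kube/bin

# Step 5: list the members and note the ID of the node that is no longer in the cluster
etcdctl member list --write-out=table

# Step 6: client addresses come from the CLIENT ADDRS column of the output above (placeholders)
ENDPOINTS=<CLIENT_ADDR1>,<CLIENT_ADDR2>,<CLIENT_ADDR3>

# Convenience variable for the pf9 etcd client certs used in steps 7 and 9
CERT_DIR=/etc/pf9/kube.d/certs/etcdctl/etcd

# Steps 7-8: check endpoint health and confirm the member to be removed reports as unhealthy
etcdctl --cert $CERT_DIR/request.crt --key $CERT_DIR/request.key --cacert $CERT_DIR/ca.crt \
  endpoint health --endpoints=$ENDPOINTS --write-out=table

# Step 9: remove the dead member using the ID noted in step 5 (placeholder)
etcdctl --cert $CERT_DIR/request.crt --key $CERT_DIR/request.key --cacert $CERT_DIR/ca.crt \
  member remove <member_id> --endpoints=$ENDPOINTS

# Step 10: list the members again to confirm the removal
etcdctl member list --write-out=table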