Master schedules pods on only one worker

Hi all,

I’m trying to configure a cluster of three physical machines (1: Master+Worker, 2: Worker, 3: Worker), all running Ubuntu 20.04. After following the instructions, the Infrastructure tab shows 1 healthy cluster and 3 healthy, connected nodes with no errors. However, after several attempts at scheduling multiple pods, the scheduler only places pods on the master itself or on machine 3, and never on machine 2. Is there a way to find out why this is happening, or to check whether machine 2 is actually working correctly within the cluster?

BR, Mostafa

@Mostafa When you built the cluster, did you select “Make Master nodes Master + Worker”?

If so, that option allows workloads to run on the Master nodes.

Also, you could apply a distinct label to each node: node=nodeOne, node=nodeTwo, node=nodeThree.
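A minimal sketch of the labeling step, assuming you have kubectl access; the node names here are placeholders (substitute the real names from kubectl get nodes):

kubectl label node node-one node=nodeOne
kubectl label node node-two node=nodeTwo
kubectl label node node-three node=nodeThree

# Confirm the labels were applied
kubectl get nodes --show-labels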

And then deploy a Pod that targets one of those labels:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
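  # nodeSelector restricts scheduling to nodes whose labels match every key/value pair listed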
  nodeSelector:
    node: nodeTwo

Yes, but that’s intended and fine for me. The problem I’d like to solve is that the scheduler only places pods on one of the workers and never on the other.

For some reason kubectl doesn’t work with the cluster. It shows the following errors:

kubectl version --short
Client Version: v1.21.3
Error from server (NotFound): the server could not find the requested resource

kubectl get nodes --show-labels
error: the server doesn't have a resource type "nodes"

Are there any additional steps to get kubectl to work with the cluster?
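Those two errors usually mean kubectl isn’t actually reaching the cluster’s API server, commonly because of a wrong or missing kubeconfig or a large client/server version skew. A quick sanity check, assuming a standard kubeconfig setup (the path below is a placeholder for wherever your platform writes its admin kubeconfig):

kubectl config current-context   # which context is kubectl using?
kubectl cluster-info             # which endpoint is it pointed at?
kubectl version                  # compare client and server versions

# If the context is wrong or empty, point kubectl at the right kubeconfig
export KUBECONFIG=/path/to/admin.conf
kubectl get nodes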

I tried to deploy a Pod using the machine’s IP as the node selector, and it showed this event:

I actually suspect that disk space on the worker might be the problem, so I’ll free some space and try again.

Hi,
Please check the logs for this; maybe the 2nd node is tainted or is low on available resources.
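A couple of commands can reveal this, assuming kubectl is working against the cluster (the node name is a placeholder):

# Look for taints and pressure conditions on the suspect node
kubectl describe node node-two | grep -i taints
kubectl describe node node-two | grep -i pressure

# Or list taints across all nodes at once
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints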

Issue solved after freeing more disk space on the tainted worker. (When disk usage crosses the kubelet’s eviction threshold, the node gets a node.kubernetes.io/disk-pressure taint that blocks new pods from being scheduled there; freeing space clears it.)