Platform9 prepping node fails because Port 10250 is in use by K3S?

Reproduction Steps

  • Create new Linode virtual machine (Ubuntu 20.04 LTS)
  • SSH into node
  • Install K3s: curl -sfL https://get.k3s.io | sh -
  • Install Platform9, according to the directions (rough command sequence below)
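
For reference, the full sequence run on the node looks roughly like this (pf9ctl itself was installed by following the directions shown in the Platform9 console, not reproduced here):

root@localhost:~# curl -sfL https://get.k3s.io | sh -   # installs K3s as a systemd service
root@localhost:~# pf9ctl prep-node                      # Platform9 pre-requisite checks; output below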

Expected Result

Platform9 installs properly on the node.

Actual Result

root@localhost:~# pf9ctl prep-node
✓ Loaded Config Successfully
✓ Missing package(s) installed successfully
✓ Removal of existing CLI
✓ Existing Platform9 Packages Check
✓ Required OS Packages Check
✓ SudoCheck
✓ CPUCheck
✓ DiskCheck
x MemoryCheck - At least 12 GB of memory is needed on host. Total memory found: 8 GB
x PortCheck - Following port(s) should not be in use: 10250
✓ Existing Kubernetes Cluster Check
✓ Check lock on dpkg
✓ Check lock on apt
✓ Check if system is booted with systemd
✓ Check time synchronization
✓ Check if firewalld service is not running
✓ Disabling swap and removing swap in fstab

2022-05-30T00:48:40.4796Z       FATAL   x Required pre-requisite check(s) failed. See /root/pf9/log/pf9ctl-20220530.log or use --verbose for logs

Netstat details

root@localhost:~# netstat --listening --numeric --programs
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:30437           0.0.0.0:*               LISTEN      1155/k3s server
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      1155/k3s server
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      1155/k3s server
tcp        0      0 127.0.0.1:6444          0.0.0.0:*               LISTEN      1155/k3s server
tcp        0      0 0.0.0.0:31629           0.0.0.0:*               LISTEN      1155/k3s server
tcp        0      0 127.0.0.1:10256         0.0.0.0:*               LISTEN      1155/k3s server
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      1155/k3s server
tcp        0      0 127.0.0.1:10258         0.0.0.0:*               LISTEN      1155/k3s server
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      1155/k3s server
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      600/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      791/sshd: /usr/sbin
tcp        0      0 127.0.0.1:10010         0.0.0.0:*               LISTEN      1175/containerd
tcp6       0      0 :::10250                :::*                    LISTEN      1155/k3s server
tcp6       0      0 :::6443                 :::*                    LISTEN      1155/k3s server
tcp6       0      0 :::22                   :::*                    LISTEN      791/sshd: /usr/sbin
udp        0      0 127.0.0.53:53           0.0.0.0:*                           600/systemd-resolve
udp        0      0 0.0.0.0:8472            0.0.0.0:*                           -
raw6       0      0 :::58                   :::*                    7           597/systemd-network
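
Filtering for just the port that PortCheck flags (10250 is the kubelet’s standard port, held here by the k3s server process):

root@localhost:~# netstat --listening --numeric --programs | grep 10250
tcp6       0      0 :::10250                :::*                    LISTEN      1155/k3s server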

Hi @trevor, welcome.
Why did you install K3s??

To set up a test cluster to try Platform9 with. Are Kubernetes clusters built with k3s not supported by Platform9?

I just assumed that since Amazon EKS, Azure AKS, and Google GKE are all supported, the Bare Metal option would let me use whatever Kubernetes distribution I wanted. The directions in the console didn’t specify otherwise.

You don’t need to set up anything Kubernetes-related; that’s what we do.
We build clusters using VMs or bare metal. All you need is Ubuntu or CentOS, then run pf9ctl bootstrap

https://platform9.com/docs/kubernetes/cli-bootstrap
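
Since it’s K3s that is holding port 10250 (the kubelet port) on your node, removing it should clear the PortCheck. A rough sketch, assuming K3s was installed with the standard get.k3s.io script (which drops an uninstall script on the node):

root@localhost:~# /usr/local/bin/k3s-uninstall.sh   # removes K3s and frees 10250, 6443, etc.
root@localhost:~# ss -ltnp | grep 10250             # should now return nothing
root@localhost:~# pf9ctl prep-node                  # re-run the checks (the 12 GB memory check will still need a larger instance)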

If you don’t need to set up anything Kubernetes-related, then what’s the point of importing a GKE, AKS, or EKS cluster? You can import a managed cluster from a cloud provider, but you can’t import your own cluster?

That’s legitimately confusing.

Good question. One of the primary functions of Platform9 is to build and manage Kubernetes clusters; as I mentioned, we can do this using one or more VMs, bare metal servers, AWS, or Azure. Specifically, when we look at users who want to run in a datacenter, co-lo, or at the edge, their first and ongoing challenge is cluster lifecycle. This means creating the cluster, patching CVEs, executing upgrades, and self-healing failed OS services. Further, Kubernetes clusters also require a CNI, DNS, a load balancer, and other core infrastructure applications such as metrics server.

Doing all of this manually results in significant overhead, as teams and individuals need to learn not just Kubernetes but all of the tooling that goes into a cluster just to make it functional. Our platform removes that burden completely. We manage the cluster plus the core infrastructure applications, creating a consumable cluster. Just connect the VM or bare metal and we do the rest.

In EKS, AKS, and GKE, users can leverage the out-of-the-box services from the cloud provider for most of these lifecycle aspects. When a user imports a cluster, our focus is running cloud-native applications through services such as ArgoCD (which we just launched), built-in monitoring that users can add to clusters so that developers can see performance data, and a lightweight IDE that exposes events, logs, YAML, and resource data for Pods, Deployments, ReplicaSets, CronJobs, and more.

If you created a cluster using a tool like K3s, kubeadm, or Kubespray, you still need to manage all of the core services, run upgrades, test compatibility, and watch for CVE notifications and then integrate the fixes. The initial build is taken care of, but the ongoing work remains. We are open to importing these clusters; if you want to create an idea for us to track the request, you can do that at ideas.platform9.com

Side note: it looks like your instance is running 5.4. Would you like me to get that upgraded?
5.5 includes ArgoCD as a Service; you can learn more about it here.
The new release also includes changes to KubeVirt and Metal3, and enhancements to our IDE.



I’m starting to understand the value that Platform9 provides. Thanks for the detailed explanation. I’m still confused about why you specifically chose to allow importing from the “big 3” cloud vendors but not arbitrary, pre-existing Kubernetes clusters. It sounds like you’re at least loosely interested in broadening the set of clusters that can be imported.

Based on the features you’re describing, such as keeping core components up-to-date (metrics server, ArgoCD, ingress controllers, etc.), you could still do that regardless of which Kubernetes platform someone is using (managed cloud or self-managed).

Of course, I also understand that if you guys completely manage the Kubernetes installation from the ground up, it’s much easier for you to provide support and updates. I guess it just depends on which “layer” a prospective customer wants to manage their clusters at. It wouldn’t surprise me if some of the customers you encounter run a mix of cloud-managed and self-managed clusters.

Just speculating a bit …

Yeah, I noticed that the dashboard looked different than some of the screenshots I’ve come across. I’m not sure if that’s because I haven’t been able to import a cluster yet, or if it’s due to the minor version change. If you wouldn’t mind getting that upgraded regardless, I’d appreciate it.


I’ll keep poking around Platform9, as time allows. I signed up for the free tier, so I could figure out the value that it provides. I’ve been building CKA and CKAD training courses for my company, CBT Nuggets, and discovered Platform9 during my research. It sounds pretty neat, but my understanding of your intended use cases is clearly lacking.

Thanks again for the support!

Here to help anytime. I’ve asked the team to upgrade your instance to 5.5. It was a major release, so we’ve spread out the upgrades; normally we just upgrade behind the scenes… the power of SaaS.

We started with the big 3 because that’s where we saw our customers running the most.

Do you have clusters running currently? If so, what are they built with?


Thank you! And yes, I like the benefits of certain SaaS solutions for sure! The downside is the lack of agility when you’re depending on someone else’s priority list. But in general I’m a huge advocate of cloud technologies: AWS, Digital Ocean, Linode, Vultr, and others.

Long-lived Kubernetes Clusters

I don’t currently have any “long-lived” clusters. However, I’m planning to start keeping one around that I currently have in Linode, so I can observe the ongoing effects of installing/removing various components.

For the most part, I spin clusters up/down on-demand, mainly in Digital Ocean or Linode, and occasionally Vultr. The flexibility of creating and destroying clusters, with ease, is the lovely thing about these smaller cloud vendors.

Third-Party Kubernetes Components

The most important components that I find myself regularly coming back to include:

  • Cert Manager with Let’s Encrypt staging & production configured as ClusterIssuer resources
  • External DNS (DNS zone cbttrevor.com is hosted in Digital Ocean)
  • NGINX Ingress or Traefik Proxy
  • Cilium for Network Policy (haven’t deep-dived into its advanced features just yet)
  • Harbor, deployed via its Helm chart (rough install sketch below)
  • Knative for “serverless” components
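
A rough sketch of how I typically pull a couple of these in with Helm 3 (the repo URLs are the projects’ documented chart repositories; values files and real configuration are omitted):

# cert-manager, with its CRDs installed by the chart
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace --set installCRDs=true

# Harbor
helm repo add harbor https://helm.goharbor.io
helm install harbor harbor/harbor --namespace harbor --create-namespace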

Local Kubernetes Clusters

As far as local Kubernetes clusters, I find myself using:

  • k3s - very well supported by Rancher Labs; offers a ton of control, scalability, flexibility, and configurability
  • k0s - this looks like a great alternative to k3s, but haven’t deep dived yet
  • minikube on occasion, but rare
  • Docker Desktop - rare, because it’s not scalable and I can’t control the master version
  • kubeadm - extremely rare, unless I’m forced to use it