Dynamic PV provisioning on BareOS cluster

Hi,

I got 4 VMs running with RHEL, 1 is the master and 3 are the worker nodes. On top I set up a Platform9 cluster. All good so far.

VMs 1-3 have a 128 GB storage device attached; VM 4 has a 256 GB storage device attached. On all 4 VMs I had to sacrifice the storage device to Docker while preparing the machines for PF9, since it requires LVM2:
sudo ./bd2tp.sh /dev/sdc1 docker-vg

So my question now is: could I use, say, half of the space of that storage device on VM 4 for dynamic provisioning of persistent volumes?
Before I switched from my self-managed cluster to PF9 I used to mount that drive and store the databases on it, but creating the PVs for that was always a manual task.

BR,
Kai

Hi @kai-graf,

You can definitely dedicate just part of that storage device to Docker by creating partitions on it. Once a partition has been created, it can be used to create the LVM thin pool, leaving the rest of the device free for other use.

Please refer to the following document for more information about this.
https://docs.platform9.com/kubernetes/bareos-centos-rhel-prerequisites#prepare-docker-storage
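As a rough sketch, you could partition the device first and hand only one partition to the Platform9 prep script. The device name `/dev/sdc` and the 50/50 split are assumptions; adjust to your setup:

```shell
# Hypothetical sketch: split the raw device so only half of it backs the
# Docker thin pool, leaving the second partition for persistent volumes.
# WARNING: destroys existing data on /dev/sdc.
sudo parted /dev/sdc --script \
  mklabel gpt \
  mkpart docker 0% 50% \
  mkpart data 50% 100%

# Hand only the first partition to the PF9 prep script:
sudo ./bd2tp.sh /dev/sdc1 docker-vg
```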

Thank you!

@kai-graf are you wanting to use a CSI driver for dynamic provisioning?
Something like Rook?

This is a very good question. I found the Drivers list in the Kubernetes CSI Developer Documentation and I'm pretty confused. Maybe TopoLVM might be suitable? How do I choose the right driver for this use case?
There is no UI in PF9 for that; I can only create StorageClasses, which I don't think really helps on its own. I will probably need support from our Dev team to come up with some YAML files, if there is no ready-to-use solution.

This one sounds pretty simple: GitHub - metal-stack/csi-driver-lvm
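Once a driver like that is installed, consuming it is just a PVC against its StorageClass. A hedged example (the class name `csi-driver-lvm-linear` follows the naming used by that driver's Helm chart, but verify it against the repo's README and `kubectl get storageclass` on your cluster):

```shell
# Create a test PVC against an assumed csi-driver-lvm StorageClass.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-driver-lvm-linear   # verify the actual class name
  resources:
    requests:
      storage: 5Gi
EOF

# The driver should then carve an LV out of the volume group and bind a PV:
kubectl get pvc test-pvc
```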

@kai-graf CSI drivers are the connectivity between storage and Kubernetes; they automate the provisioning of volumes.
We are in the process of bringing in the ability to deploy storage such as Rook Ceph and OpenEBS.

The CSI driver you linked uses hostPath and will work on a single node, so to get started this will work.
If you’re looking for storage across all nodes, I would suggest setting up Rook Ceph https://docs.platform9.com/kubernetes/rook-ceph-csi
For Rook to work, each node requires a spare drive that is an unformatted, unmounted volume. Rook then deploys Ceph as the storage layer and operates it.

I'm happy to set up a Zoom call to help if you like.

Hi,

Thanks for your prompt help. Currently I don't want to purchase a new storage device, so I tried to manually create a volume on one particular machine. However, it seems that this Docker thin pool can only be extended, not reduced. May I stop Docker, remove the thin pool, increase my test-lv, and then manually create a smaller thin pool with the same name, knowing that all Docker images will need to be pulled again?
Or do you have a better idea?
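The sequence you describe would look roughly like the sketch below. This is a hedged outline only, not a tested procedure: it destroys the thin pool and all local images/containers, the sizes are examples, and a devicemapper-backed Docker may additionally expect specific thin-pool metadata and autoextend-profile settings, so check your distro's docs before trying it:

```shell
# Hedged sketch: shrink the Docker thin pool by destroying and recreating it.
# All images and containers are lost and must be re-pulled afterwards.
sudo systemctl stop docker
sudo lvremove docker-vg/thinpool            # drop the existing thin pool
sudo lvextend -L +100G docker-vg/test-lv    # grow test-lv (size is an example)
# Recreate a smaller thin pool under the same name (size is an example):
sudo lvcreate --type thin-pool -L 140G -n thinpool docker-vg
sudo systemctl start docker
```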

[kai-graf@a0047-EPSViVM04 ~]$ sudo lvs
  LV       VG        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  test-lv  docker-vg -wi-a-----    5.00g
  thinpool docker-vg twi-aot--- <245.88g             2.38   0.71

Frankly, I don't understand why PF9 overwrote my Docker daemon config to use the devicemapper storage driver. My previous setup with overlay2 was much more flexible.

Okay, now I detached the volumes from the machines and attached two smaller ones to each VM.
Question regarding the Rook Ceph you proposed: where do I define the block device name, e.g. /dev/sde, for each machine?
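In Rook, per-node device names go in the `spec.storage` section of the CephCluster custom resource. A trimmed fragment (node and device names below are assumptions, and a real CephCluster spec needs more fields such as `cephVersion`, `mon`, and `dataDirHostPath` — see the Rook docs):

```shell
# Fragment showing where Rook expects block device names per node.
cat <<'EOF' | kubectl apply -f -
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
      - name: worker-1        # node name as shown by `kubectl get nodes`
        devices:
          - name: sde         # raw, unformatted device (i.e. /dev/sde)
EOF
```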

Now, after some struggles to get Docker up again (I had to run sudo rm -rf /var/lib/docker), it is working fine with csi-driver-lvm :)