Kubernetes Provisioner Setup for Ceph RBD and CephFS
Without admin credentials
In this article, we will go through the steps for setting up a Ceph RBD-based and a CephFS-based Kubernetes storage class with provisioners.
We covered these steps in the two previous articles:
The major difference is that we do not use Ceph admin credentials here. When we provide a Ceph cluster for developers to use for their Kubernetes storage, we do not want to hand out Ceph admin credentials.
There are also some environment and software version differences between this article and the previous ones; the previous versions are shown in parentheses.
- Linux: Ubuntu 22.04 (Ubuntu 20.04)
- Ceph version: Pacific 16.2.7 (Pacific 16.2.10)
- Kubernetes version: v1.26.5 by kubespray v2.22.1 (v1.24.4 by rke 1.3.14)
- Rook Ceph Helm chart: v1.11.10 (v1.7.3)
However, I believe these differences are minor, and the steps presented here should work with the previous setup as well. The overall steps are the same as in the previous articles that used admin credentials.

Ceph steps 1 and 2 are the same in both cases. When admin credentials are given to Kubernetes, the rook-ceph operator creates four Ceph clients: client.csi-cephfs-node, client.csi-cephfs-provisioner, client.csi-rbd-node, and client.csi-rbd-provisioner. These clients are used by the Kubernetes nodes and the provisioners.
If we do not want to use Ceph admin credentials, we need to create the Ceph clients for the Kubernetes nodes and provisioners ourselves. In addition, we need one more Ceph client for the CephCluster custom resource.
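For reference, when the operator was given admin credentials, the clients it created can be listed on the Ceph side. This is a quick sanity check, assuming you have the ceph CLI and the admin keyring on a cluster node:

```shell
# List the CSI-related clients created by the rook-ceph operator.
# These entries exist only when the operator ran with admin credentials.
sudo ceph auth ls | grep '^client\.csi'
```

In the setup described below, these clients will not exist yet; we create equivalents manually instead.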
Now, we will go through the complete setup step by step.
On Ceph side
First, create an RBD pool for block storage PVs and a CephFS volume for file-system-based PVs.
sudo ceph osd pool create k8s-rbd-pool
sudo rbd pool init k8s-rbd-pool
sudo ceph fs volume create k8s-cephfs
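Before creating the clients, it is worth confirming that the pool and the filesystem exist. A quick check, using the names from the commands above (note that `ceph fs volume create` also creates the backing data and metadata pools for the filesystem):

```shell
# The RBD pool should appear in the pool list
sudo ceph osd pool ls | grep k8s-rbd-pool

# The CephFS volume should appear with its data/metadata pools
sudo ceph fs ls
```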
Then, create a set of Ceph clients.