
Using Ceph Cluster for Kubernetes: Part 1

How to connect to an external Ceph cluster from Kubernetes and create RBD-based PVs


Today we will show how to set up a connection to an external Ceph cluster from a Kubernetes cluster and use it for persistent volumes.

The Ceph cluster used here is described in the Ceph Cluster Deploy article. The Kubernetes cluster is deployed with RKE 1.3.14 and consists of one master node and one worker node. We use the master node as the deployment node for the Kubernetes cluster.

On the Kubernetes deployment node, the helm package needs to be installed.
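
If helm is not present yet, one common way to install it is the installer script published by the Helm project:

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version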

First, add the rook-release repo. Then find a version to install.

helm repo add rook-release https://charts.rook.io/release
helm search repo rook-release/rook-ceph --versions

As of 2022/10/28, the most recent chart/app version is v1.10.4, but we used v1.7.3. The following command will install the rook-ceph operator.
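
Here is a sketch of such an install, assuming both the Helm release and the target namespace are named rook-ceph:

helm install rook-ceph rook-release/rook-ceph --namespace rook-ceph --create-namespace --version v1.7.3

Once the chart is installed, kubectl -n rook-ceph get pods should show the rook-ceph-operator pod in a Running state.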

Now we need to create secrets and a configmap. The FSID can be found in /etc/ceph/ceph.conf, and the CEPH_ADMIN_SECRET (the admin key) can be found in /etc/ceph/ceph.client.admin.keyring on the ceph-mon node.
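
The exact set of objects depends on the Rook version; as a rough sketch, the mon secret and mon endpoints configmap (the names follow Rook's conventions, while the namespace, the mon address, and $CEPH_MON_SECRET are placeholders) can be created like this:

kubectl -n rook-ceph create secret generic rook-ceph-mon \
    --from-literal=cluster-name=rook-ceph \
    --from-literal=fsid=$FSID \
    --from-literal=mon-secret=$CEPH_MON_SECRET \
    --from-literal=admin-secret=$CEPH_ADMIN_SECRET
kubectl -n rook-ceph create configmap rook-ceph-mon-endpoints \
    --from-literal=data="a=192.168.0.11:6789" \
    --from-literal=mapping="{}" \
    --from-literal=maxMonId="0"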

Not all of the secrets got the appropriate adminKey, but these steps lead to a working configuration. Another observation is that, in the end, the Ceph cluster has users named csi-rbd-node, csi-rbd-provisioner, csi-cephfs-node, and csi-cephfs-provisioner.

Additional Ceph users created during rook-ceph installation
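
To verify this on the Ceph side, the users can be listed with:

ceph auth ls | grep client.csi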

The capabilities assigned to each user are useful if you want to create these users manually and provide them to the rook-ceph configuration instead of the admin client credentials, which is preferable for security reasons.
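
As a sketch, the capabilities can be inspected with ceph auth get, and the two RBD users can then be re-created by hand; the caps below follow the usual ceph-csi recommendations for RBD and may differ slightly from what rook-ceph generated:

ceph auth get client.csi-rbd-node
ceph auth get client.csi-rbd-provisioner

ceph auth get-or-create client.csi-rbd-node mon 'profile rbd' osd 'profile rbd'
ceph auth get-or-create client.csi-rbd-provisioner mon 'profile rbd' mgr 'allow rw' osd 'profile rbd'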

Now, create a custom resource definition called CephCluster, more precisely…
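
As a sketch, a minimal manifest for such an external cluster looks roughly like the following (the metadata name and namespace are assumptions); it is applied with kubectl apply -f:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph-external
  namespace: rook-ceph
spec:
  external:
    enable: true
  crashCollector:
    disable: true

After applying it, kubectl -n rook-ceph get cephcluster shows whether the operator managed to connect to the external cluster.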
