Ceph Deployment with Rook: Part 2
This is a continuation of the previous article. If you have not read it, please go back and start from there.
Last time we pointed out that the Ceph cluster was in the HEALTH_WARN state because the default replication factor (3) is larger than the number of OSDs (2). This information is obtained by connecting to the Ceph toolbox pod.
To find this pod, run the following command.
kubectl -n rook-ceph get pods -l "app=rook-ceph-tools"
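The output should look roughly like this (the generated pod name suffix, restart count, and age will differ in your cluster):
NAME                             READY   STATUS    RESTARTS   AGE
rook-ceph-tools-877d765f-bc8rs   1/1     Running   0          3m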
This pod is launched because we enabled the toolbox in the Helm values; see the previous article for where this override is specified.
toolbox:
enabled: true
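If this override was not part of your original install, the cluster chart can be re-applied with it. Below is a minimal sketch, assuming the chart came from the rook-release Helm repository, the release is named rook-ceph-cluster, and the overrides live in a file called values-cluster.yaml; adjust these to match your setup from Part 1.
# Re-apply the cluster chart with the toolbox enabled in the values file
helm upgrade --install rook-ceph-cluster rook-release/rook-ceph-cluster \
  --namespace rook-ceph -f values-cluster.yaml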
Once we connect to this pod, we can run Ceph-related commands such as ceph health and ceph status.
kubectl -n rook-ceph exec -it rook-ceph-tools-877d765f-bc8rs -- /bin/bash
In the command above, rook-ceph-tools-877d765f-bc8rs is the name of the pod returned by the previous kubectl command.
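Once inside the pod, the regular Ceph CLI is available, so we can check the cluster directly:
# Short health summary (HEALTH_OK / HEALTH_WARN / HEALTH_ERR)
ceph health
# Full overview: monitors, OSDs, pools, and placement group states
ceph status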
Fixing the HEALTH_WARN
In the previous deployment on Ubuntu/Docker, we had the same warning and fixed it by setting osd_pool_default_size. Let’s connect to the toolbox pod and apply the same fix here.
ceph config set global osd_pool_default_size 2
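To confirm the new default was recorded, we can dump the centralized configuration and filter for the option:
# The option should now show a value of 2 in the global section
ceph config dump | grep osd_pool_default_size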
Running ceph status again, we can compare the output with the previous health state: the “OSD count 2 < osd_pool_default_size 3” warning is gone.
Previously, we did not see a degraded data redundancy warning because we had set osd_pool_default_size before creating any pools. In this deployment, the pools were created before we changed the setting, so they still carry the old default size of 3.
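We can verify this from the toolbox pod; the pools created by the cluster chart should still report a replicated size of 3:
# Lists every pool with its replication size and other settings
ceph osd pool ls detail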
Now we need to change the pool specifications. The cluster deployment created a custom resource for each of these four CRDs:
- cephclusters.ceph.rook.io : rook-ceph
- cephblockpools.ceph.rook.io : ceph-blockpool
- cephfilesystems.ceph.rook.io : ceph-filesystem
- cephobjectstores.ceph.rook.io : ceph-objectstore
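These resources live in the rook-ceph namespace and can be listed with kubectl:
kubectl -n rook-ceph get cephclusters,cephblockpools,cephfilesystems,cephobjectstores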
We have already seen rook-ceph, which represents the Ceph cluster itself. The other three contain the following section in their spec.
replicated:
  size: 3
There are four instances of this section: one each in ceph-blockpool and ceph-objectstore, and two in ceph-filesystem (one for its metadata pool and one for its data pool).
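As a quick sketch for locating these occurrences (the exact output depends on your chart values), dump the three pool-related resources and filter for the replicated sections:
# Each match is followed by the size currently set for that pool
kubectl -n rook-ceph get cephblockpools,cephfilesystems,cephobjectstores -o yaml | grep -A1 'replicated:'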