Ceph Deployment with Rook: Part 2

DaeGon Kim
May 9, 2023

This is a continuation of the previous article. If you have not read it, please go back and start from there.

Last time we pointed out that the Ceph cluster was in the HEALTH_WARN state because the default pool replication size is larger than the number of OSDs. This information is obtained by connecting to the Ceph toolbox pod.

To find this pod, we run this command.

kubectl -n rook-ceph get pods -l "app=rook-ceph-tools"

This pod is launched because the toolbox enabled property is overridden in the Helm values. See the previous article for where this is specified.

toolbox:
  enabled: true
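
For reference, applying a change like this is just a matter of re-running the Helm install for the cluster chart. The command below is a sketch assuming the cluster was installed with the rook-ceph-cluster chart from the rook-release repository, as in the previous article; the values file name is only an example.

helm upgrade --install rook-ceph-cluster rook-release/rook-ceph-cluster --namespace rook-ceph -f values-override.yaml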

Once we connect to this pod, we can run Ceph-related commands such as ceph health and ceph status.

kubectl -n rook-ceph exec -it rook-ceph-tools-877d765f-bc8rs -- /bin/bash

In the command above, rook-ceph-tools-877d765f-bc8rs is the name of the pod returned by the previous kubectl command.
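
If you would rather not copy the generated pod name each time, kubectl can also resolve the toolbox Deployment to one of its running pods, so the following shortcut should work as well (assuming the Deployment is named rook-ceph-tools, which matches the pod name above):

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- /bin/bash

Once inside the shell, ceph health prints the one-line cluster state, ceph status prints the full summary, and ceph health detail expands each warning.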

Fixing the HEALTH_WARN

In the previous deployment on Ubuntu/Docker, we ran into the same warning and fixed it by setting osd_pool_default_size. Let’s connect to the toolbox pod and run the same command here.

ceph config set global osd_pool_default_size 2

Now, run ceph health again.

Comparing the output with the previous health state, the “OSD count 2 < osd_pool_default_size 3” warning is gone.
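
To double-check that the new default is in effect, you can query the configuration database from the same toolbox pod. The setting was written at the global level, so querying it for any daemon type (mon here) should report the new value of 2.

ceph config get mon osd_pool_default_size

Keep in mind that osd_pool_default_size only affects pools created from this point on; an existing pool keeps its current size until it is changed explicitly with ceph osd pool set <pool-name> size 2.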
