
Ceph: Let’s delete everything, Part 3

Deleting rbd images and pools

DaeGon Kim
3 min read · Dec 13, 2022

This is the third and final article in the series Ceph: Let’s delete everything.

Disconnecting Clients

To list the currently mapped rbd images, run the following command on a Ceph client node.

sudo rbd showmapped

Running this command on Ceph cluster nodes does not return any information.

(Figure: rbd showmapped output from a monitor node and a client node.)

I could not figure out how to get, from a Ceph cluster node, a list of the client nodes that have a given rbd image mapped. The Ceph dashboard does not show this either.
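For reference, rbd showmapped prints one row per mapped image with the local device in the last column. The sketch below uses made-up sample output (pool and image names are illustrative) and extracts the device column, which is what you later feed to rbd device unmap:

```shell
#!/bin/sh
# Illustrative sample of `rbd showmapped` output; on a real client node
# you would pipe the actual command output instead of this file.
cat <<'EOF' > /tmp/showmapped.txt
id  pool    namespace  image    snap  device
0   mypool             myimage  -     /dev/rbd0
EOF

# Skip the header row and print the device column (last field).
awk 'NR > 1 { print $NF }' /tmp/showmapped.txt
```

Running this prints /dev/rbd0 for the sample row above.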

If rbd images are mapped via rbdmap.service, remove the corresponding entries from /etc/ceph/rbdmap and restart the service.

sudo systemctl restart rbdmap.service
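The rbdmap file holds one mapping per line, in the form pool/image followed by map options. A minimal example of an entry to remove (pool, image, and keyring names here are illustrative):

```
# /etc/ceph/rbdmap
# poolname/imagename   id=client,keyring=/path/to/keyring
mypool/myimage   id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
```

Delete (or comment out) the line for the image you are retiring, then restart the service as shown above.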

If rbd images were mapped manually with the rbd map command, unmap them with:

sudo rbd device unmap [/dev/rbd0]

Before unmapping, make sure the block device is not mounted anywhere.
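A small guard script can enforce that check. This is a sketch, not part of the original article: /dev/rbd0 is just an example device, and the unmap command is only echoed (dry run) so you can review it first:

```shell
#!/bin/sh
# Refuse to unmap an RBD device that is still mounted.
# /dev/rbd0 is an example; pass the real device as the first argument.
DEV=${1:-/dev/rbd0}

# /proc/mounts lists every mounted device, one per line, device first.
if grep -q "^$DEV " /proc/mounts; then
    echo "$DEV is still mounted; run: sudo umount $DEV" >&2
    exit 1
fi

# Not mounted: safe to unmap. Dry run -- remove the echo to execute.
echo "sudo rbd device unmap $DEV"
```

On a node where /dev/rbd0 is not mounted, this prints the unmap command instead of running it.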

For Kubernetes, delete the PVCs (Persistent Volume Claims) and the associated StorageClass.

kubectl delete pvc [pvc-name] -n [namespace]
kubectl delete storageclasses.storage.k8s.io [name]

Deleting snapshots
