By Utkarsh Bhatt

Rook to C2(h): How to win Kubernetes with Ceph?

Updated: Jul 28, 2023


The modern cloud-native story is incomplete without containers. And with lean, mean deployments of containerised applications happening at scale, there has been significant growth in tools like Kubernetes (K8s) for their orchestration and observation. This brings storage to the front in a new light: for K8s applications to consume reliable storage at scale, they need a K8s-native solution.


Ceph is a popular software-defined storage solution in the cloud-native space, and containerised Ceph deployments are growing continuously, along with tools for their orchestration. One such tool, and the most popular in the Kubernetes space, is Rook. Rook is a storage operator that leverages K8s and Ceph to provide a magical self-healing, self-scaling storage service.


Today we'll do a teensy tutorial on setting up your own Rook-Ceph cluster. The only prerequisite is a working K8s environment, which we prepared using a 3-node MicroK8s deployment (but any flavor of K8s will do). We additionally enabled the following MicroK8s addons:

$ microk8s enable ha-cluster
$ microk8s enable dns
$ microk8s enable rbac
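As an optional sanity check, you can block until MicroK8s reports itself ready and eyeball that the addons took effect (a small sketch; the exact `status` output format varies between MicroK8s versions):

```shell
# Wait until MicroK8s is up, then confirm the addons appear in its status.
microk8s status --wait-ready | grep -E 'ha-cluster|dns|rbac'
```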

Verify that you have a working K8s deployment with:

$ kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
ingress       nginx-ingress-microk8s-controller-dfwpg    1/1     Running   0          18h
kube-system   coredns-7745f9f87f-25rnr                   1/1     Running   0          18h
kube-system   calico-node-9p9ml                          1/1     Running   0          18h
kube-system   calico-node-kggw6                          1/1     Running   0          18h
kube-system   calico-kube-controllers-574d68b45c-sqv5m   1/1     Running   0          18h
ingress       nginx-ingress-microk8s-controller-p829k    1/1     Running   0          18h
kube-system   calico-node-kp9qx                          1/1     Running   0          18h
ingress       nginx-ingress-microk8s-controller-qhp2w    1/1     Running   0          18h

Fetch Rook:

$ git clone --single-branch --branch v1.12.0 https://github.com/rook/rook.git

$ cd rook/deploy/examples/
$ ls
README.md
bucket-notification-endpoint.yaml
bucket-notification.yaml
bucket-topic.yaml
ceph-client.yaml
...
wordpress.yaml

Note: At the time of writing this blog, v1.12.0 was the latest Rook release.

Note: All YAMLs are sourced from the rook/deploy/examples directory shown above.


Modify operator.yaml for your flavor of K8s:

For MicroK8s, the following variable needs to point at its kubelet path.

ROOK_CSI_KUBELET_DIR_PATH: "/var/snap/microk8s/common/var/lib/kubelet"
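If you'd rather not edit the file by hand, a quick sed one-liner can do it; this is a sketch that assumes the stock operator.yaml ships the variable commented out with the default /var/lib/kubelet path:

```shell
# Assumes operator.yaml contains a commented-out default, e.g.:
#   # ROOK_CSI_KUBELET_DIR_PATH: "/var/lib/kubelet"
# Uncomment it and point it at the MicroK8s snap kubelet directory.
sed -i 's|^\( *\)# \(ROOK_CSI_KUBELET_DIR_PATH: \).*|\1\2"/var/snap/microk8s/common/var/lib/kubelet"|' operator.yaml
grep ROOK_CSI_KUBELET_DIR_PATH operator.yaml
```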

Modify cluster-test.yaml (test spec) or cluster.yaml (prod spec) to use an appropriate container image (optional):

This step is optional and can be skipped if you wish to use the CentOS-based upstream Ceph container images. However, since we are using the Canonical suite (MicroK8s, Ubuntu, etc.), I prefer to use the Canonical container images.

The following variable needs to be changed in the YAML file.

image: "ghcr.io/canonical/ceph:main"
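For context, in cluster.yaml this setting lives under spec.cephVersion; a minimal sketch of the relevant fragment (the surrounding fields are elided):

```yaml
spec:
  cephVersion:
    # Canonical's Ceph build instead of the CentOS-based upstream image
    image: "ghcr.io/canonical/ceph:main"
```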

Deploy Rook operator:

$ kubectl create -f crds.yaml -f common.yaml -f operator.yaml
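Before moving on, it's worth waiting for the operator pod to come up; a small sketch, assuming the stock app=rook-ceph-operator label from operator.yaml:

```shell
# Wait (up to 5 minutes) for the Rook operator pod to report Ready,
# then list it for a visual check.
kubectl -n rook-ceph wait --for=condition=Ready pod \
    -l app=rook-ceph-operator --timeout=300s
kubectl -n rook-ceph get pods -l app=rook-ceph-operator
```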

Deploy Ceph cluster:

$ kubectl create -f cluster.yaml
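The cluster takes a few minutes to converge as the operator spins up mons, mgrs, and OSDs. A sketch of how you might wait for the monitors, assuming the stock app=rook-ceph-mon label:

```shell
# Watch pods appear in the rook-ceph namespace, then wait for the
# monitor pod(s) to report Ready before querying cluster status.
kubectl -n rook-ceph get pods
kubectl -n rook-ceph wait --for=condition=Ready pod \
    -l app=rook-ceph-mon --timeout=600s
```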

Deploy Toolbox:

The Rook toolbox is a special pod used for standard CLI interfacing with the rest of the Ceph cluster. It can be used to query the cluster status and perform all sorts of operations, from creating a subvolume group to creating a multisite user. It can be deployed as:

$ kubectl create -f toolbox.yaml

Check Rook cluster status:

Once you have the rook-ceph-tools pod running, you can query your Ceph cluster status as:

$ toolbox=$(kubectl get pod -l app=rook-ceph-tools -n rook-ceph -o jsonpath='{.items[*].metadata.name}')
$ kubectl -n rook-ceph exec $toolbox -- ceph status
  cluster:
    id:     568dac65-46ed-4e3a-bc2d-5239fd4f1934
    health: HEALTH_OK
 
  services:
    mon:           1 daemons, quorum a (age 6m)
    mgr:           a(active, since 2m)
    mds:           1/1 daemons up, 1 hot standby
    osd:           1 osds: 1 up (since 21s), 1 in (since 14s)
    cephfs-mirror: 1 daemon active (1 hosts)
    rbd-mirror:    1 daemon active (1 hosts)
    rgw:           1 daemon active (1 hosts, 1 zones)
  
  data:
     volumes: 1/1 healthy
     pools:   13 pools, 201 pgs
     objects: 241 objects, 511 KiB
     usage:   49 MiB used, 6.0 GiB / 6 GiB avail
     pgs:     101 stale+active+clean
              100 active+clean

This, my friends, is how you deploy Rook-operated, containerised Ceph on Kubernetes.

Thanks and Cheers!
