
An Innovator’s Guide to Kubernetes Storage Using Ceph

Ajay Nemade

Cloud & DevOps

Kubernetes, the popular container orchestration tool, is changing the way applications are developed and deployed. You can specify the resources you need and have them available without worrying about the underlying infrastructure. Kubernetes is well ahead in terms of high availability, scaling, and application management, but storage in Kubernetes is still evolving. Support for more storage backends keeps being added, and several of them are already production ready.

Clustered applications are usually preferred for storing data. But what about non-clustered applications? Where do these applications store their data so that it remains highly available? With these questions in mind, let’s go through Ceph storage and its integration with Kubernetes.

What is Ceph Storage?

Ceph is open-source, software-defined storage maintained by Red Hat. It is capable of block, object, and file storage. Ceph clusters are designed to run on any hardware using an algorithm called CRUSH (Controlled Replication Under Scalable Hashing), which ensures that data is evenly distributed across the cluster and can be accessed quickly without any central bottleneck. Replication, thin provisioning, and snapshots are the key features of Ceph storage.

There are other good storage solutions, such as Gluster and Swift, but we are going with Ceph for the following reasons:

  1. File, Block, and Object storage in the same wrapper.
  2. Better transfer speed and lower latency
  3. Easily accessible storage that can quickly scale up or down

In this blog, we are going to integrate two types of Ceph storage with Kubernetes:

  1. Ceph-RBD
  2. CephFS

Ceph Deployment

Deploying a highly available Ceph cluster is fairly straightforward. I am assuming that you are familiar with setting up a Ceph cluster; if not, refer to the official documentation here.

If you check the status, you should see something like:

CODE: https://gist.github.com/velotiotech/9125fa16fbb89ddf3175fa1e5ddda356.js

Note that my Ceph monitor IPs are 10.0.1.118, 10.0.1.227, and 10.0.1.172.
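
If you want to re-check this at any point, the relevant commands are (a minimal sketch, assuming you run them from a node that has the client.admin keyring):

    # Overall cluster status, including health and the list of monitors
    ceph -s

    # More targeted checks
    ceph health detail
    ceph mon stat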

K8s Integration

After setting up the Ceph cluster, we will consume it from Kubernetes. I am assuming that your Kubernetes cluster is up and running. We will be using Ceph-RBD and CephFS as storage backends in Kubernetes.

Ceph-RBD and Kubernetes

We need a Ceph RBD client for the Kubernetes cluster to interact with Ceph-RBD. This client is not included in the official kube-controller-manager container, so let’s create an external storage plugin (provisioner) for Ceph.
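
The gists below contain the complete manifests. For orientation, a minimal sketch of the provisioner Deployment itself might look like this (the namespace, ServiceAccount name, and image tag are assumptions based on the kubernetes-incubator/external-storage project; the accompanying RBAC objects are omitted here):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rbd-provisioner
      namespace: kube-system                # assumed namespace
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: rbd-provisioner
      template:
        metadata:
          labels:
            app: rbd-provisioner
        spec:
          serviceAccountName: rbd-provisioner          # assumed; bound via the RBAC in the gists
          containers:
            - name: rbd-provisioner
              image: quay.io/external_storage/rbd-provisioner:latest   # assumed image/tag
              env:
                - name: PROVISIONER_NAME
                  value: ceph.com/rbd                  # must match the storage class provisioner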

CODE: https://gist.github.com/velotiotech/eb8a349525adc29227b8a590de5da2fa.js

CODE: https://gist.github.com/velotiotech/ccbed73ce17c2b3c6e0e18aba41c87a5.js

  • You will get output like this:

CODE: https://gist.github.com/velotiotech/88fdcfc48c5d9d3004a9a79b54865a47.js

  • Check the RBD volume provisioner status and wait until it is in the Running state. You should see something like the following:

CODE: https://gist.github.com/velotiotech/8ac4ff3075cfe97f21eecf9ed0f45c43.js

  • Once the provisioner is up, it needs the Ceph admin key to provision storage. You can run the following command to get the admin key:

CODE: https://gist.github.com/velotiotech/15bc39ee4c56ca71f85e6e0e07a4dce6.js
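
A minimal sketch of that command, run on a Ceph monitor node (if you paste the key into a Secret manifest it must be base64-encoded; kubectl create secret does the encoding for you):

    # Print the admin key
    ceph auth get-key client.admin

    # Base64-encode it for use directly in a Secret manifest
    ceph auth get-key client.admin | base64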

  • Let’s create a separate Ceph pool for Kubernetes and a new client key:

CODE: https://gist.github.com/velotiotech/a94b8772b163caeca5abd54431306a12.js
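
A hedged sketch of this step, assuming a pool named kube and a client named client.kube (the placement-group count of 128 is an assumption; size it for your cluster):

    # Create a dedicated pool for Kubernetes volumes
    ceph osd pool create kube 128

    # Create a client allowed to read monitors and read/write RBD images in the kube pool
    ceph auth get-or-create client.kube mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube'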

  • Get the auth token that we created in the above command and create a Kubernetes secret for the new client of the kube pool.

CODE: https://gist.github.com/velotiotech/8b9f49439dc1f3aaf61f323ab0bfac2c.js
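
For example, the two secrets could be created along these lines (the secret names and namespace are assumptions and only have to match what the storage class references; the commands also assume the ceph CLI is available where kubectl runs):

    # Admin secret, used by the provisioner to create RBD images
    kubectl create secret generic ceph-admin-secret \
        --type="kubernetes.io/rbd" \
        --from-literal=key="$(ceph auth get-key client.admin)" \
        --namespace=kube-system

    # User secret for the kube pool, used when pods map the RBD images
    kubectl create secret generic ceph-secret-kube \
        --type="kubernetes.io/rbd" \
        --from-literal=key="$(ceph auth get-key client.kube)" \
        --namespace=kube-system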

  • Now let’s create the storage class.

CODE: https://gist.github.com/velotiotech/fc8f585710395497cd14b568a908a538.js

CODE: https://gist.github.com/velotiotech/bcc239716d6fa2979745b0b400a21d05.js
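
A minimal sketch of the storage class, using the monitor IPs noted earlier; the pool, user, and secret names match the assumed values from the previous steps:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-rbd
    provisioner: ceph.com/rbd              # name advertised by the external rbd-provisioner
    parameters:
      monitors: 10.0.1.118:6789,10.0.1.227:6789,10.0.1.172:6789
      adminId: admin
      adminSecretName: ceph-admin-secret
      adminSecretNamespace: kube-system
      pool: kube
      userId: kube
      userSecretName: ceph-secret-kube
      userSecretNamespace: kube-system
      imageFormat: "2"
      imageFeatures: layering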

  • We are all set now. We can test Ceph-RBD by creating a PVC; once the PVC is created, a PV will be created automatically. Let’s create the PVC now:

CODE: https://gist.github.com/velotiotech/82fabdcecab06ee1b004b8c0843c597f.js

CODE: https://gist.github.com/velotiotech/f577ac24bdb0c2732e495cf8c7ec9101.js
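
A sketch of such a claim, requesting 1Gi from the storage class above (the claim name and size are arbitrary):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ceph-rbd-claim
    spec:
      storageClassName: ceph-rbd
      accessModes:
        - ReadWriteOnce          # RBD is block storage, so a single node mounts it read-write
      resources:
        requests:
          storage: 1Gi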

  • If you check the PVC, you’ll find that it is bound to the PV created by the storage class.
  • Let’s check the persistent volume:

CODE: https://gist.github.com/velotiotech/2c87fdb7919bfe82ee1e85d821ce04d0.js
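
The checks themselves are plain kubectl queries, for example:

    kubectl get pvc ceph-rbd-claim     # STATUS should be Bound
    kubectl get pv                     # shows the dynamically provisioned volume backing the claim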

So far we have seen how to use block-based storage, i.e., Ceph-RBD, with Kubernetes by creating a dynamic storage provisioner. Now let’s go through the process of setting up file-system-based storage, i.e., CephFS.

CephFS and Kubernetes

  • Let’s create the provisioner and storage class for CephFS. First, create a dedicated namespace for CephFS:

CODE: https://gist.github.com/velotiotech/0400ce7b361f6f591ae44e1a6e0be6a7.js
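
For example (the namespace name cephfs is an assumption used throughout this section):

    kubectl create namespace cephfs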

  • Create the Kubernetes secret using the Ceph admin auth token:

CODE: https://gist.github.com/velotiotech/1d67b5b0dbf58a4a2f7080cdf83c3563.js
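
A sketch of that step, reusing the admin key retrieved earlier (the secret name is an assumption and must match what the CephFS storage class references):

    kubectl create secret generic ceph-admin-secret \
        --from-literal=key="$(ceph auth get-key client.admin)" \
        --namespace=cephfs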

  • Create the cluster role, role binding, and provisioner:

CODE: https://gist.github.com/velotiotech/46036633937f53013d482bad3bcde863.js

CODE: https://gist.github.com/velotiotech/b7c1afa93f89fe1e80cd7981b7af72f5.js
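
For orientation, a trimmed-down sketch of the provisioner Deployment (the image, command, and ServiceAccount are assumptions based on the external-storage cephfs-provisioner; the RBAC objects from the gists above are omitted):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cephfs-provisioner
      namespace: cephfs
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: cephfs-provisioner
      template:
        metadata:
          labels:
            app: cephfs-provisioner
        spec:
          serviceAccountName: cephfs-provisioner       # assumed; bound via the RBAC above
          containers:
            - name: cephfs-provisioner
              image: quay.io/external_storage/cephfs-provisioner:latest   # assumed image/tag
              command: ["/usr/local/bin/cephfs-provisioner"]              # assumed entrypoint
              args: ["-id=cephfs-provisioner-1"]
              env:
                - name: PROVISIONER_NAME
                  value: ceph.com/cephfs               # must match the storage class provisioner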

  • Create the storage class:

CODE: https://gist.github.com/velotiotech/6500440168c952d1f1e4ca9943fd676a.js

CODE: https://gist.github.com/velotiotech/06ffc38ab426e42adb6a784092d26984.js
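
A minimal sketch of the CephFS storage class, again assuming the monitor IPs from the cluster status and the admin secret created above:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cephfs
    provisioner: ceph.com/cephfs           # name advertised by the cephfs-provisioner
    parameters:
      monitors: 10.0.1.118:6789,10.0.1.227:6789,10.0.1.172:6789
      adminId: admin
      adminSecretName: ceph-admin-secret
      adminSecretNamespace: cephfs
      claimRoot: /pvc-volumes              # assumed directory under which volumes are carved out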

  • We are all set now. The CephFS provisioner is created; let’s wait until it is in the Running state.

CODE: https://gist.github.com/velotiotech/e5358bf8a19171f95f2d06c45a3bd5a5.js

  • Once the CephFS provisioner is up, try creating a persistent volume claim. In this step, the storage class will take care of creating the persistent volume dynamically.

CODE: https://gist.github.com/velotiotech/c2d4c20deefc3511cc941e39836916d0.js

CODE: https://gist.github.com/velotiotech/3b1fa29fa5117643f6caa01952236959.js
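
A sketch of a CephFS claim; note that CephFS supports ReadWriteMany access, unlike RBD:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cephfs-claim
      namespace: cephfs
    spec:
      storageClassName: cephfs
      accessModes:
        - ReadWriteMany          # a CephFS volume can be mounted by many pods at once
      resources:
        requests:
          storage: 1Gi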

  • Let’s check the created PV and PVC:

CODE: https://gist.github.com/velotiotech/97a4dcfbf0231b39fb951b64d0cb218c.js

Conclusion

We have seen how to integrate Ceph storage with Kubernetes, covering both Ceph-RBD and CephFS. This approach is particularly useful when your application is not a clustered application and you want its data to be highly available.


Did you like the blog? If yes, we're sure you'll also like to work with the people who write them - our best-in-class engineering team.

We're looking for talented developers who are passionate about new emerging technologies. If that's you, get in touch with us.

Explore current openings
