How to prepare a dev/test Ceph environment

When you are using Ceph in production, it is important to have an environment where you can test upcoming upgrades, configuration changes, integration of new clusters, or any other significant changes without touching the real production clusters. Such an environment can be built with a tool called Vagrant, which can very quickly build a virtualized environment described in one relatively simple config file.

We are using Vagrant on Linux with the libvirt and hostmanager plugins. Libvirt is a toolkit for managing Linux KVM VMs, and the hostmanager plugin keeps /etc/hosts in sync across the guests so the nodes can reach each other by hostname. Vagrant can also create virtualized networks to interconnect those VMs, as well as virtual storage devices, so you can have an almost identical copy of your production cluster if you need it.

Let's create a 5-node Ceph cluster. The first 3 nodes will be dedicated to control daemons (MON/MGR), all nodes will also be OSD nodes (2 x 10 GB disks on each node by default), and one node will be a client node. The client node can be used for testing access to cluster services: mapping RBD images, mounting CephFS filesystems, accessing RGW buckets, or whatever you like. The host machine where the virtualized environment runs can be any Linux machine (Ubuntu 22.04 in our case) with KVM virtualization enabled.

user@hostmachine:~/$ kvm-ok 
INFO: /dev/kvm exists
KVM acceleration can be used
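
If the kvm-ok utility is missing on your system, it ships in the cpu-checker package on Ubuntu:

sudo apt-get install cpu-checker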

Install required packages:

sudo apt-get install qemu libvirt-daemon-system libvirt-clients ebtables dnsmasq-base
sudo apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev
sudo apt-get install libguestfs-tools
sudo apt-get install build-essential
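
Depending on your setup, you may also need to add your user to the libvirt group so that Vagrant can manage VMs without root privileges. Log out and back in, or use newgrp, for the group change to take effect:

sudo usermod -aG libvirt $USER
newgrp libvirt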

Install Vagrant according to the steps on the official installation page: https://developer.hashicorp.com/vagrant/downloads
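You can verify the installation with:

vagrant --version
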
Then we need to install the Vagrant plugins:

vagrant plugin install vagrant-libvirt vagrant-hostmanager 
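
You can confirm that both plugins were installed with:

vagrant plugin list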

If there is no SSH keypair in ~/.ssh, generate one. This keypair will be injected into the VMs because cephadm, which we will use for the Ceph deployment, needs SSH connectivity between the VMs, and this keypair will be used for SSH authentication between the nodes.

ssh-keygen
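
If you prefer to generate the keypair non-interactively, you can pass everything on the command line. This assumes the default ~/.ssh/id_rsa path is the one injected into the VMs; adjust if your Vagrantfile expects a different key:

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa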

Now you should be ready to start the virtual environment on the machine.

mkdir ceph-vagrant; cd ceph-vagrant
wget https://gist.githubusercontent.com/kmadac/171a5b84a6b64700f163c716f5028f90/raw/1cd844197c3b765571e77c58c98759db77db7a75/Vagrantfile

vagrant up
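
Bringing up all five VMs can take a while. You can check their state at any time with:

vagrant status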

When vagrant up ends without any error, Ceph will continue installing in the background for a couple more minutes. You can check the deployment progress by accessing the Ceph shell on node0:

vagrant ssh ceph1-node0
vagrant@ceph1-node0:~$ sudo cephadm shell
root@ceph1-node0:/# ceph -W cephadm --watch-debug

At the end, you should get a healthy Ceph cluster with 3 MON daemons and 6 OSD daemons:

root@ceph1-node0:/# ceph -s
  cluster:
    id:     774c4454-7d1e-11ed-91a2-279e3b86d070
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph1-node0,ceph1-node1,ceph1-node2 (age 13m)
    mgr: ceph1-node0.yxrsrj(active, since 21m), standbys: ceph1-node1.oqrkhf
    osd: 6 osds: 6 up (since 12m), 6 in (since 13m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   33 MiB used, 60 GiB / 60 GiB avail
    pgs:     1 active+clean
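
You can also cross-check that cephadm has registered all hosts and started the expected daemons with the standard orchestrator commands:

root@ceph1-node0:/# ceph orch host ls
root@ceph1-node0:/# ceph orch ps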

Now your cluster is up and running, and you can install additional services like CephFS or RGW, play with adding/removing nodes, or upgrade to the next version. By changing the CLUSTER_ID variable in the Vagrantfile and copying the Vagrantfile to another directory, you can deploy a second cluster and try to set up replication (rbd-mirror, cephfs-mirror, RGW multizone configuration) between the clusters. You are constrained only by the boundaries of your imagination.
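
For example, CephFS and a test RGW service can each be deployed from the cephadm shell with a single orchestrator command; cephadm schedules the required MDS and RGW daemons for you (the names testfs and testrgw are just examples):

root@ceph1-node0:/# ceph fs volume create testfs
root@ceph1-node0:/# ceph orch apply rgw testrgw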

When you are done with your tests, you can simply destroy the environment with

vagrant destroy -f
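
Afterwards, you can confirm that libvirt no longer lists any of the cluster VMs:

virsh list --all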

Author

Kamil Madáč
Grow2FIT Infrastructure Consultant

Kamil is a Senior Cloud / Infrastructure consultant with 20+ years of experience and strong know-how in designing, implementing, and administering private cloud solutions (primarily built on open-source technologies such as OpenStack). He has many years of experience with application development in Python and, more recently, in Go. Kamil has substantial know-how in SDS (software-defined storage), SDN (software-defined networking), data storage (Ceph, NetApp), administration of Linux servers, and operation of deployed solutions.
Kamil regularly contributes to open-source projects (OpenStack, Kuryr, the Python Requests library).
