
Lab: Deploying Red Hat Ceph Storage

In this review, you deploy a Red Hat Ceph Storage cluster using a service specification file.

Outcomes

You should be able to deploy a Red Hat Ceph Storage cluster using a service specification file.

If you did not reset your classroom virtual machines at the end of the last chapter, save any work you want to keep from earlier exercises on those machines and reset the classroom environment now.

Important

Reset your environment before performing this exercise. All comprehensive review labs start with a clean, initial classroom environment that includes a pre-built, fully operational Ceph cluster. This first comprehensive review will remove that cluster, but still requires the rest of the clean classroom environment.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

[student@workstation ~]$ lab start comprehensive-review1

This command confirms that the local container registry for the classroom is running and deletes the prebuilt Ceph cluster so it can be redeployed with the steps in this exercise.

Important

This lab start script immediately deletes the prebuilt Ceph cluster and takes a few minutes to complete. Wait for the command to finish before continuing.

Specifications

  • Deploy a four-node Red Hat Ceph Storage cluster using a service specification file with these parameters:

    • Use the registry at registry.lab.example.com with the username registry and the password redhat.

    • Deploy MONs on the clienta, serverc, serverd, and servere nodes.

    • Deploy RGWs on the serverc and serverd nodes, with the service_id set to realm.zone.

    • Deploy MGRs on the clienta, serverc, serverd, and servere nodes.

    • Deploy OSDs on the serverc, serverd, and servere nodes, with the service_id set to default_drive_group. On all OSD nodes, use the /dev/vdb, /dev/vdc, and /dev/vdd drives as data devices.

      Hostname                 IP Address
      clienta.lab.example.com  172.25.250.10
      serverc.lab.example.com  172.25.250.12
      serverd.lab.example.com  172.25.250.13
      servere.lab.example.com  172.25.250.14
  • After the cluster is installed, manually add the /dev/vde and /dev/vdf drives as data devices on the servere node.

    • Set the OSD journal size to 1024 MiB.

    • Use 172.25.250.0/24 for the OSD public network, and 172.25.249.0/24 for the OSD cluster network.

  1. Using the serverc host as the bootstrap host, install the cephadm-ansible package, create the inventory file, and run the pre-flight playbook to prepare cluster hosts.

    1. On the serverc host, install the cephadm-ansible package.

      [student@workstation ~]$ ssh admin@serverc
      [admin@serverc ~]$ sudo -i
      [root@serverc ~]# yum install cephadm-ansible
      ...output omitted...
      Complete!
    2. Create the hosts inventory file in the /usr/share/cephadm-ansible directory.

      [root@serverc ~]# cd /usr/share/cephadm-ansible
      [root@serverc cephadm-ansible]# cat hosts
      clienta.lab.example.com
      serverc.lab.example.com
      serverd.lab.example.com
      servere.lab.example.com
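
      Optionally, before running the pre-flight playbook, confirm that Ansible can reach every host in the inventory. This quick check relies on the same SSH access that the playbook itself uses; each host should report SUCCESS.

      [root@serverc cephadm-ansible]# ansible -i hosts all -m ping
      ...output omitted...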
    3. Run the cephadm-preflight.yml playbook.

      [root@serverc cephadm-ansible]# ansible-playbook -i hosts \
      cephadm-preflight.yml --extra-vars "ceph_origin="
      ...output omitted...

      Note

      The ceph_origin variable is set to empty, which causes some playbook tasks to be skipped, because in this classroom the Ceph packages are installed from a local repository. In a production environment, set ceph_origin to rhcs to enable the Red Hat Ceph Storage Tools repository for a supported deployment.
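
      For illustration only, a production run of the same playbook would typically look like the following, with ceph_origin set to rhcs as described above (the node prompt is a placeholder):

      [root@node ~]# ansible-playbook -i hosts cephadm-preflight.yml \
      --extra-vars "ceph_origin=rhcs"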

  2. On the serverc host, create the initial-config-primary-cluster.yaml cluster service specification file in the /root/ceph directory. Include four hosts with the following specifications:

    • Deploy MONs on clienta, serverc, serverd, and servere.

    • Deploy RGWs on serverc and serverd, with the service_id set to realm.zone.

    • Deploy MGRs on clienta, serverc, serverd, and servere.

    • Deploy OSDs on the serverc, serverd, and servere nodes, with the service_id set to default_drive_group. On all OSD nodes, use the /dev/vdb, /dev/vdc, and /dev/vdd drives as data devices.

      Hostname                 IP Address
      clienta.lab.example.com  172.25.250.10
      serverc.lab.example.com  172.25.250.12
      serverd.lab.example.com  172.25.250.13
      servere.lab.example.com  172.25.250.14
    1. Create the initial-config-primary-cluster.yaml cluster service specification file in the /root/ceph directory.

      [root@serverc cephadm-ansible]# cd /root/ceph
      [root@serverc ceph]# cat initial-config-primary-cluster.yaml
      service_type: host
      addr: 172.25.250.10
      hostname: clienta.lab.example.com
      ---
      service_type: host
      addr: 172.25.250.12
      hostname: serverc.lab.example.com
      ---
      service_type: host
      addr: 172.25.250.13
      hostname: serverd.lab.example.com
      ---
      service_type: host
      addr: 172.25.250.14
      hostname: servere.lab.example.com
      ---
      service_type: mon
      placement:
        hosts:
          - clienta.lab.example.com
          - serverc.lab.example.com
          - serverd.lab.example.com
          - servere.lab.example.com
      ---
      service_type: rgw
      service_id: realm.zone
      placement:
        hosts:
          - serverc.lab.example.com
          - serverd.lab.example.com
      ---
      service_type: mgr
      placement:
        hosts:
          - clienta.lab.example.com
          - serverc.lab.example.com
          - serverd.lab.example.com
          - servere.lab.example.com
      ---
      service_type: osd
      service_id: default_drive_group
      placement:
        host_pattern: 'server*'
      data_devices:
        paths:
          - /dev/vdb
          - /dev/vdc
          - /dev/vdd
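
      Because this is a multi-document YAML file, a syntax error in any section can cause the --apply-spec step to fail during bootstrap. As an optional sanity check, you can parse the file first. This sketch assumes the Python PyYAML module is available, which is normally the case on hosts where Ansible is installed; the command prints nothing and returns 0 when every document parses cleanly.

      [root@serverc ceph]# python3 -c 'import yaml; list(yaml.safe_load_all(open("initial-config-primary-cluster.yaml")))'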
  3. As the root user on the serverc host, bootstrap the Ceph cluster using the created service specification file.

    Set the Ceph dashboard password to redhat and use the --dashboard-password-noupdate option. Use the --allow-fqdn-hostname option to allow fully qualified domain names for the hosts. The registry URL is registry.lab.example.com, the username is registry, and the password is redhat.

    1. As the root user on the serverc host, run the cephadm bootstrap command with the provided parameters to bootstrap the Ceph cluster. Use the created service specification file.

      [root@serverc ceph]# cephadm bootstrap --mon-ip=172.25.250.12 \
      --apply-spec=initial-config-primary-cluster.yaml \
      --initial-dashboard-password=redhat \
      --dashboard-password-noupdate \
      --allow-fqdn-hostname \
      --registry-url=registry.lab.example.com \
      --registry-username=registry \
      --registry-password=redhat
      ...output omitted...
      Ceph Dashboard is now available at:
      
             URL: https://serverc.lab.example.com:8443/
            User: admin
        Password: redhat
      
      Applying initial-config-primary-cluster.yaml to cluster
      Adding ssh key to clienta.lab.example.com
      Adding ssh key to serverd.lab.example.com
      Adding ssh key to servere.lab.example.com
      Added host 'clienta.lab.example.com' with addr '172.25.250.10'
      Added host 'serverc.lab.example.com' with addr '172.25.250.12'
      Added host 'serverd.lab.example.com' with addr '172.25.250.13'
      Added host 'servere.lab.example.com' with addr '172.25.250.14'
      Scheduled mon update...
      Scheduled rgw.realm.zone update...
      Scheduled mgr update...
      Scheduled osd.default_drive_group update...
      
      You can access the Ceph CLI with:
      
        sudo /sbin/cephadm shell --fsid cd6a42ce-36f6-11ec-8c67-52540000fa0c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
      
      
      Please consider enabling telemetry to help improve Ceph:
      
      	ceph telemetry on
      
      For more information see:
      
        https://docs.ceph.com/docs/pacific/mgr/telemetry/
      
      Bootstrap complete.
    2. As the root user on the serverc host, run the cephadm shell.

      [root@serverc ceph]# cephadm shell
    3. Verify the cluster status. Wait until the cluster reaches the HEALTH_OK status before continuing.

      [ceph: root@serverc /]# ceph status
        cluster:
          id:     cd6a42ce-36f6-11ec-8c67-52540000fa0c
          health: HEALTH_OK
      
        services:
          mon: 1 daemons, quorum serverc.lab.example.com (age 2m)
          mgr: serverc.lab.example.com.anabtp(active, since 91s), standbys: clienta.trffqp
          osd: 9 osds: 9 up (since 21s), 9 in (since 46s)
      
        data:
          pools:   1 pools, 1 pgs
          objects: 0 objects, 0 B
          usage:   47 MiB used, 90 GiB / 90 GiB avail
          pgs:     1 active+clean
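
      While the scheduled mon, mgr, rgw, and osd services are still being deployed, you can watch their progress from the same cephadm shell. The ceph orch ls command compares the number of running daemons for each service with the number expected from the specification. Rerun ceph status until the cluster reports HEALTH_OK.

      [ceph: root@serverc /]# ceph orch ls
      ...output omitted...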
  4. Label the clienta host as the admin node. Manually copy the ceph.conf and ceph.client.admin.keyring files to the admin node. On the admin node, test the cephadm shell.

    1. Label the clienta host as the admin node.

      [ceph: root@serverc /]# ceph orch host label add clienta.lab.example.com _admin
      Added label _admin to host clienta.lab.example.com
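
      To confirm that the label was applied, list the cluster hosts; the _admin label appears in the LABELS column next to clienta.lab.example.com.

      [ceph: root@serverc /]# ceph orch host ls
      ...output omitted...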
    2. Copy the ceph.conf and ceph.client.admin.keyring files from the serverc host to the clienta host. These files are located in the /etc/ceph directory on both hosts.

      [ceph: root@serverc /]# exit
      exit
      [root@serverc ceph]# cd /etc/ceph
      [root@serverc ceph]# scp {ceph.client.admin.keyring,ceph.conf} \
      root@clienta:/etc/ceph/
      Warning: Permanently added 'clienta' (ECDSA) to the list of known hosts.
      ceph.client.admin.keyring                   100%   63   105.6KB/s   00:00
      ceph.conf                                   100%  177   528.3KB/s   00:00
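
      Optionally, verify that both files arrived on the clienta host. This check assumes the same root SSH access that the scp command above uses.

      [root@serverc ceph]# ssh root@clienta ls -l /etc/ceph
      ...output omitted...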
    3. On the admin node, test the cephadm shell.

      [root@serverc ceph]# exit
      logout
      [admin@serverc ~]$ exit
      Connection to serverc closed.
      [student@workstation ~]$ ssh admin@clienta
      [admin@clienta ~]$ sudo cephadm shell
      Inferring fsid cd6a42ce-36f6-11ec-8c67-52540000fa0c
      Inferring config /var/lib/ceph/cd6a42ce-36f6-11ec-8c67-52540000fa0c/mon.clienta/config
      Using recent ceph image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:6306...47ff
      [ceph: root@clienta /]# ceph health
      HEALTH_OK
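
      Because clienta is now labeled as an admin node and has the configuration and keyring files, it can also run orchestrator commands. For example, listing the cluster daemons from clienta confirms full administrative access.

      [ceph: root@clienta /]# ceph orch ps
      ...output omitted...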
  5. Manually add OSDs to the servere node using devices /dev/vde and /dev/vdf. Set 172.25.250.0/24 for the OSD public network and 172.25.249.0/24 for the OSD cluster network.

    1. Display the servere node storage device inventory on the Ceph cluster. Verify that the /dev/vde and /dev/vdf devices are available.

      [ceph: root@clienta /]# ceph orch device ls --hostname=servere.lab.example.com
      Hostname                 Path      Type  Serial                Size   Health   Ident  Fault  Available
      servere.lab.example.com  /dev/vde  hdd   4d212d34-e5a0-4347-9  10.7G  Unknown  N/A    N/A    Yes
      servere.lab.example.com  /dev/vdf  hdd   d86b1a78-10b5-46af-9  10.7G  Unknown  N/A    N/A    Yes
      servere.lab.example.com  /dev/vdb  hdd   1880975e-c78f-4347-8  10.7G  Unknown  N/A    N/A    No
      servere.lab.example.com  /dev/vdc  hdd   2df15dd0-8eb6-4425-8  10.7G  Unknown  N/A    N/A    No
      servere.lab.example.com  /dev/vdd  hdd   527656ac-8c51-47b2-9  10.7G  Unknown  N/A    N/A    No
    2. Create the OSDs using the /dev/vde and /dev/vdf devices on the servere node.

      [ceph: root@clienta /]# ceph orch daemon add osd servere.lab.example.com:/dev/vde
      Created osd(s) 9 on host 'servere.lab.example.com'
      [ceph: root@clienta /]# ceph orch daemon add osd servere.lab.example.com:/dev/vdf
      Created osd(s) 10 on host 'servere.lab.example.com'
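
      The cluster now contains 11 OSDs: nine created from the service specification plus the two added manually. You can confirm the total and see which host each OSD belongs to by displaying the OSD tree.

      [ceph: root@clienta /]# ceph osd tree
      ...output omitted...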
    3. For the OSD options, set public_network to 172.25.250.0/24 and cluster_network to 172.25.249.0/24.

      [ceph: root@clienta /]# ceph config set osd public_network 172.25.250.0/24
      [ceph: root@clienta /]# ceph config set osd cluster_network 172.25.249.0/24
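
      You can read the values back from the cluster configuration database to confirm that they were stored.

      [ceph: root@clienta /]# ceph config get osd public_network
      172.25.250.0/24
      [ceph: root@clienta /]# ceph config get osd cluster_network
      172.25.249.0/24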
    4. Return to the workstation machine as the student user.

      [ceph: root@clienta /]# exit
      [admin@clienta ~]$ exit
      [student@workstation ~]$

Evaluation

As the student user on the workstation machine, use the lab command to grade your work. Correct any reported failures and rerun the command until successful.

[student@workstation ~]$ lab grade comprehensive-review1

Finish

As the student user on the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish comprehensive-review1

This concludes the lab.

Revision: cl260-5.0-29d2128