Guided Exercise: Deploying Red Hat Ceph Storage

In this exercise, you will install a Red Hat Ceph Storage cluster.

Outcomes

You should be able to install a containerized Ceph cluster by using a service specification file.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

[student@workstation ~]$ lab start deploy-deploy

This command confirms that the local container registry for the classroom is running and deletes the prebuilt Ceph cluster so it can be redeployed with the steps in this exercise.

Important

This lab start script immediately deletes the prebuilt Ceph cluster and takes a few minutes to complete. Wait for the command to finish before continuing.

Procedure 2.1. Instructions

  1. Log in to serverc as the admin user and switch to the root user.

    [student@workstation ~]$ ssh admin@serverc
    [admin@serverc ~]$ sudo -i
    [root@serverc ~]#
  2. Install the cephadm-ansible package, create the inventory file, and run the cephadm-preflight.yml playbook to prepare cluster hosts.

    1. Install the cephadm-ansible package on serverc.

      [root@serverc ~]# yum install cephadm-ansible
      ...output omitted...
      Complete!
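
      Optionally, confirm that the package is installed and that its playbooks are available. This is a quick sanity check, not part of the exercise itself, and the package version shown in your classroom might differ.

      [root@serverc ~]# rpm -q cephadm-ansible
      ...output omitted...
      [root@serverc ~]# ls /usr/share/cephadm-ansible/
      ...output omitted...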
    2. Create the hosts inventory file in the /usr/share/cephadm-ansible directory. The inventory must list all of the cluster hosts.

      [root@serverc ~]# cd /usr/share/cephadm-ansible
      [root@serverc cephadm-ansible]# cat hosts
      clienta.lab.example.com
      serverc.lab.example.com
      serverd.lab.example.com
      servere.lab.example.com
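
      Optionally, verify that Ansible can reach every host in the inventory before running the preflight playbook. This sketch assumes that passwordless SSH from serverc to the cluster hosts is already configured, as it is in this classroom.

      [root@serverc cephadm-ansible]# ansible -i hosts all -m ping
      ...output omitted...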
    3. Run the cephadm-preflight.yml playbook.

      [root@serverc cephadm-ansible]# ansible-playbook -i hosts \
      cephadm-preflight.yml --extra-vars "ceph_origin="
      ...output omitted...

      The ceph_origin variable is set to an empty value, which skips some playbook tasks because, in this classroom, the Ceph packages are installed from a local classroom repository. In a production environment, set ceph_origin to rhcs to enable the Red Hat Ceph Storage Tools repository for a supported deployment, as shown in the example that follows.
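
      For reference, a production run of the same playbook would look similar to the following sketch, with ceph_origin set to rhcs so that the playbook configures the Red Hat repositories on each host:

      [root@serverc cephadm-ansible]# ansible-playbook -i hosts \
      cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"
      ...output omitted...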

  3. Review the initial-config-primary-cluster.yaml file in the /root/ceph/ directory.

    ---
    service_type: host 1
    addr: 172.25.250.10
    hostname: clienta.lab.example.com
    ---
    service_type: host
    addr: 172.25.250.12
    hostname: serverc.lab.example.com
    ---
    service_type: host
    addr: 172.25.250.13
    hostname: serverd.lab.example.com
    ---
    service_type: host
    addr: 172.25.250.14
    hostname: servere.lab.example.com
    ---
    service_type: mon 2
    placement:
      hosts:
        - clienta.lab.example.com
        - serverc.lab.example.com
        - serverd.lab.example.com
        - servere.lab.example.com
    ---
    service_type: rgw 3
    service_id: realm.zone
    placement:
      hosts:
        - serverc.lab.example.com
        - serverd.lab.example.com
    ---
    service_type: mgr 4
    placement:
      hosts:
        - clienta.lab.example.com
        - serverc.lab.example.com
        - serverd.lab.example.com
        - servere.lab.example.com
    ---
    service_type: osd 5
    service_id: default_drive_group
    placement: 6
      host_pattern: 'server*'
    data_devices:
      paths:
        - /dev/vdb
        - /dev/vdc
        - /dev/vdd

    1

    The service_type: host entries define the nodes to add after the cephadm bootstrap completes. The clienta host is configured as an admin node later in this exercise.

    2

    The Ceph Orchestrator deploys one monitor daemon by default. In this file, the service_type: mon section deploys a Ceph Monitor daemon on each of the listed hosts.

    3

    The service_type: rgw deploys a Ceph Object Gateway daemon on each of the listed hosts.

    4

    The service_type: mgr deploys a Ceph Manager daemon on each of the listed hosts.

    5

    The service_type: osd deploys ceph-osd daemons on the hosts that match the placement pattern, backed by the /dev/vdb, /dev/vdc, and /dev/vdd devices.

    6

    The placement section defines where and how to deploy the daemons. In this case, the host_pattern value matches the serverc, serverd, and servere hosts.
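
    The same service specification format also works on a running cluster. After the cluster is deployed, you can regenerate this kind of specification from the running cluster, which is useful for comparing the intended layout with the actual one. A minimal sketch, run from within the cephadm shell after deployment:

    [ceph: root@serverc /]# ceph orch ls --export
    ...output omitted...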

  4. As the root user on the serverc node, run the cephadm bootstrap command to create the Ceph cluster. Use the initial-config-primary-cluster.yaml service specification file in the /root/ceph directory.

    [root@serverc ~]# cd /root/ceph
    [root@serverc ceph]# cephadm bootstrap --mon-ip=172.25.250.12 \
    --apply-spec=initial-config-primary-cluster.yaml \
    --initial-dashboard-password=redhat \
    --dashboard-password-noupdate \
    --allow-fqdn-hostname \
    --registry-url=registry.lab.example.com \
    --registry-username=registry \
    --registry-password=redhat
    ...output omitted...
    Ceph Dashboard is now available at:
    
    	     URL: https://serverc.lab.example.com:8443/
    	    User: admin
    	Password: redhat
    
    Applying initial-config-primary-cluster.yaml to cluster
    Adding ssh key to clienta.lab.example.com
    Adding ssh key to serverd.lab.example.com
    Adding ssh key to servere.lab.example.com
    Added host 'clienta.lab.example.com' with addr '172.25.250.10'
    Added host 'serverc.lab.example.com' with addr '172.25.250.12'
    Added host 'serverd.lab.example.com' with addr '172.25.250.13'
    Added host 'servere.lab.example.com' with addr '172.25.250.14'
    Scheduled mon update...
    Scheduled rgw.realm.zone update...
    Scheduled mgr update...
    Scheduled osd.default_drive_group update...
    
    You can access the Ceph CLI with:
    
    	sudo /usr/sbin/cephadm shell --fsid 8896efec-21ea-11ec-b6fe-52540000fa0c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
    
    Please consider enabling telemetry to help improve Ceph:
    
    	ceph telemetry on
    
    For more information see:
    
    	https://docs.ceph.com/docs/pacific/mgr/telemetry/
    
    Bootstrap complete.
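
    Optionally, you can list the daemon containers that cephadm started on this host while the scheduled services are still being deployed. This is a local check on serverc only; the daemon names and count in the output depend on how far the deployment has progressed.

    [root@serverc ceph]# cephadm ls
    ...output omitted...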
  5. Verify the status of the Ceph storage cluster.

    1. Run the cephadm shell.

      [root@serverc ~]# cephadm shell
      ...output omitted...
      [ceph: root@serverc /]#
    2. Verify that the cluster status is HEALTH_OK.

      [ceph: root@serverc /]# ceph status
        cluster:
          id:     8896efec-21ea-11ec-b6fe-52540000fa0c
          health: HEALTH_OK
      
        services:
          mon: 4 daemons, quorum serverc.lab.example.com,serverd,servere,clienta (age 10s)
          mgr: serverc.lab.example.com.bypxer(active, since 119s), standbys: serverd.lflgzj, clienta.hloibd, servere.jhegip
          osd: 9 osds: 9 up (since 55s), 9 in (since 75s)
      
        data:
          pools:   1 pools, 1 pgs
          objects: 0 objects, 0 B
          usage:   47 MiB used, 90 GiB / 90 GiB avail
          pgs:     1 active+clean

      Your cluster might be in the HEALTH_WARN state for a few minutes until all services and OSDs are ready.
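
      If the cluster reports HEALTH_WARN, you can display the specific warnings while you wait for the deployment to settle:

      [ceph: root@serverc /]# ceph health detail
      ...output omitted...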

  6. Label clienta as the admin node. Verify that you can execute cephadm commands from clienta.

    1. Apply the _admin label to clienta to label it as the admin node.

      [ceph: root@serverc /]# ceph orch host label add clienta.lab.example.com _admin
      Added label _admin to host clienta.lab.example.com
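
      Optionally, confirm that the label was applied by listing the cluster hosts; the LABELS column for clienta.lab.example.com should now include _admin.

      [ceph: root@serverc /]# ceph orch host ls
      ...output omitted...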
    2. Manually copy the ceph.conf and ceph.client.admin.keyring files from serverc to clienta. These files are located in /etc/ceph.

      [ceph: root@serverc /]# exit
      exit
      [root@serverc ceph]# cd /etc/ceph
      [root@serverc ceph]# scp {ceph.client.admin.keyring,ceph.conf} \
      root@clienta:/etc/ceph/
      Warning: Permanently added 'clienta' (ECDSA) to the list of known hosts.
      ceph.client.admin.keyring                   100%   63   105.6KB/s   00:00
      ceph.conf                                   100%  177   528.3KB/s   00:00
    3. Return to workstation as the student user, then log in to clienta as the admin user and start the cephadm shell. Verify that you can execute cephadm commands from clienta.

      [root@serverc ceph]# exit
      [admin@serverc ~]$ exit
      [student@workstation ~]$ ssh admin@clienta
      [admin@clienta ~]$ sudo cephadm shell
      Inferring fsid 8896efec-21ea-11ec-b6fe-52540000fa0c
      Inferring config /var/lib/ceph/8896efec-21ea-11ec-b6fe-52540000fa0c/mon.clienta/config
      Using recent ceph image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:6306...47ff
      [ceph: root@clienta /]# ceph health
      HEALTH_OK
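
      You can also list all of the daemons that the orchestrator manages across the cluster from this node, which confirms that clienta has full administrative access:

      [ceph: root@clienta /]# ceph orch ps
      ...output omitted...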
    4. Return to workstation as the student user.

      [ceph: root@clienta /]# exit
      [admin@clienta ~]$ exit
      [student@workstation ~]$

Finish

On the workstation machine, use the lab command to complete this exercise. This command does not disable or modify the Ceph cluster you just deployed. Your new cluster will be used in the next exercise in this chapter.

[student@workstation ~]$ lab finish deploy-deploy

This concludes the guided exercise.
