Abstract

| Goal | Deploy a new Red Hat Ceph Storage cluster and expand the cluster capacity. |
| Objectives | |
| Sections | |
| Lab | |
Deploying Red Hat Ceph Storage
After completing this section, you should be able to prepare for and perform a Red Hat Ceph Storage cluster deployment using cephadm command-line tools.
Use the cephadm utility to deploy a new Red Hat Ceph Storage 5 cluster.
The cephadm utility consists of two main components:
The cephadm shell.
The cephadm orchestrator.
The cephadm shell command runs a bash shell within a Ceph-supplied management container.
Use the cephadm shell to perform cluster deployment tasks initially and cluster management tasks after the cluster is installed and running.
Launch the cephadm shell to run multiple commands interactively, or to run a single command.
To run it interactively, use the cephadm shell command to open the shell, then run Ceph commands.
The cephadm orchestrator provides a command-line interface to the orchestrator ceph-mgr modules, which interface with external orchestration services.
The purpose of an orchestrator is to coordinate configuration changes that must be performed cooperatively across multiple nodes and services in a storage cluster.
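For example, once inside the cephadm shell you can query the orchestrator with commands such as the following, which report the orchestrator backend, the deployed services, and the running daemons (a brief illustration; the exact output depends on your cluster):

[ceph: root@node /]# ceph orch status
[ceph: root@node /]# ceph orch ls
[ceph: root@node /]# ceph orch ps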
[root@node ~]# cephadm shell
[ceph: root@node /]#

To run a single command, use the cephadm shell command followed by two dashes and the Ceph command.
[root@node ~]# cephadm shell -- CEPH_COMMAND

Planning for Cluster Service Colocation
All of the cluster services now run as containers.
Containerized Ceph services can run on the same node; this is called colocation.
Colocation of Ceph services allows for better resource utilization while maintaining secure isolation between the services.
The following daemons can be colocated with OSD daemons: RADOSGW, MDS, RBD-mirror, MON, MGR, Grafana, and NFS Ganesha.
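For example, a command like the following (placement commands are covered in more detail later in this section) deploys MDS daemons onto hosts that also run OSDs; the service name myfs and the host names are placeholders:

[ceph: root@node /]# ceph orch apply mds myfs --placement="node-01 node-02"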
Secure Communication Between Hosts
The cephadm command uses SSH to communicate with storage cluster nodes. The cluster SSH key is created during the cluster bootstrap process. Copy the cluster public key to each host that will be a cluster member.
Use the following command to copy the cluster key to a cluster node:
[root@node ~]# cephadm shell
[ceph: root@node /]# ceph cephadm get-pub-key > ~/ceph.pub
[ceph: root@node /]# ssh-copy-id -f -i ~/ceph.pub root@node.example.com
The steps to deploy a new cluster are:
Install the cephadm-ansible package on the host you have chosen as the bootstrap node, which is the first node in the cluster.
Run the cephadm preflight playbook. This playbook verifies that the host has the required prerequisites.
Use cephadm to bootstrap the cluster. The bootstrap process accomplishes the following tasks:
Installs and starts a Ceph Monitor and a Ceph Manager daemon on the bootstrap node.
Creates the /etc/ceph directory.
Writes a copy of the cluster public SSH key to /etc/ceph/ceph.pub and adds the key to the /root/.ssh/authorized_keys file.
Writes a minimal configuration file needed to communicate with the new cluster to the /etc/ceph/ceph.conf file.
Writes a copy of the client.admin administrative secret key to the /etc/ceph/ceph.client.admin.keyring file.
Deploys a basic monitoring stack with prometheus and grafana services, as well as other tools such as node-exporter and alert-manager.
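After the bootstrap step completes (described under Bootstrapping the Cluster below), you can confirm these files on the bootstrap node; this optional check should show at least ceph.conf, ceph.client.admin.keyring, and ceph.pub in the directory:

[root@node ~]# ls /etc/ceph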
Installing Prerequisites
Install cephadm-ansible on the bootstrap node:
[root@node ~]# yum install cephadm-ansible

Run the cephadm-preflight.yml playbook.
This playbook configures the Ceph repository and prepares the storage cluster for bootstrapping.
It also installs prerequisite packages, such as podman, lvm2, chrony, and cephadm.
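If you want to confirm that these packages are present on a node after the playbook runs, an optional check is:

[root@node ~]# rpm -q podman lvm2 chrony cephadm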
The preflight playbook uses the cephadm-ansible inventory file to identify the admin and client nodes in the storage cluster.
The default location for the inventory file is /usr/share/cephadm-ansible/hosts.
The following example shows the structure of a typical inventory file:
[admin]
node00

[clients]
client01
client02
client03
To run the preflight playbook:
[root@node ~]# ansible-playbook -i INVENTORY-FILE cephadm-preflight.yml \
--extra-vars "ceph_origin=rhcs"Bootstrapping the Cluster
The cephadm bootstrapping process creates a small storage cluster on a single node, consisting of one Ceph Monitor and one Ceph Manager, plus any required dependencies.
Expand the storage cluster by using the ceph orchestrator command or the Dashboard GUI to add cluster nodes and services.
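For example, a node can be added to the cluster after bootstrapping with the orchestrator host add command (the host name and IP address below are placeholders):

[ceph: root@node /]# ceph orch host add node-01 192.168.0.11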
Before bootstrapping, you must create a username and password for the registry.redhat.io container registry.
Visit https://access.redhat.com/RegistryAuthentication for instructions.
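You can confirm that the credentials work before bootstrapping by logging in to the registry with podman (an optional check):

[root@node ~]# podman login registry.redhat.io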
Use the cephadm bootstrap command to bootstrap a new cluster:
[root@node ~]# cephadm bootstrap --mon-ip=MON_IP \
--registry-url=registry.redhat.io \
--registry-username=REGISTRY_USERNAME --registry-password=REGISTRY_PASSWORD \
--initial-dashboard-password=DASHBOARD_PASSWORD --dashboard-password-noupdate \
--allow-fqdn-hostname

The script displays this output when finished:
Ceph Dashboard is now available at:
URL: https://bootstrapnode.example.com:8443/
User: admin
Password: adminpassword
You can access the Ceph CLI with:
sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.

Using a Service Specification file
Use the cephadm bootstrap command with the --apply-spec option and a service specification file to both bootstrap a storage cluster and configure additional hosts and daemons.
The configuration file is a YAML file that contains the service type, placement, and designated nodes for services to deploy.
The following is an example of a services configuration file:
service_type: host
addr: node-00
hostname: node-00
---
service_type: host
addr: node-01
hostname: node-01
---
service_type: host
addr: node-02
hostname: node-02
---
service_type: mon
placement:
  hosts:
    - node-00
    - node-01
    - node-02
---
service_type: mgr
placement:
  hosts:
    - node-00
    - node-01
    - node-02
---
service_type: rgw
service_id: realm.zone
placement:
  hosts:
    - node-01
    - node-02
---
service_type: osd
placement:
  host_pattern: "*"
data_devices:
  all: true

Here is an example command to bootstrap a cluster using a services configuration file:
[root@node ~]# cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME \
--mon-ip MONITOR-IP-ADDRESS

The Ceph orchestrator supports assigning labels to hosts. Labels can be used to group cluster hosts so that you can deploy Ceph services to multiple hosts at the same time. A host can have multiple labels.
Labeling simplifies cluster management tasks by helping to identify the daemons running on each host.
For example, you can use the ceph orch host ls command to list the cluster hosts along with the labels assigned to each host.
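The output resembles the following (the host names, addresses, and labels shown here are only placeholders, and the exact columns vary by release):

[ceph: root@node /]# ceph orch host ls
HOST     ADDR          LABELS      STATUS
node-00  192.168.0.10  _admin,mon
node-01  192.168.0.11  mon,osd
node-02  192.168.0.12  mon,osd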
You can use the Ceph orchestrator or a YAML service specification file to deploy or remove daemons on specifically labeled hosts.
Except for the _admin label, labels are free-form and have no specific meaning.
You can use labels such as mon, monitor, mycluster_monitor, or other text strings to label and group cluster nodes.
For example, assign the mon label to nodes that you deploy MON daemons to.
Assign the mgr label for nodes that you deploy MGR daemons to, and assign rgw for RADOS gateways.
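The following commands assign such labels to a node (the host name here is a placeholder):

[ceph: root@node /]# ceph orch host label add node-01 mon
[ceph: root@node /]# ceph orch host label add node-01 mgr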
For example, the following command applies the _admin label to a host to designate it as the admin node:
[ceph: root@node /]# ceph orch host label add ADMIN_NODE _admin

Deploy cluster daemons to specific hosts by using labels.
[ceph: root@node /]# ceph orch apply prometheus --placement="label:prometheus"

To configure the admin node, perform the following steps:
Assign the admin label to the node, as shown previously.
Copy the admin key to the admin node.
Copy the ceph.conf file to the admin node.
[root@node ~]# scp /etc/ceph/ceph.client.admin.keyring ADMIN_NODE:/etc/ceph/
[root@node ~]# scp /etc/ceph/ceph.conf ADMIN_NODE:/etc/ceph/
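With the keyring and configuration file in place, you can verify administrative access from the admin node, assuming the cephadm package is installed there (an optional check):

[root@node ~]# cephadm shell -- ceph status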
For more information, refer to the Red Hat Ceph Storage 5 Installation Guide at https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/installation_guide/index