In this exercise, you navigate the primary screens and activities of the Dashboard GUI.
Outcomes
You should be able to navigate the Dashboard GUI primary screens.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start intro-interface
This command confirms that the Ceph cluster in the classroom is operating.
Procedure 1.2. Instructions
Use Firefox to navigate to the Dashboard web GUI URL at https://serverc.lab.example.com:8443.
If prompted, accept the self-signed certificates that are used in this classroom.
On the Dashboard login screen, enter your credentials.
User name: admin
Password: redhat
The main screen appears, with three sections: Status, Capacity, and Performance.
The Status section shows an overview of the whole cluster status.
You can see the cluster health status, which can be HEALTH_OK, HEALTH_WARN, or HEALTH_ERR.
Check that your cluster is in the HEALTH_OK state.
This section also displays the number of cluster hosts, the number of monitors and the cluster quorum formed by those monitors, the number and status of OSDs, and other status information.
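If you want to cross-check these values from the command line, the following is a minimal sketch. It assumes that you can reach the Ceph CLI, for example through a cephadm shell on serverc; the exact host and access method can differ in your classroom.
[ceph: root@serverc /]# ceph health
[ceph: root@serverc /]# ceph status
The ceph status output summarizes the health state, the monitors and their quorum, and the number and status of OSDs, which should match what the Status section displays.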
The Capacity section displays the overall Ceph cluster capacity, the number of objects, the placement groups, and the pools. Check that the capacity of your cluster is approximately 90 GiB.
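As an optional cross-check, and assuming the same CLI access as before, the ceph df command reports the raw cluster capacity, which should also be approximately 90 GiB, together with per-pool usage and object counts.
[ceph: root@serverc /]# ceph df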
The Performance section displays throughput information and disk read and write speeds. Because the cluster just started, the throughput and speeds should be 0.
Navigate to the Cluster menu.
The Hosts section displays the host members of the Ceph cluster, the Ceph services that are running on each host, and the cephadm version that is running.
In your cluster, check that the three hosts serverc, serverd, and servere are running the same cephadm version.
In this menu, you can add hosts to the cluster, and edit or delete the existing hosts.
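To compare with the CLI, and assuming cephadm shell access on serverc, the following commands list the cluster hosts and the versions that the running daemons report; they should match what the Hosts section shows.
[ceph: root@serverc /]# ceph orch host ls
[ceph: root@serverc /]# ceph versions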
The Physical Disks section displays the physical disks that the Ceph cluster detects. You can view physical disk attributes, such as their host, device path, and size. Verify that the total number of physical disks in your cluster is 20. In this menu, if you select one physical disk and press Identify, then that disk's LED starts flashing, which makes it easier to physically locate disks in your cluster.
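A comparable CLI view, again assuming cephadm shell access on serverc, is ceph orch device ls, which lists the detected physical disks per host with their device path, size, and availability; the total count should also be 20.
[ceph: root@serverc /]# ceph orch device ls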
Navigate to the OSDs section to view information about the cluster OSDs.
This section displays the number of OSDs, which host they reside on, the number and usage of placement groups, and the disk read and write speeds.
Verify that serverc contains three OSDs.
You can create, edit, and delete OSDs from this menu.
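To confirm the OSD layout from the command line, assuming the same CLI access, ceph osd tree lists the OSDs under each host, and ceph osd df adds per-OSD usage and placement group counts; serverc should show three OSDs.
[ceph: root@serverc /]# ceph osd tree
[ceph: root@serverc /]# ceph osd df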
Navigate to the CRUSH map section, which displays your cluster's CRUSH map.
This map provides information about your cluster's physical hierarchy.
Verify that the three host buckets serverc, serverd, and servere are defined within the default bucket of type root.
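As a cross-check, and assuming the same CLI access, the ceph osd crush tree command prints the CRUSH hierarchy, where the serverc, serverd, and servere host buckets should appear under the default root bucket.
[ceph: root@serverc /]# ceph osd crush tree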
The Logs section displays the Ceph logs.
View the Cluster logs and the Audit logs.
You can filter logging messages by priority, keyword, and date.
View the Info log messages by filtering by Priority.
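A similar view is available from the CLI, assuming cephadm shell access on serverc. The ceph log last command prints the most recent cluster log entries; it optionally accepts a number of entries, a priority level, and a channel such as cluster or audit.
[ceph: root@serverc /]# ceph log last 10
[ceph: root@serverc /]# ceph log last 10 info cluster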
Navigate to the Pools menu.
The Pools menu displays information about the existing pools, including the pool name, the type of data protection, and the associated application.
You can also create, edit, or delete pools from this menu.
Verify that your cluster contains a pool called default.rgw.log.
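To list the pools from the command line, assuming the same CLI access, ceph osd pool ls prints the pool names, and ceph osd pool ls detail adds the replication or erasure-coding settings; default.rgw.log should appear in the output.
[ceph: root@serverc /]# ceph osd pool ls
[ceph: root@serverc /]# ceph osd pool ls detail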
This concludes the guided exercise.