After completing this section, you should be able to configure the Ceph iSCSI Gateway to export RADOS Block Devices using the iSCSI protocol, and configure clients to use the iSCSI Gateway.
Red Hat Ceph Storage 5 can provide highly available iSCSI access to RADOS block device images stored in the cluster. The iSCSI protocol allows clients (initiators) to send SCSI commands to storage devices (targets) over TCP/IP networks. Each initiator and target is uniquely identified by an iSCSI qualified name (IQN). Clients with standard iSCSI initiators can access cluster storage without requiring native Ceph RBD client support.
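As a quick illustration of the IQN naming convention, this sketch checks two example IQNs (the names themselves are hypothetical) against the iqn.YYYY-MM.reversed-domain[:identifier] structure:

```shell
# Two hypothetical IQNs: one target and one initiator (example names only)
target_iqn="iqn.2001-07.com.ceph:1634089632951"
initiator_iqn="iqn.1994-05.com.redhat:client1"

# The IQN format: "iqn.", a year-month date, a reversed domain name,
# and an optional colon-separated unique identifier
iqn_pattern='^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:.+)?$'

for iqn in "$target_iqn" "$initiator_iqn"; do
    if echo "$iqn" | grep -Eq "$iqn_pattern"; then
        echo "valid IQN: $iqn"
    fi
done
```

The date component records when the naming authority registered its domain, which keeps IQNs globally unique even if a domain later changes hands.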
The Linux I/O target kernel subsystem runs on every iSCSI gateway to support the iSCSI protocol.
Previously called LIO, the iSCSI target subsystem is now called TCM, or the Target Core Module.
The TCM subsystem utilizes a user-space pass-through (TCMU) to interact with the Ceph librbd library to expose RBD images to iSCSI clients.
Object Storage Devices (OSDs) and Monitors (MONs) do not require any iSCSI-specific server settings. To limit client SCSI timeouts, reduce the timeout settings that the cluster uses to detect a failing OSD.
In the cephadm shell, run the ceph config set command to set the timeout parameters.
[root@node ~]# ceph config set osd osd_heartbeat_interval 5
[root@node ~]# ceph config set osd osd_heartbeat_grace 20
[root@node ~]# ceph config set osd osd_client_watch_timeout 15
You can deploy the iSCSI gateway on dedicated nodes or colocated with the OSDs. Meet the following prerequisites before deploying a Red Hat Ceph Storage iSCSI gateway:
Install the iSCSI gateway nodes with Red Hat Enterprise Linux 8.3 or later.
Have an operational cluster running Red Hat Ceph Storage 5 or later.
Have 90 MiB of RAM available for each RBD image exposed as a target on iSCSI gateway nodes.
Open TCP ports 3260 and 5000 on the firewall on each Ceph iSCSI node.
Create a new RADOS block device or use an existing, available device.
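The last two prerequisites can be sketched as shell commands. The pool and image names (iscsipool1, disk1) and the image size are assumptions for illustration, and the Ceph commands require a running cluster with an admin keyring, so the sketch guards on the relevant services being available:

```shell
# TCP ports needed on each gateway node: 3260 (iSCSI) and 5000 (gateway API)
ports="3260/tcp 5000/tcp"
if systemctl is-active --quiet firewalld 2>/dev/null; then
    for p in $ports; do
        firewall-cmd --permanent --add-port="$p"
    done
    firewall-cmd --reload
fi

# Hypothetical pool and image names; these commands need a running Ceph
# cluster and an admin keyring, so they are skipped when no ceph CLI exists
pool=iscsipool1
image=disk1
if command -v ceph >/dev/null 2>&1; then
    ceph osd pool create "$pool"
    rbd pool init "$pool"
    rbd create "$pool/$image" --size 10G
fi
```

The pool name must match the pool field in the iscsi-gateway.yaml specification file used later in this section.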
To deploy iSCSI gateway nodes, use the cephadm shell to create a configuration file called iscsi-gateway.yaml in the /etc/ceph/ directory.
The file contents should be as follows:
service_type: iscsi
service_id: iscsi
placement:
  hosts:
    - serverc.lab.example.com
    - servere.lab.example.com
spec:
  pool: iscsipool1
  trusted_ip_list: "172.25.250.12,172.25.250.14"
  api_port: 5000
  api_secure: false
  api_user: admin
  api_password: redhat
Run the ceph orch apply command with the -i option to apply the specification file.
[ceph: root@node /]# ceph orch apply -i /etc/ceph/iscsi-gateway.yaml
Scheduled iscsi.iscsi update...
List the gateways and verify that they are present.
[ceph: root@node /]# ceph dashboard iscsi-gateway-list
{"gateways": {"serverc.lab.example.com": {"service_url": "http://admin:redhat@172.25.250.12:5000"}, "servere.lab.example.com": {"service_url": "http://admin:redhat@172.25.250.14:5000"}}}
Open a web browser and log in to the Ceph Dashboard as a user with administrative privileges. In the Ceph Dashboard web UI, click Block → iSCSI to display the iSCSI page.
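As a quick sanity check (a sketch, not part of the product tooling), you can extract the gateway API endpoints from the JSON shown above and compare them with the trusted_ip_list and api_port values in your specification file:

```shell
# Gateway list JSON as returned by 'ceph dashboard iscsi-gateway-list'
# (copied from the output above)
json='{"gateways": {"serverc.lab.example.com": {"service_url": "http://admin:redhat@172.25.250.12:5000"}, "servere.lab.example.com": {"service_url": "http://admin:redhat@172.25.250.14:5000"}}}'

# Pull each host:port API endpoint out of the service URLs
echo "$json" | grep -Eo '[0-9]+(\.[0-9]+){3}:[0-9]+'
```

Each extracted endpoint should correspond to one address from trusted_ip_list combined with the configured api_port.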
Use the Ceph Dashboard or the gwcli utility, provided by the ceph-iscsi package, to configure an iSCSI target.
The Using gwcli to add more iSCSI gateways section of the Block Device Guide for Red Hat Ceph Storage 5 provides detailed instructions on how to manage iSCSI targets with the gwcli utility from the ceph-iscsi package.
These are example steps to configure an iSCSI target from the Ceph Dashboard.
Log in to the Dashboard.
On the navigation menu, click Block → iSCSI.
Click the Targets tab.
Click Create.
In the Create Target window, set the following parameters:
Modify the Target IQN (optional).
Click Add portal and select the first of at least two gateways.
Click Add image and select an image for the target to export.
Click Create Target.
Configuring an iSCSI initiator to communicate with the Ceph iSCSI gateway is the same as for any industry-standard iSCSI gateway.
For RHEL 8, install the iscsi-initiator-utils and the device-mapper-multipath packages.
The iscsi-initiator-utils package contains the utilities required to configure an iSCSI initiator.
When using more than one iSCSI gateway, configure clients for multipath support so that they can fail over between gateways when accessing the cluster's iSCSI targets.
A system might be able to access the same storage device through multiple different communication paths, whether those are using Fibre Channel, SAS, iSCSI, or some other technology. Multipathing allows you to configure a virtual device that can use any of these communication paths to access your storage. If one path fails, then the system automatically switches to use one of the other paths instead.
If deploying a single iSCSI gateway for testing, skip the multipath configuration.
These example steps configure an iSCSI initiator to use multipath support and to log in to an iSCSI target. Configure your client's Challenge-Handshake Authentication Protocol (CHAP) user name and password to log in to the iSCSI targets.
Install the iSCSI initiator tools.
[root@node ~]# yum install iscsi-initiator-utils
Configure multipath I/O.
Install the multipath tools.
[root@node ~]# yum install device-mapper-multipath
Enable and create a default multipath configuration.
[root@node ~]# mpathconf --enable --with_multipathd y
Add the following to the /etc/multipath.conf file.
devices {
device {
vendor "LIO-ORG"
hardware_handler "1 alua"
path_grouping_policy "failover"
path_selector "queue-length 0"
failback 60
path_checker tur
prio alua
prio_args exclusive_pref_bit
fast_io_fail_tmo 25
no_path_retry queue
}
}
Reload the multipathd service to apply the new configuration.
[root@node ~]# systemctl reload multipathd
If required for your configuration, set CHAP authentication.
Update the CHAP user name and password to match your iSCSI gateway configuration in the /etc/iscsi/iscsid.conf file.
node.session.auth.authmethod = CHAP
node.session.auth.username = user
node.session.auth.password = password
Discover and log in to the iSCSI portal, and then view targets and their multipath configuration.
Discover the iSCSI portal.
[root@node ~]# iscsiadm -m discovery -t st -p 10.30.0.210
10.30.0.210:3260,1 iqn.2001-07.com.ceph:1634089632951
10.30.0.133:3260,2 iqn.2001-07.com.ceph:1634089632951
Log in to the iSCSI portal.
[root@node ~]# iscsiadm -m node -T iqn.2001-07.com.ceph:1634089632951 -l
Logging in to [iface: default, target: iqn.2001-07.com.ceph:1634089632951, portal: 10.30.0.210,3260]
Logging in to [iface: default, target: iqn.2001-07.com.ceph:1634089632951, portal: 10.30.0.133,3260]
Login to [iface: default, target: iqn.2001-07.com.ceph:1634089632951, portal: 10.30.0.210,3260] successful.
Login to [iface: default, target: iqn.2001-07.com.ceph:1634089632951, portal: 10.30.0.133,3260] successful.
Verify any attached SCSI targets.
[root@node ~]# lsblk
NAME       MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sda          8:0    0   1G  0 disk
└─mpatha   253:0    0   1G  0 mpath
sdb          8:16   0   1G  0 disk
└─mpatha   253:0    0   1G  0 mpath
vda        252:0    0  10G  0 disk
Use the multipath command to show devices set up in a failover configuration with a priority group for each path.
[root@node ~]# multipath -ll
mpatha (3600140537b026aa91c844138da53ffe7) dm-0 LIO-ORG,TCMU device
size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 2:0:0:0 sda 8:0  active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  `- 3:0:0:0 sdb 8:16 active ready running
If logging in to an iSCSI target through a single iSCSI gateway, then the system creates a physical device for the iSCSI target (for example, /dev/sdX).
If logging in to an iSCSI target through multiple iSCSI gateways with Device Mapper multipath, then Device Mapper creates a multipath device (for example, /dev/mapper/mpatha).
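As a hypothetical follow-up (the mount point is an assumption, and the device name matches the multipath output shown earlier), the multipath device can be formatted and mounted like any other block device. The sketch guards on the device actually existing:

```shell
# /dev/mapper/mpatha matches the multipath -ll output above;
# /mnt/iscsi is a hypothetical mount point
dev=/dev/mapper/mpatha
mnt=/mnt/iscsi

if [ -b "$dev" ]; then
    mkfs.xfs "$dev"    # destroys any existing data on the device
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
else
    echo "$dev not present; discover and log in to the iSCSI target first"
fi
```

Always use the multipath device, not the underlying /dev/sdX paths, so that I/O continues through the surviving path if one gateway fails.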
For more information, refer to the The Ceph iSCSI Gateway chapter in the Block Device Guide for Red Hat Ceph Storage 5 at https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/block_device_guide/index#the-ceph-iscsi-gateway
For more information, refer to the Management of iSCSI functions using the Ceph dashboard section in the Dashboard Guide for Red Hat Ceph Storage 5 at https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/dashboard_guide/index#management-of-iscsi-functions-on-the-ceph-dashboard
For more information about Device Mapper multipath configuration for Red Hat Enterprise Linux 8, refer to Configuring Device Mapper Multipath at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/configuring_device_mapper_multipath/