After completing this section, you should be able to describe the purpose for each of the cluster networks, and view and modify the network configuration.
The public network is the default network for all Ceph cluster communication.
The cephadm tool assumes that the network of the first MON daemon IP address is the public network.
New MON daemons are deployed in the public network unless you explicitly define a different network.
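To view the networks that are currently stored in the centralized configuration database, you can query them directly. The following commands are an illustrative sketch; the values returned depend on your cluster.

[ceph: root@node /]# ceph config get mon public_network
[ceph: root@node /]# ceph config get osd cluster_network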
Configuring a separate cluster network might improve cluster performance by decreasing the public network traffic load and separating client traffic from back-end OSD operations traffic.
Configure the nodes for a separate cluster network by performing the following steps.
Configure an additional network interface on each cluster node.
Configure the appropriate cluster network IP addresses on the new network interface on each node.
Use the --cluster-network option of the cephadm bootstrap command to create the cluster network at the cluster bootstrap.
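For example, a bootstrap command with a separate cluster network might look similar to the following. The IP address and subnet shown are illustrative values only.

[root@node ~]# cephadm bootstrap --mon-ip 172.25.250.10 \
--cluster-network 172.25.249.0/24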
You can use a cluster configuration file to set public and cluster networks.
You can configure more than one subnet for each network, separated by commas.
Use CIDR notation for the subnets (for example, 172.25.250.0/24).
[global]
public_network = 172.25.250.0/24,172.25.251.0/24
cluster_network = 172.25.249.0/24
If you configure multiple subnets for a network, those subnets must be able to route to each other.
The public and cluster networks can be changed with the ceph config set command or with the ceph config assimilate-conf command.
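For example, the following commands illustrate one way to change the networks at run time, either by setting the options directly or by assimilating an existing configuration file. The subnets and file path are illustrative values.

[ceph: root@node /]# ceph config set global public_network 172.25.250.0/24
[ceph: root@node /]# ceph config set global cluster_network 172.25.249.0/24
[ceph: root@node /]# ceph config assimilate-conf -i /etc/ceph/ceph.conf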
MON daemons bind to specific IP addresses, but MGR, OSD, and MDS daemons bind to any available IP address by default.
In Red Hat Ceph Storage 5, cephadm deploys daemons on arbitrary hosts and uses the public network for most services.
To control where cephadm deploys new daemons, you can define a specific subnet for a service to use.
Only hosts that have an IP address in the same subnet are considered for deployment of that service.
To set the 172.25.252.0/24 subnet for MON daemons:
[ceph: root@node /]# ceph config set mon public_network 172.25.252.0/24

The example command is the equivalent of the following [mon] section in a cluster configuration file.
[mon]
public_network = 172.25.252.0/24
Use the ceph orch daemon add command to manually deploy daemons to a specific subnet or IP address.
[ceph: root@node /]# ceph orch daemon add mon cluster-host02:172.25.251.0/24
[ceph: root@node /]# ceph orch daemon rm mon.cluster-host01
Using runtime ceph orch daemon commands for configuration changes is not recommended.
Instead, use service specification files as the recommended method for managing Ceph clusters.
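For example, a service specification similar to the following sketch places MON daemons on specific hosts and is then applied with the ceph orch apply command. The host names and the mon-spec.yaml file name are illustrative values.

service_type: mon
placement:
  hosts:
    - cluster-host02
    - cluster-host03
    - cluster-host04

[ceph: root@node /]# ceph orch apply -i mon-spec.yaml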
By default, Ceph daemons bind to IPv4 addresses: the ms_bind_ipv4 setting defaults to true and the ms_bind_ipv6 setting defaults to false.
To bind Ceph daemons to IPv6 addresses, set ms_bind_ipv6 to true and set ms_bind_ipv4 to false in a cluster configuration file.
[global]
public_network = <IPv6 public-network/netmask>
cluster_network = <IPv6 cluster-network/netmask>
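The bind settings can also be changed at run time. For example, the following commands illustrate one way to enable IPv6 binding and disable IPv4 binding.

[ceph: root@node /]# ceph config set global ms_bind_ipv6 true
[ceph: root@node /]# ceph config set global ms_bind_ipv4 false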
Configuring the network MTU to support jumbo frames is a recommended practice on storage networks and might improve performance.
Configure an MTU value of 9000 on the cluster network interface to support jumbo frames.
All nodes and networking devices in a communication path must have the same MTU value. For bonded network interfaces, set the MTU value on the bonded interface and the underlying interfaces will inherit the same MTU value.
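For example, with NetworkManager you might set the MTU on the cluster network interface by using commands similar to the following. The connection name eno2 is an illustrative value.

[root@node ~]# nmcli connection modify eno2 802-3-ethernet.mtu 9000
[root@node ~]# nmcli connection up eno2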
Configuring a separate cluster network might also increase cluster security and availability, by reducing the attack surface over the public network, thwarting some types of Denial of Service (DoS) attacks against the cluster, and preventing traffic disruption between OSDs.
When traffic between OSDs gets disrupted, clients are prevented from reading and writing data.
Separating back-end OSD traffic onto its own network might help to prevent data breaches over the public network.
To secure the back-end cluster network, ensure that traffic is not routed between the cluster and public networks.
Ceph OSD and MDS daemons bind to TCP ports in the 6800 to 7300 range by default.
To configure a different range, change the ms_bind_port_min and ms_bind_port_max settings.
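For example, the following commands show one possible way to widen the bind port range; the values are illustrative.

[ceph: root@node /]# ceph config set global ms_bind_port_min 6800
[ceph: root@node /]# ceph config set global ms_bind_port_max 7500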
The following table lists the default ports for Red Hat Ceph Storage 5.
Table 3.1. Default Ports by Red Hat Ceph Storage Service
| Service name | Ports | Description |
|---|---|---|
| Monitor (MON) | 6789/TCP (msgr), 3300/TCP (msgr2) | Communication within the Ceph cluster |
| OSD | 6800-7300/TCP | Each OSD uses three ports in this range: one for communicating with clients and MONs over the public network; one for sending data to other OSDs over a cluster network, or over the public network if the former does not exist; and another for exchanging heartbeat packets over a cluster network or over the public network if the former does not exist. |
| Metadata Server (MDS) | 6800-7300/TCP | Communication with the Ceph Metadata Server |
| Dashboard/Manager (MGR) | 8443/TCP | Communication with the Ceph Manager Dashboard over SSL |
| Manager RESTful Module | 8003/TCP | Communication with the Ceph Manager RESTful module over SSL |
| Manager Prometheus Module | 9283/TCP | Communication with the Ceph Manager Prometheus plug-in |
| Prometheus Alertmanager | 9093/TCP | Communication with the Prometheus Alertmanager service |
| Prometheus Node Exporter | 9100/TCP | Communication with the Prometheus Node Exporter daemon |
| Grafana server | 3000/TCP | Communication with the Grafana service |
| Ceph Object Gateway (RGW) | 80/TCP | Communication with Ceph RADOSGW. If the client.rgw configuration section is empty, cephadm uses the default port 80. |
| Ceph iSCSI Gateway | 9287/TCP | Communication with Ceph iSCSI Gateway |
MONs always operate on the public network.
To secure MON nodes with firewall rules, configure the rules for the public interface and the public network IP address.
You can do so by manually adding the MON port to the firewall rules.
[root@node ~]# firewall-cmd --zone=public --add-port=6789/tcp
[root@node ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent
You can also secure MON nodes by adding the ceph-mon service to the firewall rules.
[root@node ~]# firewall-cmd --zone=public --add-service=ceph-mon
[root@node ~]# firewall-cmd --zone=public --add-service=ceph-mon --permanent
To secure OSD nodes with firewall rules, configure the rules for the appropriate network interface and IP address.
[root@node ~]# firewall-cmd --zone=<public-or-cluster> --add-port=6800-7300/tcp
[root@node ~]# firewall-cmd --zone=<public-or-cluster> \
--add-port=6800-7300/tcp --permanent
You can also secure OSD nodes by adding the ceph service to the firewall rules.
[root@node ~]# firewall-cmd --zone=<public-or-cluster> --add-service=ceph
[root@node ~]# firewall-cmd --zone=<public-or-cluster> \
--add-service=ceph --permanent
For more information, refer to the Network Configuration Reference chapter in the Red Hat Ceph Storage 5 Configuration Guide at https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/configuration_guide/index#ceph-network-configuration