After completing this section, you should be able to explain how clusters group hosts in a data center, and the information required to create a new cluster.
A cluster is a group of hosts in a single data center with the same architecture and CPU model. A cluster is a migration domain, such that the cluster's virtual machines may only live-migrate to other hosts defined within that same cluster. All cluster hosts must be configured with the same resources, including logical networks, storage domains, and sufficient computing capacity.
When preparing to build a cluster, or when adding physical hosts to a cluster, it is recommended that all hosts use the same CPU model. CPU features are detected by an initializing application and are expected to remain available for the duration of that application's runtime. Live-migrated applications expect those same CPU features to exist and function after moving to another host, and would fail if the destination host does not support the required features. This limitation does not apply to stopped virtual machines that are cold-migrated or exported to other hosts or clusters, because those virtual machines re-detect the available CPU features when restarted on a new host.
Matching CPU Features in Cluster Hosts
Clusters with a mix of CPU models must restrict hosts to a CPU feature set (family) shared by all cluster host CPUs, determined by the oldest physical CPU family in the cluster. The most efficient performance is achieved by populating all cluster hosts with the same physical CPU model. In mixed-CPU clusters, applications cannot take advantage of newer CPU features, because those features are not shared by all cluster CPUs, thus reducing potential performance. The cluster CPU type (family) is set when the cluster is created. Hosts that do not meet that minimum CPU requirement cannot be added to the cluster unless the cluster's CPU type is reconfigured.
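The lowest-common-denominator effect of a mixed-CPU cluster can be sketched as a set intersection over each host's CPU feature flags. The host names and flag sets below are hypothetical, chosen only to illustrate how the oldest CPU limits the whole cluster:

```python
# Sketch: the usable feature set of a mixed-CPU cluster is the
# intersection of every host's CPU flags (hypothetical data).
host_flags = {
    "hosta": {"sse4_2", "avx", "avx2", "aes"},   # newer CPU
    "hostb": {"sse4_2", "avx", "avx2", "aes"},   # newer CPU
    "hostc": {"sse4_2", "avx", "aes"},           # older CPU, no AVX2
}

# Features every host supports -- the most the cluster CPU type can expose.
common = set.intersection(*host_flags.values())
print(sorted(common))  # avx2 is excluded by the oldest CPU
```

Even though two of the three hypothetical hosts support AVX2, the cluster feature set excludes it, which is why matching CPU models yields the best performance.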
Red Hat recommends standardizing the make, model, hardware, and firmware of all hosts assigned to the same cluster, thus assuring predictable virtual machine performance characteristics on any host in the cluster.
Multiple Clusters per Data Center
A data center may have multiple clusters, to support different application use case requirements. Use clusters to segregate hardware and workloads into classes or groups, as needed. One use case is to segregate application components, such as running front-end web and back-end database applications or middleware in different clusters. Another example is to group workloads according to performance tuning, data isolation and security requirements.
All clusters in the same data center share access to the same storage domains and logical networks.
Data centers configured as Local use only local disks and are restricted to the single cluster and host that manage the local storage.
Virtual machines in local data centers cannot live-migrate and are unsuitable for resilient production use.
During initial installation, RHV creates an initial cluster named Default in the initial Default data center.
Users with sufficient privileges can create additional clusters using the Administration Portal, the REST API, or the command line.
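As one hedged illustration of the REST API path, a new cluster is created by posting an XML representation to the engine's clusters collection. The snippet below only builds and prints that request body; the cluster name, CPU type string, engine URL, and credentials are placeholders, and the request itself is shown as a comment rather than executed:

```python
import xml.etree.ElementTree as ET

# Build the XML body for POST /ovirt-engine/api/clusters.
# The name, CPU type string, and data center below are placeholders.
cluster = ET.Element("cluster")
ET.SubElement(cluster, "name").text = "production"
cpu = ET.SubElement(cluster, "cpu")
ET.SubElement(cpu, "type").text = "Intel Skylake Client Family"
dc = ET.SubElement(cluster, "data_center")
ET.SubElement(dc, "name").text = "Default"

body = ET.tostring(cluster, encoding="unicode")
print(body)

# The request itself would resemble (not executed here;
# engine hostname and credentials are placeholders):
#   curl -X POST -H 'Content-Type: application/xml' \
#        -u admin@internal:PASSWORD -d "$BODY" \
#        https://engine.example.com/ovirt-engine/api/clusters
```

The `<cpu><type>` element corresponds to the cluster CPU type discussed above, and `<data_center>` selects the data center to which the cluster belongs.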
The data center administrator must make several decisions when preparing to create a new cluster.
After the cluster is created, hosts that meet the cluster's specifications can be added to it, or removed from it, as needed.
Before creating the cluster, learn whether all hosts have the same CPU type, or determine the oldest CPU family in use, by viewing the host hardware information found under the General tab's Hardware link. Record the CPU Model and CPU Type for each host expected to be in this cluster, to verify that all CPUs are identical or can share the same CPU Type family.
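When collecting this information, the CPU model name and feature flags can also be read directly from /proc/cpuinfo on each Linux host, as a quick cross-check of what the Administration Portal reports. This is a minimal sketch; field names can vary by architecture:

```python
import os

# Read the CPU model name and feature flags from /proc/cpuinfo
# on a Linux host; run on each prospective cluster host.
def cpu_summary(path="/proc/cpuinfo"):
    model, flags = None, set()
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, _, value = line.partition(":")
            key = key.strip()
            if key == "model name" and model is None:
                model = value.strip()
            elif key == "flags" and not flags:
                flags = set(value.split())
    return model, flags

if os.path.exists("/proc/cpuinfo"):
    model, flags = cpu_summary()
    print(model)
    print(len(flags), "feature flags")
```

Comparing the flag sets collected this way across hosts shows directly which features a shared CPU Type family could expose.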
Using the Form in the Administration Portal
The form for creating a new cluster is found on the Administration Portal's Clusters tab, and includes multiple required fields for successfully creating a new cluster. In addition to setting a useful cluster name, description, and comment, an administrator will select the data center to which this cluster belongs. Previous configuration choices for the selected data center, such as the data center's storage type, will determine the available choices on this form.
Selecting a Host-shared Management Network
The menu displays all currently available logical networks on this host. At minimum, the default logical network created during the RHV installation is available. If a different logical network is required but not yet available, you must first create the logical network and assign it to all hosts that will be in this cluster, before returning here to select it. Alternatively, choose any available logical network now, and change the configuration to another logical network later. Additional logical networks are recommended to segregate management traffic from virtual machine workload or migration traffic.
Choosing the Best Performing, Common CPU Type
As previously discussed, choose the CPU Architecture and the best shared CPU Type from your collected host information. This important setting forces a CPU type feature set upon all hosts in the cluster. Hosts that do not meet this minimum will not be allowed to join. Hosts with a better CPU will be feature-restricted down to this CPU type's feature set.
Setting the API Compatibility Version
The compatibility setting specifies the common API protocol version supported among all RHV hosts and management engines that will communicate together as a single RHV infrastructure. When building a new infrastructure, choose the most recent version setting. When adding new clusters to an existing infrastructure, or when adding systems during RHV upgrades or expansion, choose the same compatibility version currently configured throughout the existing infrastructure. During upgrades, newer components can be installed to temporarily function using older protocols to avoid disrupting the virtualization environment.
Networking and Service Infrastructure Options
Two network switch types are available, but only one can be configured at a time. Legacy configuration is common, but Open vSwitch (OVS) has become popular due to the flexibility and capabilities of software-defined networking. The OVS option is a Technology Preview, and is not yet supported for production use. When supported in a future release, you would choose OVS if you expect to integrate your Red Hat Virtualization infrastructure with OVS networks provided by Red Hat OpenStack Platform (RHOSP) or Red Hat OpenShift Container Platform (RHOCP).
Set when hosts in this cluster will be used to run virtual machines.
For the cluster of engine hosts that manages the RHV-M self-hosted engine virtual machine, this option can be deselected to avoid contention on engine hosts between the RHV-M virtual machine and production workload VMs.
When building a normal workload cluster, enable the virt service.
Hosts in this cluster will be used as Red Hat Gluster Storage server nodes, and not for running virtual machines. You cannot add a Red Hat Enterprise Virtualization Hypervisor host to a cluster with this option enabled.
It is common for hosted virtual machines to be brought down safely, and for hosts to be placed in maintenance mode, to perform diagnostics or scheduled or unexpected maintenance. To help track this activity, the cluster has two settings, one each for virtual machines and hosts, to require that a reason be entered when shutting down a virtual machine or placing a host in maintenance mode. When this feature is enabled, the reason entered is logged with the event.
Using an External Random Number Generator
Some application workloads require significant amounts of entropy to operate properly.
To assist this need, the cluster can be configured to utilize the /dev/hwrng hardware-based random number generator instead of the default /dev/urandom device.
Selecting and using this hardware-based entropy source requires that every host in the cluster have that hardware device available and functioning.
With this option selected, new hosts added to the cluster must also support that hardware device.
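Before enabling this option, each host can be checked for the /dev/hwrng device and for the kernel's available-entropy estimate. A minimal sketch, assuming a Linux host:

```python
import os

def hwrng_present():
    # The kernel exposes a functioning hardware RNG as /dev/hwrng.
    return os.path.exists("/dev/hwrng")

def entropy_available(path="/proc/sys/kernel/random/entropy_avail"):
    # The kernel's estimate of currently available entropy, in bits.
    with open(path) as f:
        return int(f.read().strip())

print("hwrng device:", "present" if hwrng_present() else "absent")
if os.path.exists("/proc/sys/kernel/random/entropy_avail"):
    print("entropy_avail:", entropy_available())
```

Run this on every current and prospective cluster host; the hardware RNG option is only safe to select when all of them report the device as present.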
Site-specific Configuration Options
Other cluster configuration choices are available in additional tabs on the New Cluster form, but those settings have defaults that do not require modification during initial cluster creation. If you are familiar with the use of those settings in your RHV environment, you can modify them now or at a later time. Some options are discussed later in this course.
Memory page sharing threshold, CPU thread handling, and memory ballooning.
Rules for determining when existing virtual machines migrate automatically between hosts, for example, for load balancing.
Rules for selecting the host on which a new virtual machine will start.
Choosing console connection protocols and proxies, such as SPICE.
Actions taken when managed hosts fail, to ensure that attached storage is not corrupted.
Defining the MAC address range to be used for NICs on virtual machines in this cluster, instead of using the data center's default pool.
When creating a cluster using the Administration Portal's form, selecting the final button creates the cluster using the entered settings. The Portal's workflow expects you to add more infrastructure resources immediately, and opens a window to guide you through the process. Unless you are ready to add additional hosts now, select the button to close the window without taking further action.
A MAC address pool defines a range of MAC addresses allocated for each cluster. Each cluster is configured with only one MAC address pool, but that pool may contain multiple address ranges. The same MAC address pool can be shared by multiple clusters. The default MAC address pool created by the RHV installation is used for each cluster, unless another MAC address pool is created and assigned. However, when clusters share physical networks, it is not recommended to also share an address pool, because each cluster is unaware of addresses assigned by the other clusters, which can result in conflicting MAC address assignments. Instead, create a unique MAC address range for each cluster sharing a physical network.
Red Hat Virtualization automatically generates and assigns MAC addresses to new virtual network devices, which helps to prevent MAC address duplication. The MAC address pool assigns the next available MAC address following the last address that was returned to the pool. If no further addresses are left in the range, the search starts again from the beginning of the range. If a single MAC address pool defines multiple ranges with available MAC addresses, the ranges take turns serving incoming requests, in the same way that available MAC addresses are selected within a range.
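The allocation behavior described above can be sketched as follows. This is an illustrative model of a single range, not the engine's actual implementation, and the example addresses are hypothetical:

```python
# Illustrative model of MAC pool allocation within one range:
# hand out the next free address after the most recently assigned
# one, wrapping around to the start of the range when exhausted.
class MacRange:
    def __init__(self, start, end):
        self.start, self.end = start, end   # inclusive integer bounds
        self.in_use = set()
        self.cursor = start

    def allocate(self):
        size = self.end - self.start + 1
        for _ in range(size):
            candidate = self.cursor
            self.cursor = self.start + (self.cursor - self.start + 1) % size
            if candidate not in self.in_use:
                self.in_use.add(candidate)
                return candidate
        raise RuntimeError("range exhausted")

    def release(self, addr):
        self.in_use.discard(addr)

def mac(n):
    # Format a 48-bit integer as a colon-separated MAC address.
    return ":".join(f"{(n >> s) & 0xFF:02x}" for s in range(40, -1, -8))

rng = MacRange(0x56_6F_00_00_00_00, 0x56_6F_00_00_00_03)
a = rng.allocate()
b = rng.allocate()
rng.release(a)
c = rng.allocate()        # continues past the released address
print(mac(c))
```

Releasing an address does not move the cursor back; the released address is reused only after the allocator wraps around, which matches the search behavior described above.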
A pool can be configured to allow manually assigning duplicates. A MAC address pool will not automatically use a duplicate MAC address, but enabling the duplicates option means a user can manually use a duplicate MAC address when required.
Using Pools for the Infrastructure Migration Solution
Understanding MAC address pool behavior is important when using the Infrastructure Migration Solution (IMS), which automates the migration of VMware virtual machines to your Red Hat Virtualization environment. IMS is covered later in this course. Migrated virtual machines can retain their original MAC addresses, because the migration process preserves the source virtual machine's MAC addresses.
You can create a MAC address pool that includes the existing MAC addresses of the source VMware virtual machines to be migrated. If the RHV MAC address pool range overlaps the VMware MAC address range, the MAC addresses of migrating virtual machines must not duplicate the addresses of existing virtual machines, or the migration will fail.
To ensure that migrated virtual machines obtain MAC addresses in the same range as normal virtual machines created in Red Hat Virtualization, create a MAC address pool to provide new MAC addresses to the VMware virtual machines during migration.
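Before a migration, the source VMware MAC range can be checked for overlap against the RHV pool's range. A small sketch with hypothetical range boundaries (00:50:56 is a well-known VMware OUI; the RHV range shown is a placeholder):

```python
# Check whether two inclusive MAC address ranges overlap
# (range boundaries below are hypothetical examples).
def to_int(mac):
    return int(mac.replace(":", ""), 16)

def ranges_overlap(a_start, a_end, b_start, b_end):
    return to_int(a_start) <= to_int(b_end) and to_int(b_start) <= to_int(a_end)

rhv = ("56:6f:00:00:00:00", "56:6f:00:00:ff:ff")   # placeholder RHV pool range
vmw = ("00:50:56:00:00:00", "00:50:56:3f:ff:ff")   # VMware-assigned range
print(ranges_overlap(*rhv, *vmw))  # False: these ranges do not overlap
```

If the check reports an overlap, assign the cluster a pool range that is disjoint from the VMware range, or include the VMware addresses in a dedicated pool as described above.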
Further information is available in the Clusters chapter of the Administration Guide for Red Hat Virtualization, at https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/administration_guide/index#chap-Clusters