In this review, you will configure your new Red Hat Virtualization environment.
Outcomes
You should be able to:
- Integrate users in an LDAP directory with your Red Hat Virtualization environment
- Create a new data center
- Create a new cluster
- Register and activate RHV-H hosts
- Create a new storage domain
- Create additional logical networks
- Upload the Red Hat Enterprise Linux installation boot ISO to the data storage domain
Delete your lab environment and provision a new lab for the comprehensive review. The start script will fail if the environment is not new.
If you wish to go back and complete a lab or guided exercise after running a start script for the review, you must delete the environment again and provision a new one.
Log in as the student user on workstation and run the lab deploy-cr start command.
This command ensures that the RHV environment is set up correctly.
[student@workstation ~]$ lab deploy-cr start
Instructions
Configure your Red Hat Virtualization environment according to the following specifications.
Integrate utility.lab.example.com, the Red Hat Enterprise Linux (RHEL) Identity Manager (IdM) server, with your Red Hat Virtualization environment to provide users in a new profile named lab.example.com.
You should use the StartTLS protocol to connect.
The PEM-encoded CA certificate to validate the connection is available at http://utility.lab.example.com/ipa/config/ca.crt.
Use uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com as the search user DN.
The password to authenticate using that DN is redhat.
Otherwise, accept the default settings while integrating the RHEL IdM server with your RHV environment.
Set rhvadmin as the system-wide administrative user in your RHV environment.
Create two new data centers named datacenter1 and datacenter2.
In the datacenter1 data center, create a new cluster named cluster1.
In the datacenter2 data center, create a new cluster named cluster2.
Register the three RHV-H hosts listed in the following table with your RHV environment.
These RHV-H hosts use redhat as the password of the root user.
The following table lists the expected cluster for each host.
Ensure that these RHV-H hosts are active in the cluster.
| Host | Cluster |
|---|---|
| hostb.lab.example.com | cluster1 |
| hostc.lab.example.com | cluster1 |
| hostd.lab.example.com | cluster2 |
Configure logical networks to help separate network traffic.
In datacenter1, create a new logical network named virtual for virtual machine traffic.
It should be tagged as VLAN 10.
It should be usable by virtual machines.
It should not be used for any RHV infrastructure traffic.
It should be associated with the eth0 interface of all hosts in cluster1.
The hosts should use DHCP to get IPv4 settings for that network.
In datacenter1, create a new logical network named storage for storage traffic.
It should not use VLAN tagging.
It should not be usable by virtual machines.
It should not be used for any RHV infrastructure traffic.
It should be associated with the eth1 interface of all hosts in cluster1.
The hosts should statically configure IPv4 settings for that network as indicated in the following table.
In datacenter2, create a new logical network named storage for storage traffic.
It should not use VLAN tagging.
It should not be usable by virtual machines.
It should not be used for any RHV infrastructure traffic.
It should be associated with the eth1 interface of all hosts in cluster2.
The hosts should statically configure IPv4 settings for that network as indicated in the following table.
The following two tables summarize the logical network configuration for hosts in cluster1 and cluster2.
Table 15.1. Logical Networks of cluster1
| Host | Logical network | VLAN tag | Host interface | IPv4 configuration |
|---|---|---|---|---|
| hostb | ovirtmgmt | untagged | eth0 | DHCP (172.25.250.11/255.255.255.0) |
| hostb | virtual | 10 | eth0 | DHCP |
| hostb | storage | untagged | eth1 | Static (172.24.0.11/255.255.255.0) |
| hostc | ovirtmgmt | untagged | eth0 | DHCP (172.25.250.12/255.255.255.0) |
| hostc | virtual | 10 | eth0 | DHCP |
| hostc | storage | untagged | eth1 | Static (172.24.0.12/255.255.255.0) |
Table 15.2. Logical Networks of cluster2
| Host | Logical network | VLAN tag | Host interface | IPv4 configuration |
|---|---|---|---|---|
| hostd | ovirtmgmt | untagged | eth0 | DHCP (172.25.250.13/255.255.255.0) |
| hostd | storage | untagged | eth1 | Static (172.24.0.13/255.255.255.0) |
Create a new data domain named datastorage1 in the datacenter1 data center using the NFS export 172.24.0.8:/exports/data.
Use the hostc.lab.example.com RHV-H host in datacenter1 to mount the NFS export.
Create a new data domain named datastorage2 in the datacenter2 data center using the iSCSI LUN available from the 172.24.0.8 iSCSI portal.
Upload the boot image, available at http://materials.example.com/rhel-server-7.6-x86_64-boot.iso, to the datastorage1 data domain.
This boot image acts as the installation media for the Red Hat Enterprise Linux 7.6 operating system.
Configure RHV-M to use the utility.lab.example.com RHEL IdM server to provide users in a new profile named lab.example.com.
Enable the StartTLS protocol to establish a secure LDAP connection between RHV-M and the RHEL IdM server.
Download the PEM-encoded CA certificate, required to validate the secure connection, from http://utility.lab.example.com/ipa/config/ca.crt.
Use uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com as the search user DN.
The password to authenticate using that DN is redhat.
Otherwise, accept the default settings while integrating the RHEL IdM server with your RHV environment.
From workstation, open an SSH connection to rhvm as root.
[student@workstation ~]$ ssh root@rhvm
...output omitted...
[root@rhvm ~]#
Use the rpm command to verify that the ovirt-engine-extension-aaa-ldap-setup package is installed on rhvm.
[root@rhvm ~]# rpm -q ovirt-engine-extension-aaa-ldap-setup
ovirt-engine-extension-aaa-ldap-setup-1.3.9-1.el7ev.noarch
The ovirt-engine-extension-aaa-ldap-setup package is already installed because it is automatically included in a self-hosted engine installation, like the one used in this class.
To start the interactive setup, run the ovirt-engine-extension-aaa-ldap-setup command.
[root@rhvm ~]# ovirt-engine-extension-aaa-ldap-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
         Configuration files: ['/etc/ovirt-engine-extension-aaa-ldap-setup.conf.d/10-packaging.conf']
         Log file: /tmp/ovirt-engine-extension-aaa-ldap-setup-20190702112955-wwd3ln.log
         Version: otopi-1.8.2 (otopi-1.8.2-1.el7ev)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment customization
...output omitted...
Type 6 to select IPA from the Available LDAP implementations list.
...output omitted...
 6 - IPA
...output omitted...
Please select: 6
Press Enter to accept the default setting of using DNS to resolve the host name of the Red Hat Enterprise Linux Identity Manager server.
...output omitted...
NOTE:
It is highly recommended to use DNS resolution for LDAP server.
If for some reason you intend to use hosts or plain address disable DNS usage.
Use DNS (Yes, No) [Yes]: Enter
Type 1 to select Single server from the Available policy method list.
...output omitted...
Available policy method:
 1 - Single server
 2 - DNS domain LDAP SRV record
 3 - Round-robin between multiple hosts
 4 - Failover between multiple hosts
Please select: 1
Type utility.lab.example.com to specify the host address of the Red Hat Enterprise Linux Identity Manager server.
Please enter host address: utility.lab.example.com
Press Enter to accept the default secure connection method (StartTLS) for the Red Hat Enterprise Linux Identity Manager server.
...output omitted...
Please select protocol to use (startTLS, ldaps, plain) [startTLS]: Enter
Select the URL method to obtain the CA certificate.
Please select method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): URL
Specify http://utility.lab.example.com/ipa/config/ca.crt as the URL to the CA certificate.
URL: http://utility.lab.example.com/ipa/config/ca.crt
[ INFO ] Connecting to LDAP using 'ldap://utility.lab.example.com:389'
[ INFO ] Executing startTLS
[ INFO ] Connection succeeded
The Red Hat Enterprise Linux Identity Manager server in the classroom has been configured with a user that the RHV Manager can use to search the LDAP directory for user information.
The user DN is uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com.
The password for this DN is redhat.
Enter search user DN (for example uid=username,dc=example,dc=com or leave empty for anonymous): uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com
Enter search user password: redhat
[ INFO ] Attempting to bind using 'uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com'
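If the bind attempt fails, you can check the search user credentials from a second terminal, outside of the setup tool. The following is a minimal sketch using ldapsearch over StartTLS; it assumes the openldap-clients package is installed on rhvm, and /tmp/ca.crt is just an illustrative download location for the IdM CA certificate.
[root@rhvm ~]# curl -o /tmp/ca.crt http://utility.lab.example.com/ipa/config/ca.crt
[root@rhvm ~]# LDAPTLS_CACERT=/tmp/ca.crt ldapsearch -ZZ \
-H ldap://utility.lab.example.com \
-D "uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com" \
-w redhat -b "dc=lab,dc=example,dc=com" "(uid=rhvadmin)"
A successful bind and search confirms the DN, the password, and the StartTLS trust chain before you continue with the setup tool.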
Accept dc=lab,dc=example,dc=com as the proposed base DN by pressing Enter.
Please enter base DN (dc=lab,dc=example,dc=com) [dc=lab,dc=example,dc=com]: Enter
Type No to indicate that you will not use single sign-on for virtual machines.
Are you going to use Single Sign-On for Virtual Machines (Yes, No) [Yes]: No
Specify lab.example.com as the name of the profile for the external domain.
Please specify profile name that will be visible to users [utility.lab.example.com]: lab.example.com
[ INFO ] Stage: Setup validation
Test the login function to ensure that the Red Hat Enterprise Linux Identity Manager server is connected to the Red Hat Virtualization Manager.
NOTE:
It is highly recommended to test drive the configuration before applying it into engine.
Login sequence is executed automatically, but it is recommended to also execute Search sequence manually after successful Login sequence.
Please provide credentials to test login flow:
Enter user name: rhvadmin
Enter user password: redhat
[ INFO ] Executing login sequence...
...output omitted...
[ INFO ] Login sequence executed successfully
Press Enter to use Done as the default selection.
This completes the configuration.
Please make sure that user details are correct and group membership meets expectations (search for PrincipalRecord and GroupRecord titles).
Abort if output is incorrect.
Select test sequence to execute (Done, Abort, Login, Search) [Done]: Enter
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration (early)
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
         CONFIGURATION SUMMARY
         Profile name is: lab.example.com
         The following files were created:
             /etc/ovirt-engine/aaa/lab.example.com.jks
             /etc/ovirt-engine/aaa/lab.example.com.properties
             /etc/ovirt-engine/extensions.d/lab.example.com-authz.properties
             /etc/ovirt-engine/extensions.d/lab.example.com-authn.properties
[ INFO ] Stage: Clean up
         Log file is available at /tmp/ovirt-engine-extension-aaa-ldap-setup-20190702121518-kusgin.log:
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
Use systemctl to restart the ovirt-engine service.
Wait for the service to finish activating components before accessing the RHV Manager Administration Portal.
[root@rhvm ~]# systemctl restart ovirt-engine
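To confirm that the engine has finished coming back up before you open the portal, you can poll the service state and the engine health page. This is a sketch; the health URL is the standard oVirt health servlet, and the exact output shown is illustrative.
[root@rhvm ~]# systemctl is-active ovirt-engine
active
[root@rhvm ~]# curl -ks https://rhvm.lab.example.com/ovirt-engine/services/health
DB Up!Welcome to Health Status!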
Log out from rhvm.
[root@rhvm ~]# logout
Connection to rhvm closed.
[student@workstation ~]$
Assign the SuperUser role, system-wide, to the rhvadmin user in the lab.example.com profile.
On workstation, open Firefox and navigate to https://rhvm.lab.example.com/ovirt-engine.
The RHV-M host's web console service may present a TLS certificate for the HTTPS connection that is signed by an unrecognized Certificate Authority.
Either add a security exception for that certificate in your web browser, or configure your web environment to use a trusted Certificate Authority.
Click Administration Portal to log in to the web interface as the internal user called admin with redhat as the password.
Select the internal profile.
In the menu, click Administration, and then click Configure.
In the Configure dialog box, click System Permissions.
Click the Add button to add a role to a user.
In the Add System Permission to User dialog box, click the User radio button, if not already selected.
Click the drop-down list under Search and select the lab.example.com item.
This item represents the lab.example.com profile you configured in the preceding steps to allow Red Hat Virtualization Manager to use the Red Hat Enterprise Linux Identity Manager as a source for the users.
Click GO to display the users in the Red Hat Enterprise Linux Identity Manager server.
In the list of users that displays, click the check box for the rhvadmin user.
Click the drop-down list under Role to Assign.
From the list of available roles, select SuperUser as the role for rhvadmin.
Click OK to assign the specified role to the selected user.
Notice that the rhvadmin user displays in the System Permissions list.
This list confirms that the rhvadmin user has been assigned a role granting administrative access to Red Hat Virtualization.
In the Configure dialog box, click Close.
Create two new data centers named datacenter1 and datacenter2.
From the menu, navigate to Compute → Data Centers.
From the Compute >> Data Centers page, click New.
In the New Data Center window, enter datacenter1 in the Name field.
Keep the default values for the other fields.
Click the OK button to create the datacenter1 data center.
The Data Center - Guide Me window displays.
In the Data Center - Guide Me window, click Configure Later.
From the Compute >> Data Centers page, click New.
In the New Data Center window, enter datacenter2 in the Name field.
Keep the default values for the other fields.
Click the OK button to create the datacenter2 data center.
The Data Center - Guide Me window displays.
In the Data Center - Guide Me window, click Configure Later.
Confirm that the Compute >> Data Centers page lists both datacenter1 and datacenter2.
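As an alternative to the Administration Portal, the data centers can also be created through the RHV Manager REST API. The following is a sketch only; it authenticates as admin@internal with the redhat password stated earlier, and uses -k because the classroom engine presents a self-signed certificate.
[student@workstation ~]$ curl -k -u admin@internal:redhat \
-H 'Content-Type: application/xml' \
-d '<data_center><name>datacenter1</name><local>false</local></data_center>' \
https://rhvm.lab.example.com/ovirt-engine/api/datacenters
Repeat the request with datacenter2 in the body to create the second data center.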
Create new clusters named cluster1 and cluster2 within the datacenter1 and datacenter2 data centers, respectively.
From the menu, navigate to Compute → Hosts to access the Compute >> Hosts page.
From the Compute >> Hosts page, click on the name of hostb.lab.example.com to access the Compute >> Hosts >> hostb.lab.example.com page.
Use the value of CPU Type, under the Hardware section of the General tab in the Compute >> Hosts >> hostb.lab.example.com page, to determine the type of CPU of the hostb host.
This CPU type is the same for hostc and hostd.
From the menu, navigate to Compute → Clusters to access the Compute >> Clusters page.
In the Compute >> Clusters page, click New.
The New Cluster window displays.
In the New Cluster window, under the General section, set the values of the fields according to the following table.
While selecting the value of CPU Type from the available items, ensure that the value matches CPU Type of hostb you determined previously.
| Field | Value |
|---|---|
| Data Center | datacenter1 |
| Name | cluster1 |
| Management Network | ovirtmgmt |
| CPU Architecture | x86_64 |
| Compatibility Version | 4.3 |
| Switch Type | Linux Bridge |
Leave all the other fields with their default values, and click OK to create the cluster.
The Cluster - Guide Me window displays.
In the Cluster - Guide Me window, click Configure Later to continue creating the cluster without configuring the hosts of the cluster.
From the Compute >> Clusters page, click New to create another cluster named cluster2.
Set the data center for cluster2 to datacenter2.
Use the previous table to specify the other properties of cluster2.
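The clusters can likewise be created through the REST API. In this sketch, CPU_TYPE_FROM_HOSTB is a placeholder for the CPU Type value that you noted on the General tab of hostb; substitute the exact string before running the command.
[student@workstation ~]$ curl -k -u admin@internal:redhat \
-H 'Content-Type: application/xml' \
-d '<cluster><name>cluster1</name><cpu><type>CPU_TYPE_FROM_HOSTB</type></cpu><data_center><name>datacenter1</name></data_center></cluster>' \
https://rhvm.lab.example.com/ovirt-engine/api/clusters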
Before you remove the hostb.lab.example.com, hostc.lab.example.com, and hostd.lab.example.com RHV-H hosts from the Default cluster, confirm that no virtual machine is running on any of these hosts.
From the menu, navigate to → to access the Compute >> Virtual Machines page.
From the Compute >> Virtual Machines page, confirm that no virtual machine runs on any of the hostb.lab.example.com, hostc.lab.example.com, hostd.lab.example.com RHV-H hosts.
If you see a virtual machine running on any of hostb.lab.example.com, hostc.lab.example.com, or hostd.lab.example.com, power down that virtual machine.
You should see the HostedEngine virtual machine that runs on hosta.lab.example.com of the Default cluster.
This virtual machine contains the RHV self-hosted engine.
Do not power down this virtual machine.
Mark hostb.lab.example.com, hostc.lab.example.com, and hostd.lab.example.com for maintenance.
From the menu, click Compute, and then click Hosts to access the Compute >> Hosts page.
From the Compute >> Hosts page, click on the row for hostb.lab.example.com.
From the Management drop-down menu, select Maintenance to mark hostb.lab.example.com for maintenance.
The Maintenance Host(s) window displays.
Click OK to enable the Maintenance status of hostb.lab.example.com.
Confirm that the value of the Status field for hostb.lab.example.com is Maintenance.
Use the previous steps to enable the Maintenance status for hostc.lab.example.com and hostd.lab.example.com.
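Maintenance mode can also be set from the command line through the REST API. The sketch below first looks up the host to find its ID; HOST_ID is a placeholder for the id attribute returned by the search.
[student@workstation ~]$ curl -k -u admin@internal:redhat \
'https://rhvm.lab.example.com/ovirt-engine/api/hosts?search=name%3Dhostb.lab.example.com'
...output omitted...
[student@workstation ~]$ curl -k -u admin@internal:redhat \
-H 'Content-Type: application/xml' -d '<action/>' \
https://rhvm.lab.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate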
Remove hostb.lab.example.com, hostc.lab.example.com, and hostd.lab.example.com from the Default cluster.
From the Compute >> Hosts page, click on the row for hostb.lab.example.com.
Click the Remove button.
The Remove Host(s) window displays.
In the Remove Host(s) window, click OK to remove hostb.lab.example.com.
Use the previous step to remove hostc.lab.example.com and hostd.lab.example.com.
Confirm that the hostb.lab.example.com, hostc.lab.example.com, and hostd.lab.example.com RHV-H hosts no longer display in the Compute >> Hosts page.
Add hostb.lab.example.com and hostc.lab.example.com to cluster1.
From the Compute >> Hosts page, click New. The New Host window displays.
In the New Host window, under the General section, set the values of the fields according to the following table.
| Field | Value |
|---|---|
| Host Cluster | cluster1 |
| Name | hostb.lab.example.com |
| Hostname | hostb.lab.example.com |
| SSH Port | 22 |
| Password | redhat |
Ensure that the Activate host after install check box is selected.
Leave all the other fields with their default values, and click OK to register the hostb.lab.example.com RHV-H host.
The Power Management Configuration window displays, warning you that Power Management is not configured for hostb.lab.example.com.
In the Power Management Configuration window, click OK to continue adding the host to cluster1 without configuring Power Management.
Wait until the value of the Status field for hostb.lab.example.com transitions from Installing to Up.
Use the previous steps to add hostc.lab.example.com to cluster1.
While setting the values of different fields for hostc.lab.example.com in the New Host window, enter hostc.lab.example.com as the values of the Name and Hostname fields.
For values of the other fields, refer to the preceding table.
Add hostd.lab.example.com to cluster2.
From the Compute >> Hosts page, click New. The New Host window displays.
In the New Host window, under the General section, set the values of the fields according to the following table.
| Field | Value |
|---|---|
| Host Cluster | cluster2 |
| Name | hostd.lab.example.com |
| Hostname | hostd.lab.example.com |
| SSH Port | 22 |
| Password | redhat |
Ensure that the Activate host after install check box is selected.
Leave all the other fields with their default values, and click OK to register the hostd.lab.example.com RHV-H host.
The Power Management Configuration window displays, warning you that Power Management is not configured for hostd.lab.example.com.
In the Power Management Configuration window, click OK to continue adding the host to cluster2 without configuring Power Management.
Wait until the value of the Status field for hostd.lab.example.com transitions from Installing to Up.
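Host registration is also scriptable through the REST API. This sketch registers hostd into cluster2; the same pattern, with the host name and cluster changed, works for hostb and hostc.
[student@workstation ~]$ curl -k -u admin@internal:redhat \
-H 'Content-Type: application/xml' \
-d '<host><name>hostd.lab.example.com</name><address>hostd.lab.example.com</address><root_password>redhat</root_password><cluster><name>cluster2</name></cluster></host>' \
https://rhvm.lab.example.com/ovirt-engine/api/hosts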
Create a new logical network named virtual in the datacenter1 data center to separate management network traffic from virtual machine network traffic.
Specify the virtual logical network as a virtual machine network using the VLAN number 10.
In the RHV-M Administration Portal, navigate to Network → Networks from the menu to access the Network >> Networks page.
In the Network >> Networks page, click New to create a new logical network.
The New Logical Network window displays.
In the New Logical Network window, set the values of the different fields according to the following table.
| Field | Value |
|---|---|
| Data Center | datacenter1 |
| Name | virtual |
| Description | Virtual Machine Network |
| Comment | Network for Virtual Machine Traffic |
Select the Enable VLAN tagging check box and enter 10 as the VLAN number in the text field next to the Enable VLAN tagging check box.
Ensure that the VM network check box is selected, and leave all the other fields with their default values.
Click OK to create the virtual logical network.
Create a logical network named storage in both the datacenter1 and datacenter2 data centers to separate storage traffic from the management network traffic and the virtual machine network traffic.
Disable VLAN tagging for this logical network.
In the RHV-M Administration Portal, navigate to Network → Networks from the menu to access the Network >> Networks page.
In the Network >> Networks page, click New to create a new logical network.
The New Logical Network window displays.
In the New Logical Network window, set the values of the different fields according to the following table.
| Field | Value |
|---|---|
| Data Center | datacenter1 |
| Name | storage |
| Description | Storage Network |
| Comment | Network for Storage Traffic |
Ensure that the Enable VLAN tagging check box is clear.
Clear the VM network check box, and leave all the other fields with their default values.
Click OK to create the storage logical network.
Use the previous steps to create a new logical network named storage in datacenter2.
Disable VLAN tagging for this logical network.
For values of the fields other than Data Center, which should be set to datacenter2, refer to the preceding table.
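For reference, the logical networks can also be defined through the REST API. The sketch below creates the virtual network in datacenter1; for the storage networks, omit the vlan element and send an empty usages element so that the network is not a VM network.
[student@workstation ~]$ curl -k -u admin@internal:redhat \
-H 'Content-Type: application/xml' \
-d '<network><name>virtual</name><data_center><name>datacenter1</name></data_center><vlan id="10"/><usages><usage>vm</usage></usages></network>' \
https://rhvm.lab.example.com/ovirt-engine/api/networks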
Assign the virtual and storage logical networks to the eth0 and eth1 network interfaces, respectively, of hostb.lab.example.com.
Use DHCP to obtain the IPv4 settings for the network device of the host in the virtual logical network.
Set the 172.24.0.11 IP address and the 255.255.255.0 netmask statically for the network device in the storage logical network.
From the menu, navigate to Compute → Hosts to access the Compute >> Hosts page.
From the Compute >> Hosts page, click on the name of the hostb.lab.example.com host to access the Compute >> Hosts >> hostb.lab.example.com page.
In the Compute >> Hosts >> hostb.lab.example.com page, click the Network Interfaces tab.
Click the Setup Host Networks button to change the network configuration of hostb.lab.example.com.
In the Setup Host hostb.lab.example.com Networks window, click and drag the virtual (VLAN 10) box from the right side to the left side of the window.
Drop the box next to the eth0 network interface.
After dropping, you should see two logical networks assigned to the eth0 interface.
Click and drag the storage box from the right side to the left side of the window.
Drop the box onto the no network assigned field, next to the eth1 network interface.
Click on the pencil icon inside the storage box.
The Edit Network storage window displays.
In the Edit Network storage window, under Boot Protocol, select the Static radio button.
In the IP field, type 172.24.0.11 as the IP address of hostb.lab.example.com in the storage network.
In the Netmask/Routing Prefix field, type 255.255.255.0 as the netmask.
Leave the Gateway field empty.
Click OK to save the settings.
Ensure that the check boxes for the Verify connectivity between Host and Engine and Save network configuration options are selected.
Click OK to save the new network configuration for hostb.lab.example.com.
If you see the value of the Status field for the RHV-H hosts as Non-operational, click the Management drop-down button and select Activate.
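You can optionally confirm the result from the host itself. This is a sketch; eth0.10 follows the usual VLAN device naming convention, but the exact device names on your host may differ.
[student@workstation ~]$ ssh root@hostb.lab.example.com
[root@hostb ~]# ip -d link show eth0.10         # VLAN 10 device created for the virtual network
[root@hostb ~]# ip addr show | grep 172.24.0.11 # static address on the storage network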
Assign the virtual and storage logical networks to the eth0 and eth1 network interfaces, respectively, of hostc.lab.example.com.
Use DHCP to obtain the IPv4 settings for the network device of the host in the virtual logical network.
Set the 172.24.0.12 IP address and the 255.255.255.0 netmask statically for the network device in the storage logical network.
From the menu, navigate to Compute → Hosts to access the Compute >> Hosts page.
From the Compute >> Hosts page, click on the name of the hostc.lab.example.com host to access the Compute >> Hosts >> hostc.lab.example.com page.
In the Compute >> Hosts >> hostc.lab.example.com page, click the Network Interfaces tab.
Click the Setup Host Networks button to change the network configuration of hostc.lab.example.com.
In the Setup Host hostc.lab.example.com Networks window, click and drag the virtual (VLAN 10) box from the right side to the left side of the window.
Drop the box next to the eth0 network interface.
After dropping, you should see two logical networks assigned to the eth0 interface.
Click and drag the storage box from the right side to the left side of the window.
Drop the box onto the no network assigned field, next to the eth1 network interface.
Click on the pencil icon inside the storage box.
The Edit Network storage window displays.
In the Edit Network storage window, under Boot Protocol, select the Static radio button.
In the IP field, type 172.24.0.12 as the IP address of hostc.lab.example.com in the storage network.
In the Netmask/Routing Prefix field, type 255.255.255.0 as the netmask.
Leave the Gateway field empty.
Click OK to save the settings.
Ensure that the check boxes for the Verify connectivity between Host and Engine and Save network configuration options are selected.
Click OK to save the new network configuration for hostc.lab.example.com.
Assign the storage logical network to the eth1 network interface of hostd.lab.example.com.
Set the 172.24.0.13 IP address and the 255.255.255.0 netmask statically for the network device in the storage logical network.
From the menu, navigate to Compute → Hosts to access the Compute >> Hosts page.
From the Compute >> Hosts page, click on the name of the hostd.lab.example.com host to access the Compute >> Hosts >> hostd.lab.example.com page.
In the Compute >> Hosts >> hostd.lab.example.com page, click the Network Interfaces tab.
Click the Setup Host Networks button to change the network configuration of hostd.lab.example.com.
In the Setup Host hostd.lab.example.com Networks window, click and drag the storage box from the right side to the left side of the window.
Drop the box onto the no network assigned field, next to the eth1 network interface.
Click on the pencil icon inside the storage box.
The Edit Network storage window displays.
In the Edit Network storage window, under Boot Protocol, select the Static radio button.
In the IP field, type 172.24.0.13 as the IP address of hostd.lab.example.com in the storage network.
In the Netmask/Routing Prefix field, type 255.255.255.0 as the netmask.
Leave the Gateway field empty.
Click OK to save the settings.
Ensure that the check boxes for the Verify connectivity between Host and Engine and Save network configuration options are selected.
Click OK to save the new network configuration for hostd.lab.example.com.
Create an NFS-based storage domain called datastorage1 to function as the data domain in the datacenter1 data center.
This storage domain should use 172.24.0.8:/exports/data as the NFS export path in the back end for the datastorage1 storage domain in the datacenter1 data center.
Use the hostc.lab.example.com RHV-H host in datacenter1 to mount the NFS export.
The 172.24.0.8 IP address belongs to the storage network.
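Before creating the domain, you can optionally confirm that the export is visible from hostc over the storage network. showmount is part of the nfs-utils package; the export list shown is illustrative.
[student@workstation ~]$ ssh root@hostc.lab.example.com showmount -e 172.24.0.8
Export list for 172.24.0.8:
/exports/data *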
From the menu, navigate to Storage → Domains.
Click New Domain.
In the New Domain window, set the values of the fields according to the following table.
| Field | Value |
|---|---|
| Data Center | datacenter1 |
| Domain Function | Data |
| Storage Type | NFS |
| Host to Use | hostc.lab.example.com |
| Name | datastorage1 |
| Export Path | 172.24.0.8:/exports/data |
Click OK to create the datastorage1 storage domain.
From the Storage >> Storage Domains page, verify that the datastorage1 storage domain exists, and displays the Active status in the Cross Data Center Status column.
It may take a couple of minutes for the datastorage1 storage domain status to transition from Locked to Active.
Create an iSCSI-based storage domain called datastorage2 to function as the data domain in the datacenter2 data center.
Use a LUN from the iSCSI target on the 172.24.0.8 address of utility.
The 172.24.0.8 IP address belongs to the storage network.
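To optionally confirm that the portal is reachable before working in the UI, you can run an iSCSI discovery from hostd. The discovered record shown matches the target name used later in this step, but treat the output as illustrative.
[student@workstation ~]$ ssh root@hostd.lab.example.com \
iscsiadm -m discovery -t sendtargets -p 172.24.0.8:3260
172.24.0.8:3260,1 iqn.2019-07.com.example.lab:utility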
From the menu, navigate to Storage → Domains.
Click New Domain.
In the New Domain window, set the values of the fields according to the following table.
| Field | Value |
|---|---|
| Data Center | datacenter2 |
| Domain Function | Data |
| Storage Type | iSCSI |
| Host to Use | hostd.lab.example.com |
| Name | datastorage2 |
In the Discover Targets section, specify 172.24.0.8 in the Address field.
Set the Port field to 3260, if not already set.
Click Discover to display the available iSCSI targets.
The utility system uses the 172.24.0.8 IP address in the storage network.
Verify that the Targets > LUNs section includes the iqn.2019-07.com.example.lab:utility target name.
Click the right arrow button for the iqn.2019-07.com.example.lab:utility target name to log in to it.
Click the expand (+) button next to the iqn.2019-07.com.example.lab:utility target name to display the list of available iSCSI target LUNs.
Click Add for the displayed iSCSI target LUN.
Click OK to create the datastorage2 storage domain.
If you encounter a warning about the destructive behavior of the operation, select the Approve operation check box and click OK.
From the Storage >> Storage Domains page, verify that the datastorage2 storage domain exists with an Active status in the Cross Data Center Status column.
It may take a couple of minutes for the datastorage2 storage domain status to transition from Locked to Active.
Upload the boot image, available at http://materials.example.com/rhel-server-7.6-x86_64-boot.iso, to the datastorage1 data domain.
Use rhel-server-7.6-x86_64-boot.iso as the name for the image in RHV.
This boot image acts as the installation media for the Red Hat Enterprise Linux 7.6 operating system.
On workstation, open a terminal and download http://materials.example.com/rhel-server-7.6-x86_64-boot.iso as /home/student/Downloads/rhel-server-7.6-x86_64-boot.iso.
[student@workstation ~]$ curl -o \
/home/student/Downloads/rhel-server-7.6-x86_64-boot.iso \
http://materials.example.com/rhel-server-7.6-x86_64-boot.iso
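You can optionally sanity-check the downloaded image before uploading it; the exact file output is illustrative.
[student@workstation ~]$ file /home/student/Downloads/rhel-server-7.6-x86_64-boot.iso
/home/student/Downloads/rhel-server-7.6-x86_64-boot.iso: ISO 9660 CD-ROM filesystem data ...output omitted...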
From the menu of the RHV-M Administration Portal, navigate to Storage → Domains to access the Storage >> Storage Domains page.
From the Storage >> Storage Domains page, click on the name of the datastorage1 data domain to access the Storage >> Storage Domains >> datastorage1 page.
From the Storage >> Storage Domains >> datastorage1 page, click the Disks tab.
Click the Upload drop-down button and select Start. The Upload Image window displays.
In the Upload Image window, click Choose File and select /home/student/Downloads/rhel-server-7.6-x86_64-boot.iso.
Click the Test Connection button to verify the connection.
If clicking the button returns a green success box, then you are ready to upload.
If clicking the button returns an orange warning box, click the ovirt-engine certificate link within the warning box.
Check the box next to Trust this CA to identify websites and then click OK.
After you have done this, click the Test Connection button again.
It should return a green success box.
If you accidentally forget to check the box next to Trust this CA to identify websites, the following procedure will bring up that window again:
Open Preferences for Firefox and then select Privacy & Security in the left menu.
Scroll down to the Security section (at the bottom) and click the View Certificates button.
In the Certificate Manager window, scroll down to lab.example.com, click rhvm.lab.example.com.34088 so that it is highlighted, and then click the Delete button.
Back on the Preferences tab for Privacy & Security, scroll up to the Cookies and Site Data section and then click the Clear Data button.
Accept the default selections and click the Clear button. Confirm your choice by clicking the Clear Now button in the new window that appears.
Click OK to upload the image.
Wait until the value of the Status field for the image transitions from Locked to OK. It takes a couple of minutes for the Status field to transition to OK.