Lab: Configuring a Red Hat Virtualization Environment

In this review, you will configure your new Red Hat Virtualization environment.

Outcomes

You should be able to:

  • Integrate users in an LDAP directory with your Red Hat Virtualization environment

  • Create a new data center

  • Create a new cluster

  • Register and activate RHV-H hosts

  • Create a new storage domain

  • Create additional logical networks

  • Upload the Red Hat Enterprise Linux installation boot ISO to the data storage domain

Delete your lab environment and provision a new lab for the comprehensive review. The start script will fail if the environment is not new.

If you wish to go back and do a lab or guided exercise after running a start script for the review, you must delete the environment again and provision a new one.

Log in as the student user on workstation and run the lab deploy-cr start command. This command ensures that the RHV environment is set up correctly.

[student@workstation ~]$ lab deploy-cr start

Instructions

Configure your Red Hat Virtualization environment according to the following specifications.

  • Integrate utility.lab.example.com, the Red Hat Enterprise Linux (RHEL) Identity Manager (IdM) server, with your Red Hat Virtualization environment to provide users in a new profile named lab.example.com. You should use the StartTLS protocol to connect. The PEM-encoded CA certificate to validate the connection is available at http://utility.lab.example.com/ipa/config/ca.crt. Use uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com as the search user DN. The password to authenticate using that DN is redhat. Continue with default settings otherwise while integrating the RHEL IdM server with your RHV environment.

  • Set rhvadmin as the system-wide administrative user in your RHV environment.

  • Create two new data centers named datacenter1 and datacenter2. In the datacenter1 data center, create a new cluster named cluster1. In the datacenter2 data center, create a new cluster named cluster2.

  • Register the three RHV-H hosts listed in the following table with your RHV environment. These RHV-H hosts use redhat as the password of the root user. The table shows the expected cluster for each host. Ensure that these RHV-H hosts are active in their clusters.

    Host                     Cluster
    hostb.lab.example.com    cluster1
    hostc.lab.example.com    cluster1
    hostd.lab.example.com    cluster2

  • Configure logical networks to help separate network traffic.

    In datacenter1, create a new logical network named virtual for virtual machine traffic. It should be tagged as VLAN 10. It should be usable by virtual machines. It should not be used for any RHV infrastructure traffic. It should be associated with the eth0 interface of all hosts in cluster1. The hosts should use DHCP to get IPv4 settings for that network.

    In datacenter1, create a new logical network named storage for storage traffic. It should not use VLAN tagging. It should not be usable by virtual machines. It should not be used for any RHV infrastructure traffic. It should be associated with the eth1 interface of all hosts in cluster1. The hosts should statically configure IPv4 settings for that network as indicated in the following table.

    In datacenter2, create a new logical network named storage for storage traffic. It should not use VLAN tagging. It should not be usable by virtual machines. It should not be used for any RHV infrastructure traffic. It should be associated with the eth1 interface of all hosts in cluster2. The hosts should statically configure IPv4 settings for that network as indicated in the following table.

    The following two tables summarize the logical network configuration for hosts in cluster1 and cluster2.

    Table 15.1. Logical Networks of cluster1

    Host     Logical network    VLAN tag    Host interface    IPv4 configuration
    hostb    ovirtmgmt          untagged    eth0              DHCP (172.25.250.11/255.255.255.0)
             virtual            10          eth0              DHCP
             storage            untagged    eth1              Static (172.24.0.11/255.255.255.0)
    hostc    ovirtmgmt          untagged    eth0              DHCP (172.25.250.12/255.255.255.0)
             virtual            10          eth0              DHCP
             storage            untagged    eth1              Static (172.24.0.12/255.255.255.0)

    Table 15.2. Logical Networks of cluster2

    Host     Logical network    VLAN tag    Host interface    IPv4 configuration
    hostd    ovirtmgmt          untagged    eth0              DHCP (172.25.250.13/255.255.255.0)
             storage            untagged    eth1              Static (172.24.0.13/255.255.255.0)

  • Create a new data domain named datastorage1 in the datacenter1 data center using the NFS export 172.24.0.8:/exports/data. Use the hostc.lab.example.com RHV-H host in datacenter1 to mount the NFS export.

  • Create a new data domain named datastorage2 in the datacenter2 data center using the iSCSI LUN available from the 172.24.0.8 iSCSI portal.

  • Upload the boot image, available at http://materials.example.com/rhel-server-7.6-x86_64-boot.iso, to the datastorage1 data domain. This boot image acts as the installation media for the Red Hat Enterprise Linux 7.6 operating system.

  1. Configure RHV-M to use the utility.lab.example.com RHEL IdM server to provide users in a new profile named lab.example.com. Enable the StartTLS protocol to establish a secure LDAP connection between RHV-M and the RHEL IdM server. Download the PEM-encoded CA certificate, required to validate the secure connection, from http://utility.lab.example.com/ipa/config/ca.crt. Use uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com as the search user DN. The password to authenticate using that DN is redhat. Continue with default settings otherwise while integrating the RHEL IdM server with your RHV environment.

    1. From workstation, open an SSH connection to rhvm as root.

      [student@workstation ~]$ ssh root@rhvm
      ...output omitted...
      [root@rhvm ~]# 
    2. Use the rpm command to verify that the ovirt-engine-extension-aaa-ldap-setup package is installed on rhvm.

      [root@rhvm ~]# rpm -q ovirt-engine-extension-aaa-ldap-setup
      ovirt-engine-extension-aaa-ldap-setup-1.3.9-1.el7ev.noarch

      The ovirt-engine-extension-aaa-ldap-setup package is already installed because it is automatically included in a self-hosted engine installation, like the one used in this class.
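
      Note

      Before starting the interactive setup, you can optionally confirm that the search user DN binds successfully to the IdM server over StartTLS. This check is a suggestion only and assumes that the ldapsearch command from the openldap-clients package is available on rhvm.

      [root@rhvm ~]# curl -s -o /tmp/ca.crt http://utility.lab.example.com/ipa/config/ca.crt
      [root@rhvm ~]# LDAPTLS_CACERT=/tmp/ca.crt ldapsearch -x -ZZ \
      -H ldap://utility.lab.example.com \
      -D "uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com" -w redhat \
      -b "cn=users,cn=accounts,dc=lab,dc=example,dc=com" uid=rhvadmin dn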

    3. To start the interactive setup, run the ovirt-engine-extension-aaa-ldap-setup command.

      [root@rhvm ~]# ovirt-engine-extension-aaa-ldap-setup
      [ INFO  ] Stage: Initializing
      [ INFO  ] Stage: Environment setup
                Configuration files: ['/etc/ovirt-engine-extension-aaa-ldap-setup.conf.d/10-packaging.conf']
                Log file: /tmp/ovirt-engine-extension-aaa-ldap-setup-20190702112955-wwd3ln.log
                Version: otopi-1.8.2 (otopi-1.8.2-1.el7ev)
      [ INFO  ] Stage: Environment packages setup
      [ INFO  ] Stage: Programs detection
      [ INFO  ] Stage: Environment customization
      ...output omitted...
    4. Type 6 to select IPA from the Available LDAP implementations list.

      ...output omitted...
                 6 - IPA
      ...output omitted...
                Please select: 6
    5. Press Enter to accept the default setting of using DNS to resolve the host name of the Red Hat Enterprise Linux Identity Manager server.

      ...output omitted...
                NOTE:
                It is highly recommended to use DNS resolution for LDAP server.
                If for some reason you intend to use hosts or plain address disable DNS usage.
      
                Use DNS (Yes, No) [Yes]: Enter
    6. Type 1 to select Single server from the Available policy method list.

      ...output omitted...
                Available policy method:
                 1 - Single server
                 2 - DNS domain LDAP SRV record
                 3 - Round-robin between multiple hosts
                 4 - Failover between multiple hosts
                Please select: 1
    7. Type utility.lab.example.com to specify the host address of the Red Hat Enterprise Linux Identity Manager server.

      Please enter host address: utility.lab.example.com
    8. Press Enter to accept the default secure connection method (StartTLS) for the Red Hat Enterprise Linux Identity Manager server.

      ...output omitted...
                Please select protocol to use (startTLS, ldaps, plain) [startTLS]: Enter
    9. Select the URL method to obtain the CA certificate.

      Please select method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): URL
    10. Specify http://utility.lab.example.com/ipa/config/ca.crt as the URL to the CA certificate.

      URL: http://utility.lab.example.com/ipa/config/ca.crt
      [ INFO  ] Connecting to LDAP using 'ldap://utility.lab.example.com:389'
      [ INFO  ] Executing startTLS
      [ INFO  ] Connection succeeded
    11. The Red Hat Enterprise Linux Identity Manager server in the classroom has been configured with a user that the RHV Manager can use to search the LDAP directory for user information. The user DN is uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com. The password for this DN is redhat.

      Enter search user DN (for example uid=username,dc=example,dc=com or leave empty for anonymous): uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com
      Enter search user password: redhat
      [ INFO  ] Attempting to bind using 'uid=rhvadmin,cn=users,cn=accounts,dc=lab,dc=example,dc=com'
    12. Accept dc=lab,dc=example,dc=com as the proposed base DN by pressing Enter.

      Please enter base DN (dc=lab,dc=example,dc=com) [dc=lab,dc=example,dc=com]: Enter
    13. Type No to indicate that you will not use single sign-on for virtual machines.

      Are you going to use Single Sign-On for Virtual Machines (Yes, No) [Yes]: No
    14. Specify lab.example.com as the name of the profile for the external domain.

      Please specify profile name that will be visible to users [utility.lab.example.com]: lab.example.com
      [ INFO  ] Stage: Setup validation
    15. Test the login function to ensure that the Red Hat Enterprise Linux Identity Manager server is connected to the Red Hat Virtualization Manager.

      NOTE:
      It is highly recommended to test drive the configuration before applying it into engine.
      Login sequence is executed automatically, but it is recommended to also execute Search sequence manually after successful Login sequence.
      
      Please provide credentials to test login flow:
      Enter user name: rhvadmin
      Enter user password: redhat
      [ INFO  ] Executing login sequence...
      ...output omitted...
      [ INFO  ] Login sequence executed successfully
    16. Press Enter to use Done as the default selection. This completes the configuration.

      Please make sure that user details are correct and group membership meets expectations (search for PrincipalRecord and GroupRecord titles).
      Abort if output is incorrect.
      Select test sequence to execute (Done, Abort, Login, Search) [Done]: Enter
      [ INFO  ] Stage: Transaction setup
      [ INFO  ] Stage: Misc configuration (early)
      [ INFO  ] Stage: Package installation
      [ INFO  ] Stage: Misc configuration
      [ INFO  ] Stage: Transaction commit
      [ INFO  ] Stage: Closing up
                CONFIGURATION SUMMARY
                Profile name is: lab.example.com
                The following files were created:
                    /etc/ovirt-engine/aaa/lab.example.com.jks
                    /etc/ovirt-engine/aaa/lab.example.com.properties
                    /etc/ovirt-engine/extensions.d/lab.example.com-authz.properties
                    /etc/ovirt-engine/extensions.d/lab.example.com-authn.properties
      [ INFO  ] Stage: Clean up
                Log file is available at /tmp/ovirt-engine-extension-aaa-ldap-setup-20190702121518-kusgin.log:
      [ INFO  ] Stage: Pre-termination
      [ INFO  ] Stage: Termination
    17. Use systemctl to restart the ovirt-engine service. Wait for the service to finish activating components before accessing the RHV Manager Administration Portal.

      [root@rhvm ~]# systemctl restart ovirt-engine
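
      Note

      The engine may take a minute or two to finish starting. As an optional check, you can poll the engine health status page from rhvm until it responds; this assumes that the default health servlet is reachable through the local web server.

      [root@rhvm ~]# curl -s http://localhost/ovirt-engine/services/health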
    18. Log out from rhvm.

      [root@rhvm ~]# logout
      Connection to rhvm closed.
      [student@workstation ~]$ 
  2. Assign the SuperUser role, system-wide, to the rhvadmin user in the lab.example.com profile.

    1. On workstation, open Firefox and navigate to https://rhvm.lab.example.com/ovirt-engine. The RHV-M host's web console service may present a TLS certificate for the HTTPS connection that is signed by an unrecognized Certificate Authority. Either add a security exception for that certificate in your web browser, or configure your web environment to use a trusted Certificate Authority. Click Administration Portal to log in to the web interface as the internal user called admin with redhat as the password. Select the internal profile.

    2. In the menu, click Administration, and then click Configure.

    3. In the Configure dialog box, click System Permissions.

    4. Click the Add button to add a role to a user.

    5. In the Add System Permission to User dialog box, click the User radio button, if not already selected. Click the drop-down list under Search to select the lab.example.com (lab.example.com-authz) item. This item represents the lab.example.com profile you configured in the preceding steps to allow Red Hat Virtualization Manager to use the Red Hat Enterprise Linux Identity Manager as a source for the users.

    6. Click GO to display the users in the Red Hat Enterprise Linux Identity Manager server.

    7. In the list of users that displays, click the check box for the rhvadmin user.

    8. Click the drop-down list under Role to Assign. From the list of available roles, select SuperUser role for rhvadmin.

    9. Click OK to assign the specified role to the selected user. Notice that the rhvadmin user displays in the System Permissions list. This list confirms that the rhvadmin user has been assigned a role granting administrative access to Red Hat Virtualization.

    10. In the Configure dialog box, click Close.
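
    Note

    As an optional check, you can confirm that rhvadmin can now authenticate against RHV-M by querying the REST API with basic authentication from workstation. Any valid XML response, rather than an authentication error, indicates that the role assignment works:

    [student@workstation ~]$ curl -k -u 'rhvadmin@lab.example.com:redhat' \
    https://rhvm.lab.example.com/ovirt-engine/api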

  3. Create two new data centers named datacenter1 and datacenter2.

    1. From the menu, navigate to Compute → Data Centers.

    2. From the Compute >> Data Centers page, click New.

    3. In the New Data Center window, enter datacenter1 in the Name field. Keep the default values for the other fields. Click the OK button to create the datacenter1 data center. The Data Center - Guide Me window displays.

    4. In the Data Center - Guide Me window, click Configure Later.

    5. From the Compute >> Data Centers page, click New.

    6. In the New Data Center window, enter datacenter2 in the Name field. Keep the default values for the other fields. Click the OK button to create the datacenter2 data center. The Data Center - Guide Me window displays.

    7. In the Data Center - Guide Me window, click Configure Later.

    8. Confirm that the Compute >> Data Centers page lists both datacenter1 and datacenter2.
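
    Note

    You can also list the new data centers through the REST API as a quick, optional sanity check:

    [student@workstation ~]$ curl -sk -u 'admin@internal:redhat' \
    https://rhvm.lab.example.com/ovirt-engine/api/datacenters | grep '<name>'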

  4. Create two new clusters named cluster1 and cluster2 within the datacenter1 and datacenter2 data centers, respectively.

    1. From the menu, navigate to Compute → Hosts to access the Compute >> Hosts page. From the Compute >> Hosts page, click on the name of hostb.lab.example.com to access the Compute >> Hosts >> hostb.lab.example.com page.

    2. Use the value of CPU Type, under the Hardware section of the General tab in the Compute >> Hosts >> hostb.lab.example.com page, to determine the CPU type of the hostb host. This CPU type is the same for hostc and hostd.
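
      Note

      If you prefer the command line, you can also inspect the CPU model directly on the host over SSH, as an optional alternative to the Administration Portal:

      [student@workstation ~]$ ssh root@hostb.lab.example.com lscpu | grep 'Model name'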

    3. From the menu, navigate to Compute → Clusters to access the Compute >> Clusters page. In the Compute >> Clusters page, click New. The New Cluster window displays. In the New Cluster window, under the General section, set the values of the fields according to the following table. While selecting the value of CPU Type from the available items, ensure that the value matches the CPU Type of hostb that you determined previously.

      Field                    Value
      Data Center              datacenter1
      Name                     cluster1
      Management Network       ovirtmgmt
      CPU Architecture         x86_64
      Compatibility Version    4.3
      Switch Type              Linux Bridge

      Leave all the other fields with their default values, and click OK to create the cluster. The Cluster - Guide Me window displays. In the Cluster - Guide Me window, click Configure Later to continue creating the cluster without configuring the host of the cluster.

    4. From the Compute >> Clusters page, click New to create another cluster named cluster2. Set the data center for cluster2 to datacenter2. Use the previous table to specify the other properties of cluster2.

  5. Before you remove the hostb.lab.example.com, hostc.lab.example.com, and hostd.lab.example.com RHV-H hosts from the Default cluster, confirm that no virtual machine is running on any of these hosts.

    1. From the menu, navigate to Compute → Virtual Machines to access the Compute >> Virtual Machines page.

    2. From the Compute >> Virtual Machines page, confirm that no virtual machine runs on any of the hostb.lab.example.com, hostc.lab.example.com, or hostd.lab.example.com RHV-H hosts.

      If you see a virtual machine running on any of hostb.lab.example.com, hostc.lab.example.com, or hostd.lab.example.com, power down that virtual machine.

      Warning

      You should see the HostedEngine virtual machine that runs on hosta.lab.example.com of the Default cluster. This virtual machine contains the RHV self-hosted engine. Do not power down this virtual machine.

  6. Mark hostb.lab.example.com, hostc.lab.example.com, and hostd.lab.example.com for maintenance.

    1. From the menu, click on Compute, and then click on Hosts to access the Compute >> Hosts page.

    2. From the Compute >> Hosts page, click on the row for hostb.lab.example.com.

    3. From the Management drop-down menu, select Maintenance to mark hostb.lab.example.com for maintenance. The Maintenance Host(s) window displays. Click OK to enable the Maintenance status of hostb.lab.example.com.

    4. Confirm that the value of the Status field for hostb.lab.example.com is Maintenance.

    5. Use the previous steps to enable the Maintenance status for hostc.lab.example.com and hostd.lab.example.com.

  7. Remove hostb.lab.example.com, hostc.lab.example.com, and hostd.lab.example.com from the Default cluster.

    1. From the Compute >> Hosts page, click on the row for hostb.lab.example.com. Click the Remove button. The Remove Host(s) window displays. In the Remove Host(s) window, click OK to remove hostb.lab.example.com.

    2. Use the previous step to remove hostc.lab.example.com and hostd.lab.example.com.

    3. Confirm that the hostb.lab.example.com, hostc.lab.example.com, and hostd.lab.example.com RHV-H hosts no longer display in the Compute >> Hosts page.

  8. Add hostb.lab.example.com and hostc.lab.example.com to cluster1.

    1. From the Compute >> Hosts page, click New. The New Host window displays.

    2. In the New Host window, under the General section, set the values of the fields according to the following table.

      Field           Value
      Host Cluster    cluster1
      Name            hostb.lab.example.com
      Hostname        hostb.lab.example.com
      SSH Port        22
      Password        redhat

      Ensure that the Activate host after install check box is selected. Leave all the other fields with their default values, and click OK to register the hostb.lab.example.com RHV-H host. The Power Management Configuration window displays, warning you that Power Management is not configured for hostb.lab.example.com. In the Power Management Configuration window, click OK to continue adding the host to cluster1 without configuring Power Management.

    3. Wait until the value of the Status field for hostb.lab.example.com transitions from Installing to Up.

    4. Use the previous steps to add hostc.lab.example.com to cluster1. While setting the values of different fields for hostc.lab.example.com in the New Host window, enter hostc.lab.example.com as the values of the Name and Hostname fields. For values of the other fields, refer to the preceding table.

  9. Add hostd.lab.example.com to cluster2.

    1. From the Compute >> Hosts page, click New. The New Host window displays.

    2. In the New Host window, under the General section, set the values of the fields according to the following table.

      Field           Value
      Host Cluster    cluster2
      Name            hostd.lab.example.com
      Hostname        hostd.lab.example.com
      SSH Port        22
      Password        redhat

      Ensure that the Activate host after install check box is selected. Leave all the other fields with their default values, and click OK to register the hostd.lab.example.com RHV-H host. The Power Management Configuration window displays, warning you that Power Management is not configured for hostd.lab.example.com. In the Power Management Configuration window, click OK to continue adding the host to cluster2 without configuring Power Management.

    3. Wait until the value of the Status field for hostd.lab.example.com transitions from Installing to Up.
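
    Note

    Optionally, you can confirm the registration and status of all hosts from the command line through the REST API:

    [student@workstation ~]$ curl -sk -u 'admin@internal:redhat' \
    https://rhvm.lab.example.com/ovirt-engine/api/hosts | grep -E '<name>|<status>'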

  10. Create a new logical network named virtual in the datacenter1 data center to separate management network traffic from virtual machine network traffic. Specify the virtual logical network as a virtual machine network using the VLAN number 10.

    1. In the RHV-M Administration Portal, navigate to Network → Networks from the menu to access the Network >> Networks page. In the Network >> Networks page, click New to create a new logical network. The New Logical Network window displays. In the New Logical Network window, set the values of the different fields according to the following table.

      Field          Value
      Data Center    datacenter1
      Name           virtual
      Description    Virtual Machine Network
      Comment        Network for Virtual Machine Traffic

      Select the Enable VLAN tagging check box and enter 10 as the VLAN number in the text field next to the Enable VLAN tagging check box. Ensure that the VM network check box is selected, and leave all the other fields with their default values. Click OK to create the virtual logical network.

  11. Create a logical network named storage in both the datacenter1 and datacenter2 data centers to separate storage traffic from the management and virtual machine network traffic. Disable VLAN tagging for this logical network.

    1. In the RHV-M Administration Portal, navigate to Network → Networks from the menu to access the Network >> Networks page. In the Network >> Networks page, click New to create a new logical network. The New Logical Network window displays. In the New Logical Network window, set the values of the different fields according to the following table.

      Field          Value
      Data Center    datacenter1
      Name           storage
      Description    Storage Network
      Comment        Network for Storage Traffic

      Ensure that the Enable VLAN tagging check box is clear. Clear the VM network check box, and leave all the other fields with their default values. Click OK to create the storage logical network.

    2. Use the previous steps to create a new logical network named storage in datacenter2. Disable VLAN tagging for this logical network. For values of the fields other than Data Center in the New Logical Network window, refer to the preceding table.

  12. Assign the virtual and storage logical networks to the eth0 and eth1 network interfaces, respectively, of hostb.lab.example.com. Use DHCP to obtain the IPv4 settings for the network device of the host in the virtual logical network. Set the 172.24.0.11 IP address and the 255.255.255.0 netmask statically for the network device in the storage logical network.

    1. From the menu, navigate to Compute → Hosts to access the Compute >> Hosts page.

    2. From the Compute >> Hosts page, click on the name of the hostb.lab.example.com host to access the Compute >> Hosts >> hostb.lab.example.com page.

    3. In the Compute >> Hosts >> hostb.lab.example.com page, click the Network Interfaces tab.

    4. Click the Setup Host Networks button to change the network configuration of hostb.lab.example.com.

    5. In the Setup Host hostb.lab.example.com Networks window, click and drag the virtual (VLAN 10) box from the right side to the left side of the window. Drop the box next to the eth0 network interface. After dropping, you should see two logical networks assigned to the eth0 interface.

    6. Click and drag the storage box from the right side to the left side of the window. Drop the box onto the no network assigned field, next to the eth1 network interface.

    7. Click on the pencil icon inside the storage box. The Edit Network storage window displays. In the Edit Network storage window, under Boot Protocol, select the Static radio button.

    8. In the IP field, type 172.24.0.11 as the IP address of hostb.lab.example.com in the storage network.

    9. In the Netmask/Routing Prefix field, type 255.255.255.0 as the netmask. Leave the Gateway field empty.

    10. Click OK to save the settings.

    11. Ensure that the check boxes for the Verify connectivity between Host and Engine and Save network configuration options are selected.

    12. Click OK to save the new network configuration for hostb.lab.example.com.

      Warning

      If you see the value of the Status field for the RHV-H hosts as Non-operational, click the Management drop-down button and select Activate.
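
    Note

    To confirm the VLAN tag from the host side, you can optionally list the kernel network devices on hostb and look for an 802.1Q device carrying VLAN ID 10, which VDSM typically creates for the virtual network:

    [student@workstation ~]$ ssh root@hostb.lab.example.com 'ip -d link show | grep "id 10"'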

  13. Assign the virtual and storage logical networks to the eth0 and eth1 network interfaces, respectively, of hostc.lab.example.com. Use DHCP to obtain the IPv4 settings for the network device of the host in the virtual logical network. Set the 172.24.0.12 IP address and the 255.255.255.0 netmask statically for the network device in the storage logical network.

    1. From the menu, navigate to Compute → Hosts to access the Compute >> Hosts page.

    2. From the Compute >> Hosts page, click on the name of the hostc.lab.example.com host to access the Compute >> Hosts >> hostc.lab.example.com page.

    3. In the Compute >> Hosts >> hostc.lab.example.com page, click the Network Interfaces tab.

    4. Click the Setup Host Networks button to change the network configuration of hostc.lab.example.com.

    5. In the Setup Host hostc.lab.example.com Networks window, click and drag the virtual (VLAN 10) box from the right side to the left side of the window. Drop the box next to the eth0 network interface. After dropping, you should see two logical networks assigned to the eth0 interface.

    6. Click and drag the storage box from the right side to the left side of the window. Drop the box onto the no network assigned field, next to the eth1 network interface.

    7. Click on the pencil icon inside the storage box. The Edit Network storage window displays. In the Edit Network storage window, under Boot Protocol, select the Static radio button.

    8. In the IP field, type 172.24.0.12 as the IP address of hostc.lab.example.com in the storage network.

    9. In the Netmask/Routing Prefix field, type 255.255.255.0 as the netmask. Leave the Gateway field empty.

    10. Click OK to save the settings.

    11. Ensure that the check boxes for the Verify connectivity between Host and Engine and Save network configuration options are selected.

    12. Click OK to save the new network configuration for hostc.lab.example.com.

  14. Assign the storage logical network to the eth1 network interface of hostd.lab.example.com. Set the 172.24.0.13 IP address and the 255.255.255.0 netmask statically for the network device in the storage logical network.

    1. From the menu, navigate to Compute → Hosts to access the Compute >> Hosts page.

    2. From the Compute >> Hosts page, click on the name of the hostd.lab.example.com host to access the Compute >> Hosts >> hostd.lab.example.com page.

    3. In the Compute >> Hosts >> hostd.lab.example.com page, click the Network Interfaces tab.

    4. Click the Setup Host Networks button to change the network configuration of hostd.lab.example.com.

    5. In the Setup Host hostd.lab.example.com Networks window, click and drag the storage box from the right side to the left side of the window. Drop the box onto the no network assigned field, next to the eth1 network interface.

    6. Click on the pencil icon inside the storage box. The Edit Network storage window displays. In the Edit Network storage window, under Boot Protocol, select the Static radio button.

    7. In the IP field, type 172.24.0.13 as the IP address of hostd.lab.example.com in the storage network.

    8. In the Netmask/Routing Prefix field, type 255.255.255.0 as the netmask. Leave the Gateway field empty.

    9. Click OK to save the settings.

    10. Ensure that the check boxes for the Verify connectivity between Host and Engine and Save network configuration options are selected.

    11. Click OK to save the new network configuration for hostd.lab.example.com.
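
    Note

    You can optionally verify the static storage network addresses from the command line; for example, on hostd:

    [student@workstation ~]$ ssh root@hostd.lab.example.com 'ip -4 addr show | grep 172.24.0.13'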

  15. Create an NFS-based storage domain called datastorage1 to function as the data domain in the datacenter1 data center. This storage domain should use the 172.24.0.8:/exports/data NFS export as its back end. Use the hostc.lab.example.com RHV-H host in datacenter1 to mount the NFS export. The 172.24.0.8 IP address belongs to the storage network.

    1. From the menu, navigate to Storage → Domains.

    2. Click New Domain.

    3. In the New Domain window, set the values of the fields according to the following table.

      Field              Value
      Data Center        datacenter1
      Domain Function    Data
      Storage Type       NFS
      Host to Use        hostc.lab.example.com
      Name               datastorage1
      Export Path        172.24.0.8:/exports/data

    4. Click OK to create the datastorage1 storage domain.

    5. From the Storage Domains page under Storage, verify that the datastorage1 storage domain exists, and displays the Active status in the Cross Data Center Status column. It may take a couple of minutes for the datastorage1 storage domain status to transition from Locked to Active.
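
    Note

    If the storage domain fails to attach, you can optionally confirm that the NFS export is visible from the host that mounts it. This check assumes that the showmount utility is available on the RHV-H host:

    [student@workstation ~]$ ssh root@hostc.lab.example.com showmount -e 172.24.0.8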

  16. Create an iSCSI-based storage domain called datastorage2 to function as the data domain in the datacenter2 data center. Use a LUN from the iSCSI target on the 172.24.0.8 address of utility. The 172.24.0.8 IP address belongs to the storage network.

    1. From the menu, navigate to Storage → Domains.

    2. Click New Domain.

    3. In the New Domain window, set the values of the fields according to the following table.

      Field              Value
      Data Center        datacenter2
      Domain Function    Data
      Storage Type       iSCSI
      Host to Use        hostd.lab.example.com
      Name               datastorage2
    4. In the Discover Targets section, specify 172.24.0.8 in the Address field. Set the Port field to 3260, if not already set. Click Discover to display the available iSCSI target LUNs.

      The utility system uses the 172.24.0.8 IP address in the storage network.

    5. Verify that the Targets > LUNs section includes the iqn.2019-07.com.example.lab:utility target name. Click the right arrow button for the iqn.2019-07.com.example.lab:utility target name to log in to it.

    6. Click + next to the iqn.2019-07.com.example.lab:utility target name to expand and display the list of available iSCSI target LUNs. Click Add for the displayed iSCSI target LUN. Click OK to create the datastorage2 storage domain.

      Note

      If you encounter a warning about the destructive behavior of the operation, select the Approve operation check box and click OK.

    7. From the Storage Domains page under Storage, verify that the datastorage2 storage domain exists with an Active status in the Cross Data Center Status column. It may take a couple of minutes for the datastorage2 storage domain status to transition from Locked to Active.
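
    Note

    Optionally, you can confirm that the iSCSI target is discoverable from the host. The iscsiadm utility is part of the standard iSCSI initiator tools on RHV-H hosts:

    [student@workstation ~]$ ssh root@hostd.lab.example.com \
    iscsiadm -m discovery -t sendtargets -p 172.24.0.8:3260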

  17. Upload the boot image, available at http://materials.example.com/rhel-server-7.6-x86_64-boot.iso, to the datastorage1 data domain. Use rhel-server-7.6-x86_64-boot.iso as the name for the image in RHV. This boot image acts as the installation media for the Red Hat Enterprise Linux 7.6 operating system.

    1. On workstation, open a terminal and download http://materials.example.com/rhel-server-7.6-x86_64-boot.iso as /home/student/Downloads/rhel-server-7.6-x86_64-boot.iso.

      [student@workstation ~]$ curl -o \
      /home/student/Downloads/rhel-server-7.6-x86_64-boot.iso \
      http://materials.example.com/rhel-server-7.6-x86_64-boot.iso
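
      Optionally, verify that the downloaded file is a valid ISO 9660 image before uploading it:

      [student@workstation ~]$ file /home/student/Downloads/rhel-server-7.6-x86_64-boot.iso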
    2. From the menu of the RHV-M Administration Portal, navigate to Storage → Domains to access the Storage >> Storage Domains page.

    3. From the Storage >> Storage Domains page, click on the name of the datastorage1 storage domain to access the Storage >> Storage Domains >> datastorage1 page.

    4. From the Storage >> Storage Domains >> datastorage1 page, click the Disks tab.

    5. Click the Upload drop-down button and select Start. The Upload Image window displays.

    6. In the Upload Image window, click Choose File to point to /home/student/Downloads/rhel-server-7.6-x86_64-boot.iso.

    7. Click the Test Connection button to verify that your browser can connect to the image upload proxy. If clicking the Test Connection button returns a green success box, then you are ready to upload. If it returns an orange warning box, click the ovirt-engine certificate link within the warning box. Check the box next to Trust this CA to identify websites and then click the OK button. After you have done this, click the Test Connection button again. It should return a green success box.

      Important

      If you accidentally forget to check the box next to Trust this CA to identify websites, the following procedure will bring up that window again:

      1. Open Preferences for Firefox and then select Privacy & Security in the left menu.

      2. Scroll down to the Security section (at the bottom) and click the View Certificates... button.

      3. In the Certificate Manager window, scroll down to lab.example.com, click rhvm.lab.example.com.34088 so that it is highlighted, and then click the Delete or Distrust button.

      4. Back on the Preferences tab for Privacy & Security, scroll up to the Cookies and Site Data section and then click the Clear Data... button.

      5. Accept the default selections and click the Clear button. Confirm your choice by clicking the Clear Now button in the new window that appears.

    8. Click OK to upload the image.

    9. Wait until the value of the Status field for the image transitions from Locked to OK. It takes a couple of minutes for the Status field to transition to OK.

Evaluation

On workstation, run the lab deploy-cr grade command to confirm success of this exercise. Correct any reported failures and rerun the script until successful.

[student@workstation ~]$ lab deploy-cr grade

Finish

On workstation, run the lab deploy-cr finish script to complete this lab.

[student@workstation ~]$ lab deploy-cr finish

This concludes the lab.
