In this review, you back up and restore RHV-M, and update RHV-H hosts.
Outcomes
You should be able to:
Back up and restore a Red Hat Virtualization Manager installation
Update Red Hat Virtualization Hosts
Log in as the student user on workstation and run the lab update-cr start command.
[student@workstation ~]$ lab update-cr start
Instructions
Perform the following steps:
Create a full backup without stopping the Red Hat Virtualization infrastructure. The backup file should be created at /root/rhvm-backup.tgz and the log file should be created at /root/backup.log.
Clean out the Red Hat Virtualization Manager configuration using engine-cleanup and then restore your backup into that clean environment.
Your restore command should create /root/restore.log and it should restore permissions.
Log in to the Administration Portal as the admin user to confirm that the restoration from backup was successful.
Apply updates to the hosts running Red Hat Virtualization Host and currently attached to cluster1 in your environment.
Use the provided rhvh_updates.repo file from http://materials.example.com/yum.repos.d/rhvh_updates.repo to enable Yum repositories containing the necessary software updates.
Create a full backup of Red Hat Virtualization Manager without stopping the RHV infrastructure.
From workstation, open a terminal and use ssh to log in to rhvm.lab.example.com using the user name root.
The student user on the workstation system is configured with the SSH keys for the root user on rhvm.lab.example.com to allow passwordless access.
[student@workstation ~]$ ssh root@rhvm.lab.example.com
...output omitted...
To create a full backup without stopping RHV infrastructure, issue the engine-backup command, specifying the scope of this backup, the name of the backup file, and the name of the log file:
[root@rhvm ~]# engine-backup --scope=all --mode=backup \
--file=rhvm-backup.tgz --log=backup.log
Backing up:
Notifying engine
- Files
- Engine database 'engine'
- DWH database 'ovirt_engine_history'
Packing into file 'rhvm-backup.tgz'
Notifying engine
Done.
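Before relying on the backup, it can be worth sanity-checking that the archive is a readable gzip tarball. The sketch below illustrates the technique; the lab file would be /root/rhvm-backup.tgz on rhvm, but a stand-in tarball is created here so the commands can run anywhere (the file name and contents are assumptions, not the lab data).

```shell
# Create a stand-in backup tarball purely for illustration; on the lab
# system you would point these checks at /root/rhvm-backup.tgz instead.
tmpdir=$(mktemp -d)
echo "engine data" > "$tmpdir/engine.dump"
tar czf "$tmpdir/rhvm-backup.tgz" -C "$tmpdir" engine.dump

# Verify the gzip integrity of the archive, then list its contents
# without extracting anything.
gzip -t "$tmpdir/rhvm-backup.tgz" && echo "gzip OK"
tar tzf "$tmpdir/rhvm-backup.tgz"

rm -rf "$tmpdir"
```

A corrupt or truncated backup fails the gzip -t check immediately, which is far cheaper to discover now than during a restore.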
Exit from rhvm.lab.example.com.
Clean up the RHV-M configuration using engine-cleanup and restore your backup into that clean environment.
Log in to the Administration Portal as the admin user to confirm that the restoration from backup was successful.
From workstation, open a terminal and use ssh to log in to hosta.lab.example.com as root.
The student user on the workstation system is configured with the SSH keys needed to log in to hosta.lab.example.com as the root user.
[student@workstation ~]$ ssh root@hosta.lab.example.com
...output omitted...
Set the maintenance mode to global.
[root@hosta ~]# hosted-engine --set-maintenance --mode=global
Exit from hosta.lab.example.com.
[root@hosta ~]# exit
...output omitted...
SSH to rhvm.lab.example.com as root.
Issue the engine-cleanup command to completely clean up the environment.
The engine-cleanup command executes an interactive environment, taking you through a series of questions with default settings displayed in square brackets.
[root@rhvm ~]# engine-cleanup
[ INFO  ] Stage: Initializing
(...)
[ INFO  ] Stage: Environment customization
Do you want to remove all components? (Yes, No) [Yes]: <ENTER>
(...)
During execution engine service will be stopped (OK, Cancel) [OK]: <ENTER>
All the installed ovirt components are about to be removed, data will be lost (OK, Cancel) [Cancel]: OK
(...)
--== END OF SUMMARY ==--
[ INFO  ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-remove-20190827063123-w605h6.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20190827063653-cleanup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of cleanup completed successfully
To restore a full backup of the RHV infrastructure, issue the engine-backup command on the rhvm.lab.example.com server, specifying the scope of this restore, the name of the backup file, and the name of the log file:
[root@rhvm ~]# engine-backup --scope=all --mode=restore \
--file=rhvm-backup.tgz --log=restore.log --restore-permissions
Preparing to restore:
- Unpacking file 'rhvm-backup.tgz'
Restoring:
- Files
- Engine database 'engine'
  - Cleaning up temporary tables in engine database 'engine'
  - Updating DbJustRestored VdcOption in engine database
  - Resetting DwhCurrentlyRunning in dwh_history_timekeeping in engine database
  - Resetting HA VM status
------------------------------------------------------------------------------
Please note:
The engine database was backed up at 2019-08-27 06:17:35.000000000 -0400 .
Objects that were added, removed or changed after this date, such as virtual
machines, disks, etc., are missing in the engine, and will probably require
recovery or recreation.
------------------------------------------------------------------------------
- DWH database 'ovirt_engine_history'
You should now run engine-setup.
Done.
Run the engine-setup command with the --accept-defaults option to ensure that the ovirt-engine service is correctly configured. Use the --offline mode for this example:
[root@rhvm ~]# engine-setup --accept-defaults --offline
[ INFO  ] Stage: Initializing
...output omitted...
--== END OF SUMMARY ==--
[ INFO  ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20190827064927-pq0980.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20190827065453-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of setup completed successfully
Exit from the RHV Manager instance, and then set the maintenance mode back to normal on hosta.
[root@rhvm ~]# exit
logout
Connection to rhvm.lab.example.com closed.
[student@workstation ~]$ ssh root@hosta.lab.example.com
Using hosted-engine, set the maintenance mode back to none.
[root@hosta ~]# hosted-engine --set-maintenance --mode=none
Confirm that everything is working again, and that the restoration from backup was successful.
On workstation, open Firefox and log in to the Administration Portal as the admin user with the internal profile, using redhat as the password.
Apply updates to the RHV-H hosts for cluster1 in your environment.
Use the provided rhvh_updates.repo file from http://materials.example.com/yum.repos.d/rhvh_updates.repo to access the existing Yum update repositories.
From workstation, open a terminal and use ssh to log in to hostb.lab.example.com as the root user with redhat as the password.
[student@workstation ~]$ ssh root@hostb.lab.example.com
...output omitted...
Download the rhvh_updates.repo file from http://materials.example.com/yum.repos.d/rhvh_updates.repo and place it in the /etc/yum.repos.d/ directory to enable those repositories.
[root@hostb ~]# curl http://materials.example.com/yum.repos.d/rhvh_updates.repo \
-o /etc/yum.repos.d/rhvh_updates.repo
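For reference, a Yum .repo file placed in /etc/yum.repos.d/ generally follows the layout below. The repository ID, name, and baseurl shown here are assumptions for illustration only; the authoritative values are whatever the provided rhvh_updates.repo file actually contains.

```shell
# Print an example of the .repo file format. Every value below is a
# hypothetical placeholder, NOT the real content of rhvh_updates.repo.
cat <<'EOF'
[rhvh-updates]
name=RHVH updates (hypothetical example)
baseurl=http://materials.example.com/rhvh-updates
enabled=1
gpgcheck=0
EOF
```

Because enabled=1 is set, simply dropping the file into /etc/yum.repos.d/ makes the repository active for the next yum transaction; no further registration step is needed.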
From workstation, open a terminal and use ssh to log in to hostc.lab.example.com as the root user with redhat as the password.
[student@workstation ~]$ ssh root@hostc.lab.example.com
...output omitted...
Download the rhvh_updates.repo file from http://materials.example.com/yum.repos.d/rhvh_updates.repo and place it in the /etc/yum.repos.d/ directory to enable those repositories.
[root@hostc ~]# curl http://materials.example.com/yum.repos.d/rhvh_updates.repo \
-o /etc/yum.repos.d/rhvh_updates.repo
On workstation, open Firefox and go to the RHV-M web interface.
Click the Administration Portal link and log in to the web interface as the admin user with the internal profile, using redhat as the password.
Click Compute >> Hosts to access the Compute >> Hosts page.
From the list of available RHV-H hosts, highlight the hostb.lab.example.com host. Click the button, followed by Check for Upgrade.
When the Upgrade Host dialog window opens, click to confirm the upgrade check.
Notice that after a while, a new Action Item comes up in the same line as the RHV-H host. This new icon is a notification that an upgrade for this host is available.
Right-click the hostb.lab.example.com host. From the displayed menu, choose Installation, followed by Upgrade.
In the Upgrade Host dialog window, click the button to start the upgrade.
If you start the upgrade process on hostc before the status of hostb changes back to Up, datacenter1 will change from the active state to the nonresponsive state.
There needs to be at least one active host for a data domain to be in the active state.
From the list of available RHV-H hosts, right-click the hostc.lab.example.com host. From the displayed menu, choose Installation followed by Check for Upgrade.
When the Upgrade Host dialog window opens, click to confirm the upgrade check.
Notice that after a while, a new Action Item comes up on the line with the RHV-H host. This new icon is a notification that an upgrade for this host is available.
Click the Compute >> Virtual Machines tab and verify that a virtual machine is running on the hostc host.
If a virtual machine is running, power that machine off using any of the available methods.
Click the Hosts tab.
Right-click the hostc.lab.example.com host. From the displayed menu, choose Installation, followed by Upgrade.
In the Upgrade Host dialog window, click the button to start the upgrade.
Wait and watch the upgrade procedure take place.