Bookmark this page

Orientation to the Classroom Environment

In this course, the main computer system that is used for hands-on learning activities (exercises) is the workstation machine.

The workstation machine has a standard user account, student with student as the password. Although no exercise in this course requires you to log in as root, if you must, then the root password on the workstation machine is redhat.

From the workstation machine, you run the oc commands to manage the OpenShift cluster, which comes preinstalled as part of your classroom environment.

Also from the workstation machine, you run the commands that are required to complete the exercises for this course.

If exercises require you to open a web browser to access any application or website, then you must use the graphical console of the workstation machine and use the Firefox web browser from there.

Note

During the initial start of your classroom environment, the OpenShift cluster takes some time to become fully available. The lab command at the beginning of each exercise checks for cluster availability and waits as required.

If you try to access your cluster by using either the oc command or the web console without first running a lab command, then your cluster might not yet be available. If the cluster is not available, then wait a few minutes and try again.

Log in to OpenShift from the Shell

To access your OpenShift cluster from the workstation machine, use https://api.ocp4.example.com:6443 as the API URL, for example:

[student@workstation ~]$ oc login -u admin -p redhatocp \
  https://api.ocp4.example.com:6443

Besides the admin user, who has cluster administrator privileges, your OpenShift cluster also provides a developer user, with developer as the password, with no special privileges.
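For example, to log in as the developer user with the credentials and API URL from this orientation:

[student@workstation ~]$ oc login -u developer -p developer \
  https://api.ocp4.example.com:6443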

Accessing the OpenShift Web Console

If you prefer to use the OpenShift web console, then open a Firefox web browser on your workstation machine and access the following URL:

https://console-openshift-console.apps.ocp4.example.com

Click Red Hat Identity Management and provide the login credentials for either the admin or the developer user.

The Classroom Environment

Every student gets a complete remote classroom environment. As part of that environment, every student gets a dedicated OpenShift cluster for administration tasks.

This environment contains all the necessary resources for the course. Because the course does not access resources outside the classroom environment, any failures of public resources, such as Git repositories or container image registries, should not affect the course.

The classroom environment runs entirely as virtual machines in a large Red Hat OpenStack Platform cluster, which is shared among many students.

Red Hat Training maintains many OpenStack clusters, in data centers across the globe, to provide lower latency to students from many countries.

Figure 0.1: DO380 classroom architecture

All machines on the Student, Classroom, and Cluster networks run Red Hat Enterprise Linux 9 (RHEL 9), except those machines that are nodes of the OpenShift cluster. These cluster nodes run RHEL CoreOS.

The bastion, utility, idm, registry, and classroom systems must always be running. These systems provide infrastructure services that the classroom environment and its OpenShift cluster require. For most exercises, you are not expected to interact with any of these services directly.

Usually, the lab commands from the exercises access these machines to set up your environment for the exercise, and require no further action from you.

For the few exercises where you must access a system other than workstation, primarily the utility system, you receive explicit instructions and connection information as part of the exercise.

All systems in the Student network are in the lab.example.com DNS domain, and all systems in the Classroom network are in the example.com DNS domain.

The masterXX and workerXX systems are nodes of the OpenShift 4 cluster that is part of your classroom environment.

All systems in the Cluster network are in the ocp4.example.com DNS domain.

Table 1. Classroom Machines

Machine name                   IP address       Role
workstation.lab.example.com    172.25.250.9     Graphical workstation for system administration
classroom.example.com          172.25.254.254   Router to link the Classroom network to the internet
bastion.lab.example.com        172.25.250.254   Router to link the Student network to the Classroom network
utility.lab.example.com        172.25.250.253   Router to link the Student and Cluster networks
ceph.ocp4.example.com          192.168.50.30    Server with a preinstalled Red Hat Ceph Storage cluster
idm.ocp4.example.com           192.168.50.40    Red Hat Identity Management system
registry.ocp4.example.com      192.168.50.50    Server with Quay and GitLab
sso.ocp4.example.com           192.168.50.60    Red Hat Single Sign-On
rhds.ocp4.example.com          192.168.50.70    Red Hat Directory Server system
master01.ocp4.example.com      192.168.50.10    Control plane node
master02.ocp4.example.com      192.168.50.11    Control plane node
master03.ocp4.example.com      192.168.50.12    Control plane node
worker01.ocp4.example.com      192.168.50.13    Compute node
worker02.ocp4.example.com      192.168.50.14    Compute node
worker03.ocp4.example.com      192.168.50.15    Compute node
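You can verify name resolution for these systems from the workstation machine. For example, the getent command should resolve a cluster node to the IP address in the preceding table:

[student@workstation ~]$ getent hosts master01.ocp4.example.com
192.168.50.10   master01.ocp4.example.com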

The Dedicated OpenShift Cluster

The Red Hat OpenShift Container Platform 4 cluster inside the classroom environment is preinstalled by using the Pre-existing Infrastructure installation method. All nodes are treated as bare metal servers, even though they are virtual machines in an OpenStack cluster.

OpenShift cloud provider integration capabilities are not enabled, and some features that depend on that integration, such as machine sets and autoscaling of cluster nodes, are not available.

Your OpenShift cluster is in the state that results from running the OpenShift installer with default configurations, apart from some day-2 customizations:

  • The cluster provides a default storage class that is backed by a Network File System (NFS) storage provider.

  • The cluster also provides storage classes that are backed by Ceph.

  • The cluster uses an LDAP identity provider that is configured to use Red Hat Identity Management that runs on the idm.ocp4.example.com system.
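You can inspect these customizations from the workstation machine after logging in as the admin user. For example, the following commands list the configured storage classes and the names of the cluster identity providers (the exact names in the output depend on your classroom build):

[student@workstation ~]$ oc get storageclass
[student@workstation ~]$ oc get oauth cluster \
  -o jsonpath='{.spec.identityProviders[*].name}'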

The Troubleshooting Access to Your OpenShift Cluster section provides information about how to access the utility machine.

Restoring Access to your OpenShift Cluster

If you suspect that you cannot log in to your OpenShift cluster as the admin user any more because you incorrectly changed your cluster authentication settings, then run the lab finish command from your current exercise and restart the exercise by running its lab start command.

If running a lab command does not resolve the issue, then you can follow the instructions in the next section to use the utility machine to access your OpenShift cluster.

Troubleshooting Access to Your OpenShift Cluster

The utility machine ran the OpenShift installer inside your classroom environment, and it is a useful resource to troubleshoot cluster issues. You can view the installer manifests and logs in the /home/lab/ocp4 directory of the utility machine.

Logging in to the utility server is rarely required to perform exercises. If your OpenShift cluster is taking too long to start, or is in a degraded state, then you can log in to the utility machine as the lab user to troubleshoot your classroom environment.

The student user on the workstation machine is already configured with SSH keys that enable logging in to the utility machine without a password.

[student@workstation ~]$ ssh lab@utility

In the utility machine, the lab user is preconfigured with a .kube/config file that grants access as system:admin without first requiring the oc login command.

You can then run troubleshooting commands, such as the oc get node command, if they fail from the workstation machine.
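For example, after logging in to the utility machine:

[lab@utility ~]$ oc get nodes
[lab@utility ~]$ oc get clusteroperators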

You should not require SSH access to your OpenShift cluster nodes for regular administration tasks, because OpenShift 4 provides the oc debug command. The lab user on the utility server is preconfigured with SSH keys to access all cluster nodes if necessary. For example:

[lab@utility ~]$ ssh -i ~/.ssh/lab_rsa core@master01.ocp4.example.com

In the preceding example, replace master01 with the name of the intended cluster node.
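For routine node-level tasks, the oc debug command usually replaces SSH. For example, as a cluster administrator, you can run a command on a node through a debug pod (again, replace master01 with the intended node):

[student@workstation ~]$ oc debug node/master01 -- chroot /host uptime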

Approving Node Certificates on Your OpenShift Cluster

Red Hat OpenShift Container Platform clusters are designed to run continuously, 24x7, until they are decommissioned. Unlike a production cluster, the cluster in the classroom environment was stopped after installation, and it stops and restarts several times before you finish this course. This scenario requires special handling that a production cluster does not.

The control plane and compute nodes in an OpenShift cluster communicate with each other. All communication between cluster nodes is protected by mutual authentication based on per-node TLS certificates.

The OpenShift installer handles creating and approving TLS certificate signing requests (CSRs) for the full stack automation installation method. The system administrator manually approves these CSRs for the Pre-existing Infrastructure installation method.

All per-node TLS certificates have a short lifetime of 24 hours (initially) or 30 days (after renewal). When their certificates are about to expire, cluster nodes create CSRs, and the control plane automatically approves them. If the control plane is offline when the TLS certificate of a node expires, then a cluster administrator must approve the pending CSR.
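As a cluster administrator, you can list CSRs and approve a pending request with standard oc commands, for example:

[lab@utility ~]$ oc get csr
[lab@utility ~]$ oc adm certificate approve <csr_name>

In the preceding example, replace <csr_name> with the name of a CSR in the Pending state.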

The utility machine includes a system service that approves CSRs from the cluster when you start your classroom, to ensure that your cluster is ready when you begin the exercises. If you create or start your classroom and begin an exercise too quickly, then your cluster might not be ready. In this case, wait a few minutes while the utility machine handles CSRs, and then try again.

Sometimes, the utility machine fails to approve all the required CSRs, for example because the cluster took too long to generate them and the system service did not wait long enough. Also, some OpenShift cluster nodes might not wait long enough for approval of their CSRs, and might issue new CSRs that supersede the previous ones.

In these cases, your cluster takes too long to come up, and your oc login or lab commands fail. To resolve the problem, you can log in to the utility machine, as explained previously, and run the sign.sh script to approve any additional and pending CSRs.

[lab@utility ~]$ ./sign.sh

The sign.sh script loops a few times, in case your cluster nodes issue new CSRs that supersede previous certificates.

After you or the system service on the utility machine approve all CSRs, OpenShift must restart some cluster operators, so it takes a few moments before your OpenShift cluster is ready to answer requests from clients. To help you handle this scenario, the utility machine provides the wait.sh script, which waits until your OpenShift cluster is ready to accept authentication and API requests from remote clients.

[lab@utility ~]$ ./wait.sh

In the unlikely event that neither the service on the utility machine nor the sign.sh and wait.sh scripts make your OpenShift cluster available to begin exercises, you can run troubleshooting commands from the utility machine, re-create your classroom, or open a customer support ticket.

Controlling Your Systems

You are assigned remote computers in a Red Hat Online Learning (ROLE) classroom. Self-paced courses are accessed through the ROLE web application. Log in to this site with your Red Hat Customer Portal user credentials.

Controlling the Virtual Machines

The virtual machines in your classroom environment are controlled through web page interface controls. The state of each classroom virtual machine is displayed on the Lab Environment tab.

Figure 0.2: An example course Lab Environment management page

Table 2. Machine States

Virtual machine state   Description
building                The virtual machine is being created.
active                  The virtual machine is running and available. If it just started, it still might be starting services.
stopped                 The virtual machine is shut down. On starting, the virtual machine boots into the same state that it was in before shutdown. The disk state is preserved.

Table 3. Classroom Actions

Button or action   Description
CREATE             Create the ROLE classroom. Creates and starts all the virtual machines that are needed for this classroom.
CREATING           The ROLE classroom virtual machines are being created. Creation can take several minutes to complete.
DELETE             Delete the ROLE classroom. Destroys all virtual machines in the classroom. All saved work on those systems' disks is lost.
START              Start all virtual machines in the classroom.
STARTING           All virtual machines in the classroom are starting.
STOP               Stop all virtual machines in the classroom.

Table 4. Machine Actions

Button or action     Description
OPEN CONSOLE         Connect to the system console of the virtual machine in a new browser tab. You can log in directly to the virtual machine and run commands, when required. Normally, log in to the workstation virtual machine only, and from there, use ssh to connect to the other virtual machines.
ACTION → Start       Start (power on) the virtual machine.
ACTION → Shutdown    Gracefully shut down the virtual machine, preserving disk contents.
ACTION → Power Off   Forcefully shut down the virtual machine, while still preserving disk contents. This action is equivalent to removing the power from a physical machine.
ACTION → Reset       Forcefully shut down the virtual machine and reset associated storage to its initial state. All saved work on that system's disks is lost.

At the start of an exercise, if instructed to reset a single virtual machine node, click ACTION → Reset for only that specific virtual machine.

At the start of an exercise, if instructed to reset all virtual machines, click ACTION → Reset on every virtual machine in the list.

If you want to return the classroom environment to its original state at the start of the course, then click DELETE to remove the entire classroom environment. After the lab is deleted, then click CREATE to provision a new set of classroom systems.

Warning

The DELETE operation cannot be undone. All completed work in the classroom environment is lost.

The Auto-stop and Auto-destroy Timers

Your Red Hat Online Learning enrollment entitles you to a set allotment of computer time. To help conserve your allotted time, the ROLE classroom uses timers that shut down or delete the classroom environment when the appropriate timer expires.

To adjust the timers, locate the two + buttons at the bottom of the course management page. Click the auto-stop + button to add another hour to the auto-stop timer. Click the auto-destroy + button to add another day to the auto-destroy timer. Auto-stop has a maximum of 11 hours, and auto-destroy has a maximum of 14 days. Be careful to keep the timers set while you are working, so that your environment is not unexpectedly shut down. Be careful not to set the timers unnecessarily high, which could waste your subscription time allotment.

Revision: do380-4.14-397a507