Orientation to the Classroom Environment

In this course, the main computer system used for hands-on learning activities (exercises) is the workstation machine.

The workstation machine has a standard user account, student, with student as the password. No exercise in this course requires that you log in as root, but if you must, the root password on the workstation machine is redhat.

From the workstation machine, you type the oc commands to manage the OpenShift cluster, which comes preinstalled as part of your classroom environment.

Also from the workstation machine, you run the required shell scripts and Ansible Playbooks to complete the exercises for this course.

If exercises require you to open a web browser to access any application or website, then you must use the graphical console of the workstation machine and open the Firefox web browser from there.

Note

The first time that you start your classroom environment, the OpenShift cluster takes a little longer to become fully available. The lab command at the beginning of each exercise checks and waits as required.

If you try to access your cluster by using either the oc command or the web console without first running a lab command, then your cluster might not yet be available. In that case, wait a few minutes and try again.

Log in to OpenShift from the Shell

To access your OpenShift cluster from the workstation machine, use https://api.ocp4.example.com:6443 as the API URL, for example:

[student@workstation ~]$ oc login -u admin -p redhatocp \
  https://api.ocp4.example.com:6443

Besides the admin user, which has cluster administrator privileges, your OpenShift cluster also provides a developer user, with developer as the password, with no special privileges.
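
For example, to log in as the developer user instead:

[student@workstation ~]$ oc login -u developer -p developer \
  https://api.ocp4.example.com:6443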

Accessing the OpenShift Web Console

If you prefer to use the OpenShift web console, then open a Firefox web browser on your workstation machine and access the following URL:

  • https://console-openshift-console.apps.ocp4.example.com

Click htpasswd_provider and provide the login credentials for either the admin or the developer user:

Table 1. User credentials

Username Password
admin redhatocp
developer developer

The Classroom Environment

Every student gets a complete remote classroom environment. As part of that environment, every student gets a dedicated OpenShift cluster for administration tasks.

The classroom environment runs entirely as virtual machines in a large Red Hat OpenStack Platform cluster, which is shared among many students. Red Hat Training maintains many OpenStack clusters, in different data centers across the globe, to provide lower latency to students from many countries.

Figure 0.1: DO316 classroom architecture

All machines in the Student, Classroom, Server, and Cluster networks run Red Hat Enterprise Linux 8 (RHEL 8), except those machines that are nodes of the OpenShift cluster, which run RHEL CoreOS.

The bastion, utility, ceph, server, and classroom systems must always be running. These systems provide required infrastructure services for the classroom environment and its OpenShift cluster. For most exercises, you are not expected to interact with any of these services directly.

Usually, the lab commands from the exercises access these machines when required to set up your environment, and they require no further action from you.

For the few exercises that require you to access a system other than workstation, primarily the utility system, you receive explicit instructions and the necessary connection information as part of the exercise.

All systems in the Student network are in the lab.example.com DNS domain, and all systems in the Classroom network are in the example.com DNS domain.

The masterXX and workerXX systems are nodes of the OpenShift 4 cluster that is part of your classroom environment.

All systems in the Cluster network are in the ocp4.example.com DNS domain.

All systems in the Server network are in the srv.example.com DNS domain.

Table 2. Classroom Machines

Machine name                  IP address      Role
workstation.lab.example.com   172.25.250.9    Graphical workstation for system administration
classroom.example.com         172.25.254.254  Router to link the Classroom network to the internet
bastion.lab.example.com       172.25.250.254  Router to link the Student network to the Classroom network
utility.lab.example.com       172.25.250.253  Router to link the Student, Classroom, and Server networks
server.srv.example.com        192.168.51.40   Server and router to link the Cluster network to a private DHCP network
ceph.ocp4.example.com         192.168.50.30   Server with a preinstalled Red Hat Ceph Storage cluster
master01.ocp4.example.com     192.168.50.10   Control plane node
master02.ocp4.example.com     192.168.50.11   Control plane node
master03.ocp4.example.com     192.168.50.12   Control plane node
worker01.ocp4.example.com     172.25.250.13   Compute node
worker02.ocp4.example.com     172.25.250.14   Compute node

The Dedicated OpenShift Cluster

The Red Hat OpenShift Container Platform 4 (RHOCP) cluster inside the classroom environment is preinstalled by using the Pre-existing Infrastructure installation method. All nodes are treated as bare metal servers, despite being virtual machines in an OpenStack cluster.

OpenShift cloud-provider integration capabilities are not enabled, and some features that depend on that integration, such as machine sets and autoscaling of cluster nodes, are not available.
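
If you want to confirm this, an optional check from the workstation machine is to list machine sets in the openshift-machine-api namespace; on this cluster, you would expect the command to report that no machine sets exist:

[student@workstation ~]$ oc get machinesets -n openshift-machine-api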

Your OpenShift cluster is in the state that results from running the OpenShift installer with default configurations, except for a few day-2 customizations:

  • The cluster provides a default storage class that is backed by a Network File System (NFS) storage provider. Thus, applications that require persistent storage volumes behave the same as on a cluster that is installed by using the Full-stack Automation installation method.

  • An HTPasswd Identity Provider (IdP) is preconfigured with the admin and developer users.
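
You can verify both customizations from the workstation machine after logging in as the admin user. This is only an optional check; based on the preceding description, you would expect an NFS-backed storage class and an identity provider named htpasswd_provider in the output:

[student@workstation ~]$ oc get storageclass
[student@workstation ~]$ oc get oauth cluster -o jsonpath='{.spec.identityProviders[*].name}'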

The Troubleshooting Access to your OpenShift Cluster section describes how to access the utility machine.

Restoring Access to Your OpenShift Cluster

If you suspect that you cannot log in to your OpenShift cluster as the admin user any more because you incorrectly changed your cluster authentication settings, then run the lab finish command from your current exercise and restart the exercise by running its lab start command.

If running a lab command is not sufficient, then you can follow instructions in the next section to use the utility machine to access your OpenShift cluster.

Troubleshooting Access to Your OpenShift Cluster

The utility machine ran the OpenShift installer inside your classroom environment, and it is a useful resource to troubleshoot cluster issues. You can view the installer manifests and logs in the /home/lab/ocp4 directory of the utility machine.

You rarely need to log in to the utility server to perform exercises. If your OpenShift cluster is taking too long to start, or is in a degraded state, then you can log in to the utility machine as the lab user to troubleshoot your classroom environment.

The student user on the workstation machine is already configured with SSH keys that enable logging in to the utility machine without a password.

[student@workstation ~]$ ssh lab@utility

On the utility machine, the lab user is preconfigured with a .kube/config file that grants access as system:admin without first requiring the oc login command.
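
For example, a quick way to confirm the identity in use after connecting to the utility machine is the oc whoami command, which should report system:admin:

[lab@utility ~]$ oc whoami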

You can then run troubleshooting commands, such as oc get node, from the utility machine when those commands fail from the workstation machine.

You should not require SSH access to your OpenShift cluster nodes for regular administration tasks, because OpenShift 4 provides the oc debug command. The lab user on the utility server is preconfigured with SSH keys to access all cluster nodes if necessary. For example:

[lab@utility ~]$ ssh -i ~/.ssh/lab_rsa core@master01.ocp4.example.com

In the preceding example, replace master01 with the name of the chosen cluster node.
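
For most node-level troubleshooting, a debug session is sufficient instead of SSH. The following is a sketch of the typical pattern, run as a cluster administrator; inside the debug shell, the chroot /host command gives access to the node's host environment. As with the SSH example, replace master01 with the name of the chosen cluster node.

[student@workstation ~]$ oc debug node/master01.ocp4.example.com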

Approving Node Certificates on Your OpenShift Cluster

Red Hat OpenShift Container Platform clusters are designed to run continuously, 24x7, until they are decommissioned. Unlike a production cluster, the cluster in the classroom environment was stopped after installation and is stopped and restarted several times before you finish this course. This scenario requires special handling.

The control plane and compute nodes in an OpenShift cluster often communicate with each other. All communication between cluster nodes is protected by mutual authentication based on per-node TLS certificates.

The OpenShift installer handles creating and approving TLS certificate signing requests (CSRs) for the Full-stack Automation installation method. The system administrator manually approves these CSRs for the Pre-existing Infrastructure installation method.

All per-node TLS certificates have a short lifetime: 24 hours for the initial certificates, and 30 days after renewal. When the certificates are about to expire, the affected cluster nodes create new CSRs, and the control plane automatically approves them. If the control plane is offline when the TLS certificate of a node expires, then a cluster administrator must approve the pending CSR.
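
For reference, a cluster administrator can approve pending CSRs manually: list them, and then approve each one by name. In the following sketch, csr-abc12 is a placeholder for a pending request name shown by oc get csr; the sign.sh script described below automates this step:

[lab@utility ~]$ oc get csr
[lab@utility ~]$ oc adm certificate approve csr-abc12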

The utility machine includes a system service that approves CSRs from the cluster when you start your classroom, to ensure that your cluster is ready when you begin the exercises. If you create or start your classroom and begin an exercise too quickly, then your cluster might not be ready. If so, wait a few minutes while the utility machine handles CSRs, and then try again.

Sometimes, the utility machine fails to approve all required CSRs, for example because the cluster took too long to generate all the required CSRs and the system service did not wait long enough. Also, the OpenShift cluster nodes might not have waited long enough for their CSRs to be approved, and might have issued new CSRs that supersede the previous ones.

If these issues arise, then your cluster takes too long to come up, and your oc login or lab commands fail. To resolve the problem, you can log in to the utility machine, as explained previously, and run the sign.sh script to approve any additional and pending CSRs.

[lab@utility ~]$ ./sign.sh

The sign.sh script loops a few times if your cluster nodes issue new CSRs that supersede the ones that were approved.

After all CSRs are approved, either by you or by the system service on the utility machine, OpenShift must restart some cluster operators. It takes some time before your OpenShift cluster is ready to answer requests from clients. To help you handle this scenario, the utility machine provides the wait.sh script, which waits until your OpenShift cluster is ready to accept authentication and API requests from remote clients.

[lab@utility ~]$ ./wait.sh

In the unlikely case that neither the system service on the utility machine nor the sign.sh and wait.sh scripts make your OpenShift cluster available to begin exercises, open a customer support ticket.

Note

You can run troubleshooting commands from the utility machine at any time, even if you have control plane nodes that are not ready. Some useful commands are as follows:

oc get node

Verify whether all your cluster nodes are ready.

oc get csr

Verify whether your cluster still has any pending, unapproved CSRs.

oc get co

Verify whether any of your cluster operators are unavailable, in a degraded state, or progressing through configuration and rolling out pods.

If these verifications fail, then you can try deleting and re-creating your classroom as a final step before creating a customer support ticket.

Controlling Your Systems

You are assigned remote computers in a Red Hat Online Learning (ROLE) classroom. Self-paced courses are accessed through a web application on the Red Hat Online Learning site. Log in to this site with your Red Hat Customer Portal user credentials.

Controlling the Virtual Machines

The virtual machines in your classroom environment are controlled through the web page interface. The state of each classroom virtual machine is displayed on the Lab Environment tab.

Figure 0.2: An example course Lab Environment management page

Table 3. Machine States

Virtual machine state  Description
building               The virtual machine is being created.
active                 The virtual machine is running and available. If it just started, it still might be starting services.
stopped                The virtual machine is shut down. On starting, the virtual machine boots into the same state it was in before shutdown. The disk state is preserved.

Table 4. Classroom Actions

Button or action Description
CREATE Create the ROLE classroom. Creates and starts all the virtual machines that are needed for this classroom.
CREATING The ROLE classroom virtual machines are being created. Creation can take several minutes to complete.
DELETE Delete the ROLE classroom. Destroys all virtual machines in the classroom. All saved work on those systems' disks is lost.
START Start all virtual machines in the classroom.
STARTING All virtual machines in the classroom are starting.
STOP Stop all virtual machines in the classroom.

Table 5. Machine Actions

Button or action    Description
OPEN CONSOLE        Connect to the system console of the virtual machine in a new browser tab. You can log in directly to the virtual machine and run commands, when required. Normally, log in to the workstation virtual machine only, and from there, use ssh to connect to the other virtual machines.
ACTION → Start      Start (power on) the virtual machine.
ACTION → Shutdown   Gracefully shut down the virtual machine, preserving disk contents.
ACTION → Power Off  Forcefully shut down the virtual machine, while still preserving disk contents. This action is equivalent to removing the power from a physical machine.
ACTION → Reset      Forcefully shut down the virtual machine and reset associated storage to its initial state. All saved work on that system's disks is lost.

At the start of an exercise, if instructed to reset a single virtual machine node, click ACTION → Reset for only that specific virtual machine.

At the start of an exercise, if instructed to reset all virtual machines, click ACTION → Reset on every virtual machine in the list.

If you want to return the classroom environment to its original state at the start of the course, then click DELETE to remove the entire classroom environment. After the lab is deleted, then click CREATE to provision a new set of classroom systems.

Warning

The DELETE operation cannot be undone. All completed work in the classroom environment is lost.

The Auto-stop and Auto-destroy Timers

The Red Hat Online Learning enrollment entitles you to a set allotment of computer time. To help to conserve your allotted time, the ROLE classroom uses timers, which shut down or delete the classroom environment when the appropriate timer expires.

To adjust the timers, locate the two + buttons at the bottom of the course management page. Click the auto-stop + button to add another hour to the auto-stop timer. Click the auto-destroy + button to add another day to the auto-destroy timer. Auto-stop has a maximum of 11 hours, and auto-destroy has a maximum of 14 days. Be careful to keep the timers set while you are working, so that your environment is not unexpectedly shut down. Be careful not to set the timers unnecessarily high, which could waste your subscription time allotment.

Revision: do316-4.14-d8a6b80