In this course, the main computer system that you use for hands-on exercises is the workstation machine.
The workstation machine has a standard user account, student, with student as the password.
Although no exercise in this course requires you to log in as root, if you must, then the root password on the workstation machine is redhat.
From the workstation machine, you run oc commands to manage the OpenShift cluster, which comes preinstalled as part of your classroom environment.
Also from the workstation machine, you run the commands that are required to complete the exercises for this course.
If exercises require you to open a web browser to access any application or website, then use the graphical console of the workstation machine and open the Firefox web browser from there.
During the initial startup of your classroom environment, the OpenShift cluster takes extra time to become fully available. The lab command at the beginning of each exercise checks the cluster and waits as required.
If you try to access your cluster by using either the oc command or the web console without first running a lab command, then your cluster might not yet be available.
If the cluster is not available, then wait a few minutes and try again.
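As a quick optional check, assuming that the curl command is available on the workstation machine, you can probe the health endpoint of the classroom API server, which returns ok when the API server is ready. The -k option skips TLS verification, because the workstation might not trust the cluster certificate authority:

[student@workstation ~]$ curl -k https://api.ocp4.example.com:6443/healthz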
To access your OpenShift cluster from the workstation machine, use https://api.ocp4.example.com:6443 as the API URL, for example:
[student@workstation ~]$ oc login -u admin -p redhatocp \
https://api.ocp4.example.com:6443
Besides the admin user, who has cluster administrator privileges, your OpenShift cluster also provides a developer user, with developer as the password, with no special privileges.
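For example, to log in as the unprivileged developer user:

[student@workstation ~]$ oc login -u developer -p developer \
https://api.ocp4.example.com:6443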
If you prefer to use the OpenShift web console, then open a Firefox web browser on your workstation machine and access the following URL:
https://console-openshift-console.apps.ocp4.example.com
Log in with the credentials of either the admin or the developer user.
Every student gets a complete remote classroom environment, which includes a dedicated OpenShift cluster for administration tasks.
This environment contains all the necessary resources for the course. Because the course does not access resources outside the classroom environment, any failures of public resources, such as Git repositories or container image registries, should not affect the course.
The classroom environment runs entirely as virtual machines in a large Red Hat OpenStack Platform cluster, which is shared among many students.
Red Hat Training maintains many OpenStack clusters, in data centers across the globe, to provide lower latency to students from many countries.
All machines on the Student, Classroom, and Cluster networks run Red Hat Enterprise Linux 9 (RHEL 9), except those machines that are nodes of the OpenShift cluster. These cluster nodes run RHEL CoreOS.
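For example, after you log in to the cluster as described previously, the wide output of the oc get nodes command includes an OS-IMAGE column, which shows Red Hat Enterprise Linux CoreOS for every cluster node:

[student@workstation ~]$ oc get nodes -o wide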
The bastion, utility, idm, registry, and classroom systems must always be running.
These systems provide infrastructure services that the classroom environment and its OpenShift cluster require.
For most exercises, you are not expected to interact with any of these services directly.
Usually, the lab commands from the exercises access these machines to set up your environment for the exercise, and require no further action from you.
For the few exercises where you must access a system other than workstation, primarily the utility system, you receive explicit instructions and connection information as part of the exercise.
All systems in the Student network are in the lab.example.com DNS domain, and all systems in the Classroom network are in the example.com DNS domain.
The master and worker systems are nodes of the OpenShift 4 cluster that is part of your classroom environment.
All systems in the Cluster network are in the ocp4.example.com DNS domain.
Table 1. Classroom Machines
| Machine name | IP addresses | Role |
|---|---|---|
| workstation.lab.example.com | 172.25.250.9 | Graphical workstation for system administration |
| classroom.example.com | 172.25.254.254 | Router to link the Classroom network to the internet |
| bastion.lab.example.com | 172.25.250.254 | Router to link the Student network to the Classroom network |
| utility.lab.example.com | 172.25.250.253 | Router to link the Student and Cluster networks |
| ceph.ocp4.example.com | 192.168.50.30 | Server with a preinstalled Red Hat Ceph Storage cluster |
| idm.ocp4.example.com | 192.168.50.40 | Red Hat Identity Management system |
| registry.ocp4.example.com | 192.168.50.50 | Server with Quay and GitLab |
| sso.ocp4.example.com | 192.168.50.60 | Red Hat Single Sign-On |
| rhds.ocp4.example.com | 192.168.50.70 | Red Hat Directory Server system |
| master01.ocp4.example.com | 192.168.50.10 | Control plane node |
| master02.ocp4.example.com | 192.168.50.11 | Control plane node |
| master03.ocp4.example.com | 192.168.50.12 | Control plane node |
| worker01.ocp4.example.com | 192.168.50.13 | Compute node |
| worker02.ocp4.example.com | 192.168.50.14 | Compute node |
| worker03.ocp4.example.com | 192.168.50.15 | Compute node |
The Red Hat OpenShift Container Platform 4 cluster inside the classroom environment is preinstalled by using the Pre-existing Infrastructure installation method. All nodes are treated as bare metal servers, even though they are virtual machines in an OpenStack cluster.
OpenShift cloud provider integration capabilities are not enabled, and some features that depend on that integration, such as machine sets and autoscaling of cluster nodes, are not available.
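For example, because the cluster has no cloud provider integration, listing machine sets is expected to report that no resources are found:

[student@workstation ~]$ oc get machinesets -n openshift-machine-api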
Your OpenShift cluster is in the state that results from running the OpenShift installer with default configurations, except for the following day-2 customizations:

- The cluster provides a default storage class that is backed by a Network File System (NFS) storage provider.
- The cluster also provides storage classes that are backed by Ceph (see the example after this list).
- The cluster uses an LDAP identity provider that is configured to use the Red Hat Identity Management server that runs on the idm.ocp4.example.com system.
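For example, after you log in as the admin user, you can list these storage classes from the workstation machine:

[student@workstation ~]$ oc get storageclass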
The Troubleshooting Access to your OpenShift Cluster section provides information about how to access the utility machine.
If you suspect that you can no longer log in to your OpenShift cluster as the admin user because you incorrectly changed the cluster authentication settings, then run the lab finish command for your current exercise, and restart the exercise by running its lab start command.
If running a lab command does not resolve the issue, then you can follow the instructions in the next section to use the utility machine to access your OpenShift cluster.
The utility machine ran the OpenShift installer inside your classroom environment, and it is a useful resource to troubleshoot cluster issues.
You can view the installer manifests and logs in the /home/lab/ocp4 directory of the utility machine.
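For example, after you log in to the utility machine as described next, you can list the installer artifacts:

[lab@utility ~]$ ls -a /home/lab/ocp4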
Logging in to the utility server is rarely required to perform exercises.
If your OpenShift cluster is taking too long to start, or is in a degraded state, then you can log in to the utility machine as the lab user to troubleshoot your classroom environment.
The student user on the workstation machine is already configured with SSH keys that enable logging in to the utility machine without a password.
[student@workstation ~]$ ssh lab@utility

On the utility machine, the lab user is preconfigured with a .kube/config file that grants access as the system:admin user without first requiring the oc login command.
You can then run troubleshooting commands from the utility machine, such as the oc get nodes command, when they fail from the workstation machine.
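For example, the following command, run from the utility machine, lists the cluster nodes without requiring an oc login command first:

[lab@utility ~]$ oc get nodes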
You should not require SSH access to your OpenShift cluster nodes for regular administration tasks, because OpenShift 4 provides the oc debug command.
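For example, assuming that you are logged in as a cluster administrator, the following command starts a debug pod on one of the control plane nodes; inside the debug shell, the chroot /host command gives access to the node's own executables:

[student@workstation ~]$ oc debug node/master01.ocp4.example.com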
The lab user on the utility server is preconfigured with SSH keys to access all cluster nodes if necessary.
For example:
[lab@utility ~]$ ssh -i ~/.ssh/lab_rsa core@master01.ocp4.example.com

In the preceding example, replace master01 with the name of the intended cluster node.
Red Hat OpenShift Container Platform clusters are designed to run continuously, 24x7, until they are decommissioned. Unlike a production cluster, the cluster in the classroom environment was stopped after installation, and it stops and restarts several times before you finish this course. This scenario requires special handling that a production cluster does not need.
The control plane and compute nodes in an OpenShift cluster communicate with each other. All communication between cluster nodes is protected by mutual authentication based on per-node TLS certificates.
The OpenShift installer handles creating and approving TLS certificate signing requests (CSRs) for the full stack automation installation method. The system administrator manually approves these CSRs for the Pre-existing Infrastructure installation method.
All per-node TLS certificates have a short expiration: 24 hours initially, and 30 days after each renewal. When its certificates are about to expire, a cluster node creates new CSRs, and the control plane automatically approves them. If the control plane is offline when the TLS certificates of a node expire, then a cluster administrator must approve the pending CSRs manually.
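For example, from the utility machine, or from any session with cluster administrator privileges, you can list CSRs and approve a pending one by name; replace <csr_name> with the name of an actual pending request:

[lab@utility ~]$ oc get csr
[lab@utility ~]$ oc adm certificate approve <csr_name>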
The utility machine includes a system service that approves CSRs from the cluster when you start your classroom, to ensure that your cluster is ready when you begin the exercises.
If you create or start your classroom and begin an exercise too quickly, then your cluster might not be ready.
In this case, wait a few minutes while the utility machine handles CSRs, and then try again.
Sometimes, the utility machine fails to approve all the required CSRs, for example because the cluster took too long to generate them and the system service did not wait long enough.
Also, some OpenShift cluster nodes might not have waited long enough for approval of their CSRs, and might issue new CSRs that supersede the previous ones.
In these cases, your cluster takes too long to come up, and your oc login or lab commands fail.
To resolve the problem, you can log in to the utility machine, as explained previously, and run the sign.sh script to approve any remaining pending CSRs.
[lab@utility ~]$ ./sign.sh

The sign.sh script loops a few times, in case your cluster nodes issue new CSRs that supersede previous certificates.
After you, or the system service on the utility machine, approve all CSRs, OpenShift must restart some cluster operators.
It takes a few moments before your OpenShift cluster is ready to answer requests from clients.
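For example, you can monitor progress by listing the cluster operators, either from the utility machine or from the workstation machine after you log in as the admin user; the cluster is ready when no operator reports itself as progressing or degraded:

[lab@utility ~]$ oc get clusteroperators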
To help you handle this scenario, the utility machine provides the wait.sh script, which waits until your OpenShift cluster is ready to accept authentication and API requests from remote clients.
[lab@utility ~]$ ./wait.sh

In the unlikely event that neither the service on the utility machine nor running the sign.sh and wait.sh scripts makes your OpenShift cluster available to begin exercises, you can run troubleshooting commands from the utility machine, re-create your classroom, or open a customer support ticket.
You are assigned remote computers in a Red Hat Online Learning (ROLE) classroom. Self-paced courses are accessed through a web application that is hosted on the Red Hat Online Learning site. Log in to this site with your Red Hat Customer Portal user credentials.
The virtual machines in your classroom environment are controlled through web page interface controls. The state of each classroom virtual machine is displayed on the course management page.
Table 2. Machine States
| Virtual machine state | Description |
|---|---|
| building | The virtual machine is being created. |
| active | The virtual machine is running and available. If it just started, it still might be starting services. |
| stopped | The virtual machine is shut down. On starting, the virtual machine boots into the same state that it was in before shutdown. The disk state is preserved. |
Table 3. Classroom Actions
| Button or action | Description |
|---|---|
| CREATE | Create the ROLE classroom. Creates and starts all the virtual machines that are needed for this classroom. |
| CREATING | The ROLE classroom virtual machines are being created. Creation can take several minutes to complete. |
| DELETE | Delete the ROLE classroom. Destroys all virtual machines in the classroom. All saved work on those systems' disks is lost. |
| START | Start all virtual machines in the classroom. |
| STARTING | All virtual machines in the classroom are starting. |
| STOP | Stop all virtual machines in the classroom. |
Table 4. Machine Actions
| Button or action | Description |
|---|---|
| OPEN CONSOLE | Connect to the system console of the virtual machine in a new browser tab. You can log in directly to the virtual machine and run commands, when required. Normally, log in to the workstation virtual machine only, and from there, use ssh to connect to the other virtual machines. |
| ACTION → Start | Start (power on) the virtual machine. |
| ACTION → Shutdown | Gracefully shut down the virtual machine, preserving disk contents. |
| ACTION → Power Off | Forcefully shut down the virtual machine, while still preserving disk contents. This action is equivalent to removing the power from a physical machine. |
| ACTION → Reset | Forcefully shut down the virtual machine and reset associated storage to its initial state. All saved work on that system's disks is lost. |
At the start of an exercise, if instructed to reset a single virtual machine node, click ACTION → Reset for only that specific virtual machine.
At the start of an exercise, if instructed to reset all virtual machines, click ACTION → Reset on every virtual machine in the list.
If you want to return the classroom environment to its original state at the start of the course, then click DELETE to remove the entire classroom environment. After the lab is deleted, click CREATE to provision a new set of classroom systems.
The DELETE operation cannot be undone. All completed work in the classroom environment is lost.
The Red Hat Online Learning enrollment entitles you to a set allotment of computer time. To help conserve your allotted time, the ROLE classroom uses timers, which shut down or delete the classroom environment when the appropriate timer expires.
To adjust the timers, locate the two buttons at the bottom of the course management page. Click the auto-stop button to add another hour to the auto-stop timer. Click the auto-destroy button to add another day to the auto-destroy timer. Auto-stop has a maximum of 11 hours, and auto-destroy has a maximum of 14 days. Be careful to keep the timers set while you are working, so that your environment is not unexpectedly shut down. Be careful not to set the timers unnecessarily high, which could waste your subscription time allotment.