Deploy a replicated web server by using a deployment, and verify that all web server pods share a PV. Then, deploy a replicated MySQL database by using a stateful set, and verify that each database instance gets a dedicated PV.
Outcomes
In this exercise, you deploy a web server with a shared persistent volume between the replicas, and a database server from a stateful set with dedicated persistent volumes for each instance.
- Deploy a web server with persistent storage.
- Add data to the persistent storage.
- Scale the web server deployment and observe that the data is shared with the replicas.
- Create a database server with a stateful set by using a YAML manifest file.
- Verify that each instance from the stateful set has a persistent volume claim.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
This command ensures that all resources are available for this exercise.
[student@workstation ~]$ lab start storage-statefulsets
Instructions
Create a web server deployment named web-server.
Use the registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:latest container image.
Log in to the OpenShift cluster as the developer user with the developer password.
[student@workstation ~]$ oc login -u developer -p developer \
https://api.ocp4.example.com:6443
...output omitted...
Change to the storage-statefulsets project.
[student@workstation ~]$ oc project storage-statefulsets
Now using project "storage-statefulsets" on server ...output omitted...
Create the web-server deployment.
[student@workstation ~]$ oc create deployment web-server \
--image registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:latest
deployment.apps/web-server created
Verify the deployment status.
[student@workstation ~]$ oc get pods -l app=web-server
NAME READY STATUS RESTARTS AGE
web-server-7d7cb4cdc7-t7hx8 1/1 Running 0 4s
Add the web-pv persistent volume to the web-server deployment.
Use the default storage class and the following information to create the persistent volume:
| Field | Value |
|---|---|
| Name | web-pv |
| Type | persistentVolumeClaim |
| Claim mode | rwo |
| Claim size | 5Gi |
| Mount path | /usr/share/nginx/html |
| Claim name | web-pv-claim |
Add the web-pv persistent volume to the web-server deployment.
[student@workstation ~]$ oc set volumes deployment/web-server \
--add --name web-pv --type persistentVolumeClaim --claim-mode rwo \
--claim-size 5Gi --mount-path /usr/share/nginx/html --claim-name web-pv-claim
deployment.apps/web-server volume updated
Because a storage class was not specified with the --claim-class option, the command uses the default storage class to create the persistent volume.
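The command is roughly equivalent to adding the following volume definition to the deployment's pod template. This is a minimal sketch: the container name hello-world-nginx is an assumption based on the image name, and the PVC itself is created separately from the claim options.
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: hello-world-nginx
        # Mount the claim-backed volume at the web content directory.
        volumeMounts:
        - name: web-pv
          mountPath: /usr/share/nginx/html
      volumes:
      # The volume references the PVC by name; the PVC requests
      # 5Gi of storage with ReadWriteOnce (rwo) access.
      - name: web-pv
        persistentVolumeClaim:
          claimName: web-pv-claim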
Verify the deployment status. Notice that a new pod is created.
[student@workstation ~]$ oc get pods -l app=web-server
NAME READY STATUS RESTARTS AGE
web-server-64689877c6-mdr6f 1/1 Running 0 5s
Verify the persistent volume status.
[student@workstation ~]$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
web-pv-claim Bound pvc-42...63ab 5Gi RWO nfs-storage 29s
The default storage class, nfs-storage, provisioned the persistent volume.
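To confirm which storage class is the default on the cluster, you can list the storage classes; the default class is flagged with (default) next to its name. The columns shown here are only a sketch, because the exact output varies by cluster:
[student@workstation ~]$ oc get storageclass
NAME                    PROVISIONER           ...
nfs-storage (default)   ...output omitted...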
Add data to the PV by using the exec command.
List pods to retrieve the web-server pod name.
[student@workstation ~]$ oc get pods
NAME READY STATUS RESTARTS AGE
web-server-64689877c6-mdr6f 1/1 Running 0 17m
The pod name might differ in your output.
Use the exec command to add the pod name that you retrieved from the previous step to the /usr/share/nginx/html/index.html file on the pod.
Then, retrieve the contents of the /usr/share/nginx/html/index.html file to confirm that the pod name is in the file.
[student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mdr6f \
-- /bin/bash -c \
'echo "Hello, World from ${HOSTNAME}" > /usr/share/nginx/html/index.html'[student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mdr6f \
-- cat /usr/share/nginx/html/index.html
Hello, World from web-server-64689877c6-mdr6f
Scale the web-server deployment to two replicas and confirm that an additional pod is created.
Scale the web-server deployment to two replicas.
[student@workstation ~]$ oc scale deployment web-server --replicas 2
deployment.apps/web-server scaled
Verify the replica status and retrieve the pod names.
[student@workstation ~]$ oc get pods
NAME READY STATUS RESTARTS AGE
web-server-64689877c6-mbj6g 1/1 Running 0 2s
web-server-64689877c6-mdr6f 1/1 Running 0 17m
The pod names might differ in your output.
Retrieve the content of the /usr/share/nginx/html/index.html file on the web-server pods by using the oc exec command to verify that the file is the same in both pods.
Verify that the /usr/share/nginx/html/index.html file is the same in both pods.
[student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mbj6g \
-- cat /usr/share/nginx/html/index.html
Hello, World from web-server-64689877c6-mdr6f
[student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mdr6f \
-- cat /usr/share/nginx/html/index.html
Hello, World from web-server-64689877c6-mdr6f
Notice that both files show the name of the first instance, because they share the persistent volume.
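Because this check must repeat for every replica, a small loop over the label selector scales better than naming pods by hand. The following is a sketch; it assumes the pods are ready and omits the -it flags because no terminal is needed:
[student@workstation ~]$ for pod in $(oc get pods -l app=web-server -o name); do \
oc exec "$pod" -- cat /usr/share/nginx/html/index.html; done
Hello, World from web-server-64689877c6-mdr6f
Hello, World from web-server-64689877c6-mdr6f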
Create a database server with a stateful set by using the statefulset-db.yml file in the /home/student/DO180/labs/storage-statefulsets directory.
Update the file with the following information:
| Field | Value |
|---|---|
| metadata.name | dbserver |
| spec.selector.matchLabels.app | database |
| spec.template.metadata.labels.app | database |
| spec.template.spec.containers.name | dbserver |
| spec.template.spec.containers.volumeMounts.name | data |
| spec.template.spec.containers.volumeMounts.mountPath | /var/lib/mysql |
| spec.volumeClaimTemplates.metadata.name | data |
| spec.volumeClaimTemplates.spec.storageClassName | lvms-vg1 |
Open the /home/student/DO180/labs/storage-statefulsets/statefulset-db.yml file in an editor.
Replace the <CHANGE_ME> objects with values from the previous table:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dbserver
spec:
  selector:
    matchLabels:
      app: database
  replicas: 2
  template:
    metadata:
      labels:
        app: database
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: dbserver
        image: registry.ocp4.example.com:8443/redhattraining/mysql-app:v1
        ports:
        - name: database
          containerPort: 3306
        env:
        - name: MYSQL_USER
          value: "redhat"
        - name: MYSQL_PASSWORD
          value: "redhat123"
        - name: MYSQL_DATABASE
          value: "sakila"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "lvms-vg1"
      resources:
        requests:
          storage: 1Gi
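Optionally, you can validate the edited manifest before submitting it to the cluster. The --dry-run=client option performs client-side validation only and creates nothing; this check is a sketch and is not part of the graded steps:
[student@workstation ~]$ oc create -f \
/home/student/DO180/labs/storage-statefulsets/statefulset-db.yml \
--dry-run=client
statefulset.apps/dbserver created (dry run)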
Create the database server by using the oc create -f /home/student/DO180/labs/storage-statefulsets/statefulset-db.yml command.
[student@workstation ~]$ oc create -f \
/home/student/DO180/labs/storage-statefulsets/statefulset-db.yml
statefulset.apps/dbserver created
Wait a few moments and then verify the status of the stateful set and its instances.
[student@workstation ~]$ oc get statefulset
NAME READY AGE
dbserver 2/2 10s
[student@workstation ~]$ oc get pods -l app=database
NAME READY STATUS ...
dbserver-0 1/1 Running ...
dbserver-1 1/1 Running ...
Use the exec command to add data to each of the stateful set pods.
[student@workstation ~]$ oc exec -it pod/dbserver-0 -- /bin/bash -c \
"mysql -uredhat -predhat123 sakila -e 'create table items (count INT);'"
mysql: [Warning] Using a password on the command line interface can be insecure.
[student@workstation ~]$ oc exec -it pod/dbserver-1 -- /bin/bash -c \
"mysql -uredhat -predhat123 sakila -e 'create table inventory (count INT);'"
mysql: [Warning] Using a password on the command line interface can be insecure.
Confirm that each instance from the dbserver stateful set has a persistent volume claim.
Then, verify that each persistent volume claim contains unique data.
Confirm that the persistent volume claims have a Bound status.
[student@workstation ~]$ oc get pvc -l app=database
NAME STATUS ... CAPACITY ACCESS MODE ...
data-dbserver-0 Bound ... 1Gi RWO ...
data-dbserver-1 Bound ... 1Gi RWO ...
Verify that each instance from the dbserver stateful set has its own persistent volume claim by using the oc get pod pod-name -o json | jq .spec.volumes[0].persistentVolumeClaim.claimName command.
[student@workstation ~]$ oc get pod dbserver-0 -o json | \
jq .spec.volumes[0].persistentVolumeClaim.claimName
"data-dbserver-0"[student@workstation ~]$ oc get pod dbserver-1 -o json | \
jq .spec.volumes[0].persistentVolumeClaim.claimName
"data-dbserver-1"Application-level clustering is not enabled for the dbserver stateful set.
Verify that each instance of the dbserver stateful set has unique data.
[student@workstation ~]$ oc exec -it pod/dbserver-0 -- /bin/bash -c \
"mysql -uredhat -predhat123 sakila -e 'show tables;'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------+
| Tables_in_sakila |
+------------------+
| items            |
+------------------+
[student@workstation ~]$ oc exec -it pod/dbserver-1 -- /bin/bash -c \
"mysql -uredhat -predhat123 sakila -e 'show tables;'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------+
| Tables_in_sakila |
+------------------+
| inventory        |
+------------------+
Delete a pod in the dbserver stateful set.
Confirm that a new pod is created and that the pod uses the PVC from the previous pod.
Verify that the previously added table exists in the sakila database.
Delete the dbserver-0 pod in the dbserver stateful set.
Confirm that a new pod is generated for the stateful set.
Then, confirm that the data-dbserver-0 PVC still exists.
[student@workstation ~]$ oc delete pod dbserver-0
pod "dbserver-0" deleted[student@workstation ~]$ oc get pods -l app=database
NAME READY STATUS RESTARTS AGE
dbserver-0 1/1 Running 0 4s
dbserver-1 1/1 Running 0 5m
[student@workstation ~]$ oc get pvc -l app=database
NAME STATUS ... CAPACITY ACCESS MODE ...
data-dbserver-0 Bound ... 1Gi RWO ...
data-dbserver-1 Bound ... 1Gi RWO ...
Use the exec command to verify that the new dbserver-0 pod has the items table in the sakila database.
[student@workstation ~]$ oc exec -it pod/dbserver-0 -- /bin/bash -c \
"mysql -uredhat -predhat123 sakila -e 'show tables;'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------+
| Tables_in_sakila |
+------------------+
| items            |
+------------------+
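This persistence is a property of the volumeClaimTemplates mechanism: even deleting the entire stateful set leaves its PVCs, and therefore the data, in place until the claims are deleted explicitly. The following sketch illustrates that behavior; do not run it if you still need the exercise resources:
[student@workstation ~]$ oc delete statefulset dbserver
statefulset.apps "dbserver" deleted
[student@workstation ~]$ oc get pvc -l app=database
NAME STATUS ... CAPACITY ACCESS MODE ...
data-dbserver-0 Bound ... 1Gi RWO ...
data-dbserver-1 Bound ... 1Gi RWO ...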