Abstract
Goal: Implement CI/CD Workflows by using Red Hat OpenShift Pipelines
Continuous integration and continuous delivery/deployment (CI/CD) are fundamental practices for DevOps. Cloud and microservices-based architectures involve a high degree of automation and operational complexity. Implementing DevOps techniques, and in particular, CI/CD, enables fast delivery cycles, high reliability, short feedback loops, and low lead times.
Continuous integration involves the frequent integration and verification of code changes into the main development branch. Continuous delivery automates the creation of releases, in a way that the code is always ready for deployment. Continuous deployment automates the deployment process, so that code changes are available to users quickly and efficiently.
From a CI/CD perspective, a pipeline is a series of actions that typically build, test, release, and deploy applications. Red Hat OpenShift provides you with a complete collection of features to implement and automate CI/CD pipelines: Red Hat OpenShift Pipelines.
The foundation of OpenShift Pipelines is Tekton, an open source, cloud-native framework for implementing CI/CD pipelines.
OpenShift Pipelines extends this framework to integrate it with OpenShift.
By incorporating the Tekton capabilities into OpenShift, users can define, manage, and run pipelines by using multiple options, such as the oc and tkn CLIs, and the OpenShift web console.
You can download the Tekton CLI (tkn) from the Command Line Tools page of the OpenShift web console, or from the Tekton website, which is included in the references section.
To enable the CI/CD capabilities in OpenShift, install the Red Hat OpenShift Pipelines operator from the Operator Hub.
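Besides installing the operator interactively from the Operator Hub, you can automate the installation with a Subscription resource. The following is a sketch only: the package name (openshift-pipelines-operator-rh), channel, and catalog source shown here are assumptions that can vary by cluster version, so verify them in the Operator Hub before applying.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators          # default namespace for cluster-wide operators
spec:
  channel: latest                         # assumed channel; check the Operator Hub entry
  name: openshift-pipelines-operator-rh   # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```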
OpenShift Pipelines is built around the collection of custom resources and pipeline controller workloads that constitute the Tekton core. The Tekton custom resources are the building blocks required to define, manage, and run CI/CD pipelines.
Tekton decouples pipeline definition from pipeline execution by defining custom resources for each stage.
For the definition stage, Tekton introduces two key custom resources: Pipeline and Task.
The following diagram shows an example of the use of these two resources to define a pipeline that tests and builds a Node.js project.
As well as tasks and steps, OpenShift Pipelines introduces other important concepts. Some of these concepts are custom resources, and some others are properties or features of those resources. The following list introduces the most important of these concepts:
A task represents an action that runs in a pod, usually as part of a pipeline, such as testing, or building your application.
A task defines a series of steps that run in order, as containers that belong to the task pod.
In the preceding example, the test task runs the npm run install step first, then npm run lint, and finally npm run test.
The Task custom resource represents a task.
A step is a single operation that runs as part of a task, such as executing an npm command or any other script.
Each step is associated with a container image, so that the action runs inside a specific container.
OpenShift Pipelines does not expose a custom resource definition for steps.
You must define steps as part of the Task resource.
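Based on the preceding example, a Task resource for the test task might look like the following sketch. The container image is an assumption; any Node.js image that provides npm would work.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: test
spec:
  steps:
    # Steps run in order, each as a container in the task pod.
    - name: install
      image: registry.access.redhat.com/ubi8/nodejs-16   # assumed image
      script: npm run install
    - name: lint
      image: registry.access.redhat.com/ubi8/nodejs-16
      script: npm run lint
    - name: test
      image: registry.access.redhat.com/ubi8/nodejs-16
      script: npm run test
```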
A pipeline is a workflow that consists of tasks.
In the preceding example, the pipeline contains the test and build tasks.
Pipelines dictate the dependencies between tasks, such as whether one task should run before another, or whether two tasks can run in parallel.
Pipelines can define Tasks inline or can refer to other Task resources that already exist.
The Pipeline custom resource represents a pipeline.
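A minimal Pipeline resource for the preceding test-and-build example might look like the following sketch. The pipeline name is hypothetical, and the sketch assumes that Task resources named test and build already exist in the same namespace.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: nodejs-pipeline    # hypothetical name
spec:
  tasks:
    - name: test
      taskRef:
        name: test         # references an existing Task resource
    - name: build
      taskRef:
        name: build
      runAfter:
        - test             # dependency: build starts only after test succeeds
```

Omitting runAfter would let both tasks run in parallel; declaring it serializes them.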
A task run is a task in the execution stage.
A task run represents a single execution of a task, and includes a reference to the corresponding task definition, as well as other inputs, such as parameter values or storage claims.
When you run a task, OpenShift Pipelines creates the TaskRun custom resource, which represents a specific execution of a Task resource.
A pipeline run is a pipeline in the execution stage.
A pipeline run represents a single execution of a pipeline, and includes a reference to the corresponding pipeline definition, as well as other inputs, such as parameter values or storage claims.
When you run a pipeline, OpenShift Pipelines creates the PipelineRun custom resource, which represents a specific execution of a Pipeline resource.
Pipeline runs also create the necessary TaskRun resources to run the tasks that the pipeline contains.
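For illustration, a PipelineRun for a hypothetical pipeline named nodejs-pipeline might look like the following sketch. In practice, OpenShift Pipelines generates a resource of this shape when you start the pipeline.

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: nodejs-pipeline-run-   # a unique suffix is appended per run
spec:
  pipelineRef:
    name: nodejs-pipeline              # the Pipeline resource to execute
```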
Usually, you do not need to interact with TaskRun or PipelineRun objects manually.
For example, you can use the OpenShift web console or the tkn CLI to start a task or a pipeline.
OpenShift Pipelines creates the required TaskRun or PipelineRun objects for you.
If you need to troubleshoot a particular run, then you can also use the web console or the tkn CLI to inspect the logs of the run.
The Pipeline and Task resources can declare parameters, such as an environment flag or a Git branch name.
In pipelines and tasks, you can declare parameters by setting properties such as the name and type of the parameter.
In the PipelineRun and TaskRun resources, you can pass the values for the parameters that the pipeline or task requires or supports.
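As a sketch, a task can declare a git-branch parameter with a default value, and a TaskRun can override it. The resource names and the container image here are hypothetical.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: print-branch
spec:
  params:
    - name: git-branch
      type: string
      default: main          # used when the run does not supply a value
  steps:
    - name: print
      image: registry.access.redhat.com/ubi8/ubi-minimal   # assumed image
      script: echo "Building branch $(params.git-branch)"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: print-branch-run
spec:
  taskRef:
    name: print-branch
  params:
    - name: git-branch
      value: develop         # overrides the default value
```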
A workspace represents storage. Workspaces can be useful for sharing state between tasks, or as a way to mount or store inputs or outputs.
The Pipeline and Task resources can declare workspaces in specific directories inside the pod.
At runtime, the PipelineRun and TaskRun resources define the particular storage requirements for a workspace, for example, by mapping a workspace to a persistent volume claim.
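The following sketch shows both halves of this contract: the pipeline declares a source workspace, and the run maps it to a persistent volume claim. The PVC name is an assumption and must exist in the namespace, and the sketch assumes that the referenced test task declares a matching workspace.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: workspace-example
spec:
  workspaces:
    - name: source             # declaration only; no storage details here
  tasks:
    - name: test
      taskRef:
        name: test
      workspaces:
        - name: source
          workspace: source    # hands the pipeline workspace to the task
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: workspace-example-run-
spec:
  pipelineRef:
    name: workspace-example
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: source-pvc  # assumed existing PVC in the namespace
```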
You do not need to define workspaces to share the state between the steps of a task. Because all the steps run in the same task pod, they share aspects such as storage and resource limits.
Triggers can run tasks and pipelines based on external events, such as a Git push action.
You can define triggers by combining the Trigger, TriggerBinding, and TriggerTemplate custom resources.
Alternatively, you can use the OpenShift web console to streamline the creation of these resources.
When a trigger starts a pipeline or a task, OpenShift Pipelines also creates the corresponding PipelineRun and TaskRun resources.
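As a sketch, a TriggerBinding can extract a field from the webhook payload, and a TriggerTemplate can use that value to instantiate a PipelineRun. The payload field assumes a GitHub-style push event, and the pipeline name is hypothetical; a complete setup also requires an EventListener that exposes the webhook endpoint.

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: git-push-binding
spec:
  params:
    - name: git-revision
      value: $(body.head_commit.id)      # GitHub-style push payload (assumed)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: git-push-template
spec:
  params:
    - name: git-revision
  resourcetemplates:
    # The template is instantiated each time the trigger fires.
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: nodejs-pipeline-run-
      spec:
        pipelineRef:
          name: nodejs-pipeline          # hypothetical pipeline
        params:
          - name: git-revision
            value: $(tt.params.git-revision)
```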
The following diagram depicts the high-level workflow that you can use in OpenShift Pipelines:
First, define your pipelines and tasks by creating the corresponding Pipeline and Task resources.
Typically, you can define tasks as independent Task resources.
This approach enables you to reuse the tasks in multiple pipelines.
Pipelines can reference tasks that are already defined.
The next section explains the syntax for creating tasks and pipelines.
In fact, you can reuse external tasks that are publicly available in the Tekton Hub, at https://hub.tekton.dev/.
The Tekton Hub offers a collection of community-contributed tasks and pipelines.
To install one of these items in your cluster, click the item, and then click Install.
The window that displays provides you with two installation options: a kubectl command and a tkn command.
If you wish to use the oc CLI, then copy the kubectl command and replace kubectl with oc.
After your tasks and pipelines exist in the cluster, then you can run them.
You can manually start them and also define triggers that start the pipelines and tasks under certain events.
To start a pipeline or a task, you can use the tkn CLI or the Pipelines section of the web console, which is available in both the Developer and Administrator perspectives.

When a user or a trigger starts a pipeline or a task, OpenShift Pipelines creates the required PipelineRun and TaskRun resources.
The OpenShift Pipelines Tekton controller watches these resources, and manages the lifecycle of the pods required for each pipeline and task.
Jenkins has been, for many years, the most popular solution for CI/CD. Jenkins is a centralized automation server, which manages a number of secondary servers. Usually, the central server orchestrates the jobs that run in secondary servers, which function as workers.
Arguably, OpenShift Pipelines offers better integration opportunities than Jenkins in OpenShift environments. However, Jenkins, albeit not a cloud-native solution, can also integrate with Kubernetes and OpenShift. As cloud and microservices-based architectures have become more popular, the Jenkins community has developed plugins to run jobs in containers and to integrate with Kubernetes and OpenShift.
The following table lists some key differences between Jenkins and OpenShift Pipelines:
| | Jenkins | OpenShift Pipelines |
|---|---|---|
| Solution type | General-purpose automation server. | Cloud-native CI/CD solution. |
| Architecture | Centralized. A controller node orchestrates jobs in other nodes. | Distributed. Tasks run as pods. |
| Container support | Requires plug-ins to support containers. The central server orchestrates the execution. | Container-first. Every step runs as a container in a pod. |
| Extensibility | Based on plug-ins. An active community maintains them at https://plugins.jenkins.io/. | Reusable tasks and pipelines available at https://hub.tekton.dev/. |
| Task entity | stage object in a Jenkinsfile. | Task custom resource. |
| Pipeline entity | pipeline object in a Jenkinsfile. | Pipeline custom resource. |
| Runs | Builds in the Jenkins UI. | PipelineRun and TaskRun custom resources. |
For more information, refer to the Understanding OpenShift Pipelines section in the Pipelines chapter in the Red Hat OpenShift Container Platform 4.12 CI/CD documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/cicd/index#understanding-openshift-pipelines