Create multi-step CI/CD workflows for multi-container applications by using Red Hat OpenShift Pipelines.
Like most other actions in OpenShift, you create pipelines and tasks by defining resources. Additionally, the runs of pipelines and tasks are represented as resources. This means that you can store, edit, and apply manifests for both the definition of a pipeline and its instantiations.
Defining custom tasks enables you to aggregate steps as a reusable task. Doing so keeps pipelines organized and reduces repetition because many of the steps within a pipeline utilize the same images and runtimes.
For example, the following task fetches a specified URL:
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: fetch-url
spec:
  results:
    - name: output
  params:
    - name: URL
      description: The URL to fetch.
      type: string
  steps:
    - name: curl-run
      image: example.com/example/image:latest
      script: |
        curl $(params.URL) | tee $(results.output.path)
The name of the task, which pipelines use to reference it
Declares a result called output
Defines a string parameter called URL
The name of the step within the task
The container image used to run the step
The script runs the curl command against the URL parameter and writes the response to the output result file
Tasks are loosely comparable to the concept of a function within most programming languages.
Like functions, tasks can accept arguments via the params structure, and return outputs via the results structure.
Additionally, tasks can perform side effects on attached storage in the form of workspaces, which are discussed below.
Unlike functions, which often have a single return value, tasks can set multiple results throughout their execution, not just at the end.
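For example, the following task sketch builds on the fetch-url pattern and writes two results at different points in its script. This example is not part of the course materials; the task name, result names, and image are illustrative:

---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: inspect-url
spec:
  params:
    - name: URL
      type: string
  results:
    - name: status
    - name: size
  steps:
    - name: curl-inspect
      image: example.com/example/image:latest
      script: |
        # Write the HTTP status code to the first result file.
        curl -s -o /tmp/body -w '%{http_code}' $(params.URL) > $(results.status.path)
        # Write the response size, in bytes, to the second result file.
        wc -c < /tmp/body > $(results.size.path)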
Cluster tasks are a set of included tasks that provide common functionality. Although normal tasks are scoped to their namespace or project, cluster tasks resolve at the cluster scope.
OpenShift Pipelines includes preinstalled cluster tasks, such as git-clone, buildah, or maven, among others.
Cluster tasks are now deprecated in favor of resolvers. However, cluster tasks are still relevant because OpenShift currently uses them to provide preinstalled, reusable tasks.
A resolver is a Tekton feature for referencing tasks and pipelines from remote locations. Each resolver implements a specific resolution mechanism. For example, if you want to use a task from another namespace, then you must use the Cluster Resolver. OpenShift Pipelines includes resolvers to reference various locations, such as the Tekton Hub, a Git repository, or a different cluster namespace.
In contrast to cluster tasks, which can only reference tasks in the same cluster, resolvers can reference tasks and pipelines from various locations. You can also extend the resolution capabilities of your pipelines by implementing custom resolvers.
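For example, the following taskRef sketch uses the cluster resolver to reference a task that lives in another namespace. The namespace and task names are illustrative, and the sketch assumes that the cluster resolver is enabled on the cluster:

taskRef:
  resolver: cluster
  params:
    - name: kind
      value: task
    - name: name
      value: git-clone
    - name: namespace
      value: shared-tasks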
To use a cluster task within a pipeline, you must set the kind of the task to ClusterTask.
For example, the following pipeline excerpt calls a cluster task called git-clone:
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
spec:
...output omitted...
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone
        kind: ClusterTask
      params:
        - name: url
          value: $(params.GIT_REPO)
...output omitted...

Pipelines primarily consist of a list of tasks. However, they can also define their own parameters and workspaces. Whenever you start a pipeline, you provide values for these parameters and specify how the pipeline finds the storage that backs each workspace.
For example, the following pipeline uses the provided git-clone cluster task to retrieve a repository.
It then calls a custom task called linter to analyze the code for common errors.
The tasks share a file system by using a workspace called app-build.
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: my-pipeline
spec:
  params:
    - name: GIT_REPO
      type: string
      default: "example.com/app/repo"
  workspaces:
    - name: app-build
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone
        kind: ClusterTask
      params:
        - name: url
          value: $(params.GIT_REPO)
      workspaces:
        - name: output
          workspace: app-build
    - name: run-lint
      taskRef:
        name: linter
        kind: Task
      params:
        - name: DIRECTORY
          value: "path/to/code"
      workspaces:
        - name: source
          workspace: app-build
      runAfter:
        - fetch-repository
Parameters for the pipeline, which the user provides upon starting the pipeline
Declares a shared workspace called app-build
References the task to run by name
Defines the parameters to pass to the task
Uses the value of the GIT_REPO pipeline parameter as the value of the task's url parameter
Binds the shared workspace called app-build to a workspace of the task
Specifies that the run-lint task runs only after the fetch-repository task finishes
Upon starting the pipeline, the user is expected to provide a value for the GIT_REPO pipeline parameter.
Alternatively, they can accept the default value, which is example.com/app/repo in this case.
In addition to substituting values from parameters, you can reference values from other tasks, attached workspaces, and the pipeline itself. For a complete list of available substitutions, refer to the Tekton documentation.
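For example, a task appended to the preceding pipeline could use the commit result of the git-clone task and the name of the pipeline run. In this sketch, the notify task and its MESSAGE parameter are hypothetical:

    - name: notify
      taskRef:
        name: notify
        kind: Task
      params:
        - name: MESSAGE
          value: "Run $(context.pipelineRun.name) linted commit $(tasks.fetch-repository.results.commit)"
      runAfter:
        - run-lint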
Additionally, the user must attach a form of storage that the pipeline uses to back the app-build workspace.
Workspaces provide common storage between tasks in a pipeline. The actual storage for workspaces can vary and each workspace in a pipeline can have a different form of backing.
For simplicity, this course uses only the volume claim template form. When you provide a workspace as a volume claim template, the cluster creates a persistent volume claim specifically for the bound workspace.
For example, the following volume claim template defines the attributes of persistent volume claims for a workspace:
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-storage
  volumeMode: Filesystem

The Tekton CLI (tkn) provides several commands for managing tasks and pipelines.
These commands take the following form:
$ tkn <type> <command>
The type refers to the type of object, which is most often either task or pipeline.
You can use t and p as short names, respectively.
For example, to view the list of available tasks, use the tkn t list command.
With the name of a task, you can view details about it by using the describe command.
For example, the following command retrieves information about the example fetch-url task, which includes the parameters that the task accepts:
[user@host ~]$ tkn t describe fetch-url
...output omitted...
Params
NAME TYPE DESCRIPTION DEFAULT VALUE
URL string The URL to fetch. ---
...output omitted...

You can inspect cluster tasks in the same way as tasks.
However, the commands follow the pattern tkn ct instead of tkn t.
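For example, the following command lists the cluster tasks that are available in the cluster:

[user@host ~]$ tkn ct list
...output omitted...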
Pipelines provide a similar set of commands as tasks.
For example, the following command retrieves information about the pipeline called my-pipeline:
[user@host ~]$ tkn p describe my-pipeline
Name: my-pipeline
Namespace: pipelines-creation
Params
NAME TYPE DESCRIPTION DEFAULT VALUE
GIT_REPO string example.com/app/repo
Workspaces
NAME DESCRIPTION OPTIONAL
app-build false
Tasks
NAME TASKREF RUNAFTER ... PARAMS
fetch-repository git-clone ... url: example.com/app/repo
run-lint linter fetch-repository ... DIRECTORY: path/to/code

Because tasks and pipelines are OpenShift resources, you can also use the normal commands for viewing and updating them.
For example, oc get task/fetch-url -o yaml retrieves the YAML for the example fetch-url task.
The OpenShift Pipelines operator uses resources of type TaskRun and PipelineRun to represent execution instances, called runs, of tasks and pipelines.
For a pipeline run, each task in the pipeline has a corresponding task run.
For example, if a pipeline outlines four tasks, running that pipeline three times produces a total of twelve task runs.
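Because the cluster labels each task run with the pipeline run that created it, you can list the task runs that belong to a single pipeline run. In the following sketch, the pipeline run name is illustrative:

[user@host ~]$ oc get taskruns -l tekton.dev/pipelineRun=my-pipeline-1
...output omitted...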
Creating a task run from the following example task run manifest starts a task called git-clone that fetches an application:
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: fetch-app
spec:
  taskRef:
    kind: ClusterTask
    name: git-clone
  params:
    - name: url
      value: https://git.example.com/app
    - name: revision
      value: main
    - name: subdirectory
      value: ""

Similarly, creating a pipeline run from the following example pipeline run manifest starts a pipeline called my-pipeline:
---
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: fetch-pipeline
spec:
  pipelineRef:
    name: my-pipeline
  params:
    - name: GIT_REPO
      value: https://example.com/app/repo

As the cluster runs the task or pipeline, it updates the run resource with status information.
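For example, assuming that the preceding manifest is saved as a file called pipeline-run.yaml, the following sketch creates the pipeline run and then retrieves its recorded status:

[user@host ~]$ oc create -f pipeline-run.yaml
...output omitted...
[user@host ~]$ oc get pipelinerun fetch-pipeline -o yaml
...output omitted...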
Although you can manually create the task run and pipeline run resources to start tasks and pipelines, the tkn tool makes doing so faster.
If the task has any parameters, then the tkn task start command prompts for them.
You can also use the -p option to specify parameters in the command itself.
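For example, the following sketch starts the fetch-url task and passes its URL parameter on the command line; the URL value is illustrative:

[user@host ~]$ tkn task start fetch-url -p URL=https://example.com/index.html
...output omitted...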
If you omit the option, the command prompts interactively, as in the following example that starts the fetch-url task:
[user@host ~]$ tkn t start fetch-url
? Value for param `URL` of type `string`?
TaskRun started: fetch-url-run-gzxpn
...output omitted...

The command to run pipelines follows the same pattern, such as the following command that starts a pipeline called my-pipeline:
[user@host ~]$ tkn p start my-pipeline
? Value for param `GIT_REPO` of type `string`?
Please give specifications for the workspace: app-build
? Name for the workspace :
...output omitted...

Because the pipeline defines a workspace, the command prompts for details about the backing to use for the workspace, if one is not provided. Alternatively, provide a configuration for creating persistent volume claims (PVCs) as part of the command, such as in the following example:
[user@host ~]$ tkn p start my-pipeline \
-w name=app-build,volumeClaimTemplateFile=pvc-template.yaml
...output omitted...

In the resulting pipeline run, the cluster creates a PVC by using the template and binds the PVC to the pipeline's app-build workspace.
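To confirm the result, you can list the persistent volume claims in the project; the claim generated from the template appears in the output, although its exact name varies between runs:

[user@host ~]$ oc get pvc
...output omitted...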
After a task or pipeline has run at least one time, you can use the list sub-command to view the list of runs.
Instead of using t or p to specify a task or pipeline, respectively, use tr and pr to view task runs and pipeline runs.
For example, the following command displays the list of all pipeline runs in the OpenShift project:
[user@host ~]$ tkn pr list
NAME STARTED DURATION STATUS
my-pipeline-4 10 minutes ago 6m47s Succeeded
my-pipeline-3 1 day ago 2m0s Succeeded
my-pipeline-2 1 day ago 3m35s Succeeded
my-pipeline-1 4 days ago 3m12s Succeeded

If a task run or pipeline run is not provided as part of the command, the tool prompts for one by listing available options. You can also view the logs of a pipeline run with the following command:
[user@host ~]$ tkn pr logs my-pipeline-1
[fetch-repository : clone] + '[' false = true ']'
...output omitted...
[npm-install : npm-run]
...output omitted...

To view live logs, add the -f option to the logs command.
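For example, the following command follows the logs of the pipeline run from the earlier listing:

[user@host ~]$ tkn pr logs -f my-pipeline-1
...output omitted...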
Tekton Task versus ClusterTask
Variable Substitutions Supported by Tasks and Pipelines
For more information, refer to the Creating CI/CD solutions for applications using OpenShift Pipelines chapter in the Red Hat OpenShift Container Platform 4.12 CI/CD documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/cicd/index#creating-applications-with-cicd-pipelines