Red Hat OpenShift uses the build configuration resource to manage builds.
You can create and manage a build configuration by using the oc command-line utility (CLI).
You can use the oc new-app command to create a build configuration.
Consider the following oc new-app command:
[user@host ~]$ oc new-app --name java-application \
--build-env BUILD_ENV=BUILD_VALUE \
--env RUNTIME_ENV=RUNTIME_VALUE \
-i redhat-openjdk18-openshift:1.8 \
--context-dir java-application \
https://git.example.com/example/java-application-repository
--> Found image 11c20bc (23 months old) in image stream ...output omitted...
--build-env: Environment variables for the pods that perform the application build.
--env: Environment variables for the runtime pods.
-i: Image stream that references the S2I builder image.
--context-dir: The location of the application directory in the Git repository.
The final argument: The location of the Git repository.
The oc CLI generates a build configuration similar to the following example:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: java-application
spec:
  output:
    to:
      kind: ImageStreamTag
      name: java-application:latest
  source:
    contextDir: java-application
    git:
      uri: https://git.example.com/example/java-application-repository
    type: Git
  strategy:
    sourceStrategy:
      env:
      - name: BUILD_ENV
        value: BUILD_VALUE
      from:
        kind: ImageStreamTag
        name: redhat-openjdk18-openshift:1.8
        namespace: openshift
    type: Source
Note that the preceding example does not include the RUNTIME_ENV variable.
The oc CLI generates a deployment that uses the output of the build configuration, and configures the RUNTIME_ENV variable in that deployment object.
You can use a YAML file to create a build configuration directly, without using the oc new-app command.
In such cases, you must ensure that other resources that depend on the build configuration, or on which the build configuration depends, exist.
For example, if a build configuration uses the ImageStreamTag output, then the image stream must exist.
Similarly, you must define a pod controller, such as a Deployment, to deploy the application after the build finishes.
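As an illustration, the following sketch shows a minimal set of supporting resources for the java-application build configuration from the earlier example. The project name myproject and the internal registry route are assumptions; adjust them for your environment.

```yaml
# Hypothetical sketch: resources that a standalone BuildConfig depends on.
# The image stream receives the ImageStreamTag output of the build.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: java-application
---
# A pod controller that deploys the image that the build produces.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: java-application
  template:
    metadata:
      labels:
        app: java-application
    spec:
      containers:
      - name: java-application
        # Assumed internal registry path; "myproject" is a placeholder project.
        image: image-registry.openshift-image-registry.svc:5000/myproject/java-application:latest
```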
Use the --strategy option to select the Docker build strategy, for example:
[user@host ~]$ oc new-app --name java-application \
--strategy Docker \
--context-dir java-application \
https://git.example.com/example/java-application-repository
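With the Docker strategy, the generated build configuration contains a strategy section similar to the following sketch. This assumes that the java-application directory in the repository contains a Dockerfile; the exact fields that oc new-app generates can vary.

```yaml
# Sketch of the strategy section for a Docker strategy build.
strategy:
  dockerStrategy: {}
  type: Docker
```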
--> Found image 11c20bc (23 months old) in image stream ...output omitted...
When you use a build configuration to start a build, the build configuration creates a Build object for each build.
You can start a build by using the oc start-build command, for example:
[user@host ~]$ oc start-build buildconfig/app
build.build.openshift.io/app started
Then, you can view the Build objects:
[user@host ~]$ oc get builds
NAME TYPE FROM STATUS STARTED DURATION
app-1 Source Git@1448dc3 Complete 4 minutes ago 49s
app-2 Source Git Pending About a minute ago 39s
Listing the builds in this way is useful for viewing and managing the status of each build.
You can cancel the running builds of a build configuration by using the oc cancel-build command:
[user@host ~]$ oc cancel-build buildconfig/app
build.build.openshift.io/app-3 marked for cancellation, waiting to be cancelled
build.build.openshift.io/app-3 cancelled
Alternatively, you can specify the builds to cancel:
[user@host ~]$ oc cancel-build app-3
build.build.openshift.io/app-3 marked for cancellation, waiting to be cancelled
build.build.openshift.io/app-3 cancelled
To troubleshoot failed builds, or builds that produce applications that behave in unexpected ways, developers commonly do the following:
Increase the log level
Manually verify the state of the pod
You can increase the build configuration log level by using the BUILD_LOGLEVEL environment variable, for example:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: app
spec:
...output omitted...
  strategy:
    sourceStrategy:
      env:
      - name: BUILD_LOGLEVEL
        value: "3"
...output omitted...
To modify an existing build configuration, you can also use the oc set env command:
[user@host ~]$ oc set env buildconfig/app BUILD_LOGLEVEL="3"
buildconfig.build.openshift.io/app updated
You can view build logs by using the Build object:
[user@host ~]$ oc logs build/app-1
Adding cluster TLS certificate authority to trust store
Cloning "https://git.example.com/repository/app" ...
...output omitted...
To view the logs of the latest build, use the build configuration directly:
[user@host ~]$ oc logs buildconfig/app
Adding cluster TLS certificate authority to trust store
Cloning "https://git.example.com/repository/app" ...
...output omitted...Additionally, you can add the -f flag to the oc logs command to follow the logs as the build generates them, for example oc logs -f buildconfig/app.
If the failure occurs after the build finishes, you can start an interactive shell session by using the oc debug command.
For example, consider the following application state:
[user@host ~]$ oc get po
NAME READY STATUS RESTARTS AGE
app-1-build 0/1 Completed 0 61m
app-2-build 0/1 Completed 0 58m
app-546cf7db59-8vbsk 0/1 CrashLoopBackOff 16 (67s ago) 58m
If the build and application logs are not helpful, you can start an interactive shell that uses the application container image.
In the following example, the app pods are managed by the app deployment.
[user@host ~]$ oc debug deploy/app
Warning: would violate PodSecurity "restricted:v1.24" ...output omitted...
Starting pod/app-debug ...
Pod IP: 10.8.0.86
If you don't see a command prompt, try pressing enter.
sh-4.2$
Consequently, you can manually verify the container state and determine the root cause of the problem.
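Inside the debug shell, you can run ordinary shell commands to inspect the container. The following checks are illustrative; the /deployments path and the RUNTIME_ENV variable name are assumptions based on the earlier examples, not guaranteed to exist in your image.

```shell
# Illustrative checks at the sh-4.2$ prompt of the debug pod.
env | grep RUNTIME_ENV    # confirm that the expected runtime variables are set
ls /deployments           # inspect the files that the build copied into the image
id                        # verify the user and groups that the container runs as
```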
For more information about managing application builds, refer to the Performing Basic Builds chapter of the Builds documentation for Red Hat OpenShift Container Platform 4.12 at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/cicd/index#builds