
Introduction to OpenShift and Jenkins DevOps - PART TWO

Saša Ivičević, Senior Software Developer

From the perspective of OpenShift, Jenkins is a tool that can automate a developer’s OpenShift tasks. It’s that simple. There are several ways to automate tasks, which we explained in part one. Just like before, developers will need to do some configuration to get Jenkins working with OpenShift. It’s not like they can magically talk to each other. They’ll have to exchange some credentials and permissions.

What Is Jenkins?

Jenkins offers a simple way to set up a continuous integration or continuous delivery (CI/CD) environment for almost any combination of languages and source code repositories using pipelines, as well as automating other routine development tasks. While Jenkins doesn’t eliminate the need to create scripts for individual steps, it does give you a faster and more robust way to integrate your entire chain of build, test, and deployment tools in comparison with what you can easily build yourself.

Regardless, once everything is set up, developers can automate the tasks they perform in OpenShift using Jenkins’ web interface. Now that we have a general understanding of Jenkins, let’s see what a Jenkins Pipeline is.

What Is a Jenkins Pipeline?

In Jenkins, a pipeline is a collection of events or jobs which are interlinked with one another in a sequence.

It’s a combination of plugins that support the integration and implementation of continuous delivery pipelines using Jenkins.

In other words, a Jenkins Pipeline is a collection of jobs or events that brings the software from version control into the hands of the end users by using automation tools. It is used to incorporate continuous delivery in our software development workflow.

In a Jenkins Pipeline, every job depends on one or more other jobs or events.

Continuous delivery pipeline in Jenkins (Source: JavaTpoint)

The above diagram represents a continuous delivery pipeline in Jenkins. It contains a collection of states such as build, deploy, test, and release. These jobs or events are interlinked with each other; each state has its own jobs, which run in a sequence that forms the continuous delivery pipeline.

A continuous delivery pipeline is an automated expression of your process for getting software from version control to your users. Every change made to your software goes through a number of complex processes on its way to being released. This involves building the software in a repeatable, reliable manner and progressing it through multiple stages of testing and deployment.

A Jenkins Pipeline can be defined in a text file called a Jenkinsfile. The Jenkinsfile implements the pipeline as code, written in a DSL (Domain-Specific Language); in it, you write the steps required for running a Jenkins Pipeline.

The benefits of using a Jenkinsfile

  • You can automatically create pipelines for all branches and pull requests with a single Jenkinsfile.
  • You can code-review the pipeline along with the rest of your source code.
  • You can audit changes to your Jenkins pipeline through version-control history.
  • The Jenkinsfile is the single source of truth for your pipeline and can be viewed and edited by multiple users.

A pipeline can be defined either through the Jenkins web UI or in a Jenkinsfile itself.

Pipeline Syntax

Two types of syntax can be used to define a Jenkinsfile.

Declarative: Declarative pipeline syntax offers a simple way to create pipelines. It consists of a predefined hierarchy to create Jenkins pipelines. It provides you with the ability to control all aspects of a pipeline execution in a simple, straightforward manner.

Scripted: Scripted pipeline syntax is plain Groovy code that runs on the Jenkins controller with the help of a lightweight executor, which uses very few resources to translate the pipeline into atomic commands.

The two syntaxes are structured quite differently: Declarative imposes a stricter, predefined format, while Scripted gives you the full flexibility of Groovy.
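To make the difference concrete, here is the same one-stage pipeline in both syntaxes (a minimal sketch; the stage name and echoed text are placeholders):

```groovy
// Declarative: predefined structure with pipeline/agent/stages blocks
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
    }
}

// Scripted: plain Groovy inside a node block, structured however you like
node {
    stage('Build') {
        echo 'Building...'
    }
}
```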

Why Use Jenkins Pipeline?

Jenkins is a continuous integration server that can automate software development processes. You can create several automation jobs based on your use cases and run them as a Jenkins pipeline.

Here are the reasons why you should use a Jenkins pipeline:

  • A Jenkins pipeline is implemented as code, which allows several users to edit and execute the pipeline process.
  • Pipelines are robust: if your server undergoes an unexpected restart, the pipeline resumes automatically.
  • You can pause the pipeline process and make it wait for user input before continuing.
  • Jenkins Pipelines support big projects. You can run many jobs, and even use pipelines in a loop.
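For example, the pause-for-input behavior from the list above can be expressed with Jenkins' built-in `input` step (a minimal sketch; the message and stage names are placeholders):

```groovy
pipeline {
    agent any
    stages {
        stage('Approval') {
            steps {
                // The pipeline pauses here until a user responds in the Jenkins UI
                input message: 'Deploy to production?', ok: 'Deploy'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying...'
            }
        }
    }
}
```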

A typical pipeline might include these six stages:

  1. Preamble: Confirm that all the right repositories and projects are in use. Define all the necessary variables that will be used in other stages of the pipeline.
  2. Build: Access the application code to prepare the application. (Building varies from application to application. Is this a Python project? Java?)
  3. Complexity: Use something like SonarQube to check code readability and formatting.
  4. Unit Tests: Make sure existing unit tests are still successful as part of the new change.
  5. Deploy: Launch the application!
  6. Health Checks: Verify that the application is running after the deployment is complete.

Without going into further details, we can sketch the Jenkins pipeline code like this:

pipeline {
    agent any
    stages {
        // Placeholder steps; real stages would run actual build/test/deploy commands
        stage('Preamble')     { steps { echo 'Verify projects and set variables' } }
        stage('Build')        { steps { echo 'Build the application' } }
        stage('Complexity')   { steps { echo 'Run static analysis' } }
        stage('Test')         { steps { echo 'Run unit tests' } }
        stage('Deploy')       { steps { echo 'Deploy the application' } }
        stage('Health Check') { steps { echo 'Verify the deployment' } }
    }
}
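In a real Jenkinsfile, each stage contains concrete steps. As a sketch, assuming the OpenShift Client Jenkins plugin is installed and that `my-project` and `my-app` are placeholder names, a Deploy stage might use the plugin's `openshift` DSL like this:

```groovy
stage('Deploy') {
    steps {
        script {
            // Requires the OpenShift Client Jenkins plugin
            openshift.withCluster() {
                openshift.withProject('my-project') {  // placeholder project name
                    // Start a build from the app's BuildConfig and wait for it to finish
                    openshift.startBuild('my-app', '--wait')
                    // Check that the deployment rolled out successfully
                    openshift.selector('dc', 'my-app').rollout().status()
                }
            }
        }
    }
}
```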

Without a pipeline, developers would have to manually go through all these steps in OpenShift after they make changes to the application’s code in the GitHub Repository. With a Jenkins pipeline, all a developer needs to do is go to the Jenkins web interface and run the pipeline. By running the pipeline, Jenkins will access OpenShift and perform all these tasks. If there are any errors, developers will be able to quickly identify where things went wrong and make code changes. Jenkins pipelines ensure that applications are held to a standard of testing and functionality.

Let’s see the whole picture!

OpenShift and Jenkins pipeline diagram

Using Jenkins, developers can automate some of the tasks they perform to run the application on OpenShift. One way to automate tasks is through a pipeline. Developers can write pipeline code and store it in a GitHub repository (the orange box) to define the stages of a pipeline. Once this is done, developers can use the Jenkins web interface to run the pipeline.

Running the pipeline will go through several stages, which should lead to a successful build. When there are failures, developers can examine logs to identify errors and fix code issues. Moreover, Jenkins can even notify developers over email or Slack with the latest result of running the pipeline. Jenkins saves developers time in the long run, as they no longer need to repeat the same six steps in the pipeline every time there is a change to the app code in the GitHub repository.
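The notification behavior described above is usually handled in a `post` section of the Jenkinsfile. As a sketch (assuming the Slack Notification plugin for `slackSend`; the channel and email address are placeholders):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { echo 'Building...' }
        }
    }
    post {
        success {
            // Requires the Slack Notification plugin; '#builds' is a placeholder channel
            slackSend channel: '#builds', message: "Success: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
        failure {
            // 'team@example.com' is a placeholder address
            mail to: 'team@example.com',
                 subject: "Pipeline failed: ${env.JOB_NAME}",
                 body: "See ${env.BUILD_URL} for logs."
        }
    }
}
```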

Final Thoughts

In these two posts, we only scratched the surface of the complex subject of OpenShift and Jenkins integration. Nevertheless, we can draw some conclusions. First of all, using OpenShift has many advantages. Without going into detail, they are:

  • Open-source platforms like OpenShift have advantageous developer/user communities that can typically assist in quicker bug fixes and increase functionality.
  • OpenShift enables the development team to focus on doing what they do best – designing and testing applications. When they are freed from spending excessive time managing and deploying containers, they can speed up the development process and get products to market more rapidly.
  • Deploying and managing containers at scale is a complicated process. OpenShift enables efficient container orchestration, allowing rapid container provisioning, deploying, scaling, and management.
  • A company’s IT needs can vary greatly from one period to the next. Selecting a proprietary container management platform subjects you to the possibility that your vendor won’t be able to provide an acceptable solution if your company’s IT focus changes.
  • The DevOps process relies upon transparent communication between all involved parties. Containerization provides a convenient means of enabling your IT operations staff to test instances of a new app. OpenShift assists this process by making it easy to test apps throughout your IT architecture without being impeded by framework conflicts, deployment issues, or language discrepancies.

Assembling the proper tools to create applications on your system architecture can be a challenge, especially at the enterprise level. OpenShift makes the process easy by allowing for the integration of the tools you use most across your entire operating environment.



The project was co-financed by the European Union from the European Regional Development Fund. The content of the published material is the sole responsibility of Serengeti d.o.o.