From a developer’s perspective, the process of software development is not simple. We create applications, complex systems, algorithms, modules, services, etc. – and in most cases, we start in a local environment (our own PCs). But when a certain version (a part or an increment) of the software is finished, things are far from over. We usually need to run our product somewhere (or somehow) and let other interested parties see what we are doing. That’s where server environments come into play.
There are many variants, more or less complex systems, that share a single purpose: to run the software we created. But what if we need more than that? What if we need a platform that will not only run our application but will also take care of other aspects such as scaling, allocating resources, automated builds and delivery, etc.? One of those platforms is OpenShift. So, let’s briefly explain what exactly OpenShift is and how it works.
What Is OpenShift?
If we tried to explain OpenShift in a slightly simplified way, we could say: “OpenShift is a Red Hat open-source container application platform for developing and hosting enterprise-grade applications.” Nice, but that does not explain too much.
Maybe we can say something more accurate, like this: OpenShift is a cloud-based service that allows you to run containerized applications and workloads, and it is powered by Kubernetes under the covers. It’s an open-source technology that helps organizations move their traditional application infrastructure and platform from physical, virtual mediums to the cloud.
OpenShift supports a very large variety of applications, which can easily be developed and deployed on the OpenShift cloud platform. To understand where it fits, it helps to look at the three kinds of cloud platforms available to developers and users.
Infrastructure as a Service (IaaS)
In this format, the service provider supplies hardware-level virtual machines with a pre-defined virtual hardware configuration. There are multiple competitors in this space, from AWS and Google Cloud to Rackspace and many more.
The main drawback of IaaS is that, even after a long procedure of setup and investment, one is still responsible for installing and maintaining the operating system and server packages, managing the infrastructure’s network, and taking care of basic system administration.
Software as a Service (SaaS)
With SaaS, we don’t have to worry about the underlying infrastructure. It’s as simple as plug and play, where the user merely has to sign up for the services and start using them. The main drawback of this setup is that we can only perform a minimal amount of customization, which is allowed by the service provider. One of the most common examples of SaaS is Gmail, where the user just needs to log in and start using it. The user can also make some minor modifications to their account. However, this is not very useful from a developer’s point of view.
Platform as a Service (PaaS)
This can be considered the middle layer between SaaS and IaaS. For developers, the primary appeal of PaaS is that a development environment can be spun up with a few commands. These environments are designed to satisfy all development needs, right from a web application server down to a database. Often a single command is enough, and the service provider does the rest for you.
Why Use OpenShift?
OpenShift provides a common platform for enterprise units to host their applications on the cloud without having to worry about the underlying operating system. This makes it very easy to use, develop, and deploy applications on the cloud. One of the key features is that it provides managed hardware and network resources for all kinds of development and testing. With OpenShift, PaaS developers have the freedom to design their required environment with specifications.
The IT landscape has evolved a lot in recent years. We now have DevOps, microservices, containers, cloud, and Kubernetes. OpenShift combines all of those things in one platform you can easily manage. So, it actually fits right on top of all of that. Let us go through the main features of an OpenShift environment.
Developers can quickly and easily create and deploy applications. For example, with S2I (Source-to-Image), developers can even deploy their code without needing to create a container first. Operators can leverage placement and policy to orchestrate environments that meet their best practices. Combining development and operations in a single platform makes the two work together fluently.
Since it deploys Docker containers, it gives you the ability to run multiple languages, frameworks and databases on the same platform. You can easily deploy microservices written in Java, Python or other languages.
Build automation: OpenShift automates the process of building new container images for all of your users. It can run standard Docker builds based on the Dockerfiles you provide, and it also provides a “Source-to-Image” feature which allows you to specify the source from which to generate your images. This allows administrators to control a set of base or “builder images” and then users can layer on top of these. The build source could be a Git location, it could also be a binary like a WAR/JAR file. Users can also customize the build process and create their own S2I images.
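As a sketch of what such a build definition looks like, here is a minimal BuildConfig using the Source-to-Image strategy. The repository URL, the builder image, and all names below are placeholders, not values from this article:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  # Where the source comes from: a Git repository (a binary input is also possible)
  source:
    type: Git
    git:
      uri: https://github.com/example/myapp.git
  # Which administrator-provided builder image the source is layered on top of
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.9
  # Where the resulting application image is pushed
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```

Once such a configuration exists, a build can be triggered on demand with `oc start-build myapp`.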
Deployment automation: OpenShift automates the deployment of application containers. It supports rolling deployments for multi-container apps and allows you to roll back to an older version.
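To illustrate, a DeploymentConfig can declare a rolling strategy explicitly; the names and the image reference below are placeholders:

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 2
  # Rolling deployments replace old pods gradually, keeping the app available
  strategy:
    type: Rolling
    rollingParams:
      maxUnavailable: 25%
      maxSurge: 25%
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
```

Rolling back to the previous version is then a single command: `oc rollout undo dc/myapp`.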
Continuous integration: It provides built-in continuous integration capabilities with Jenkins and can also tie into your existing CI solutions. The OpenShift Jenkins image can also be used to run your Jenkins masters and slaves on OpenShift.
As an open-source platform, OpenShift allows users, partners, customers, and contributors to collaborate and work together in order to utilize or to extend the OpenShift platform.
Red Hat has been vocal about the importance of open standards in the container space, which is why it was a founding member of both the Open Container Initiative (OCI) and the Cloud Native Computing Foundation (CNCF). It worked extensively with the OCI community on the Container Runtime Specification and the Image Format Specification – two key efforts aimed at securing the future of these standards.
When you want to start scaling your application, whether from one replica to two or all the way to 2,000 replicas, a lot of complexity is added. OpenShift leverages the power of containers and an incredibly powerful orchestration engine to make that happen. Containers ensure that applications are packed up in their own space and are independent of the OS, which makes them incredibly portable and hyper-scalable. OpenShift’s orchestration layer, Kubernetes (originally developed at Google), automates the scheduling and replication of these containers, meaning they’re highly available and able to accommodate whatever your users throw at them. Your team spends less time in the weeds keeping the lights on, and more time being innovative and productive.
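To make the scaling claim concrete: replicas can be changed manually (for example, `oc scale dc/myapp --replicas=10`), or automatically with a HorizontalPodAutoscaler. A minimal sketch, with placeholder names and illustrative thresholds:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  # Add replicas when average CPU utilization exceeds this percentage
  targetCPUUtilizationPercentage: 80
```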
There are multiple versions of OpenShift, but they are all based on OpenShift Origin (now known as OKD). Origin provides an open-source application container platform, and all source code for the project is available under the Apache License (Version 2.0) on GitHub.
OpenShift behaves like a product that integrates into existing infrastructure with minimal complexity and offers transparent proxy support. As previously described, as a platform that builds on the IaaS, SaaS, and PaaS models, OpenShift is an enterprise-grade product characterized by extensibility, maintainability, interoperability, and portability.
How to Use OpenShift?
As mentioned above, one of the most important features of OpenShift is its ability to automate the process of building new container images. Let’s check out what exactly this means…
First of all, what is a build? A build is the process of transforming input parameters into a resulting object. Most often, this process is used to transform input parameters or source code into a runnable image.
OpenShift Container Platform leverages Kubernetes by creating containers from build images and pushing them to a container image registry.
Build objects share common characteristics including inputs for a build, the requirement to complete a build process, logging the build process, publishing resources from successful builds, and publishing the final status of the build. Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time.
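For example, resource limits and an execution deadline can be set directly on a build configuration. This is a fragment with illustrative values, not a complete BuildConfig:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  # Cap what the build pod may consume
  resources:
    limits:
      cpu: "500m"
      memory: "512Mi"
  # Fail the build if it runs longer than 30 minutes
  completionDeadlineSeconds: 1800
```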
The OpenShift Container Platform build system provides extensible support for build strategies that are based on selectable types specified in the build API. There are four build strategies available:

Docker build
Source-to-Image (S2I) build
Custom build
Pipeline build (the Pipeline build strategy is deprecated in OpenShift Container Platform 4; equivalent and improved functionality is available in OpenShift Pipelines, based on Tekton. Jenkins images on OpenShift are fully supported, and users should follow the Jenkins user documentation for defining their Jenkinsfile in a job or storing it in a Source Control Management system.)

Docker build

The Docker build strategy invokes the docker build command, and it expects a repository with a Dockerfile and all required artifacts in it to produce a runnable image.
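A BuildConfig fragment using the Docker strategy might look like this; the repository URL and names are placeholders:

```yaml
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/myapp.git
  strategy:
    type: Docker
    dockerStrategy:
      # Path to the Dockerfile, relative to the repository root
      dockerfilePath: Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```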
Source-to-Image (S2I) build
Source-to-Image (S2I) is a tool for building reproducible, Docker-formatted container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image (the builder) and the built source, and is ready to use with the docker run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, etc.
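Incremental builds are opted into in the strategy section of the build configuration. A fragment, assuming a builder image (placeholder name below) that supports saving and restoring artifacts:

```yaml
strategy:
  type: Source
  sourceStrategy:
    from:
      kind: ImageStreamTag
      name: nodejs:16
    # Reuse artifacts saved by the previous build, if the builder supports it
    incremental: true
```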
Custom build

The Custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process.
A Custom builder image is a plain Docker-formatted container image embedded with build process logic, for example for building RPMs or base images.
Custom builds run with a very high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds.
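A Custom strategy fragment; the builder image reference is a placeholder:

```yaml
strategy:
  type: Custom
  customStrategy:
    from:
      kind: DockerImage
      name: registry.example.com/custom-builder:latest
    # Only enable if the builder genuinely needs the Docker socket;
    # this is part of why custom builds require elevated trust
    exposeDockerSocket: false
```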
By default, Docker builds and S2I builds are supported.
The resulting object of a build depends on the builder used to create it. For Docker and S2I builds, the resulting objects are runnable images. For Custom builds, the resulting objects are whatever the builder image author has specified.
Pipeline build

The Pipeline build strategy allows developers to define a Jenkins pipeline for execution by the Jenkins pipeline plugin. The build can be started, monitored, and managed by the OpenShift Container Platform in the same way as any other build type.
Pipeline workflows are usually defined in a specific file (we will talk about this later), either embedded directly in the build configuration or supplied in a Git repository and referenced by the build configuration.
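Both variants are expressed in the strategy section of the build configuration. A sketch with a trivial embedded pipeline (the build name referenced inside it is a placeholder):

```yaml
strategy:
  type: JenkinsPipeline
  jenkinsPipelineStrategy:
    # Either embed the pipeline directly...
    jenkinsfile: |-
      pipeline {
        agent any
        stages {
          stage('Build') {
            steps {
              sh 'oc start-build myapp --wait'
            }
          }
        }
      }
    # ...or reference a file in the Git source instead:
    # jenkinsfilePath: Jenkinsfile
```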
Additionally, the Pipeline build strategy can be used to implement sophisticated workflows: continuous integration and continuous deployment (delivery).
Every software development project is a continuous effort of delivering new functionality and improving existing features, so the process has to allow multiple contributors to participate, and every cycle of building and deploying images has to be continuous. Therefore, we use CI/CD (Continuous Integration/Continuous Delivery) tools and practices to exploit the full power of OpenShift. With this process, you provide the source code and choose a builder image (technology), while OpenShift builds your application’s Docker image from that source code and then deploys it.
One of the widely adopted technologies for continuous integration and continuous delivery (CI/CD) is Jenkins. It’s used to build, test, and deploy application projects continuously. You can build pipelines to promote any application across environments such as Development, QA, and Production, thus enabling DevOps. So, let’s see what Jenkins really is and what we can do with it.