IBM Cloud Pak Playbook

OpenShift Platform Day2 - Application Management

Application Management Overview

Developing applications in a modern environment like OpenShift calls for modern application development practices such as DevOps. With this comes increased responsibility for developers to understand and apply advanced techniques in their applications. When creating the services that compose an application, developers use templates that provide guide rails to ensure services adhere to established architectural designs. Advanced development may also make use of a service mesh. The developer uses these basic building blocks to deliver functional code for the application. In collaboration, the SRE and/or DevOps team delivers the non-functional requirements that address the reliability and availability of the service. Together, these two roles provide overall application management across the life cycle of an application.

Application Life Cycle

When establishing an application life cycle in an OpenShift environment, certain aspects of the application need to be managed, such as routing its traffic, setting up developers to work with a given service, and adding new components to an application or service. There are multiple methods to accomplish these tasks within OpenShift, depending on the complexity of the application. For more complex applications, the use of templates may be desirable. These templates can be based on standards established during the architecture stage.
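
As an illustration of this approach, the following is a minimal sketch of an OpenShift Template that could serve as such a guide rail. The template name, parameters, and image reference are hypothetical placeholders, not values prescribed by this playbook.

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example-app-template        # hypothetical template name
parameters:
  - name: APP_NAME                  # assumed parameter exposed to developers
    required: true
  - name: REPLICAS
    value: "2"
objects:
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ${APP_NAME}
    spec:
      replicas: ${{REPLICAS}}       # non-string parameter substitution
      selector:
        matchLabels:
          app: ${APP_NAME}
      template:
        metadata:
          labels:
            app: ${APP_NAME}
        spec:
          containers:
            - name: ${APP_NAME}
              # assumed image location in the cluster's internal registry
              image: image-registry.openshift-image-registry.svc:5000/example-app/${APP_NAME}:latest
```

A developer could then instantiate it with `oc process -f example-app-template.yaml -p APP_NAME=example-app | oc apply -f -`, keeping the resulting Deployment within the guard rails the template defines.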

After initial application creation, the DevOps and SRE teams perform routine maintenance tasks, or service management. This includes critical functions such as monitoring, logging, and builds.

For application management there is a plethora of choices for implementing and maintaining your application. Modern application management requires DevOps and SREs to work together. Many topics throughout this document describe how to use various tools to create applications.

Guidance for onboarding a new application

The following is a checklist of what to consider when onboarding a new application in the OpenShift environment. Some of the items in this checklist can be automated. Most of the chapters in this repository relate to this topic, and links have been provided. Illustrative configuration sketches for several of these items follow the checklist.

  • Does the application need to be deployed on separate resources, or do you need to create specific resources for the application? The resources you might want to consider are:
    • Namespace. Does the application need to run in an isolated namespace? Please refer to the Build and Deploy chapter.
    • Subnetwork. Do you have any security or performance concerns? Please refer to the Network chapter.
    • Ingress and routes. Do you need a specific ingress policy or route defined? More information can be found in the Network chapter as well (see the NetworkPolicy and Route sketch after this checklist).
    • Resource limits. These ensure that the application does not consume more cluster resources than allocated. Please refer to our Capacity chapter (see the ResourceQuota and LimitRange sketch after this checklist).
    • Storage. Will the cluster storage be able to support the application requirements? Please see our Storage chapter.
    • Roles, groups, users, service accounts, and SCCs. These are described in the User Management chapter.
    • Are there any security-related resources that need to be created? Check our Security chapter.
  • As a recommended practice, perform the following:
    • Externalize the application's environment variables. This helps with maintenance and reconfiguration of the application, and it also makes migration between environments (dev, test, pre-prod, prod) easier.
    • Define the liveness and readiness probes. More information can be found in the Kubernetes documentation (see the Deployment sketch after this checklist).
    • Validate your backups by performing restore operations. The Backups chapter describes this activity in more detail.
    • Verify that the cluster's logging and monitoring tools pick up the relevant information from the application.
    • Ensure that you have defined a pruning policy for the application data.
    • Does the application require specific compute resources? Does it need to run on a specific node in a specific zone? If yes, then you need to define the node placement, which is covered in the Node chapter (also illustrated in the Deployment sketch after this checklist).
  • The following are optional components that you might want to consider:
    • Once you have ensured that Prometheus is picking up the metrics, you might want to use the metering component to start collecting and reporting application usage. Please refer to our Metering chapter.
    • If your cluster has the service mesh component configured, you might want to consider using it for the application.
  • The following items relate more to building the application itself than to deploying it; however, they are listed here to ensure that the DevOps team helps to enable better operation of the application:
    • Have you exposed the application's metrics? More information about Build to Manage can be found in the Build and Deploy chapter (see the ServiceMonitor sketch after this checklist).
    • Pods should not be privileged and should not run as root. By default, OpenShift 4.x does not allow pods to run with a privileged user (see the security context in the Deployment sketch after this checklist).
    • Do not use Docker community-contributed builds. Only use certified and trusted containers. This is discussed in the Build and Deploy chapter (see the BuildConfig sketch after this checklist).
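
To illustrate the namespace isolation and resource-limit items above, here is a minimal sketch of a ResourceQuota and LimitRange in a dedicated namespace. The namespace name and the CPU and memory values are assumptions for illustration; set them according to your capacity planning.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-app-quota
  namespace: example-app            # assumed dedicated namespace for the application
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: example-app-limits
  namespace: example-app
spec:
  limits:
    - type: Container
      default:                      # limits applied when a container specifies none
        cpu: 500m
        memory: 512Mi
      defaultRequest:               # requests applied when a container specifies none
        cpu: 100m
        memory: 128Mi
```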
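
For the ingress policy and route items, the following sketch restricts ingress traffic to the OpenShift router and exposes the application with an edge-terminated Route. The namespace, service name, and port are assumptions; the namespaceSelector label follows the allow-from-openshift-ingress example in the OpenShift documentation and may need adjusting for your cluster's network plugin configuration (see the Network chapter).

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
  namespace: example-app
spec:
  podSelector: {}                   # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-app
  namespace: example-app
spec:
  to:
    kind: Service
    name: example-app               # assumed Service in front of the application pods
  port:
    targetPort: 8080                # assumed container port
  tls:
    termination: edge
```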
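
The following Deployment sketch pulls several of the recommended practices together: environment variables externalized in a ConfigMap, liveness and readiness probes, node placement in a specific zone, and a non-root security context. The ConfigMap name, probe endpoints, port, zone label value, and image are all assumptions for illustration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  namespace: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: us-east-1a       # assumed zone for node placement
      securityContext:
        runAsNonRoot: true                            # refuse to start containers that run as root
      containers:
        - name: example-app
          image: image-registry.openshift-image-registry.svc:5000/example-app/example-app:latest
          envFrom:
            - configMapRef:
                name: example-app-config              # externalized configuration
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
          readinessProbe:
            httpGet:
              path: /ready                            # assumed readiness endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /healthz                          # assumed health endpoint
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
```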
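
If the application exposes Prometheus metrics and user workload monitoring is enabled on the cluster, a ServiceMonitor such as the sketch below lets the platform Prometheus scrape them. The metrics port name and scrape interval are assumptions; the application's Service must expose a correspondingly named port.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: example-app
spec:
  selector:
    matchLabels:
      app: example-app              # must match the labels on the application's Service
  endpoints:
    - port: metrics                 # assumed named port serving /metrics
      interval: 30s
```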
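
Finally, to keep builds on certified and trusted content rather than community-contributed images, a source-to-image BuildConfig can build from a builder image stream provided with the cluster. The Git repository and builder image stream below are hypothetical; substitute the certified builder appropriate to your runtime.

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-app
  namespace: example-app
spec:
  source:
    git:
      uri: https://github.com/example-org/example-app.git   # hypothetical repository
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: java:latest            # assumed certified builder image stream
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: example-app:latest
```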

References