IBM Cloud Pak Playbook

OpenShift Platform Day 2 - Metering

Metering Overview

Metering is an essential tool for organizations that use capacity from a Kubernetes cluster to run their services, as well as for IT departments that manage these clusters. In the past, many departments tended to overestimate their resource needs. This could easily result in wasted capacity and wasted capital.

Management teams want to understand more concretely where budget is spent, by whom, and for which service. Metering provides that information, giving insight into how much it costs to run specific services and supplying usage data that can improve budgeting and capacity planning. With this information, IT Operations can also bill departments internally to reflect the costs directly associated with their actual infrastructure usage, driving accountability for service costs. This eliminates some of the more manual IT “plumbing” work of tallying costs and usage by hand or managing spreadsheets. By using metering, IT teams can free up their time to tackle bigger problems and even drive business-wide innovation.

Here are some examples of how metering could be applied in the real world:

  • Cloud budgeting: Teams can gain insight into how cloud resources are being used, especially in autoscaled clusters or hybrid cloud deployments.
  • Cloud billing: Resource usage can be tracked by billing codes or labels that reflect your internal hierarchy.
  • Telemetry/aggregation: Service usage and metrics can be viewed across many namespaces or teams, such as a Postgres Operator running hundreds of databases.

The OpenShift Metering feature is managed by the metering operator. The metering operator depends on two components:

  • Presto, the database engine. Presto is a distributed SQL query engine written in Java. Presto can use several connectors; for the metering operator it is configured to use Hive, a data warehousing application with distributed storage. See the OpenShift documentation for detailed information.
  • The reporting operator, the reporting framework. This is the reporting engine that makes use of the Presto data.

Day 1 Platform

Metering can be a Day 1 or Day 2 activity, depending on whether the business requirement for metering arises at the beginning of the OpenShift installation or later. If the business considers metering important, it is included in the design phase and is therefore a Day 1 activity. However, some enterprises only consider metering after they go to production; in that case, metering is a Day 2 activity.

OpenShift provides a metering operator; however, it is not installed by default. You can install the metering operator via the OpenShift web console.

Day 1 Platform task for Metering:

  • Install the metering operator

Day 2 Platform

Depending on the defined data source, metering can report usage of the OpenShift platform, the application, or both. For platform metering, you can use the Prometheus instance that comes configured with OpenShift as the data source, so before you start using the metering operator, verify that Prometheus is collecting data. You can then enable the metering operator’s reports on cluster, node, pod, and PVC resources, covering requests, usage, and other collected information over different periods. For example, the following lists some of the available platform reports (a sample Report definition follows the list):

  • cluster CPU usage
  • node allocatable CPU cores
  • node allocatable memory bytes
  • node capacity CPU cores
  • persistent volume claim request bytes
  • persistent volume claim usage bytes
  • pod memory request raw
  • pod persistent volume claim request info
  • pod request CPU cores
  • pod request memory bytes
  • pod usage CPU cores
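
As an illustration, a scheduled Report resource can turn one of these platform queries into a recurring report. This is a minimal sketch only; the query name and timestamps are assumptions, and the built-in ReportQuery names available in your cluster (in the openshift-metering namespace) should be checked first.

```yaml
# Minimal sketch of a scheduled platform report; field values are assumptions.
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: cluster-cpu-usage-daily        # hypothetical report name
  namespace: openshift-metering
spec:
  query: cluster-cpu-usage             # assumed built-in ReportQuery name; verify in your cluster
  reportingStart: "2020-01-01T00:00:00Z"
  schedule:
    period: daily                      # run once per day
```

Once the report has run, its results can be retrieved through the reporting API as described in the OpenShift documentation.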

To summarize, the Day 2 Platform activities for metering are to verify that Prometheus is collecting data and to enable the metering operator’s reports; both are described in the Platform tasks section below.

Day 1 Application

If metering is included as part of the cluster’s design and installation, ensure that you obtain the business requirement documentation on metering so that you can prepare for the Day 2 activities.

Because the application’s data is not collected by default by the OpenShift-provided Prometheus, you need to configure Prometheus to collect the required data.

Day 1 Application task for Metering:

  • Obtain the business requirement documentation on metering

Day 2 Application

The Day 2 Application activities for configuring metering are capturing and documenting the business requirement, developing the metering reports, troubleshooting the metering operator, and, if needed, uninstalling it; these are described in the Application tasks section below.

Mapping to Personas

To summarize, the following lists the topics that you may want to consider for Day 2 Operations on Metering:

Persona | Task
--- | ---
SRE | Verify Prometheus is collecting data
SRE | Enable the metering operator’s report
SRE, DevOps Engineer | Capture and document the business requirement
DevOps Engineer | Develop the metering report
DevOps Engineer | Troubleshoot the metering operator
DevOps Engineer | Uninstall the metering operator

Platform tasks

Verify Prometheus is collecting data: [SRE]

Setting up metering is covered in the Application tasks section later in this chapter. Metering requires data, and for OpenShift the data used to generate reports comes predominantly from the Prometheus instance that ships with OpenShift. You can only get a good metering report if the data is valid and arrives regularly. Therefore, an important Day 2 platform activity is to ensure that Prometheus is collecting data.

Information about Prometheus can be found in the Monitoring section.

Enable the metering operator’s report: [SRE]

The metering operator is not installed by default, so if that has not yet been done, you need to install the metering operator first.

Once you have a good stream of metrics data from Prometheus, you need to enable the metering operator.

OpenShift Metering makes use of the following four CRDs (Custom Resource Definitions); a sample ReportDataSource sketch follows the list:

  • MeteringConfig contains the configuration options for the metering stack.
  • Reports specifies which ReportQueries and ReportDataSources to use, and when and how often to run them.
  • ReportDataSources specifies how the data available to the queries is collected and stored.
  • ReportQueries specifies the queries used to analyze the data.
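
For example, a ReportDataSource backed by the Prometheus importer might look like the sketch below. This is illustrative only: the resource name and the Prometheus expression are assumptions, and the exact spec fields should be checked against the metering documentation and the preconfigured ReportDataSources in the openshift-metering namespace.

```yaml
# Hedged sketch of a Prometheus-backed ReportDataSource; the name and query are assumptions.
apiVersion: metering.openshift.io/v1
kind: ReportDataSource
metadata:
  name: my-pod-cpu-request             # hypothetical name
  namespace: openshift-metering
spec:
  prometheusMetricsImporter:
    query: |
      sum(kube_pod_container_resource_requests_cpu_cores) by (pod, namespace, node)
```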

Before you configure metering, you need to ensure that you have enough capacity to run it, such as a configured StorageClass and sufficient worker node resources. You can find detailed information in the OpenShift documentation.

Once you ascertain that you have the required compute capacity, you can perform the installation. The installation steps are detailed in the OpenShift documentation; in summary, the following steps install the metering function:

  • Install the Metering Operator
  • Install the metering stack, which includes creating a MeteringConfig resource, configuring the persistent storage, and configuring the Hive metastore (a minimal MeteringConfig sketch follows this list)
  • Verify the installation
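
The MeteringConfig created in the second step might, for example, point the metering stack at S3-backed storage. The sketch below is illustrative only; the bucket, region, and secret name are assumptions, and the full set of storage options (S3, S3-compatible, Azure, GCS, or a shared PVC) is described in the OpenShift metering configuration documentation.

```yaml
# Minimal MeteringConfig sketch using S3 for persistent storage; values are assumptions.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: operator-metering
  namespace: openshift-metering
spec:
  storage:
    type: hive
    hive:
      type: s3
      s3:
        bucket: example-metering-bucket/data   # hypothetical bucket and path
        region: us-east-1                      # hypothetical region
        secretName: my-aws-secret              # Secret holding the AWS credentials
        createBucket: false                    # assume the bucket already exists
```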

The following configuration options may need to be considered for metering (a hedged MeteringConfig fragment illustrating some of them follows the list):

  • Resource Request and Limit
  • Node Selector
  • Configuring the persistent storage
  • Configuring the hive metastore
  • Configuring the reporting-operator
  • Optional: Correlating Cluster usage to Billing Information
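
As a hedged illustration of the first two options, the fragment below adjusts the reporting-operator’s resource requests and pins it to labelled nodes. The exact field paths, and which components accept a node selector, vary by release, so treat this as a sketch to validate against the MeteringConfig reference in the OpenShift documentation.

```yaml
# Illustrative MeteringConfig fragment; field paths should be verified against
# the MeteringConfig reference for your OpenShift release.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: operator-metering
  namespace: openshift-metering
spec:
  reporting-operator:
    spec:
      resources:                      # resource requests and limits for the reporting-operator pod
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: "1"
          memory: 2Gi
      nodeSelector:                   # assumed placement on infra nodes
        node-role.kubernetes.io/infra: ""
```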

All of the above information is well described in the OpenShift 4.3 Metering documentation.

Application tasks

Capture and document the business requirement: [ SRE, DevOps Engineer ]

At the end of the day, the reason you perform metering is to produce a report per the business requirement. OpenShift provides the metering operator, which includes the reporting operator; the reporting operator ships with a set of predefined metering queries and reports. However, the business requirement might call for reports in a format that differs from what is provided out of the box. Much of the Day 2 work on metering is generating queries and reports that fulfill the business requirement.

As part of fulfilling the business requirement, you need to verify that you can get the data for your report from its source. Metering focuses primarily on in-cluster metric data, using Prometheus as the default data source, which lets you report on pods, namespaces, and most other Kubernetes resources. You might need to look at the Prometheus monitoring metrics first; these are discussed further in the Monitoring chapter of this repository.

Once you identify the data, you put that information into the configuration stored in the operator’s custom resources. The operator uses this information to create tables and views in Presto, the database component of the metering operator. You then create SQL queries to extract this data and produce the required metering report.

As in any software development activity, when you develop your report design, it is essential to document the business requirement, the design decisions, and the design of your queries and reports.

Develop the metering report: [ DevOps Engineer ]

The first step in getting a metering report is to install and configure the metering operator, if it is not installed yet; see the Platform tasks section for these activities. It is recommended that you generate a platform metering report first, because the required Prometheus data source is already preconfigured. A good platform metering report confirms that the metering operator works using the default configuration.

Once you verify that the metering operator works, you will most likely spend most of your Day 2 metering effort producing reports per the business requirement.

OpenShift comes with preconfigured ReportDataSources and ReportQueries that you can start using right away. The Report CRD is used to define the schedule on which a report runs.

You can also write a custom report; an example of a custom report can be found in the documentation, and a rough sketch follows.
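
As a hedged sketch of what a custom query involves (the column set, metric, and table reference below are assumptions, not the documented example), a ReportQuery pairs the columns a report exposes with a Presto SQL statement. In practice the easiest path is to copy one of the preconfigured ReportQueries, which resolve their table names with template helpers, and adapt it.

```yaml
# Hedged sketch of a custom ReportQuery; schema details and the table name are assumptions.
apiVersion: metering.openshift.io/v1
kind: ReportQuery
metadata:
  name: my-namespace-cpu-request       # hypothetical custom query
  namespace: openshift-metering
spec:
  columns:                             # columns the generated report will expose
  - name: namespace
    type: varchar
  - name: pod_request_cpu_core_seconds
    type: double
  query: |
    -- Illustrative Presto SQL only. The preconfigured ReportQueries resolve the
    -- underlying table with template helpers instead of hard-coding a name.
    SELECT namespace,
           sum(pod_request_cpu_core_seconds) AS pod_request_cpu_core_seconds
    FROM hypothetical_datasource_table   -- assumed table created from a ReportDataSource
    GROUP BY namespace
```

A Report can then reference the custom query by name in its spec.query field, just as it would a built-in one.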

Troubleshoot the metering operator: [ DevOps Engineer ]

These are some common causes of metering not working:

  • Not enough compute resources
  • StorageClass is not configured
  • Secret not configured correctly

Some of the debugging steps that can be performed:

  • Check the reporting-operator logs
  • Query Presto using the Presto command-line interface
  • Query Hive using beeline
  • Go through the Hive web UI and the HDFS UI; you first need to configure port-forwarding to expose these UIs
  • Access the Ansible logs; metering uses Ansible, and its logs might provide useful information

Please refer to the Troubleshooting Guide for more information.

Uninstall the metering operator: [ DevOps Engineer ]

You can uninstall the metering operator from the web console as a cluster administrator: select Operators > Installed Operators from the menu, find the metering operator, and then choose Uninstall Operator from its action menu.

Please refer to the Uninstalling metering documentation for more information.

Implementing Metering

The Metering Operator is the technology that works with both Kubernetes and OpenShift clusters.

Kubernetes

The following is a collection of links related to the Metering Operator.

OpenShift

The information provided in the previous sections of this document is based on the OpenShift Metering feature.
The following is a link related to OpenShift metering.

On IBM Cloud (Managed OpenShift)

n/a

With IBM Cloud Pak for MCM

The following is a link related to IBM Cloud Pak for Multicloud Management Metering feature.

Others

n/a

Other consideration

n/a