BigData Analytics for Predictive Maintenance: The Role of Domain Knowledge

Author: John Soldatos
Category: Enterprise Asset Management

Industrial organizations are increasingly taking advantage of leading-edge digital technologies to improve the effectiveness, accuracy and reliability of their enterprise maintenance systems. BigData technologies provide opportunities to develop and deploy predictive maintenance systems, which enable the timely and accurate estimation of lifecycle parameters for assets and equipment, such as RUL (Remaining Useful Life) and EoL (End of Life).

BigData technologies provide the means to collect, store and process large amounts of data that indicate the condition of the equipment, such as vibration, acoustic, ultrasonic, temperature, power consumption and oil analysis datasets, as well as data from thermal images of the equipment.
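As a minimal sketch of what this looks like in practice, the snippet below combines a few condition-monitoring signals into a single analysis table. The file names, column names and hourly sampling interval are illustrative assumptions, not a reference implementation; real deployments typically pull these readings from historians or streaming pipelines rather than flat files.

```python
import pandas as pd

# Illustrative file names; real data usually comes from historians or CMMS exports.
vibration = pd.read_csv("vibration_rms.csv", parse_dates=["timestamp"])
temperature = pd.read_csv("bearing_temperature.csv", parse_dates=["timestamp"])
power = pd.read_csv("power_consumption.csv", parse_dates=["timestamp"])

# Resample every signal to a common interval so the sources can be joined.
def to_hourly(df, value_col):
    return (df.set_index("timestamp")[value_col]
              .resample("1h")
              .mean())

condition = pd.concat(
    {
        "vibration_rms": to_hourly(vibration, "rms"),
        "bearing_temp_c": to_hourly(temperature, "temp_c"),
        "power_kw": to_hourly(power, "kw"),
    },
    axis=1,
)

print(condition.head())
```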

But gathering the data is only the first step. To derive useful maintenance insights from these datasets, plant operators and their maintenance solution integrators employ data mining and machine learning processes.

The bias challenges of data mining processes

Data mining processes are usually iterative. They aim to evaluate different models in terms of their predictive power and robustness. As part of these processes, data scientists test different machine learning and statistical methods for their effectiveness in predicting parameters or deriving insights about the maintenance process at hand. For this purpose, data scientists divide the available data into two datasets: one for training and one for testing.

Why must the training dataset be different from the testing dataset? Testing an algorithm on the data used to train it would automatically give deceptively good results. Moreover, an understanding of the datasets and their structure is essential for identifying models that could potentially provide the required predictive insights. Data scientists are usually in charge of reviewing the datasets and proposing candidate machine learning algorithms, which are then evaluated against the available datasets.
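A minimal sketch of this split-and-evaluate loop is shown below, using scikit-learn. The file name, feature columns and binary failure label are assumptions made for illustration; the point is simply that every candidate model is scored on held-out data it never saw during training.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Assumed labelled table: sensor-derived features plus a binary "failure" label.
data = pd.read_csv("labelled_condition_data.csv")
X = data.drop(columns=["failure"])
y = data["failure"]

# Hold out a test set so candidate models are compared on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)
    score = f1_score(y_test, model.predict(X_test))
    print(f"{name}: F1 on held-out test data = {score:.3f}")
```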

One of the main challenges of the data mining process concerns the availability of proper datasets. For example, the identification of a failure state for a tool or machine based on sensor data requires several occurrences of the failure to be present in the datasets.

In general, tool failures are more frequent and therefore easier to find in relevant datasets. This is not always the case with machine failures, as plant owners or operators are unlikely to have or keep historical data about failures of a given machine. Moreover, datasets can be dispersed across different systems, which makes collecting them and ensuring they are suitable for processing by a machine learning algorithm quite difficult.
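Before any modelling starts, it is therefore worth verifying that the failure events of interest actually appear often enough in the collected history. The sketch below assumes a maintenance event export with a failure_mode column and an illustrative minimum count; both are assumptions, not prescribed values.

```python
import pandas as pd

events = pd.read_csv("maintenance_events.csv")  # assumed export, one row per recorded event

# Count how often each failure mode actually occurs in the historical data.
counts = events["failure_mode"].value_counts()
print(counts)

# Flag failure modes that are too rare to train a reliable model on.
MIN_OCCURRENCES = 30  # illustrative threshold, not a universal rule
too_rare = counts[counts < MIN_OCCURRENCES]
if not too_rare.empty:
    print("Insufficient history for:", ", ".join(too_rare.index))
```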

Even in cases where proper datasets are available, the data mining process remains challenging, mainly in terms of associating insights found in the datasets with real-life maintenance problems.

Specifically, all data mining processes are subject to bias problems, which stem from the fact that data scientists tend to tailor their models to whatever is available in the datasets, in a way that achieves optimal performance on the test datasets.

This process, while optimal from a mathematical perspective, often fails to produce effective results in practice, as it completely ignores the parameters of the business problem at hand.

For instance, a specific parameter (e.g., temperature or humidity) could influence a degradation pattern only in a short-lived or seasonal way, rather than being a decisive indicator that must be monitored at all times.

As another example, some failure indicators may relate to specific lots of a tool or part and should not be considered as permanent indicators of some failure mode.

Such seasonal or short-lived factors cannot be understood and excluded by a machine learning algorithm: as long as they are consistently present in a dataset, the algorithm will likely consider them significant contributors to the failure.

Furthermore, these factors will be weighted in proportion to how often they appear in the dataset, without regard to their seasonality or the randomness of their appearance.
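The toy sketch below illustrates the effect on synthetic data: a purely incidental "seasonal lot" flag that happens to co-occur with many recorded failures in the historical window ends up with a non-trivial feature importance, even though it is not a genuine failure driver. The data and setup are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000

# A genuine degradation signal (e.g., a vibration trend) drives the failures.
vibration = rng.normal(0, 1, n)
failure = (vibration + rng.normal(0, 0.5, n) > 1.5).astype(int)

# A seasonal flag (e.g., "summer lot") that coincidentally overlaps with
# most of the recorded failures in this particular historical window.
summer = (rng.random(n) < 0.5).astype(int)
summer[failure == 1] = (rng.random(failure.sum()) < 0.8).astype(int)

X = pd.DataFrame({"vibration": vibration, "summer_lot": summer})
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, failure)

# The seasonal flag receives importance purely through co-occurrence;
# only domain knowledge can tell us it is not causal.
print(dict(zip(X.columns, model.feature_importances_.round(3))))
```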

Domain knowledge to the rescue

To alleviate this overfitting bias, maintenance solution providers have to consider domain knowledge. The latter refers to knowledge and insights that are unique to the target industry and enterprise for which the solution is intended.

You must consider such knowledge during the implementation of any analytics project; without domain knowledge, the predictive analytics solution will fail to address the real maintenance problem.

Likewise, no data-driven predictive maintenance solution can be deployed without involving experts in the maintenance solution development process.

In maintenance practice, domain knowledge is reflected in an organization’s FMEA (Failure Mode and Effects Analysis) and FMECA (Failure Modes, Effects and Criticality Analysis) methodologies.

In practice, FMEA and FMECA processes capture expert knowledge about the assets, their failure modes, the effects and causes of the various failures, as well as current controls and recommended actions. Moreover, they include methods for assessing the risks associated with the issues identified during the analysis, including the prioritization of corrective actions.

The latter is based on methodologies such as the assignment of Risk Priority Numbers (RPNs). An RPN is assigned to each failure mode based on the severity of the failure, the likelihood of occurrence for each cause of failure, and the likelihood of prior detection for each cause of failure. In particular, the RPN is calculated as the product of these three ratings (i.e. Severity, Occurrence, Detection).
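As a small sketch, the RPN calculation and the resulting ranking of failure modes can be expressed as follows; the failure modes and ratings are illustrative assumptions, with the real values coming from an organization’s FMEA/FMECA worksheets.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) to 10 (catastrophic)
    occurrence: int  # 1 (rare) to 10 (frequent)
    detection: int   # 1 (almost certainly detected) to 10 (undetectable)

    @property
    def rpn(self) -> int:
        # RPN is the product of the three ratings.
        return self.severity * self.occurrence * self.detection

# Illustrative failure modes; real values come from the FMEA/FMECA worksheets.
modes = [
    FailureMode("bearing seizure", severity=9, occurrence=3, detection=6),
    FailureMode("belt wear", severity=4, occurrence=7, detection=3),
    FailureMode("coolant leak", severity=6, occurrence=4, detection=5),
]

# Rank failure modes by RPN to prioritize corrective and monitoring effort.
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{mode.name}: RPN = {mode.rpn}")
```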

Within the data mining process, RPNs can then be used to prioritize algorithms and attributes and thereby alleviate overfitting. To this end, data scientists do not simply select the algorithms that are optimal from a mathematical viewpoint. Instead, they can select algorithms and attributes that best reflect the highest-priority RPNs, in addition to considering their statistical presence in the dataset.

As a prominent example, data scientists should give priority to predictive algorithms and attributes that target the most critical failures: those that are the most expensive and time-consuming to repair and that lead to the loss of important equipment functions.
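One possible way to encode this priority, sketched below under the assumption that each training example can be mapped to a failure mode, is to weight examples by the RPN of that mode so that the model is penalized more for missing critical failures. The file name, column names and the use of scikit-learn’s sample_weight are illustrative choices, not the author’s prescribed method.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed labelled data with a "failure_mode" column and a binary "failure" label.
data = pd.read_csv("labelled_condition_data.csv")

# RPNs taken from the FMECA worksheet (illustrative values).
rpn_by_mode = {"bearing seizure": 162, "belt wear": 84, "coolant leak": 120, "none": 1}

X = data.drop(columns=["failure", "failure_mode"])
y = data["failure"]
weights = data["failure_mode"].map(rpn_by_mode).fillna(1)

X_train, X_test, y_train, y_test, w_train, _ = train_test_split(
    X, y, weights, test_size=0.3, random_state=42, stratify=y
)

# The weights make errors on high-RPN failure modes count more during training.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train, sample_weight=w_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```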

The combination of insights from FMEA/FMECA with knowledge derived from the datasets can lead to systems that better solve today’s maintenance problems. At the same time, such systems deliver better value for the investment, which means they contribute to many organizations’ top maintenance goals.

Taking advantage of domain knowledge

To take advantage of domain knowledge in BigData predictive maintenance systems, a number of best practices and recommendations should be taken into account.

Overall, successful BigData analytics for predictive maintenance requires that business goals and expert knowledge are well understood, alongside the maintenance datasets. The importance of domain knowledge is proven, not only in the case of enterprise maintenance, but also in a variety of use cases for other industrial sectors.

It’s essential that you structure such domain knowledge and that you engage relevant experts in your next enterprise maintenance BigData project. We hope that the above best practices will help you make this project a success.

Author: John Soldatos

John Soldatos holds a PhD in Electrical & Computer Engineering. He is co-founder of the open source platform OpenIoT and has had a leading role in more than 15 Internet-of-Things and BigData projects in manufacturing, logistics, smart energy, smart cities and healthcare. He has published more than 150 articles in international journals, books and conference proceedings, and has authored numerous technical articles and blog posts in the areas of IoT, cloud computing and BigData. He recently edited and co-authored the book “Building Blocks for IoT Analytics”.
