Using predictive analytics in anomaly detection and fault recognition


Predictive analytics is an advanced branch of data engineering that estimates the probability of future occurrences or events. The process analyzes historical data and, based on that analysis, predicts future occurrences or events using predictive modeling techniques. The form of these predictive models varies with the data they use.

In anomaly detection, a model is built from ‘good/normal’ operating data that typically represents a wide range of operation. Each new point is evaluated against the model; if the residuals fall outside statistical limits, the point is considered an outlier and the process may be drifting outside its normal operating region.
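As a minimal sketch of this idea, the "model" below is simply the mean of the normal operating data, and a new point is flagged when its residual exceeds a k-sigma band. The sensor values, the 3-sigma limit, and the function name are illustrative assumptions, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Good/normal" operating data: a sensor reading the model is fit on.
# Values are illustrative (a reading centered at 50 with spread 2).
normal_data = rng.normal(loc=50.0, scale=2.0, size=1000)

# Fit the "model" on normal data: a predicted value and residual spread.
predicted = normal_data.mean()
residual_std = normal_data.std()

def is_outlier(reading, k=3.0):
    """Flag a new point whose residual falls outside the +/- k sigma band."""
    residual = reading - predicted
    return bool(abs(residual) > k * residual_std)

print(is_outlier(50.5))  # inside the normal band -> False
print(is_outlier(70.0))  # far outside the band -> True
```

In practice the predictor would be a multivariate model of the process rather than a single mean, but the residual-versus-limits test works the same way.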

This article contains excerpts from, “Worldwide Deployment of Predictive Asset Management at Air Liquide” by Cyril Defaye, Frederic Verpillat, James Huber, Ann Attaway and Paul Gerke of Air Liquide.

Anomaly detection is an unsupervised learning problem: the task of finding hidden patterns in unlabeled data. It determines that something unusual is occurring when conditions deviate from “normal” operating conditions. Anomaly detection tasks are relevant when there are a large number of negative samples (normal operation) and only a few positive samples (failure data). They work best when a failure is due to several factors, not all of which can be modeled beforehand. For predictive maintenance of machines, anomaly detection tasks are the most relevant. Examples of mathematical techniques for unsupervised learning include principal component analysis (PCA), self-organizing maps (SOM), neural networks, and k-means clustering.

● Pros: This method can generally detect potential (and previously unknown) failures.

● Cons: The definition of the “normal” condition can be challenging (in particular for new processes).


Moreover, when an anomaly is detected, you do not know what caused it: an equipment failure, a new “normal” condition, or a sensor fault?
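Among the unsupervised techniques listed above, PCA lends itself to a compact sketch: a model of normal operation is the subspace spanned by the leading principal components, and a point's reconstruction error measures how far it sits from that subspace. The two correlated "sensor channels", the single retained component, and the percentile threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Normal operation: two correlated sensor channels (e.g. flow and pressure),
# so the normal data lies near a one-dimensional subspace.
t = rng.normal(size=(500, 1))
X = np.hstack([t, 2.0 * t]) + rng.normal(scale=0.1, size=(500, 2))

# PCA via SVD on centered data; keep the first principal component.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
pc = Vt[:1]  # principal direction of normal operation

def reconstruction_error(x):
    """Distance of a point from the normal-operation subspace."""
    centered = x - mean
    projected = centered @ pc.T @ pc
    return float(np.linalg.norm(centered - projected))

# Set the anomaly threshold from the training residuals themselves.
train_errors = [reconstruction_error(row) for row in X]
threshold = np.percentile(train_errors, 99.7)

print(reconstruction_error(np.array([1.0, 2.0])) > threshold)   # on-subspace point
print(reconstruction_error(np.array([2.0, -1.0])) > threshold)  # off-subspace point
```

Note that this illustrates the "cons" above as well: the threshold is only as good as the training data's coverage of normal operation, and a flagged point carries no diagnosis.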

Fault recognition captures the faint but precise sensor patterns that appear at the very beginning of machine degradation, and the stronger patterns that develop as the machine's condition deteriorates toward failure. Once a fault signature pattern is captured, it can be used as a monitoring profile: if the pattern ever emerges again, you know exactly what is happening, and an immediate warning can prompt action well before damage occurs.
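One simple way to use a stored signature as a monitoring profile is to score each live sensor window against it with a normalized correlation and alert when the score is high. The sine-shaped signature, the window length, and the 0.95 threshold below are illustrative assumptions, not the method described in the source.

```python
import numpy as np

def match_signature(window, signature, threshold=0.95):
    """Score a live sensor window against a stored fault signature using
    normalized (Pearson-style) correlation; a high score means the known
    fault pattern is re-emerging."""
    w = (window - window.mean()) / window.std()
    s = (signature - signature.mean()) / signature.std()
    score = float(np.dot(w, s) / len(s))
    return score >= threshold

# Stored signature captured during a past, labeled failure (illustrative shape).
signature = np.sin(np.linspace(0, 4 * np.pi, 100))

# A live window showing the same pattern with sensor noise.
rng = np.random.default_rng(2)
live = signature + rng.normal(scale=0.05, size=100)

print(match_signature(live, signature))  # noisy copy of the pattern -> match
print(match_signature(rng.normal(size=100), signature))  # unrelated data: no match
```

Real fault signatures would typically span several channels and be matched against sliding windows, but the profile-matching idea is the same.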

Fault recognition is a supervised learning problem: the task of inferring a function from labeled training data. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). If outputs/supervisors/KPIs cannot be naturally identified in the data, an artificial one is created.

Examples of artificial outputs are (i) time to failure or (ii) likelihood of failure in a given time period. Supervised learning tasks are meaningful when the data contains a large number of both positive and negative samples (roughly equal likelihood of good and bad data). They also work best when future positive samples (e.g. time to failure) are likely to share the characteristics of the positive samples in the training data. Such assumptions are unlikely to hold for the monitoring tasks we are looking to accomplish, where all faults cannot be expected to have the same root cause. Supervised learning methodologies include linear regression, decision trees, partial least squares, Bayesian networks, neural networks, etc.

● Pros: Provides predictive and prescriptive information simultaneously.

● Cons: A given failure can be detected only if it has occurred in the past with a similar signature.

Thus, this method's performance may be limited when failures are rare or varied. Fault recognition algorithms can provide an automatic diagnosis and may be considered more prescriptive than anomaly detection. However, implementing a fault diagnostic approach requires a large number of failure records. The most important rotating equipment used in air separation units, such as main air compressors, are often tailor-made machines specific to each plant, and for each type of machine an exhaustive record of all failure patterns is not available. Several commercial software packages based on anomaly detection algorithms are available.
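To make the supervised setup above concrete, the sketch below fits a linear regression (one of the listed methodologies) on labeled pairs whose artificial output is time to failure. The feature names, the coefficients used to synthesize the labels, and the helper function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Labeled training data: each row pairs sensor features (vibration,
# temperature) with an artificial output, remaining time to failure (hours).
# The generating coefficients here are invented for illustration.
n = 200
vibration = rng.uniform(1.0, 5.0, n)
temperature = rng.uniform(60.0, 90.0, n)
time_to_failure = (500.0 - 60.0 * vibration - 2.0 * temperature
                   + rng.normal(scale=5.0, size=n))

# Ordinary least squares on the labeled (input, output) pairs.
X = np.column_stack([np.ones(n), vibration, temperature])
coef, *_ = np.linalg.lstsq(X, time_to_failure, rcond=None)

def predict_ttf(vib, temp):
    """Predict remaining hours to failure for a new machine state."""
    return float(coef @ np.array([1.0, vib, temp]))

# A machine vibrating hard and running hot is predicted to fail sooner.
print(predict_ttf(1.5, 65.0) > predict_ttf(4.5, 85.0))  # -> True
```

The fitted model is only trustworthy to the extent that future failures resemble those in the training records, which is exactly the limitation discussed above.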