Recently, I heard someone say that in sixty percent of heart failures, the No. 1 symptom is death. It is a sobering statistic, but one has to acknowledge the possibility that certain symptoms were ignored or simply went unnoticed. We all suffer, to some extent, from the fear of receiving bad news. Subconsciously, many of us are reluctant to investigate a mild but persistent pain or discomfort for this very reason. We know that such an investigation will likely have one of three possible outcomes:

  1. The condition was benign, petty, inconsequential or imaginary

  2. The condition was real but controllable by medication or surgery

  3. The condition was real but inoperable and terminal

If the first outcome occurs, we are blessed with restored peace of mind. Outcome No. 2 is disruptive, but not life-threatening. Outcome No. 3, which may occur in only one or two percent of cases, is the one we dread most. Because its news is so grim, many people, men in particular, postpone a visit to the doctor … that is, until the visit is expedited by a ride in an ambulance and an (often involuntary) trip to the emergency room.

Neurosis or Happy State of Ignorance?
Oil analysis and maintenance have always had many human health parallels, including behavioral analogies. For instance, it is often said that expensive and disruptive improvements in maintenance practices are best proposed only after an expensive and disruptive machine failure has occurred. Similarly, many people don’t make needed lifestyle changes (diet, exercise, smoking) until after they’ve survived a life-threatening health condition.

Consider the hypochondriacal behavior of certain people who go to the other extreme, making frequent and unnecessary visits to their doctor or emergency room. This too has a maintenance parallel: the excessive use of condition monitoring technologies and machine inspections. Machine condition monitoring information and data come at a price and therefore should be used prudently. Many failures are unintentionally induced by human agency, frequently as a consequence of unnecessary internal machine inspections (if it ain’t broke …).

On one extreme is the happy state of ignorance (no monitoring or inspections), and on the other is the wasteful state of neurosis associated with condition monitoring overkill. Defining the optimum point between these extremes depends greatly on the machine and its application. A simplified representation of this is shown in Table 1.


Table 1. Balancing Risk with Machine Condition Information (Balanced, light green; Unbalanced, white)

Watch Out for Data Blind Spots
One good way to start building a balanced condition monitoring program is to list the questions you want answered. Prioritize this list. Poll other stakeholders and compile their collective opinions about the machines to be monitored. Next, design your oil analysis program to deliver the information that best answers those questions. Program design features include sampling methods, sampling frequency, test slate, alarms and data interpretation strategy.

Next, give your oil analysis program peripheral vision by avoiding blind spots. Blind spots are questions that are difficult or impossible to answer because of program design deficiencies. They generally involve questions that fall outside the primary program objectives (the field of view). Some blind spots may seem trivial or unimportant, while others may relate to problems that could cripple a machine or process if undetected. Oil analysis blind spots can occur for the following reasons:

  • Wrong sampling frequency. The potential for rapid failure development periods (remaining useful life, or RUL, descents) needs to be factored into the sampling frequency and location.

  • Wrong or inadequate tests performed. Some oil analysis test slates are too streamlined to effectively answer both primary and peripheral questions.

  • Data masking, sensitivity, resolution or calibration errors. It is vitally important that oil analysis labs, instruments and procedures are carefully selected to produce accurate information in the target range of sensitivity.
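The sampling-frequency point above can be sketched as a simple rule-of-thumb calculation. The half-interval rule and the numbers below are illustrative assumptions only, not prescriptions; real intervals depend on the machine, its criticality and its failure modes:

```python
def recommended_sampling_interval_days(failure_development_days: float,
                                       safety_factor: float = 2.0) -> float:
    """Rule of thumb (assumption): sample at least `safety_factor` times
    within the shortest credible failure development period, so a rapid
    RUL descent cannot slip between two consecutive samples."""
    if failure_development_days <= 0:
        raise ValueError("failure development period must be positive")
    return failure_development_days / safety_factor

# A component whose fastest credible failure mode develops over 60 days
# would be sampled at least every 30 days under this rule.
interval = recommended_sampling_interval_days(60)  # 30.0
```

The safety factor simply guarantees that at least one sample lands inside any failure development window of the stated length; a higher factor buys earlier detection at the cost of more samples.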

Hair Triggers
While you’re designing your oil analysis program and setting alarms and limits, beware of weak signals. Unlike most blind spots, where the data is simply not in view, weak signals suffer from data interpretation blindness. One version of a weak signal is sometimes referred to as a “hair trigger”: commonly overlooked or seemingly benign oil analysis data that actually reveals a condition that could rapidly escalate into operational tragedy.

For instance, seemingly mild coolant leaks, oxide insolubles and viscosity excursions are often dismissed as unimportant or simply not picked up by the preset alarms. Then, only days later and to everyone’s dismay, the machine fails. Another example is movement in certain wear metals of only a few parts per million, which is sometimes sufficient to justify immediate investigation.
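The wear-metal example above can be expressed as a simple rate-of-change check against a baseline, rather than a fixed absolute limit. This is a minimal sketch; the 3 ppm threshold and the averaging baseline are illustrative assumptions, and real alarm limits depend on the machine, the metal and the lab’s reporting precision:

```python
def hair_trigger_alarm(history_ppm, latest_ppm, max_rise_ppm=3.0):
    """Flag a small but abrupt rise in a wear-metal reading.

    Compares the latest reading against the average of prior readings;
    a rise of only a few ppm above a stable baseline can justify
    immediate investigation even when the absolute level looks benign.
    """
    if not history_ppm:
        return False  # no baseline yet; nothing to compare against
    baseline = sum(history_ppm) / len(history_ppm)
    return (latest_ppm - baseline) >= max_rise_ppm

# A jump from a ~2 ppm iron baseline to 6 ppm is only "a few ppm",
# yet under this rule it would trigger an immediate investigation.
flag = hair_trigger_alarm([2.0, 2.5, 1.8], 6.0)  # True
```

A fixed absolute alarm set well above 6 ppm would miss this event entirely, which is exactly the data interpretation blindness the hair-trigger concept warns about.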

Not all weak signals are hair triggers. Many provide the earliest indication of a progressive but correctable problem. Catching and correcting incipient (early-stage) problems is a virtue in the world of maintenance and reliability. There is no better strategy for this than designing an oil analysis program with good eyes (no blind spots) and good ears (it can hear the weak signals).