Surviving the Oil Analysis Data Jungle

Jim Fitch, Noria Corporation
Tags: oil analysis

For many companies with oil analysis programs, the routine arrival of paper reports from the laboratory produces a familiar glazed-eye response among recipients. Sometimes hundreds, even thousands, of samples are analyzed each month, generating formidable stockpiles of data. Without an efficient means to reduce, manage, and interpret that data, and to demonstrate quantifiable savings, the program may lose critical support from management and the interest of those involved.

One obvious approach to managing the “data jungle” is to deploy computers and software. Many oil analysis software products on the market offer a vast array of data processing capabilities. Some of these products are, in fact, offshoots of highly sophisticated programs used in vibration analysis, CMMS, and other maintenance applications. Indeed, there is an interesting advantage when oil analysis borrows the learning curve from an advanced field like vibration: not only is more than 30 years of software evolution leveraged, but the approach and presentation style remain familiar to those who first entered the PdM world through vibration.

Software also helps oil analysis users efficiently manage and respond to exceptions in a number of ways. Consider that, on average, only a small percentage (say, 10-20 percent) of samples analyzed will report non-conforming data. Therefore, it only makes sense to use the power of the computer to quickly and efficiently remove “conforming” data from human view. This is routinely accomplished by defining limits, or alarms, for all data sets and allowing the computer to compare current results to these preset limits. Most oil analysis software provides this capability, although considerable time may be needed during the setup stage. Once setup is complete, when new data arrives electronically from the laboratory, only the flagged samples are presented, by auto-exception, to the onsite technician. The technician can then drill down for a closer look at the problem sample's data in either tabular or graphical form.
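The exception-based screening described above can be sketched in a few lines of code. This is a minimal illustration, not any particular software product's logic; the parameter names and limit values below are hypothetical examples, not laboratory standards, and real programs would use limits tuned to each machine and lubricant.

```python
# Hypothetical alarm limits for a single machine class.
# Parameter names and values are illustrative only.
ALARM_LIMITS = {
    "iron_ppm": 100,    # example wear-metal upper limit
    "water_ppm": 500,   # example moisture contamination upper limit
}

def flag_exceptions(samples):
    """Return only the samples with at least one result over its preset limit,
    so conforming data never reaches the technician's screen."""
    flagged = []
    for sample in samples:
        breaches = {
            param: value
            for param, value in sample["results"].items()
            if param in ALARM_LIMITS and value > ALARM_LIMITS[param]
        }
        if breaches:
            flagged.append({"id": sample["id"], "breaches": breaches})
    return flagged

# Example batch: one conforming sample, one non-conforming sample.
samples = [
    {"id": "GB-101", "results": {"iron_ppm": 45, "water_ppm": 120}},
    {"id": "GB-102", "results": {"iron_ppm": 150, "water_ppm": 800}},
]
print(flag_exceptions(samples))
# Only GB-102 is flagged; GB-101's conforming data is filtered out.
```

The payoff is the ratio: if only 10-20 percent of samples breach a limit, the technician's review workload shrinks by 80-90 percent.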

With this capability, top-performing practitioners in oil analysis have learned not to rely on outside laboratories to define limits, set baselines, or even interpret non-conforming results. Still, practically speaking, the notion that computers can do it all is many years away, perhaps decades. One reason, as those skilled in the field will often state, is that interpreting data and troubleshooting is far more art than science. This presents a daunting task for those just entering the field. Still, through training and the many enabling tools these software programs provide, the task today is easier and more effective than it has ever been.

And speaking of training, it would be remiss not to emphasize its importance in surviving the data jungle. Even with the aid of a computer, technical data can be intimidating. With training, though, technicians gain confidence and acquire the skill to move briskly through the reports. More importantly, they become fascinated by the challenge of detecting, diagnosing, and solving problems, and with that comes pride in providing a value-building service in the world of reliability.

About the Author

Jim Fitch, a founder and CEO of Noria Corporation, has a wealth of experience in lubrication, oil analysis, and machinery failure investigations. He has advised hundreds of companies on developing their lubrication and oil analysis programs. Contact Jim at
