Re-Engineering Oil Analysis with Information Technology

Paul Higgins, Dingo Maintenance Systems
Tags: oil analysis

Today, global competition and thinning margins in most industries mean that cost control sits high on every manager's list of objectives. Most organizations have already squeezed costs hard, and are now looking for areas where costs can be cut further with minimal risk.

Information will continue to be the most important strategic weapon of the twenty-first century, and applying information technology to maintenance practices to improve the bottom line is smart business. A Brisbane, Australia company has been working with the industry for over eight years, re-engineering Oil Analysis using information technology to increase the effectiveness of this predictive maintenance technique. Simply put, the resulting system has allowed maintenance staff to confidently make better maintenance decisions that help avoid expensive equipment failures.

Background
Oil Analysis was first used in the US rail industry in the 1950s. The techniques and tests have evolved over time, but the management of Oil Analysis on site remained relatively unchanged until the early 1990s. Because of the cost and complexity of owning and operating the equipment required to perform meaningful tests, only the largest or most remote sites could justify the investment. The large majority instead rely on well-equipped oil analysis laboratories operated by oil suppliers, equipment manufacturers or independent operators.

A small sample bottle of oil taken from the equipment is generally hand labelled, then sent by mail to the laboratory, where the oil undergoes several tests. The results are compiled into a report which is mailed, or in the case of an alert, faxed directly to the site.

Around 1990, a number of proactive mining and industrial organizations began examining ways to apply IT to further improve the Oil Analysis management process. A local maintenance IT supplier also received Federal Government funding to work with these organizations to develop an Oil Analysis management system based on emerging maintenance and Oil Analysis best practice regimes. The improvements resulting from this long-term collaborative re-engineering effort are summarised in Table 1 and discussed below.

Process Step | Before Re-Engineering | After Re-Engineering
Sample labels | Hand written; error prone; time consuming | Computer generated; error free; takes seconds
Laboratory processing | Manual tests; reports compiled by hand | Automated tests; software collects and compiles reports
Communication | Results posted by mail or fax | Results delivered over the internet
Reporting | Paper overload | Reports on demand
Alarm limits | Lab alarm limits based on oil or machine | User alarm limits based on user data
Access to information | Limited by volume and location | Shared with all over the network
Interpretation of trends and history | Slow numerical paper reports; alarm conditions notified by fax/phone; limited history; no trending; filing cabinet; time consuming | Fast computer graphs; automated color-coded exception reports; full history; automated trends; electronic, online; instant recall

Table 1

Sample Management
The process of manually labelling oil sample bottles is both error prone and time consuming. Mistakes can lead to inaccurate diagnoses and, in some cases, can actually cause downtime, as "falsely accused" equipment is re-sampled and/or inspected. Another issue is sample tracking: knowing which samples have been sent and which results have been received back. Both of these manual processes lend themselves perfectly to re-engineering using database technology.

This problem has been addressed by using software to automatically generate sample labels. These labels can be printed either individually or as a batch to match varying sample intervals for different equipment components. Sample labels are accurate every time, conforming to OEMs' recommended oil testing schedules or site requirements. Printed sample labels do not wear off or smudge and can be relied on to give the testing laboratory an accurate component description. Advanced systems also track top-up oil and unit hours to enable wear rate trending. Label batches can be saved and reused, eliminating the need to recreate individual sample labels for each component.
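The batch-generation idea can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual system; the field names, ID format and fleet data are all hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Component:
    equipment_id: str
    component: str
    interval_hours: int  # sample interval recommended by the OEM or site

def generate_labels(components, unit_hours, batch_date):
    """Generate one label record per component due for sampling.

    Each record carries a unique sample ID, so results can be matched
    back to the right component when they return from the laboratory.
    """
    labels = []
    for c in components:
        labels.append({
            "sample_id": f"{c.equipment_id}-{c.component}-{batch_date.isoformat()}",
            "equipment": c.equipment_id,
            "component": c.component,
            "unit_hours": unit_hours[c.equipment_id],  # for wear rate trending
            "date": batch_date.isoformat(),
        })
    return labels

# Hypothetical haul truck with two sampled components.
fleet = [
    Component("HT-101", "engine", 250),
    Component("HT-101", "final-drive", 500),
]
batch = generate_labels(fleet, {"HT-101": 12_340}, date(2024, 3, 1))
```

Because each label is generated from the equipment database rather than written by hand, the component description is correct every time and the batch can be regenerated on demand.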

Rodney Roberts, senior maintenance scheduler at Peabody's Moura Mine in Queensland, Australia, estimates that re-engineering this process has cut sample management time from two hours per day to one hour per week.

Laboratory Processing
Laboratories test oil on a range of scientific instruments with fancy names like FTIR spectrometers, gas chromatographs, viscometers and Inductively Coupled Plasma (ICP) machines. Oil Analysis laboratories have evolved and now use IT extensively to improve both the accuracy and the speed of results. Laboratory Information Management Systems (LIMS) track each sample and collate the results into electronic files for each customer daily. This results in faster turnaround time for samples.

Communication Speed
More innovative laboratories have been consistently improving communication speed by applying new technology: first by mailing floppy disks, then by bulletin boards, and now by e-mail and the internet. The bottom line is that results are getting from the lab to site faster and cheaper than ever before. McCoy/Cove, a large gold and silver mine operated by Echo Bay Minerals in Nevada, now receives sample results at the mine site for interpretation at 2:30 pm the day after the samples are shipped. Problem investigation and corrective action occur 12 - 24 hours earlier than previously, and the weekend backlog has been eliminated. Re-engineering potentially saves 24 hours of machine downtime and gives a day's extra notice of impending problems.

Interpretation of Trends and History
The technical nature of the laboratory environment leads to large amounts of numerical data being generated. This makes interpreting results in context difficult and complex. It is like hearing a spot share price without knowing which way the stock is trending.

Powerful trending tools now enable matching current results against historical data to give a complete picture of a component's current health relative to both its own sample history and that of other like components. User-defined graph settings ensure only the predictors of importance are highlighted for specific equipment items. In addition to trending actual test results, wear rate calculations can be used to help indicate whether results are normal or outside expected ranges.
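The wear rate idea can be illustrated with a short sketch: instead of looking at raw elemental readings, normalise the change between successive samples by the hours run on the oil. This is a simplified illustration (it ignores top-up dilution, which real systems correct for); the sample history below is invented.

```python
def wear_rates(samples):
    """Iron generated per 100 operating hours between successive samples.

    samples: list of (unit_hours, iron_ppm) pairs, oldest first,
    all taken on the same oil charge.
    """
    rates = []
    for (h0, fe0), (h1, fe1) in zip(samples, samples[1:]):
        rates.append(100 * (fe1 - fe0) / (h1 - h0))
    return rates

# Hypothetical history: iron climbs from 10 to 40 ppm over 500 hours.
history = [(1000, 10), (1250, 18), (1500, 40)]
print(wear_rates(history))  # → [3.2, 8.8]
```

In this invented example the raw reading of 40 ppm might still sit inside a generic lab limit, but the wear rate has nearly tripled between samples, which is exactly the kind of change trending is designed to surface.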

Paper Overload
Once the lab has the data in a report, it is usually mailed or faxed back to the customer who originally sent the sample. This process in itself causes problems. Take a fleet of 100 machines running continuously, each with an average of 10 components: two sampled every 250 hours and eight every 500 hours. This equates to about 1,500 individual sample reports per year. Some customers are doing over 25,000 samples per year, which is over 60 reams of paper every 12 months! The cost of collecting, reading, analysing and storing this quantity of paper-based information is substantial, and tends to render a good technique almost completely impractical.
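The report count scales directly with fleet size, sampling schedule and annual operating hours, which a small calculator makes easy to see. This is an illustrative sketch only; the 2,000-hour utilisation figure below is an assumption, not a number from the article.

```python
def annual_samples(machines, schedule, operating_hours_per_year):
    """Estimate sample reports per year for a fleet.

    schedule: list of (component_count, interval_hours) pairs,
    e.g. two components sampled every 250 hours -> (2, 250).
    """
    per_machine = sum(n * operating_hours_per_year // interval
                      for n, interval in schedule)
    return machines * per_machine

# Fleet from the article: 100 machines, two components sampled every
# 250 hours and eight every 500 hours, at an assumed 2,000 hours/year.
print(annual_samples(100, [(2, 250), (8, 500)], 2000))  # → 4800
```

Even under modest utilisation assumptions the count runs into the thousands, which is why paper handling, rather than the testing itself, becomes the bottleneck.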

Through re-engineering, computerized capture and interpretation of results eliminates:

- Time spent reading and filing oil analysis results for equipment that does not have a problem requiring attention. Software automatically stores this information away and only flags results requiring attention.

- Filing time for all paper results

- Paper retrieval time, i.e., time spent collecting and analyzing the last few samples from a particular piece of equipment to determine trends. The software graphs trends for equipment across one or more predictors (e.g., both iron and copper).
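The exception-reporting idea is simple to sketch: rather than a person reading every report, software compares each result against limits and surfaces only the breaches. The function, severity names and sample data below are hypothetical, not taken from any particular product.

```python
def exception_report(results, limits):
    """Return only the results that breach a limit, tagged with severity.

    results: list of dicts with an "equipment" name and a "readings" map.
    limits:  per-metric dict of {"caution": x, "critical": y} thresholds.
    """
    flagged = []
    for r in results:
        for metric, value in r["readings"].items():
            lim = limits.get(metric)
            if lim and value >= lim["critical"]:
                flagged.append({**r, "metric": metric, "severity": "critical"})
            elif lim and value >= lim["caution"]:
                flagged.append({**r, "metric": metric, "severity": "caution"})
    return flagged

# Hypothetical iron limits and two incoming results.
limits = {"iron": {"caution": 17.0, "critical": 19.0}}
results = [
    {"equipment": "HT-101", "readings": {"iron": 12}},
    {"equipment": "HT-102", "readings": {"iron": 21}},
]
flagged = exception_report(results, limits)
```

In this invented example only HT-102 appears in the output; the healthy HT-101 result is stored silently, which is precisely the filing and reading time the list above says is eliminated.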

Alarm Limits
A key benefit of database technology is that information can be more easily dissected and analysed. Custom databases now provide information relevant to very specific applications. Information technology is helping maintenance managers to develop their own “alarm thresholds” by performing statistical analysis of their oil analysis sample results. This provides a powerful addition to generic information provided by external suppliers. Results can be filtered and alarms viewed for a date range for a specific piece of equipment or alarms of a set level. For larger organizations with multiple sites, this provides even more competitive advantage as internal company benchmarks can be established and alarm limits can be based on group wide data.
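One common way to derive site-specific thresholds from historical results is a mean-plus-k-sigma scheme, sketched below. The k values and the iron history are assumptions for illustration; real systems may use different statistics or per-component baselines.

```python
from statistics import mean, stdev

def alarm_limits(readings, caution_k=2.0, critical_k=3.0):
    """Derive alarm thresholds from a site's own sample history.

    Flags results more than caution_k (or critical_k) sample standard
    deviations above the historical mean. The k values are assumed.
    """
    mu, sigma = mean(readings), stdev(readings)
    return {"caution": mu + caution_k * sigma,
            "critical": mu + critical_k * sigma}

# Hypothetical iron (ppm) history for one component class at one site.
iron_ppm_history = [12, 15, 11, 14, 13, 16, 12, 15]
limits = alarm_limits(iron_ppm_history)
```

Because the thresholds come from the site's own data rather than a generic oil or machine table, a component whose normal iron level is unusually low can still trigger an alarm well before it reaches a lab's one-size-fits-all limit.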

Access to Information
Paper analysis reports often remain locked in archive files placed on the top shelf in a Planner’s office to be referenced only after a machine has failed and damage has been done. Existing investments in computers and networks can be leveraged to provide summarized “exception” data to all those interested, so better decisions can be made.

Terry Dixon from Robe River's Cape Lambert site recently re-engineered his Oil Analysis program using information technology. "The most significant benefit from changing to the Electronic System, from the old Paper System, is the sharing of information. As soon as the Oil Analysis Results are sent by e-mail from the lab to us, and are transferred to our Oil Analysis Program, they are immediately available to everyone who has access to it. Each of the separate Maintenance Area Supervisors, Maintenance Department Heads, Maintenance Planners and the Shift Fitters now all have access to latest information as soon as it is downloaded from the lab on their Desk-top computers. Another major benefit we have found, is that anyone with access can, at any time of the day, find out details of specific Gear boxes, or details of Lubricants, easily, on-screen, without having to search for a book or person, to obtain that information."

The Bottom Line?
Industry has to continue to innovate to reduce costs further in a competitive global market. By working with information technology suppliers, traditional cost-saving programs can be re-engineered to take advantage of technology, enhance maintenance effectiveness and improve the bottom line.