Estimating Values in the Absence of Real Data - Deploying the Delphi Method

Drew Troyer
Tags: oil analysis, oil oxidation

Many would-be reliability improvements stall due to a lack of data, and as a result, folks are unwilling or unable to make the decisions that would allow the project to move forward. However, decisions must be made, and improvements and change must proceed despite the absence of reliable data. This is true in all facets of life, and the plant’s reliability and oil analysis program is no exception. Making decisions is a skill that can be honed and improved with the use of helpful techniques and technologies.

All decisions that lead to action represent a leap of faith to one degree or another. Data, or facts, reduce the level of uncertainty one faces with respect to the decision, thus closing the gap. Think of all the variables that affect an oil analysis decision as a matrix, which includes the oil, the machine, operational details, the environment, etc. If one could accurately identify and clearly define all of those variables, decisions would not really be decisions, but rather programmed responses designed to optimize, minimize or maximize, depending upon one’s objectives. Decision makers and decision-making would become obsolete.

Unfortunately, many decisions can’t be neatly boxed into a matrix for optimization. Complete data simply doesn’t exist, so decisions must be made using subjective analysis, or based upon “gut” feeling. Engineers and other technical people are trained to analyze deterministic (or near-deterministic) data and to act based upon the conclusions of that analysis. Those trained in the use of data and systematic analysis methods are typically uncomfortable with this process. But the fact remains that decisions that keep things moving must be made. Even a decision not to decide is, by definition, a decision, so there really is no way out.

Where facts are sketchy, they must be interpolated or fleshed out using estimates. An entire branch of operations research is dedicated to making decisions under uncertainty, and it has produced tools such as Monte Carlo simulation and sensitivity analysis. In plant reliability engineering, many systematic tools require estimated values, ranging from the cost of a failure to machine criticality. In many cases these systematic tools, like the reliability penalty factor (RPF) wizard in Figure 1, appear to be quantitative, but they are in fact quasi-quantitative: a number is selected to represent an opinion. The number is supposed to ordinally express the magnitude or strength of one’s opinion, but it is dimensionless, and is therefore quasi-quantitative.
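
To make the idea of estimating under uncertainty concrete, the short Python sketch below runs a crude Monte Carlo simulation of annual failure cost. The failure-rate and repair-cost ranges are invented for illustration only; in practice they would come from plant records or from the group-estimation technique described later in this article.

```python
import random

# Illustrative ranges only -- in practice these would come from plant
# records or from a group estimate like the Delphi process described below.
FAILURES_PER_YEAR = (0.5, 3.0)        # assumed low/high estimate
COST_PER_FAILURE = (8_000, 40_000)    # assumed low/high estimate, dollars


def simulate_annual_cost(trials=10_000):
    """Draw a failure rate and a repair cost at random and combine them."""
    costs = []
    for _ in range(trials):
        rate = random.uniform(*FAILURES_PER_YEAR)
        cost = random.uniform(*COST_PER_FAILURE)
        costs.append(rate * cost)
    return costs


if __name__ == "__main__":
    costs = sorted(simulate_annual_cost())
    mean = sum(costs) / len(costs)
    p10 = costs[len(costs) // 10]
    p90 = costs[(9 * len(costs)) // 10]
    print(f"Expected annual failure cost: ${mean:,.0f}")
    print(f"80% of trials fall between ${p10:,.0f} and ${p90:,.0f}")
```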

Where facts are sketchy and opinion rules the day, good decisions can be made by tapping into the collective memory banks of the entire staff. Despite its inferiority to the computer at number-crunching and storing detailed records, the human mind is unmatched in its ability to synthesize multiple, seemingly unrelated variables and abstract concepts, reasoning both deductively and inductively. That is to say, humans have the power of reasoning. This remarkable capability gives us problem-solving skills and the ability to create.

It turns out that for producing quasi-quantitative estimates, groups generally do a better job than individuals. This fact has been well researched. However, groups often have funny dynamics. Suppose a group that includes the corporate vice president (VP) of equipment reliability, the plant’s equipment reliability manager, two plant reliability engineers and six plant reliability technicians gets together to score 25 machines for criticality using the RPF wizard. Some discussion about each machine precedes a public scoring of each variable, starting with the VP. How much do you think the VP’s opinion will affect the opinions of the other nine people? Research suggests that the most powerful person (whether that power is formal or informal) will bias the findings.

To tap into the collective knowledge of a group of experts without biasing influences, individuals at the RAND Corporation (a contraction of the words Research And Development) developed a method whereby the knowledge and experience of a targeted group of experts can be brought to bear on complex questions for which little or no empirical data exists. This is called the Delphi Approach. The Delphi Approach employs an anonymous methodology, thereby eliminating group bias. Essentially, participants are asked a set of questions. The collective responses are summarized (average, range, etc.) and returned to the individuals to review, but individual responses are held anonymous. The participants are asked to review the collective findings and change any answers based upon the collective response. The process goes through these iterations until convergence is reached. The major assumption of the method is that when polled anonymously and systematically, a group of people will reach a consensus that is relatively free of individual biases and of the biases produced by conventional techniques for group decision-making, and that the convergent answer is the best possible based on available data. One could assume that the amount of variation among the respondents in the final answer could support a statement of confidence about the collective opinion.
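
The arithmetic behind that feedback loop is simple enough to sketch. The Python fragment below summarizes one round of anonymous responses and applies one possible stopping rule (the spread of answers must fall within a chosen fraction of the group median). The tolerance value and the sample estimates are illustrative assumptions, not part of the RAND method.

```python
import statistics


def summarize(responses):
    """Summary statistics fed back (anonymously) to the participants."""
    return {
        "median": statistics.median(responses),
        "mean": statistics.mean(responses),
        "low": min(responses),
        "high": max(responses),
        "stdev": statistics.stdev(responses),
    }


def consensus_reached(responses, tolerance=0.15):
    """One possible stopping rule: the spread of opinion is small
    relative to the group's central estimate (tolerance is assumed)."""
    summary = summarize(responses)
    spread = summary["high"] - summary["low"]
    return spread <= tolerance * abs(summary["median"])


# Hypothetical anonymous estimates of a repair cost, before and after feedback
round_1 = [12_000, 25_000, 18_000, 40_000, 15_000]
round_2 = [18_000, 19_000, 18_000, 20_000, 17_500]

print(summarize(round_1), consensus_reached(round_1))  # wide spread -> False
print(summarize(round_2), consensus_reached(round_2))  # converged   -> True
```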

Two primary limitations of the classical Delphi Approach are the time required to complete it when participants are geographically scattered (e-mail and the Web help to overcome this), and the loss of the dynamic creativity that results when group discussion ensues, especially in live groups. Often, this dynamic is required up front to scope the nature and parameters of the process or the question.

To overcome the limitations of the classical Delphi Approach for use in business settings, numerous modifications have been devised. Commonly, a group is pulled together for the purpose of arriving at a decision, or to fill in missing information. Discussion about the problem follows the rules of brainstorming (you can provide as much positive input as you like, but you can’t negatively comment on another’s input) and is usually moderated by a neutral individual who may or may not have subject-matter expertise. Once the problems and/or questions are reduced to a set with defined responses (such as true/false options, or rating a parameter from 1 to 10), the participants are asked to respond anonymously. The moderator collects the results and reports the statistics to the group. The group may elect to run another iteration of the process. This decision usually depends upon the amount of variation, but two iterations are usually sufficient. Once a consensus is reached, the process ends and the outcome is accepted.

This powerful technique can be used to create numeric estimates in the absence of empirical data, to support group decisions of any kind, to forecast the future or to recreate the past. For maintenance, it can be used for RCM analysis, cost-benefit analysis, etc. The author has effectively employed a modified Delphi technique to assist a client in selecting a corporate lubricant supplier. There, the process began with a clear definition of the variables that were important to the client (such as services, product range, product quality, distributor network, etc.), followed by a Delphi-based weighting of the variables. The technique was then used to evaluate prospective suppliers based upon their presentations. A positive side effect of the process was that by defining and weighting the important variables, the overall value of the vendor presentation process improved because the purchasing company’s participants were asking clear, targeted questions. They had a bulleted list from which to work.
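
The scoring arithmetic behind that supplier selection is a straightforward weighted sum. The sketch below is illustrative only; the weights and ratings are invented, but the structure mirrors the process: Delphi-derived weights for each variable, Delphi-derived ratings of each candidate, and a weighted total to rank them.

```python
# Criterion weights agreed through the Delphi weighting step
# (the criteria come from the article; the numbers are invented).
weights = {
    "services": 0.30,
    "product range": 0.20,
    "product quality": 0.30,
    "distributor network": 0.20,
}

# Group-consensus ratings of each candidate, 1 (poor) to 10 (excellent)
ratings = {
    "Supplier A": {"services": 7, "product range": 9,
                   "product quality": 8, "distributor network": 6},
    "Supplier B": {"services": 9, "product range": 6,
                   "product quality": 7, "distributor network": 8},
}


def weighted_score(rating):
    """Weighted sum of a supplier's criterion ratings."""
    return sum(weights[criterion] * value for criterion, value in rating.items())


for supplier in sorted(ratings, key=lambda s: weighted_score(ratings[s]), reverse=True):
    print(f"{supplier}: {weighted_score(ratings[supplier]):.2f}")
```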

While subjective estimation is uncomfortable for technical people, the world goes on . . . so decisions must be made. Look to modified variations of the Delphi Approach to add some structure to the gut-feeling decision process, improve decision quality and increase your confidence in decisions. The organization can’t execute with confidence if there are doubts about the decision that led to action.

Instructions for Using the Modified Delphi Approach

1. Gather a group of informed stakeholders and a neutral facilitator to manage the process. The facilitator ensures the rules of brainstorming are followed so that no one person exerts too much influence over the group, and manages the mechanics of the process.

2. The facilitator describes the process and discusses the rules that must be observed to ensure a valid outcome.

3. The variables being estimated are revealed and discussed. The manner and order in which they are discussed are dictated by the situation. In some cases, the variables must be developed through a group brainstorming session.

4. At estimation points, the facilitator will ask for discussion to cease and for group members to privately cast their estimates/opinions.

5. The facilitator captures the information and displays the results, identifying the average and range (and the standard deviation if applicable). The group is instructed to continue avoiding discussion of the facts, opinions or findings.

6. If there is significant variance, the participants are asked to recast their opinions, giving the individuals a chance to rethink their answers relative to the group’s collective opinion.

7. The results are again tabulated and reported by the facilitator. The process can continue until consensus is reached or variance reduction ceases. Usually, two iterations are sufficient. A simple tally-and-iterate sketch follows this list.

8. Accept the findings and move to the next question or problem.
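
For facilitators who want a simple tally tool, the sketch below strings steps 4 through 7 together: collect a round of private estimates, report the statistics, and run a second round only if the spread of opinion remains wide. The 20 percent stopping rule and the two-round cap are working assumptions, not requirements of the method.

```python
import statistics


def collect_round(prompt, n_participants):
    """Step 4: each participant privately casts an estimate."""
    return [float(input(f"{prompt} (participant {i + 1}): "))
            for i in range(n_participants)]


def report(responses):
    """Step 5: display the average, range and standard deviation."""
    print(f"  average: {statistics.mean(responses):.2f}")
    print(f"  range:   {min(responses)} to {max(responses)}")
    print(f"  stdev:   {statistics.stdev(responses):.2f}")


def needs_another_round(responses, tolerance=0.20):
    """Step 6: recast opinions only if the spread is still wide
    (the 20 percent threshold is an assumption, not part of the method)."""
    spread = max(responses) - min(responses)
    return spread > tolerance * abs(statistics.median(responses))


def run_question(prompt, n_participants, max_rounds=2):
    """Steps 4 through 7: iterate, usually no more than twice."""
    responses = collect_round(prompt, n_participants)
    report(responses)
    for _ in range(max_rounds - 1):
        if not needs_another_round(responses):
            break
        responses = collect_round(prompt + " (revised)", n_participants)
        report(responses)
    return responses  # Step 8: accept the findings and move on
```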