Approach front-end process design quantitatively and you increase your ability to optimize its impact on profits.

Donald G. Reinertsen

0895-6308/99/$5.00 © 1999 Industrial Research Institute, Inc.

The portion of the new product development cycle between when work on a new idea could start and when it actually starts--the so-called Fuzzy Front End--is often lengthy, typically poorly understood, and usually full of opportunities for improvement. Consequently, there are substantial benefits to analyzing this stage quantitatively. In particular, such analysis sheds light on many important decisions regarding the structural design of the early development process. A key implication of such analysis is that front-end process structure should differ depending on the underlying economics of the specific situation. This in turn suggests that there are no universally applicable "best practices" for optimizing the Fuzzy Front End.

Sierra Online is a highly successful player in the computer games market. Product requirements in this market are difficult to forecast and new game designs are difficult to assess in their early stages. Thus, Sierra cannot afford to lose much time in the front end of its development process, and it doesn't. Sierra's speed of decision-making would amaze many larger companies. This speed does not come as a result of being sloppy, but rather from a front-end process design that is well-tuned to its market.

Sierra moves products through two progressive filters. About 100 ideas are proposed by people from all areas of the company at a semi-annual pitch session. The 8--10 ideas that look most promising are given limited budgets to be turned into rough prototype games. These prototypes have enough of the look and feel of the final game to permit making sound judgments about whether or not to invest in them. Interestingly, although there are annual R&D plans, they don't rigidly allocate resources for the following year. The limited investments to make working prototypes occur before any detailed business plan is prepared. At the same time, the number of ideas in the system is carefully controlled to prevent overloads from delaying idea flow.

Such deviations from traditional management doctrine are becoming increasingly common. Companies are realizing that traditional textbook front-end processes are not well-suited to today's development environment. It is the purpose of this article to provide a deeper understanding of the underlying logic of these changes.

Although the term Fuzzy Front End (FFE) first appeared in 1985, it rose to wide visibility in the early 1990s (1,2). Much of the current discussion has been oriented toward qualitative aspects of this key process stage (3). This article will adopt a more quantitative approach to key FFE process design decisions. I shall begin by framing the economics of this process stage and then examine some of the key process design decisions in light of this economic context. I believe that such a quantitative approach is critical to achieving a good process design, and that it yields specific insights that are not obvious to the qualitative observer.

Donald Reinertsen is president of Reinertsen & Associates in Redondo Beach, California. He consults to companies on the management of new product development processes, and has developed a variety of analytical techniques for assessing the product development process, including the quantification of the financial value of development speed while a consultant at McKinsey & Co. in 1983. He is co-author, with Preston G. Smith, of "Developing Products in Half the Time" (John Wiley, 1998). His latest book is "Managing the Design Factory: A Product Developer's Toolkit" (Free Press, 1998). He writes and speaks frequently on techniques for shortening development cycles, and teaches an executive course at California Institute of Technology on "Streamlining the Product Development Process." DonReinertsen@compuserve.com

The Quantitative View

The FFE is a step in a larger process, and like any subprocess it can be optimized. To do this optimization, we need to identify the precise outcome we are trying to optimize and how different process design choices will affect it.

It is useful to think of the FFE as a precursor to a betting process. At the end of the FFE we will put our investment in product development at risk in return for a chance to earn profits. From an economic perspective, the purpose of the FFE is to alter the economic terms of the bets that we place on product development. The expected value of these bets is dictated by the probability of success, the upside of success, and the downside associated with failure, as shown in Figure 1. We can influence the economics of our bet by altering any of these three factors.

Figure 1.--The expected value of a bet on a new product does not depend just on the probability of success, but also on the magnitude of upside gain and downside loss.

Describing the FFE in terms of its economics is the key to optimizing its profit impact. We do this by identifying measures of performance for the FFE and then assessing how changes in those measures affect profits, which lets us put a dollar value on Front End process design choices.

If we view the FFE as an opportunity processor, we can define three key measures of performance for it:

  • The expense to screen an opportunity.
  • The time to screen an opportunity.
  • The effectiveness of the screening process.

The first two measures are obvious, but the third is more subtle. The screening process can make two types of errors, either incorrectly rejecting a good idea or incorrectly accepting a bad idea. Incorrect acceptance has a cost because it can trigger an investment that later proves worthless. In contrast, an incorrect rejection has little cost when an organization has more good opportunities than resources, which is typical of most development organizations. These processing paths and their economic consequences are depicted in Figure 2.

Figure 2.--Front End processing errors can misclassify good ideas as bad or bad ideas as good, the latter doing the greatest economic damage.

To optimize process economics, we need to know how these measures of performance influence profitability. For example, consider the Front End process shown in Figure 3. This process looks at 150 ideas each year. Assume that with perfect hindsight we would conclude that 50 of these ideas are good and 100 bad. Our imperfect screening process passes through 10 good ideas and 2 bad ones. The average screening cost for each idea is $5,000. All passed-through ideas turn into projects, but bad ones are caught at the next checkpoint and killed. Therefore bad ideas incur only the additional expense required to reach the next checkpoint, which averages $200,000. (Assume that there is no cost to rejecting good ideas as long as you have more good ideas coming out of the process than you have resources to pursue.) The average cycle time through the process is six months and the average computed cost of delay for projects is $100,000 per month (4).

Figure 3.--Total process costs can be calculated by considering the expense of screening, cost of errors, and the cost of delay.

What are the economics of such a process? As Figure 3 shows, the total process cost is $7.15 million. This is composed of three costs: The screening expense is $750,000; the cost of ineffective screening is $400,000; the cost of delaying the 10 good projects by 6 months each is $6,000,000. Thus, total process cost is dominated by the cost of delay.
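This arithmetic is easy to reproduce. The following sketch (in Python; every figure comes from the example above) computes the three cost components and their total:

```python
# Front End economics of the Figure 3 example.
ideas_screened = 150                  # ideas entering the process each year
screening_cost_per_idea = 5_000      # average cost to screen one idea
bad_ideas_passed = 2                  # bad ideas that slip through the screen
cost_per_bad_idea = 200_000          # spend before the next checkpoint kills it
good_ideas_passed = 10                # good ideas that become projects
delay_months = 6                      # average cycle time through the Front End
cost_of_delay_per_month = 100_000    # computed cost of delay per project

screening_expense = ideas_screened * screening_cost_per_idea             # $750,000
error_cost = bad_ideas_passed * cost_per_bad_idea                        # $400,000
delay_cost = good_ideas_passed * delay_months * cost_of_delay_per_month  # $6,000,000

total_process_cost = screening_expense + error_cost + delay_cost
print(f"Total process cost: ${total_process_cost:,}")  # Total process cost: $7,150,000
```

Note that the cost of delay accounts for roughly 84 percent of the total, which is why speed dominates the rest of the analysis.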

Once we understand our Front End economics we can consider optimizing them. For example, consider the economic effect of doubling each of the three key measures of performance. Doubling the effectiveness of our rejection of bad ideas would permit us to invest in one bad idea instead of two, saving $200,000. Doubling the efficiency with which we screen ideas would cut our screening expense in half, saving $375,000. Doubling the speed of the Front End process would cut our processing time in half, saving $3.0 million. Thus, in this particular example, improving FFE processing speed is a more sensible design objective than improving its screening efficiency.
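The same sensitivity comparison can be expressed directly (a sketch; the three baseline costs are those computed for the Figure 3 example):

```python
# Savings from doubling each key measure of performance.
screening_expense = 750_000   # baseline annual screening cost
error_cost = 400_000          # baseline cost of passing 2 bad ideas through
delay_cost = 6_000_000        # baseline cost of 6 months' delay on 10 projects

savings = {
    "double screening effectiveness": error_cost / 2,       # catch 1 of the 2 bad ideas
    "double screening efficiency":    screening_expense / 2, # halve screening expense
    "double process speed":           delay_cost / 2,        # 6 months becomes 3
}
best_improvement = max(savings, key=savings.get)
print(best_improvement)  # double process speed
```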

Rational Process Design

The key value in taking a quantitative view of FFE design is that it leads us to make better process design decisions. Let us take a look at eight of these decisions using this quantitative frame:

1. Required success rate for new product development.--Year after year, studies lament the low success rate for new product development. Only 1 out of 3,000 new product ideas succeeds (5). Some observers conclude that this means that 99.97 percent of the money that they spend on product development is wasted, and maintain that higher success rates are desirable. Such views display a poor understanding of product development economics.

In reality, unsuccessful product ideas usually consume a small portion of development spending because most failures are screened out before heavy investment is applied. For example, in the pharmaceutical industry the average cost of a candidate compound screened out before clinical trials is quite low because so many candidates get screened out in the early stages. As cheaper compounding and screening techniques become available, these costs are becoming even lower. In such situations, where failures cost much less than the benefits delivered by product success, lower success rates can make enormous economic sense.

Unfortunately, in the past, some pharmaceutical companies confused maximizing success rate with maximizing economics. Traditionally, they used a high-success-rate strategy, investing in the lead that has the best chance of success. Consider, however, the economics of a situation in which two possible candidates can be brought forward into Phase I clinical trials. The primary candidate has a 30-percent chance of success and the secondary candidate has a 15-percent chance. As Figure 4 illustrates, pursuing the single best lead maximizes the average success rate per candidate, but leads to lower expected profit. Although secondary candidates drag down success rates, multiple candidates increase the chance that one will work. Because the payoff from success is so high, even a small increase in the overall success rate easily pays for the cost of carrying forward multiple candidates. In this case, we can raise profits by lowering average success rates.

Figure 4.--An approach that maximizes average success rate does not also maximize expected profits.
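The structure of this argument can be sketched numerically. The payoff and trial-cost figures below are assumptions chosen for illustration (Figure 4 uses its own numbers); the two candidates are treated as independent, and only a single payoff is counted even if both succeed:

```python
p_primary, p_secondary = 0.30, 0.15   # success probabilities from the example above
payoff = 500e6      # profit if a compound succeeds (assumed for illustration)
trial_cost = 20e6   # cost of carrying one candidate through trials (assumed)

# Strategy 1: pursue only the best lead.
ev_single = p_primary * payoff - trial_cost               # expected profit: $130M
rate_single = p_primary                                   # 30% success per candidate

# Strategy 2: pursue both candidates (independent outcomes assumed).
p_at_least_one = 1 - (1 - p_primary) * (1 - p_secondary)  # 0.405
ev_both = p_at_least_one * payoff - 2 * trial_cost        # expected profit: $162.5M
rate_both = (p_primary + p_secondary) / 2                 # 22.5% success per candidate
```

The two-candidate strategy lowers the average success rate per candidate yet raises expected profit, which is exactly the trade the text describes.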

2. Number of filters.--Some observers note that many new products fail and conclude that adding extra filtering stages is the answer. They layer more and more screening criteria to ensure that nothing "bad" gets through. In fact, an additional filter should only be added when its benefits exceed its costs. For example, consider adding a rigorous screening step to the process shown previously in Figure 3. Assume that it would add 60 days of delay to the FFE and that this screen is so effective that it would screen out 100 percent of the bad ideas. Such a filter step would add $2 million in delay cost in return for decreasing the investment in bad ideas by $400,000. The filter's cost far exceeds its benefit. Surprisingly, even perfect filtering may not improve overall economics.
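The cost-benefit test for this filter takes only a few lines (numbers from the Figure 3 example):

```python
good_projects = 10
cost_of_delay_per_month = 100_000
added_delay_months = 2             # the 60 days the new screen adds
bad_idea_cost_avoided = 400_000    # the 2 x $200,000 the perfect screen now catches

delay_cost_added = good_projects * added_delay_months * cost_of_delay_per_month
net_benefit = bad_idea_cost_avoided - delay_cost_added
# net_benefit is -$1,600,000: the "perfect" filter destroys value
```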

3. Layout of the filters.--There are two possible ways to structure a series of filters in our front-end process: we can place them sequentially or we can run screening processes in parallel. For example, we could assess market feasibility first and then assess technical feasibility. Alternatively, we could do both assessments simultaneously. As Figure 5 illustrates, we can compare the economics of these two approaches. In this case, simultaneous screening can save money, despite the fact that it raises screening costs. Sequential screening is rational when screening expense is high and cost of delay is low. In such cases, sequential screening keeps expenses down because anything rejected by the first screen does not need to go to the next screen. However, when the cost of delay is high, the approach should be concurrent. Although a concurrent approach will require us to spend more money on screening, it reduces expensive critical path time. Put another way, wasting time is sometimes more expensive than wasting money.

Figure 5.--Sometimes concurrent filtering will have lower total costs than sequential filtering.
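A sketch of the comparison follows. The screen costs, pass rate, and one-month duration per screen are assumed for illustration (Figure 5 uses its own figures); the $100,000-per-month cost of delay is the one from the earlier example:

```python
cost_of_delay_per_month = 100_000
screen_cost = 3_000       # cost of one screening activity per idea (assumed)
months_per_screen = 1     # duration of one screening activity (assumed)
pass_rate_first = 0.5     # fraction surviving the first screen (assumed)

# Sequential: the second screen sees only survivors, but the critical
# path is two screens long.
seq_expense = screen_cost + pass_rate_first * screen_cost
seq_total = seq_expense + 2 * months_per_screen * cost_of_delay_per_month

# Concurrent: every idea pays for both screens, but only one screen's
# worth of time sits on the critical path.
conc_expense = 2 * screen_cost
conc_total = conc_expense + 1 * months_per_screen * cost_of_delay_per_month

print(conc_expense > seq_expense, conc_total < seq_total)  # True True
```

With these numbers, concurrent screening spends more on screening but wins decisively on total cost, because the month of critical-path time it saves is worth far more than the extra screening expense.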

4. Sequence of filters.--Some companies think that the sequence of filtering steps should be the same for all projects. Figure 6 illustrates the economic effects of sequencing filters in two different ways. In general, the filters that are cheapest (in both time and dollars) and that reject the most opportunities should be sequenced first. For example, in Figure 6 we can improve the economics of the screening process with the right sequence. The efficient sequence generates less than one-third the cost of the inefficient sequence.

Figure 6.--Process economics are better when filters with high filtering rates and low costs are sequenced first.

A key implication of this is that when the costs and probabilities of success for filters differ, the sequence in which we apply them should also differ. It is only rational to use a fixed sequence if success rates and costs do not change. Venture capitalists have always known this and concentrate on resolving the high-risk areas first, rather than using a fixed sequence.
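This sequencing rule can be made concrete. The expected cost of a filter sequence counts each filter's cost only against the ideas that survived the earlier filters; ordering filters by cost per rejection (cost divided by rejection probability, ascending) is the standard result for independent screens, not a formula from this article. The filter data below is assumed for illustration:

```python
def expected_cost(filters):
    """Expected screening cost per idea for filters applied in order.

    Each filter is (cost, pass_rate); later filters are paid for only
    by ideas that passed every earlier filter.
    """
    cost, surviving = 0.0, 1.0
    for filter_cost, pass_rate in filters:
        cost += surviving * filter_cost
        surviving *= pass_rate
    return cost

filters = [(50_000, 0.9), (5_000, 0.3)]   # (cost, pass rate), assumed data

inefficient = expected_cost(filters)  # expensive, weak filter first: ~$54,500
efficient = expected_cost(
    sorted(filters, key=lambda f: f[0] / (1 - f[1]))  # cost per rejection, ascending
)                                     # cheap, strong filter first: ~$20,000
```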

5. Sizing of process flow rates.--Most developers neglect this important analysis. Instead, they think they have designed a Front End process when they have laid out the sequence of steps. In fact, the sequence of steps is only one piece of process design. Setting flow rates is usually far more important because mismatches in flow rates can create large queues in processes. For example, it is common to see delays in business planning because too many opportunities enter this process stage. Ironically, two Front End processes with identical topology can differ radically in performance simply because of inattention to flow rate mismatches.

6. Size of process queues.--Managing process queues is of extraordinary importance when cycle time is expensive. You want enough work in the queue to keep resources from running dry, but not so much that opportunities grow stale while they wait for resources. At companies like Sierra Online, managers watch these process queues and dynamically adjust resources to manage their size. They realize that the size of these queues determines cycle time through their process.

7. Flow control for the process queues.--Many Front End processes miss the opportunity to control the flow of projects. Instead, they implicitly use a "first-in, first-out" processing strategy. Once a project has started, it remains ahead of all subsequent projects in the pipeline. This demonstrates a poor understanding of process economics. Whenever a process has work queues, we can choose the sequence in which we handle work within these queues. By sequencing high-cost-of-delay projects before those with lower costs of delay, we can improve overall process economics. A key implication of this is that it is a sign of health to see different projects travel through the front end at different speeds. We are most interested in processing time for high-cost-of-delay opportunities. This means that a system for setting priorities can improve process economics by accelerating important opportunities through the process.
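The economics of replacing first-in, first-out processing with cost-of-delay sequencing can be sketched as follows. Each project's delay cost is its cost of delay per month times the months that elapse before it completes; the project data below is assumed for illustration:

```python
# (name, processing months, cost of delay per month), in arrival order
projects = [("A", 1, 50_000), ("B", 1, 300_000), ("C", 1, 100_000)]

def total_delay_cost(queue):
    """Total delay cost when projects are processed one at a time in order."""
    elapsed, cost = 0, 0
    for _name, months, cost_of_delay in queue:
        elapsed += months                # this project completes here
        cost += elapsed * cost_of_delay  # it waited 'elapsed' months in total
    return cost

fifo_cost = total_delay_cost(projects)                                   # $950,000
priority_cost = total_delay_cost(sorted(projects, key=lambda p: -p[2]))  # $650,000
```

Simply resequencing the same work, with the same resources, cuts total delay cost by almost a third in this example.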

8. Batch size of the process.--Too many product developers give no conscious thought to the issue of batch size in their Front End processes. As a result, they dramatically suboptimize process economics. They default to using large process batch sizes, which leads inexorably to slow process cycle times. Let us look at two typical examples of bad batch sizing: annual planning and one-shot funding.

Most planning processes have excessively large batch sizes. A typical annual planning process may request that new opportunities be submitted in July for the R&D year beginning the following January. As Figure 7 indicates, this adds an average delay of 18 months to the processing of all opportunities. By moving to a quarterly planning process, we can cut this average delay to 6 months. For the process economics discussed earlier, 10 opportunities accelerated by 12 months each would be worth $12 million annually. Put another way, in such a case, if annual planning releases one year of work into the R&D process in a single batch, the batch size is too large--it wastes $12 million per year.

Figure 7.--Classic annual planning processes (top) have large batch sizes that add significant delays in the Fuzzy Front End.
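The batch-size arithmetic from the text is simple to reproduce:

```python
good_opportunities_per_year = 10
cost_of_delay_per_month = 100_000     # from the earlier example

annual_planning_avg_delay = 18        # average months of delay (Figure 7)
quarterly_planning_avg_delay = 6      # average months of delay

annual_saving = (good_opportunities_per_year
                 * (annual_planning_avg_delay - quarterly_planning_avg_delay)
                 * cost_of_delay_per_month)
print(f"${annual_saving:,} per year")  # $12,000,000 per year
```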

Another common batch size mistake is to fund programs in excessively large increments. Companies either fund the entire development program or none of it. Such an approach makes the decision to begin a project a high-stakes choice. In contrast, piecewise funding lowers the consequences of failure by putting less at risk. And, with less at risk, we don't spend months agonizing over the decision to proceed.

Better than Benchmarking

I would argue that a quantitative approach is not merely helpful in structuring a Front End process but of paramount importance. None of the critical process design decisions discussed earlier can be made soundly without such quantitative methods.

Not all product developers are capable of using such an approach. Many would prefer to benchmark companies they admire and collect "best practices" to implement in their own process. Unfortunately, this is a bit like designing a car by choosing the tires from one, the transmission from another, and the engine from a third. It will probably result in a functioning vehicle but it will be vastly inferior to a vehicle that benefited from real system design. Instead, approach process design in a methodical and quantitative manner. The foundation of such an approach lies in understanding process economics because they govern process design choices. Guided by a sound understanding of process economics, you can make the Fuzzy Front End a lot less fuzzy.

References and Notes

  1. Reinertsen, Donald G. "Blitzkrieg product development: Cut development time in half." Electronic Business, January 15, 1985.
  2. The first book to popularize this term was the 1991 edition of Developing Products in Half the Time. This book is now in its second edition: Smith, Preston G. and Reinertsen, Donald G. Developing Products in Half the Time: New Rules, New Tools. New York: John Wiley & Sons, 1998.
  3. For example, see Khurana, Anil and Rosenthal, Stephen R. "Integrating the Fuzzy Front End of New Product Development." Sloan Management Review, Winter 1997, vol. 38, No. 2, pp. 103--120. This article gives an excellent treatment of qualitative factors and concludes that ". . . not all companies should adopt the same front-end solution."
  4. The standard method for calculating such cost is discussed in Chapter 2 of Developing Products in Half the Time (2).
  5. Stevens, Greg A. and Burley, James. "3000 Raw Ideas = 1 Commercial Success!" Research • Technology Management, May--June 1997, pp. 16--27.
  6. These issues are treated in Chapter 3 of Reinertsen, Donald G., Managing the Design Factory: A Product Developer's Toolkit. New York, The Free Press, 1998.

Using the Right Math

In this article, we have simplified the problem of setting capacity in the Fuzzy Front End. We have treated the Front End as a deterministic process that will not experience delays until it is loaded to 100-percent utilization. In reality, the delays in the Fuzzy Front End will be severe if we try to load to this level of utilization.

In fact, there is a great deal of variation in both the arrival rate of opportunities and the processing time for individual opportunities. This means that the math that applies to Front End process design is the math of stochastic systems (queueing theory), not that of deterministic systems. A key conclusion of such math is that we cannot load the process to full utilization. We must load to less than full utilization to avoid large process queues.
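The effect is easy to see with the simplest stochastic model, the M/M/1 queue, whose average queue length is rho^2/(1 - rho) at utilization rho. This standard queueing formula is offered as an illustration; the argument here does not depend on this particular model:

```python
def mm1_avg_queue_length(utilization):
    """Average number waiting in an M/M/1 queue at the given utilization (0 <= rho < 1)."""
    return utilization ** 2 / (1 - utilization)

# Queues grow explosively as utilization approaches 100 percent.
for rho in (0.50, 0.80, 0.90, 0.95):
    print(f"utilization {rho:.0%}: average queue {mm1_avg_queue_length(rho):.2f}")
```

At 50 percent utilization an average of half a job waits in queue; at 95 percent, roughly eighteen do. This is why a Front End loaded to full utilization will suffer severe delays.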

Applying queueing theory to development process design yields many important insights about how to design and control these processes. For example, it demonstrates that we can do a better job of controlling cycle time by monitoring process queues than by monitoring cycle times (6).--D.G.R.
