Pulling the Point Estimate Together and Comparing to an Independent Cost Estimate

After each WBS element has been estimated with one of the methods discussed in this chapter, the elements should be added together to arrive at the total point estimate. Having developed the overall point estimate, the cost estimator must then:

  • validate the estimate through a quality control process by looking for errors like incorrect spreadsheet formulas, double-counting, omitted costs, and mismatched costs between documents;
  • perform cross-checks on cost drivers to see if results are similar;
  • perform a sensitivity analysis to examine the effects of changing ground rules and assumptions (Step 8; see chapter 11);
  • conduct a risk and uncertainty analysis to assess the variability in the point estimate (Step 9); and
  • update the model as more data become available or as changes occur and compare the results against previous estimates (Step 12).
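The mechanics of the first steps — rolling the WBS element estimates up into a point estimate and varying one assumption at a time for a sensitivity analysis — can be sketched in a short script. The element names, dollar values, and the 15 percent swing below are hypothetical and are shown only to illustrate the arithmetic, not any particular program:

```python
# Hypothetical WBS element estimates (in $ millions); in practice these
# would come from the estimating methods discussed in this chapter.
wbs_estimates = {
    "Air Vehicle": 120.0,
    "Systems Engineering": 35.0,
    "Program Management": 18.0,
    "Training": 9.5,
}

# Point estimate: the sum of the individual WBS element estimates.
point_estimate = sum(wbs_estimates.values())
print(f"Point estimate: ${point_estimate:.1f}M")

# One-at-a-time sensitivity analysis: apply a notional +/-15% swing to
# each element in turn and record the effect on the total estimate.
SWING = 0.15
for element, cost in wbs_estimates.items():
    low = point_estimate - cost * SWING
    high = point_estimate + cost * SWING
    print(f"{element}: total ranges ${low:.1f}M to ${high:.1f}M")
```

A script like this also supports the quality-control step: because the roll-up is explicit, errors such as double-counting or omitted elements are easier to spot than in a large spreadsheet.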

These steps help validate the estimate. The cost estimator should also compare the point estimate to an independent cost estimate (ICE), and the two estimates should be reconciled. An ICE gives an objective measure of whether the point estimate is reasonable. Differences between the estimates should be examined and discussed to develop an understanding of overall program risk and to adjust the risk range around the point estimate.

An ICE is considered one of the best and most reliable methods for validating an estimate. ICEs are typically performed by organizations higher in the decision-making process than the office performing the baseline estimate, and that are independent of the acquisition chain of command. An ICE provides an independent view of expected program costs that tests the program office’s estimate for reasonableness. Therefore, ICEs can provide decision-makers with additional insights into a program’s potential costs—in part, because they frequently use different methods and are less burdened with organizational bias. Moreover, ICEs tend to incorporate adequate risk and, therefore, tend to be more conservative by forecasting higher costs than the program office.

The ICE is usually developed from the same technical baseline description and ground rules that the program office used, so that the estimates are comparable. An ICE's major benefit is that it provides an objective, unbiased assessment of whether the program estimate can be achieved, reducing the risk that the program will proceed underfunded. It can also serve as a benchmark for assessing the reasonableness of a contractor's proposed costs, improving management's ability to make sound investment decisions and to assess the contractor's performance accurately.

In most cases, the ICE team does not have insight into daily program events, so it is usually forced to estimate at a higher level or use analogous estimating techniques. It is, in fact, expected that the ICE team will use different estimating techniques and, where possible, different data sources from those used to develop the baseline estimate. It is important for the ICE team and the program’s cost estimate team to reconcile the two estimates, as in Case Study 15.

Case Study 15: Independent Cost Estimates, from Coast Guard Acquisitions, GAO-18-600

To maintain heavy polar icebreaking capability, the Coast Guard and the Navy collaborated to acquire up to three new heavy polar icebreakers (HPIBs) through an integrated program office. The Navy planned to award a contract in 2019. GAO has found that before committing resources, successful acquisition programs begin with sound business cases, which include plans for a stable design, mature technologies, a reliable cost estimate, and a realistic schedule.

GAO’s review of the heavy polar icebreaker cost estimate—performed by the Naval Sea Systems Command Cost Engineering and Industrial Analysis Group (NAVSEA 05C)—determined it partially met the best practices associated with being credible, in part because the cost estimate was not fully reconciled with a comparable independent cost estimate. While the Naval Center for Cost Analysis performed an independent cost estimate of the HPIB, it used a different methodology from NAVSEA’s, and its estimate was based on an earlier version of the indicative ship design and associated technical baseline. NAVSEA officials told GAO that before the Coast Guard’s ship design team updated the indicative ship design and technical baseline, NAVSEA met twice with Naval Center for Cost Analysis to reconcile their results. However, NAVSEA officials told GAO that due to the speed at which the program was progressing, no reconciliation occurred after the ship design team finalized the indicative ship design. While GAO did not find any specific ground rules and assumptions that differed between the two estimates, some ship characteristics had changed, such as the weight estimates for propulsion and auxiliary systems, among others. The use of two different technical baselines created differences in the two estimates and made them less comparable to one another.

Two potential issues with ICEs are the degree of independence of the estimating team and the depth of the analysis. The degree of independence depends on how far removed the estimating team is from the program office: the greater the independence, the more detached and disinterested the cost estimator is with respect to the program’s success. The basic test for independence is whether the cost estimator can be influenced by the program office. Thus, independence is determined by the cost estimator’s position relative to the program office and by whether the two share a common superior. For example, if an independent cost estimator is hired by the program office, the estimator may be susceptible to success-oriented bias, and the ICE can become overly optimistic.

History shows a clear pattern: the further removed the ICE team is from the program office, the higher its cost estimates tend to be, because the team is more objective and less prone to accept optimistic assumptions. To be of value, however, an ICE must not only be performed by an entity far removed from the acquiring program office but must also be accepted by management as a valuable risk reduction resource for minimizing unrealistic expectations. While an ICE reveals to decision-makers any optimistic assumptions or overlooked items, in some cases management may choose to ignore it because the estimate is too high.

The second issue with an ICE is the depth of the review. The most rigorous independent review is an ICE. Other independent cost reviews address only a program’s high-value, high-risk, and high-interest elements and simply pass through the program office’s estimate for the other costs. While these types of cost reviews are useful to management, not all provide the thoroughness and objectivity necessary to ensure that the estimate going forward for a decision is valid.

After an ICE or independent review is completed, it is reconciled to the baseline estimate to ensure that both estimates are based on the same ground rules and assumptions. A synopsis of the estimates and their differences is then documented, justified, and presented to management. Using this information, decision-makers use the ICE or independent review to validate whether the program estimate is reasonable.
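As a minimal illustration of this reconciliation step, the sketch below compares a program office's element-level estimates against an ICE and flags any element whose difference exceeds a threshold that would trigger justification and discussion. The element names, dollar values, and the 10 percent threshold are all hypothetical:

```python
# Hypothetical element-level estimates (in $ millions) from the program
# office baseline and the independent cost estimate (ICE).
program_office = {"Air Vehicle": 120.0, "Systems Engineering": 35.0, "Training": 9.5}
ice = {"Air Vehicle": 138.0, "Systems Engineering": 36.0, "Training": 12.0}

THRESHOLD = 0.10  # notional 10% relative difference that triggers discussion

def reconcile(baseline: dict, independent: dict, threshold: float) -> list:
    """Return (element, % difference) pairs whose relative difference
    between the two estimates exceeds the threshold."""
    flagged = []
    for element, base_cost in baseline.items():
        delta = (independent[element] - base_cost) / base_cost
        if abs(delta) > threshold:
            flagged.append((element, round(delta * 100, 1)))
    return flagged

# Elements whose differences must be documented and justified to management.
print(reconcile(program_office, ice, THRESHOLD))
```

Only the flagged elements need detailed examination; elements within the threshold can be summarized briefly in the synopsis presented to management.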

It is important that cost estimators and organizations independent of the program office validate that all cost elements are reliable and can be justified by acceptable estimating methods, adequate and valid data, and detailed documentation. Independent reviewers help ensure that the estimate is free from bias. Validation ensures that a high-quality cost estimate is developed, presented, and defended to management. This process verifies that the cost estimate adequately reflects the program baseline and provides a reasonable estimate of how much it will cost to accomplish all tasks. It also confirms that the program cost estimate is traceable and accurate and reflects realistic assumptions.

Independent cost estimators typically rely on historical data and therefore tend to estimate more realistic program schedules and costs for state-of-the-art technologies. Moreover, independent cost estimators are less likely to automatically accept unproven assumptions associated with anticipated savings. That is, they bring more objectivity to their analyses, resulting in estimates that are less optimistic and higher in cost. The ICE team is typically outside the acquisition chain, is not associated with the program, and has nothing at stake with regard to program outcomes or funding decisions.

Some ICEs are mandated by law, such as those for DOD’s major acquisition programs, which are required to develop ICEs for major program milestones. The history of myriad DOD programs clearly shows that ICEs are usually higher than baseline estimates and tend to be closer to actual program costs. Thus, if a program cost estimate is close to ICE results, the program is more likely to request funding at a reasonable level.

Finally, as the program matures through its life cycle, as more data become available, or as changes occur, the cost estimator should update the point estimate. The updated point estimate should be compared against previous estimates, and lessons learned should be documented. (More detail is in chapter 15.)