Value Cost Management Report To Evaluate The Contractor’s Estimate At Completion


David S. Christensen

The earned value cost management report is a valuable management tool for project managers. Its long association with earned value management systems criteria (formerly cost/schedule control systems criteria) and the related technical jargon, however, may have caused some project managers to ignore the information that it can provide about the future performance of their projects. This article is a brief tutorial for project managers and others interested in using the report more effectively. Actual performance data from a failed project and important research results are used to describe three simple analysis techniques for evaluating the contractor’s projected final cost of a project, termed the estimate at completion.

For more than three decades, the Department of Defense (DoD) has required defense contractors to report detailed information about the cost and schedule status of a defense contract on a monthly cost management report, known as the cost performance report (CPR) or the cost/schedule status report (C/SSR). Earned value, or the budgeted cost of work performed, is a key performance metric on the report. [1] It is the basis for determining cost and schedule variances, and is often used as part of a formula to help estimate the final cost of the contract, termed the estimate at completion (EAC).

To assure the reliability of the CPR, defense contractors were also required to comply with generic management standards known as cost/schedule control systems criteria (C/SCSC). Although the management standards or criteria were sound, policies related to implementing them grew to become an administrative burden for the government and the contractor (Coopers and Lybrand, 1994; General Accounting Office, 1997). In addition, C/SCSC compliance reviews were typically managed by financial personnel and conducted like audits. As a result, the project manager often perceived the CPR as a financial rather than a management report, and did not use it as effectively as possible (Abba, 1995).

Recently, the implementation policies were revised to foster more cooperation between the government and the contractor and to establish the CPR as a management report. Overall responsibility for earned value management was moved from finance to project management in 1989. In 1996, the criteria were revised by industry, accepted by the government, and re-named earned value management systems (EVMS) criteria.

The basic concept of earned value management has not changed since its inception in 1967. In addition, the content of the cost management report has not changed, possibly because its value as a project management tool is widely recognized (Little, 1983 and 1984; DoD IG Audit Report, 1993; Office of Management and Budget, 1997; GAO, 1997). This article describes the basic data provided in the report and identifies a few techniques for converting the data into information useful to contract managers, project managers, and others involved in managing a major project. In particular, three ways to evaluate the reasonableness of the contractor’s EAC are described. Two of these involve comparing the project’s cumulative cost performance with its predicted future performance. The other technique involves comparing a range of estimates found to be accurate on a large number of completed projects with the predicted final cost of the ongoing project.

Although the example is taken from a large, criteria-consistent defense contract, the basic analysis techniques described in this article apply to projects of any size, government or commercial. Versions of the DoD’s EVMS criteria have been used for many years by nondefense agencies, including the Department of Transportation, Department of Energy, and National Aeronautics and Space Administration. In recent years, earned value management systems and the resulting data from those systems have been used to manage commercial projects in the United States and abroad (Abba, 1995).

THE TERMINOLOGY OF EARNED VALUE MANAGEMENT REPORTS

Terminology used in earned value management reports can be confusing. The acronyms alone number in the dozens. Regardless of the kind of project (defense, space, construction, etc.), however, only three basic data elements listed on the earned value management report are central to proper planning, measurement, and analysis: budgeted cost for work scheduled (BCWS), budgeted cost for work performed (BCWP), and actual cost of work performed (ACWP). Nearly all of the other data items may be derived from them.

The BCWS is the budget for work scheduled to be completed. It can be either monthly or cumulative. As a monthly amount, it represents the amount of work scheduled to be completed for that month. As a cumulative amount, it represents the amount of work scheduled to be completed to date. BCWS is also known as “planned value.”

The BCWP is the budget for the completed work. It also can be either monthly or cumulative. Monthly BCWP represents the amount of work completed during a month; cumulative BCWP represents the amount of work completed to date. BCWP is also known as “earned value.”

The ACWP is the actual cost incurred in accomplishing the work within a given period. Like the budgets, both direct and indirect costs are included. [2] To permit meaningful comparisons, the ACWP should be recorded in the same time period as BCWP for a given piece of work.

At the start of a project, work is typically categorized into near-term and far-term effort. The near-term effort is divided into manageable pieces known as work packages. On a large project, there may be more than 100,000 work packages that must be performed before the project is completed. As a result, only the near-term work is planned in detail. The remaining work is known as planning packages. As the project progresses, the planning packages are systematically divided into work packages and planned in detail.

Regardless of the timing of the work, a budget in terms of hours, dollars, or other measurable units is assigned to each work and planning package. By summing their budgets, a time-phased budgetary baseline for the entire project is defined. This baseline, known as the performance measurement baseline (PMB), represents the standard or plan against which the performance (BCWP) and the cost (ACWP) of the project are compared.
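As a rough sketch of how the PMB is assembled, consider summing time-phased package budgets into a cumulative baseline. The package names and dollar amounts below are hypothetical, used only to illustrate the mechanics:

```python
# Hypothetical time-phased budgets ($ thousands) for a few work and
# planning packages; each list holds one entry per month.
packages = {
    "design work package":   [120, 80, 0, 0],
    "tooling work package":  [0, 60, 90, 0],
    "test planning package": [0, 0, 40, 110],
}

# Month-by-month BCWS is the sum across all packages; its running
# total is the performance measurement baseline (PMB).
monthly_bcws = [sum(month) for month in zip(*packages.values())]

pmb = []
total = 0
for amount in monthly_bcws:
    total += amount
    pmb.append(total)

print(monthly_bcws)  # [120, 140, 130, 110]
print(pmb)           # [120, 260, 390, 500]
```

Each month's BCWP and ACWP would then be compared against this baseline.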

Figures 1 and 2 illustrate the typical condition of many projects: behind schedule and over budget. Figure 1 uses nontechnical terms; Figure 2 uses the technical terms just described. The PMB represents the plan. Because less work has been completed (in terms of dollar value) than was planned to be completed at this point, a schedule slippage (adverse schedule variance) is identified. Similarly, because actual costs exceed the budget for the completed work, an unfavorable cost variance is identified.

The figures also illustrate a third variance. The variance at completion (VAC) is the difference between the total budget of the project, termed the budget at completion (BAC), and the estimated total cost of the project, termed the estimate at completion (EAC). In this case, an adverse VAC (estimated final overrun) is indicated.
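Using the cumulative April 1990 A-12 figures from Table 1 (in millions of dollars), the three variances follow directly from the definitions above. This is a minimal sketch, not a depiction of the contractors' reporting system:

```python
# Cumulative A-12 data, April 1990 ($ millions), from Table 1.
bcws, bcwp, acwp = 2080, 1491, 1950
bac, eac = 4046, 4400

sv = bcwp - bcws   # schedule variance: -589 (behind schedule)
cv = bcwp - acwp   # cost variance: -459 (over budget)
vac = bac - eac    # variance at completion: -354 (projected final overrun)

print(sv, cv, vac)  # -589 -459 -354
```

Negative values are the adverse variances shown in parentheses on the report.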

When these variances are judged significant they are immediately investigated by managers who are empowered to take appropriate corrective action. [3] The cost management report summarizes the monthly cost and schedule status of the project by listing the three data elements, the related variances, the BAC, and the revised EAC for all of the major pieces of work on the project.

The report also describes the causes of the variances and the corrective action plans related to the variances. Typical causes of variances include poor initial planning or budgeting, changes to the project’s scope, changes in technology related to the project, changes to the delivery schedule, changes to labor contracts, changes to material costs, inflation, and measurement error. Inaccurate indirect cost allocations can also contribute to cost variances reported on defense contracts. The project’s PMB includes indirect cost as well as direct cost. In addition, ACWP includes indirect costs, and defense contractors must investigate all significant cost variances, including indirect cost variances. The Defense Contract Audit Agency (DCAA) is usually given the responsibility for ensuring that the contractor’s indirect cost management systems are in compliance with the criteria. [4]

Eventually, summarized portions of the cost management report reach the Office of the Secretary of Defense (OSD) and Congress, where they may be used to help determine the continued funding of the project. [5] Only rarely has a large cost overrun on a defense contract resulted in the project’s cancellation. However, Congressional oversight, the threat to cancel funding, and the budget discipline required by the criteria may have limited cost growth on defense projects. Although cost overruns on defense contracts have averaged about 20 percent since the mid-1960s, cost overruns on some nondefense projects of comparable size and complexity have been larger (Drezner, Jarvaise, Hess, Hugh, and Norton, 1993, p. xiv).

EVALUATING THE CONTRACTOR’S ESTIMATE AT COMPLETION

From the contractor’s and the government’s perspectives, the contractor’s EAC is one of the more critical numbers on the cost management report. The contractor is required to advise the government of potential significant overruns or underruns. In some cases the EAC is used to adjust progress payments (DCAA, 1996, paras. 11-207, 14-205). Deficiencies in determining or revising the EAC may significantly affect forward pricing proposals, billing requests, and the reliability of the VAC as a control variance (Dahlberg, Colantuono, and Fischer, 1992, p. 9).

Given its importance, the EAC is periodically revised by the contractor and closely monitored by the government. Contractors periodically develop “comprehensive” EACs by estimating and aggregating the costs of incomplete work and planning packages remaining on the contract. [6] In addition, the contractor’s EAC is examined monthly for accuracy and revised as necessary to ensure that resource requirements are realistic and properly phased (DoD, 1996, para. 3-6e). Whenever the EAC is changed, the contractor should explain the rationale in the cost management report that is sent to the government. Frequent revisions of the EAC are not necessarily considered evidence of its unreliability. On a multiyear defense project, an unchanging EAC would be suspicious.

Despite the discipline required by the criteria, a RAND study recently concluded that EACs have been systematically understated for more than 20 years (Drezner et al., 1993). Because a systematic bias in the EAC can adversely affect the resource allocation decisions made by Congress and eventually the effectiveness of defense policy, it is important to know how to identify an understated EAC. The remainder of this article describes three simple analysis techniques that may be useful in evaluating the accuracy of the contractor’s EAC. Cost performance data from the Navy’s A-12 program is used to illustrate the techniques. [7]

The A-12 was the Navy’s premier aviation project. In January 1991, Secretary of Defense Richard Cheney canceled the project, complaining that no one could tell him what the final cost of the project would be. In fact, there were many EACs, some more credible than others. Unfortunately, the more credible EACs were not reported on the CPR or the summary reports sent to the Office of the Secretary of Defense. [8] A Navy investigation of the A-12 cancellation revealed that adverse information about the A-12 may have been suppressed by the Navy Program Office. The Navy’s “inquiry officer” on the cancellation of the A-12 program, C. P. Beach, Jr., concluded that schedule and cost goals for the A-12 were too optimistic and should not have been supported by government managers in the contract and program offices (Beach, 1990, pp. 39-41). Table 1 shows the April 1990 cost performance data for the A-12, six months before the project was canceled.

To evaluate the reasonableness of the contractor’s EAC, three comparisons should be made. First, the overrun to date (cost variance, CV) should be compared to the estimated final overrun (VAC). If the overrun to date is worse than the estimated final overrun, the contractor is predicting a recovery. Thus, in April 1990 the A-12 contractors predicted a recovery of $105 million (from $459 million to $354 million). Recoveries from cost overruns on defense contracts are extremely rare, especially when the project is more than 20 percent completed. In this case, the project is about 37 percent complete (BCWPcum/BAC). Analysts should have been extremely dubious about the ability of the A-12 project to finish at $4,400 million. The report on the A-12 cancellation indicated that the Navy cost analyst who was responsible for analysis of the A-12 CPR briefed higher, more realistic EACs to the Navy Program Office. But the program manager chose to rely on a lower, more optimistic EAC and reported it to higher level decision makers as the “most likely” EAC (Beach, 1990, p. 13).

The second comparison uses two performance indices: the cost performance index (CPI) and the to-complete performance index (TCPI). As Equation 1 shows, the CPI measures the budgeted cost of completed work against the actual cost. If the CPI is less than one, an unfavorable cost variance is indicated. In this case, the CPI is 0.76 (1491 / 1950), which means that for every dollar spent, $0.76 of work has been completed.

CPI = BCWP/ACWP (1)

As shown in Equation 2, the TCPI measures the budget for the remaining work (BAC – BCWPcum) against the estimated cost to achieve the BAC (EAC – ACWPcum). In this case, the TCPI is 1.04, indicating $1.04 of work to be completed for every dollar spent. Research on completed defense contracts shows that the cumulative CPI does not change by more than 10 percent from its value at the 20 percent completion point, and in most cases only worsens (Christensen and Heise, 1993). Thus, when the cumulative CPI is significantly less than the TCPI, it is highly doubtful that the contract will be completed at the EAC. In the A-12 case, this simple comparison indicates that the EAC was too small. More realistic EACs could have been computed using the simple formula shown in Equation 3.

TCPI = (BAC – BCWPcum) / (EAC – ACWPcum) (2)

EAC = ACWPcum + (BAC – BCWPcum) / Performance factor (3)
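Applied to the cumulative A-12 data in Table 1, Equations 1 and 2 yield the indices quoted in the text. The sketch below simply evaluates the two formulas:

```python
# Cumulative A-12 data, April 1990 ($ millions), from Table 1.
bcwp, acwp = 1491, 1950
bac, eac = 4046, 4400

# Eq. 1: cost performance index -- about $0.76 earned per dollar spent.
cpi = bcwp / acwp

# Eq. 2: to-complete performance index -- the efficiency required on the
# remaining work for the contract to finish at the reported EAC.
tcpi = (bac - bcwp) / (eac - acwp)

print(round(cpi, 4), round(tcpi, 4))  # 0.7646 1.0429
```

The gap between a cumulative CPI of 0.76 and a required TCPI of 1.04 is the warning sign the article describes.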

The final comparison involves generating a range of “independent” EACs using the generic formula shown in Equation 3. Figure 1 illustrates that the EAC can be estimated by extrapolating ACWPcum to the end of the project. More specifically, Equation 3 indicates that ACWPcum is extrapolated by simply adding ACWPcum to the budget for the remaining work (BAC – BCWPcum), adjusted by applying a performance factor. The performance factor may reflect the analyst’s expectations about the future performance on the contract. Using data from hundreds of completed defense contracts, researchers have concluded that past performance on defense contracts is predictive of the future (Christensen and Heise, 1993; Drezner et al., 1993). [9] Hence, the performance factor used in Equation 3 is often either the CPI, the schedule performance index (SPI), or some combination of the two indices.

Equation 4 shows the formula for the SPI. An SPI that is less than one indicates an unfavorable schedule variance. [10] In the A-12 case, the SPI is 0.72, indicating that for every dollar of work scheduled to be accomplished, only $0.72 was accomplished. In other words, the contract was behind schedule as well as over budget. Because scheduling problems often require additional funding to correct, an unfavorable SPI may be predictive of future cost overruns (Christensen, Antolini, and McKinney, 1995).

SPI = BCWP / BCWS (4)

Table 2 shows four popular performance factors (CPI, SPI, 0.8 CPI + 0.2 SPI, CPI x SPI) and the resulting EACs using the A-12 data. [11] If this range of EACs is considered reasonable, the $4,400 million EAC reported by the contractors is clearly understated. Note that the smallest and largest EACs were derived from the CPI and the product of CPI and SPI, respectively. This is expected. Research has shown that the EAC derived from the CPI is a reasonable floor to the final cost (Christensen, 1996). Also, the EAC based on the product of CPI and SPI is usually quite large because, historically, most defense contracts finish behind schedule and over budget (Christensen, 1994; Drezner et al., 1993). When a contract is behind schedule and over budget, the SPI and the CPI are each less than one. In April 1990, the SPI and CPI of the A-12 were 0.7168 (= 1491 / 2080) and 0.7646 (= 1491 / 1950), respectively. When these two performance indices are multiplied together, the product is less than either index by itself (0.7168 x 0.7646 = 0.5481), and the resulting EAC is very large.
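The range in Table 2 can be reproduced by applying Equation 3 with each of the four performance factors. The sketch below uses the cumulative Table 1 data; after rounding, the results match the Table 2 figures:

```python
# Cumulative A-12 data, April 1990 ($ millions), from Table 1.
bcws, bcwp, acwp, bac = 2080, 1491, 1950, 4046

cpi = bcwp / acwp  # Eq. 1
spi = bcwp / bcws  # Eq. 4

# The four popular performance factors from Table 2.
factors = {
    "CPI x SPI": cpi * spi,
    "SPI": spi,
    "0.8 CPI + 0.2 SPI": 0.8 * cpi + 0.2 * spi,
    "CPI": cpi,
}

# Eq. 3: EAC = ACWPcum + (BAC - BCWPcum) / performance factor
eacs = {name: acwp + (bac - bcwp) / pf for name, pf in factors.items()}

for name, pf in factors.items():
    print(f"{name:18s} {pf:.4f}  ${eacs[name]:,.0f}")
```

A smaller performance factor divides the remaining budget by a smaller number, so the pessimistic CPI x SPI factor produces the largest EAC, as the text explains.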

CONCLUSION

The use of earned value is accelerating worldwide. Although it began in industry, it was developed and used primarily on U.S. defense contracts. The association with the DoD’s cost/schedule control systems criteria may have created a misconception that earned value is inappropriate for smaller, nongovernment projects. But earned value and the related cost management reports can be used without the criteria on projects of any size. On large government projects, where the risk of cost growth is often carried by the government, the planning and control discipline fostered by the criteria is essential. On other kinds of projects, a full-scale application of the criteria is not necessary (Fleming and Koppelman, 1996). As a result, other U.S. government agencies, governments of other countries, and companies across the world have accepted earned value as an effective project management tool (Abba, 1995, 1997).

The simple analysis methods described here illustrate one beneficial use of earned value data: evaluating the reasonableness of the contractor’s EAC. Although the acronyms and technical jargon used in project management can be confusing, the cost management report prepared from earned value data can provide project managers with valuable insight into the cost and schedule status of their project. When used properly, the variances and performance indices can help a manager focus attention on emerging problems. The cost management report is not a financial report. It’s a tool for project managers.

David S. Christensen is an associate professor of accounting at Southern Utah University in Cedar City, UT. After receiving a Ph.D. degree from the University of Nebraska in 1987, he joined the faculty at the Air Force Institute of Technology, where he taught undergraduate and graduate courses in earned value cost management for more than 10 years. David is a CPA, CMA, CGFM, and CCE/A, and is active in several professional associations, including the Society of Cost Estimating and Analysis, the American Accounting Association, and the Institute of Management Accounting. Presently, David serves as associate editor for the Journal of Cost Analysis. He has published extensively in the area of earned value cost management in Acquisition Review Quarterly, Project Management Journal, National Contract Management Journal, Journal of Parametrics, National Estimator, and the Journal of Cost Analysis.

ENDNOTES

(1.) More formally, earned value is defined as “the value of completed work expressed in terms of the budget assigned to that work” (DoD, 1996, p. 63).

(2.) Indirect cost may not be allocated to the detailed levels of the work (e.g., control accounts and work packages).

(3.) Historically, a significant variance was one that exceeded a prespecified limit (e.g., percentage of the budget, or a dollar amount, or both). When applied arbitrarily to all levels of work on the contract, it can result in excessive analysis and reporting. Recently, definitions of significance have been modified to reduce the number of frivolous variance investigations and reports.

(4.) Presently, there is no requirement that promising cost assignment methods like activity-based costing (ABC) be used. But the criteria are not incompatible with ABC, and DCAA is not blocking defense contractors from adopting it (Oyer, 1992).

(5.) Cost management data on Acquisition Category I (ACAT I) programs are routinely summarized by program offices and sent to the Office of the Secretary of Defense for inclusion in the Defense Acquisition Executive Summary (DAES) data base. In addition, selected data from the CPR are included in a comprehensive annual report of ACAT I programs to Congress.

(6.) The criteria do not specify how frequently the comprehensive EAC should be developed. The Earned Value Management Implementation Guide (DoD, 1996) recommends that the comprehensive EAC be developed “periodically” at the control account level (para 3-6e).

(7.) Each of the Services has had programs with severe cost and schedule problems. According to Abba (1995, p. 1), “Cost problems on the Army’s AAWS-M (Javelin) and the Air Force’s C-17 were all shown to be foreseeable, if not avoidable, using earned value reports from the contractors’ C/SCSC management control systems.”

(8.) The special access nature of the program interfered with higher level oversight of the A-12’s cost performance. For example, the cost management staff at the Office of the Secretary of Defense (OSD) was not cleared for access to the data until March 1990. Adverse cost and schedule variances that would have normally prompted investigations by OSD were delayed by more than a year (Beach, 1990, pp. 7-8).

(9.) For example, using a sample of 155 contracts and an alpha of 5 percent, Christensen and Heise reported that the cumulative CPI does not change by more than 10 percent from its value at the 20 percent completion point.

(10.) An unfavorable schedule variance does not necessarily imply that work is behind schedule. By itself, the SPI reveals no critical path information. The SPI should be used in conjunction with other schedule information (Fleming, 1992).

(11.) Christensen et al. (1995) reviewed 25 studies that compared the accuracy of dozens of EAC formulas. The most accurate formulas used the CPI, the SPI, or the CPI x SPI as the performance factor. In addition, defense manuals that describe the EAC calculation, and standard DoD CPR analysis software, invariably include these index-based formulas as standard options in computing the EAC. Because cost and schedule problems can drive future costs, using both CPI and the SPI in the performance factor is popular. Multiplying the CPI by the SPI is a crude but simple attempt to adjust the CPI by the adverse influence that schedule problems can have on cost growth.

REFERENCES

Abba, W. (1995, October). Earned value management rediscovered! [World Wide Web home page for Earned Value Management]. http://www.acq.osd:80/pm/newpolicy/misc/abba_art.html.

Abba, W. (1997, January-February). Earned value management–Reconciling government and commercial practices. Program Manager, 26, 58-69.

Beach, C. P., Jr. (1990, November). A-12 administrative inquiry (Report to the Secretary of the Navy). Washington, DC: Department of the Navy.

Christensen, D. S. (1996, Spring). Project advocacy and the estimate at completion problem. Journal of Cost Analysis, 61-72.

Christensen, D. S., & Heise, S. (1993, Spring). Cost performance index stability. National Contract Management Journal, 25, 7-15.

Christensen, D. S. (1994, Winter). Cost overrun optimism–Fact or fiction? Acquisition Review Quarterly, 1, 25-38.

Christensen, D. S., Antolini, R. C., & McKinney, J. W. (1995, Spring). A review of estimate at completion research. Journal of Cost Analysis, 41-62.

Dahlberg, K. E., Colantuono, F., & Fischer, R. W. (1992, Winter). Linking C/SCSC + MMAS + WIP: A discussion of the primary contractor planning/execution systems and government requirements. Government Accountant’s Journal, 5-11.

Defense Contract Audit Agency. (1996, January). DCAA contract audit manual. Pittsburgh, PA: U.S. Government Printing Office.

Department of Defense. (1996, December 12). Earned value management implementation guide. (Revision 1 was incorporated on Oct. 3, 1997. The OPR is Defense Logistics Agency/AQOF).

Drezner, J. A., Jarvaise, J. M., Hess, R. W., Hugh, P. C., & Norton, D. (1993). An analysis of weapon system cost growth. Santa Monica, CA: RAND.

Fleming, Q. W. (1992). Cost/schedule control systems criteria (Rev. ed.). Chicago, IL: Probus Publishing.

Fleming, Q. W., & Koppelman, J. M. (1996). Earned value project management. Project Management Institute.

General Accounting Office. (1997, May 5). Significant changes in DoD’s earned value management process. Washington, DC.

Little, A. D. Inc. (1984, August 15). Survey relating to the implementation of cost/schedule control systems criteria within the Department of Defense (Report to the Assistant Secretary of Defense [Comptroller] Phase II). Author.

Office of Management and Budget. (1997). Principles of budgeting for capital asset acquisitions. Washington, DC: Government Printing Office.

Oyer, D. J. (1992). Activity-based costing in government contracting. In Barry J. Brinker (Ed.), Handbook of cost management (pp. C4-1-C4-39). Boston: Warren, Gorham & Lamont.

Table 1.

Cost Performance Data for A-12 Project

(April 1990, Millions of Dollars)

Month   BCWS    BCWP    ACWP    SV      CV      BAC     EAC     VAC

April   2,080   1,491   1,950   (589)   (459)   4,046   4,400   (354)

Table 2.

A Range of Estimates at Completion for the A-12

(Derived from the Cumulative Performance Data in Table 1)

Performance factor      Performance factor value    EAC (Millions)

CPI x SPI               0.5481                      $6,612
SPI                     0.7168                      $5,514
0.8 CPI + 0.2 SPI       0.7551                      $5,334
CPI                     0.7646                      $5,292

[Figures 1 and 2 omitted]

COPYRIGHT 1999 Defense Acquisition University Press
