Using control charts to help manage accounts receivable


Nancy M. Bruch and Lynn L. Lewis

Patient Accounts Management

The relative performance of a healthcare organization’s accounts receivable (AR) department is a critical factor in the organization’s financial well-being. Park Nicollet Medical Center (PNMC), Minneapolis, Minnesota, changed the way it measured its AR department’s performance, switching from the rolling averages method to the percentage collected method of performance measurement, and was able to improve its patient accounts management effort.

Park Nicollet Medical Center (PNMC), Minneapolis, Minnesota, is one of the largest group practices in the United States, with more than 360 physicians practicing in 45 specialties and subspecialties. In December 1989, PNMC began learning continuous quality improvement (CQI) techniques and applying them to the analysis of its accounts receivable (AR) department data.

PNMC’s CQI efforts uncovered inaccuracies in the rolling averages method of predicting and measuring AR data. Additional efforts produced a new measurement methodology–percentage of AR collected–that resulted in the following:

* An $868,000–or 12.9 percent–increase in annual collections on receivables in the 0 to 30-day aging category,

* Recognition of a shift of 13.6 percent of total receivables to the earlier (more collectable) aging categories–0 to 90 days from date of service, and

* The ability to more accurately predict the rate of collection from PNMC’s major contract payers.

At the end of calendar year 1992, PNMC had annual net patient revenues of $145 million. Fifty-nine percent of revenue was derived from capitated programs; the remainder came from a combination of various contract and government payers. This remaining 41 percent created AR levels totaling more than $15 million, a level that warranted applying CQI techniques to PNMC’s AR process.

Data analysis

PNMC’s AR department’s first attempt to measure the impact of its CQI-related procedural changes used a standard days outstanding calculation, based on a rolling 12-month average. Attempting to react to the variation the calculation produced, however, was frustrating and appeared to be ineffective. As a result, the AR department decided it needed more useful and accurate data in order to measure its success.

The AR department confirmed that it is not unusual for business indices, such as days outstanding, to be based on moving or rolling averages. Because the rolling average of the past 12 months was used to calculate PNMC’s days outstanding, each month’s calculation dropped the oldest value and added the most recent value. This method of calculation caused a statistical problem, positive autocorrelation: 11 of the 12 numbers in each calculation were the same as those used in the prior month.(a) Because of this problem, the rolling-average result was suspect as an accurate measurement tool.
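The degree of overlap is easy to demonstrate with a quick simulation. The sketch below uses randomly generated data (not PNMC figures) to compute the correlation between consecutive 12-month rolling averages of an independent monthly series; because consecutive averages share 11 of their 12 inputs, the correlation comes out near 11/12.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5_000)                      # independent monthly values

# 12-month rolling average; consecutive averages share 11 of their 12 inputs
roll = np.convolve(x, np.ones(12) / 12, mode="valid")
lag1 = np.corrcoef(roll[:-1], roll[1:])[0, 1]

print(f"lag-1 correlation of the 12-month rolling average: {lag1:.2f}")  # about 0.92
```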

This point can be illustrated through the use of control charts. A control chart graphs process variation over time. The boundaries of variation are marked by upper statistical control limits (UCL) and lower statistical control limits (LCL), calculated according to statistical formulas from data collected from the process being measured.

A control chart’s center line shows the average value of the data collected. The spaces between the lines created by statistical calculations (referred to as “zones”) are used to measure variation between points. The variation of the points will be considered either special (extraordinary or unusual) cause or common (natural or normal) cause, depending on where they fall in the various zones.
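The article does not specify which control chart formulas PNMC used. The sketch below assumes an individuals (XmR) chart, a common choice for monthly business data: the center line is the mean, sigma is estimated from the average moving range, the limits sit three sigma from the center, and the zone boundaries fall at one and two sigma.

```python
import numpy as np

def control_limits(values):
    """Center line, 3-sigma limits, and zone boundaries for an individuals (X) chart.

    Sigma is estimated from the average moving range (MR-bar / 1.128),
    the usual XmR convention.
    """
    x = np.asarray(values, dtype=float)
    center = x.mean()
    sigma = np.abs(np.diff(x)).mean() / 1.128    # d2 constant for moving ranges of 2
    ucl, lcl = center + 3 * sigma, center - 3 * sigma
    zones = {                                    # zone C is nearest the center line
        "C": (center - sigma, center + sigma),
        "B": (center - 2 * sigma, center + 2 * sigma),
        "A": (lcl, ucl),
    }
    return center, lcl, ucl, zones

def classify(values, lcl, ucl):
    """Flag points outside the control limits as special-cause variation."""
    return ["special" if v > ucl or v < lcl else "common" for v in values]
```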

Those who monitor AR tend to treat common cause variation as if it were a special cause, even when process data show it comes from a stable system. The inclination to “fix” variation that is inherent in a stable system (common cause) often results in staff frustration and can make the system worse.

Exhibits 1, 2, and 3, graphing the same data over time, demonstrate the inaccuracies of the measurement tool resulting from the use of rolling averages. Exhibit 1 represents 101 random values graphed in a control chart. Exhibit 2 represents the same values with a four-month rolling average. Exhibit 3 uses a 12-month rolling average to represent the same values.

It is necessary to focus on a specific range of months on the charts to understand how management would respond to the results of the four-month and 12-month rolling average calculations. As illustrated in Exhibit 2, months 16 to 20 show an improvement, or decrease, in days outstanding (below the LCL at approximately 64 days), culminating in a “special” cause. Exhibit 3 illustrates the same range with the opposite result: an increase in days outstanding, with months 16 to 20 above the UCL of 69 days (special cause). The charts in Exhibits 1, 2, and 3 use the same data over the same period of time, but the data are calculated differently. These charts highlight the statistical inaccuracy of rolling average calculations.
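The distortion the exhibits show can be reproduced with simulated data. The sketch below uses hypothetical values (not the data behind Exhibits 1 through 3): it charts the same stable random series raw and as four-month and 12-month rolling averages. Because averaging shrinks the point-to-point moving ranges much faster than it removes slow wandering, the averaged series generate out-of-limit “special cause” signals that the raw data do not.

```python
import numpy as np

rng = np.random.default_rng(0)

# 101 random "days outstanding" values from a stable process (hypothetical data)
raw = rng.normal(loc=66, scale=3, size=101)

def rolling_average(x, window):
    """Trailing rolling average; the first window - 1 points are dropped."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

for label, series in [("raw values", raw),
                      ("4-month rolling", rolling_average(raw, 4)),
                      ("12-month rolling", rolling_average(raw, 12))]:
    center = series.mean()
    sigma = np.abs(np.diff(series)).mean() / 1.128   # XmR sigma estimate
    outside = int(np.sum((series > center + 3 * sigma) |
                         (series < center - 3 * sigma)))
    print(f"{label:16} limits {center - 3 * sigma:5.1f} to "
          f"{center + 3 * sigma:5.1f}, points outside: {outside}")
```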

Percentage of AR collected method

PNMC’s AR department decided to change its measurement tool after reviewing the flaws of calculating rolling averages. Its new goal would be to measure the time it took to collect outstanding charges.

Data available to help the AR department meet its goal included a monthly report that “ages” AR information by payer category or financial class. The data were transferred to a spreadsheet to calculate days outstanding, based on a 12-month rolling average. The AR department determined that this same data needed to be used in the new measurement, but in a way that avoided the positive correlation flaw of the rolling average calculation. The AR department also decided that to appropriately measure and monitor its newly defined process, it needed to examine the data for trends and other patterns that occur over time. And to remain true to its commitment to CQI, the department also wanted to measure the range of variation built into the system.

The AR department chose control charts as the most appropriate tool for measuring the percentage of AR collected. Next, the AR department converted its chosen methodology into a formula for calculating the percentage of AR collected through the 12 billing cycles in a calendar year. To perform these calculations, the department used a PC-based statistical software package that took the monthly billed dollars and calculated the percentage collected for each month, calculated the cumulative percentage collected for specified groupings, and identified statistically significant results. The software allowed the department to produce control charts on any selected data.
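The article does not reproduce the formula itself. One plausible reading, sketched below with hypothetical figures and column names, is that each billing cycle’s collections are divided by that cycle’s billed dollars to give a percentage collected per month of follow-up, with a cumulative running total; each row of the result (the first-month percentages across billing cycles, for example) can then be charted with the control-limit calculation sketched earlier.

```python
import pandas as pd

# Hypothetical monthly data for one payer group: dollars billed in each
# billing cycle and dollars collected against that cycle in later months.
billed = pd.Series({"1993-01": 412_000, "1993-02": 398_000, "1993-03": 431_000})

# collections[billing_month][months_after_billing] = dollars collected
collections = pd.DataFrame(
    {"1993-01": [94_000, 121_000, 63_000],
     "1993-02": [88_000, 130_000, 59_000],
     "1993-03": [99_000, 118_000, 71_000]},
    index=[1, 2, 3],            # months after the billing cycle
)

# Percentage of each cycle's billed dollars collected n months out,
# plus the cumulative percentage collected to date.
pct_collected = collections.div(billed, axis=1) * 100
cumulative_pct = pct_collected.cumsum()

print(pct_collected.round(2))
print(cumulative_pct.round(2))
```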

Results

The change in the AR department’s measurement methodology proved beneficial, especially by providing the ability to graph the results of the department’s actual collection efforts, as illustrated in Exhibits 4 and 5. These exhibits show the impact of the new methodology when calculating collections from a third-party insurance carrier. Both exhibits represent the same time period and indicate that two separate systems are present: the first from month 1 to month 14 and the second from month 15 to month 31. Month 15 represents the first month PNMC received computer tape remittance from this particular payer. Exhibit 4 shows that under the first system, PNMC collected an average of 22.77 percent of its AR in the first month. Under the second system, as indicated in Exhibit 5, PNMC increased its average first-month AR collections to 55.84 percent.

Exhibits 6 and 7 represent the percentage of AR collected from a different third-party insurance carrier. Exhibit 6 shows a stable system until month 19, when the data suggest a new system is present. Exhibit 7 represents the new system with an increase in collections to 54.63 percent. Two changes took place at the point the new system appeared: more consistency in resolving unfiled claims and an effort to enter the information needed to bill the third-party insurance carrier on a more timely basis.
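The “two systems” reading rests on splitting the series at the month the process changed and recomputing the average and limits for each segment. A minimal sketch of that split, using invented first-month percentages rather than PNMC’s actual data:

```python
import numpy as np

def segment_summary(pct_series, change_month):
    """Split a first-month percentage-collected series at a known process
    change and report each segment's average and 3-sigma limits (XmR style)."""
    segments = {"before": pct_series[:change_month - 1],
                "after": pct_series[change_month - 1:]}
    out = {}
    for name, seg in segments.items():
        seg = np.asarray(seg, dtype=float)
        center = seg.mean()
        sigma = np.abs(np.diff(seg)).mean() / 1.128
        out[name] = (center, center - 3 * sigma, center + 3 * sigma)
    return out

# Hypothetical percentages with a shift at month 15, loosely echoing the
# before/after averages reported in the text.
example = [23, 21, 24, 22, 23, 25, 22, 21, 24, 23, 22, 23, 24, 22,   # months 1-14
           54, 57, 55, 56, 54, 58, 55, 56, 57, 54, 55, 56, 57, 55, 56, 54, 57]
print(segment_summary(example, change_month=15))
```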

The main benefits realized from using the new measurement and control chart method are:

* The AR department is not reacting to common cause variation,

* The AR department is able to better predict its rate of AR collections,

* The AR department can confirm the magnitude of any problems, and

* The AR department can verify and accurately measure the results of changes in procedures.

More improvement

After implementing the control chart method, PNMC’s AR department was determined to become more efficient in other ways, too. Its revised monthly review process identified whether a system was in statistical control or whether special cause variations needed to be analyzed. The monthly review process also assessed whether the average amount being collected was acceptable. Even though the AR department had a stable system within its control limits, the department felt that tightening those limits would allow it to better predict when future collections would occur and to assess the need for changes in billing and collection procedures. When a change was implemented, the department had the data and methodology to measure its impact.

Providing monthly reports to management and staff was another area the AR department was able to improve. The department now provides charts of PNMC’s three highest-volume payer groups, charts of total AR, charts of other payer groups with points of interest, and a summary of observations and findings. Additional charts, by payer group, are provided on a rotating basis for review with the staff responsible for collecting AR. Staff training and involvement in monitoring these charts were critical to the success of the AR department’s efforts. If staff understands the impact of its work and can measure its results, its work will continually improve.

Conclusion

The new measurement system offers the AR department an opportunity to expand its knowledge of statistical process controls and to understand the need for improving its formulas. By using the control chart measurement of AR, the AR department is able to focus its efforts on AR prediction and process improvements, not common cause variation explanations.

a. Nelson, Lloyd S., “Technical Aids,” Journal of Quality Technology, Vol. 15, No. 2, 1983, pp. 99-100.

About the authors

Nancy M. Bruch, MBA, is director of business services, Park Nicollet Medical Center, Minneapolis, Minnesota, and a member of HFMA’s Minnesota Chapter.

Lynn L. Lewis is manager of business services, Park Nicollet Medical Center.

COPYRIGHT 1994 Healthcare Financial Management Association