Probability adjusted financial impact analysis: Part 1

David P. Tarantino

Mid-Atlantic Hospital Center is a 300-bed community hospital located in a highly competitive market. Competitors have purchased, or are in the process of purchasing and implementing, an electronic medical record (EMR).

The selection and implementation of an EMR is a major strategic initiative for Mid-Atlantic Hospital Center. A multi-disciplinary task force of physicians, nurses, administrators, finance personnel, and information systems personnel has been formed to evaluate EMR systems and to make a recommendation for the hospital administration and medical executive committee to bring jointly to the hospital board of trustees. Four key criteria are identified in the evaluation process.

* Software functionality: How does the EMR software handle different workflow and data flow requirements such as billing, scheduling, clinical charting, prescription writing, referrals, etc.?

* Data standards: How does the EMR capture data and what coding systems are used?

* System interoperability: How does the EMR communicate with other systems (e.g., lab, hospital, billing systems)?

* Technical infrastructure: How does the EMR provide for security, multi-user access, multiple user roles, remote access, database options, etc.?

A request for proposals and initial evaluation have resulted in two potential vendors, namely the “Best” EMR and the “Outstanding” EMR. An initial financial analysis shows the systems to have net present values (NPV) of $1.7 million and $2 million, respectively.

Site visits by the task force are conducted to evaluate the two systems in each of the four key criteria. The results of the site visits are shown in Table 1. Based on the financial analysis and site visit results, the team is undecided on which system to recommend.

Despite the higher NPV for the “Outstanding” EMR, the physicians and nurses on the task force argue that the success of the EMR system implementation will be tied to end-user acceptance and adoption of the system. Therefore, based on the site visit results, they believe the “Best” EMR should be chosen. Others, however, believe the higher system interoperability and larger NPV make the “Outstanding” EMR a better choice.

Based on these discussions, the task force determines the criteria with the greatest potential impact on their recommendation are the functionality and interoperability of the system. With this in mind, the NPV analysis for each system is expanded. The results are shown in Tables 2 and 3. Given this new analysis, the task force is unsure how to proceed. Which of the NPV analyses is the correct one for each system? Should they take the average of the projected NPVs?

When we use financial analysis techniques, such as net present value (NPV) analysis, we assume we have complete knowledge about the future projections of our cash flows. We account for the “risk” of projecting our cash flows into the future through our discount rate.

However, we all know that in reality, when we are asked to make substantial business decisions, we often are faced with a perception of the future as one in which ignorance and uncertainty increasingly overpower knowledge. The truth is most of our decisions are made within a spectrum of knowledge that lies somewhere between absolute uncertainty and complete knowledge.

While the NPV adjusts for risk, how can the task force adjust for uncertainty? How can it combine the information from the NPV analyses with its knowledge, though incomplete, of the uncertainties it faces to make an informed recommendation?

States of uncertainty require probabilities to denote the likelihood of their occurrence. So, the first step for the task force is to build a model that takes into account the probabilities for system functionality and system interoperability. The team chooses to build an outcome map for each EMR. (Note: for those familiar with decision analysis, the analysis could be combined into a single decision tree; however, for purposes of illustration, the analysis is done separately for each outcome.)

Setting decision points

They begin by building the outcome map for the “Best” EMR (Figure 1). The first decision point in the map is a “purchase” or “do not purchase” decision. If the team assumes it will purchase an EMR then the probability of purchase is 100 percent, and it needs to go no further on the “do not purchase” arm of the map.

The next decision point involves deciding on functionality. From the site visits, the “Best” EMR scored an 8 out of 10 on functionality. As such, the team sets the probability of the system having high functionality at 80 percent, and low functionality at 20 percent.

The next decision point involves the system interoperability. Again, using information from the site visits, the task force sets the probabilities for high, moderate, and low system interoperability at 60 percent, 30 percent, and 10 percent, respectively. If two events A and B are independent, then the probability of both A and B occurring is found by multiplying the probability of A occurring by the probability of B occurring. Since functionality and interoperability are independent of each other, the probability of both high functionality and high system interoperability is the product of those two probabilities, 80 percent and 60 percent.

As a result, the probability that the “Best” EMR has both high functionality and high system interoperability is 80 percent x 60 percent, or 48 percent. The probabilities for each of the other branches are completed in the same way (Figure 1). As the task force did for the “Best” EMR, it builds an outcome map for the “Outstanding” EMR.
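The branch-probability step can be sketched in a few lines of Python. The marginal probabilities below are the values the task force assigned from the site visits; the dictionary names are illustrative, not from the article.

```python
# Marginal probabilities the task force assigned to the "Best" EMR
# from the site-visit scores (8/10 functionality, 6/10 interoperability).
functionality = {"high": 0.80, "low": 0.20}
interoperability = {"high": 0.60, "moderate": 0.30, "low": 0.10}

# Treating the two criteria as independent, each branch probability is
# the product of the two marginal probabilities.
branch_probs = {
    (f, i): round(pf * pi, 2)
    for f, pf in functionality.items()
    for i, pi in interoperability.items()
}

print(branch_probs[("high", "high")])  # 0.48
```

The six branch probabilities sum to 1, since the branches exhaust every possible combination of functionality and interoperability.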

Having calculated the probabilities and NPVs (Tables 2 and 3) for both EMRs, the task force can then determine expected values. Expected values differ from averages. Averages refer to data sets of events that have already occurred; they describe historical facts.

Expected values refer to events that have not yet transpired. They attempt to describe the outcome of an uncertain event that is anticipated to happen. Thus, expected values are quantitative measures describing the “expected” outcome of future events. The expected value is a better predictor of the eventual outcome than any single element of the data set.

Expected values are used to compare the general tendencies one can logically anticipate when selecting different alternatives in a decision problem. As such, they provide a means to deal with uncertainty in a rational manner.
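As a minimal illustration of the idea (with hypothetical payoffs and probabilities, not the article's figures), an expected value is simply a probability-weighted sum of the possible outcomes:

```python
# Hypothetical uncertain event: three possible payoffs and their
# probabilities (the probabilities must sum to 1).
outcomes = [100, 200, -40]
probabilities = [0.50, 0.25, 0.25]

# Expected value: weight each outcome by its probability and sum.
expected_value = sum(p * x for p, x in zip(probabilities, outcomes))
print(expected_value)  # 90.0
```

No single trial will pay exactly 90, but over many repetitions 90 is the best single-number forecast of the result.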

Therefore, if the task force multiplies the NPV for high functionality and high interoperability by the probability of that combination, the expected value, or in this case the probability adjusted financial impact, can be determined. Repeating this for each branch of the tree gives a probability adjusted financial impact for each combination of functionality and interoperability.

How do they determine the net probability adjusted financial impact? If two events A and B are mutually exclusive, so that both cannot occur, then the probability of A or B occurring is the probability of A occurring plus the probability of B occurring. Since the branches of the outcome map are mutually exclusive, the “Net Probability Adjusted Financial Impact” is the sum of the individual probability adjusted financial impacts.

On an outcome map, when moving from left to right we multiply, and when moving down we add. Completing the calculations, the task force determines the net probability adjusted financial impact for the “Best” EMR to be $1,234,000, and that of the “Outstanding” EMR to be $1,010,000.
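The multiply-across, add-down calculation can be reproduced in a short script. The NPVs come from Tables 2 and 3; the “Best” EMR probabilities are the ones stated above, while the “Outstanding” EMR probabilities (30 percent high functionality, and 70/20/10 percent interoperability) are inferred from its outcome map, so treat those as assumptions of this sketch.

```python
# Net probability-adjusted financial impact: multiply along each branch
# (left to right), then sum the branches (down the map).

def net_impact(p_functionality, p_interoperability, npv):
    """Sum of probability-weighted NPVs over all branches."""
    return sum(
        p_functionality[f] * p_interoperability[i] * npv[(f, i)]
        for f in p_functionality
        for i in p_interoperability
    )

# NPVs per (functionality, interoperability) branch, from Tables 2 and 3.
best_npv = {
    ("high", "high"): 1_700_000, ("high", "moderate"): 1_200_000,
    ("high", "low"): -500_000, ("low", "high"): 1_100_000,
    ("low", "moderate"): 900_000, ("low", "low"): -800_000,
}
outstanding_npv = {
    ("high", "high"): 2_000_000, ("high", "moderate"): 1_000_000,
    ("high", "low"): -1_000_000, ("low", "high"): 1_500_000,
    ("low", "moderate"): -500_000, ("low", "low"): -1_500_000,
}

best = net_impact({"high": 0.8, "low": 0.2},
                  {"high": 0.6, "moderate": 0.3, "low": 0.1},
                  best_npv)
outstanding = net_impact({"high": 0.3, "low": 0.7},
                         {"high": 0.7, "moderate": 0.2, "low": 0.1},
                         outstanding_npv)

print(round(best), round(outstanding))  # 1234000 1010000
```

The script reproduces the task force's two net figures, $1,234,000 for the “Best” EMR and $1,010,000 for the “Outstanding” EMR.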

Based on these results, the task force believes it should recommend the “Best” EMR. Yet how can it be sure the probabilities assigned to the uncertainties of functionality and system interoperability are, in fact, good choices?

In my next column, we will examine how to take the analyses the task force completed and by using sensitivity analysis and Monte Carlo simulation, test their assumptions.

David P. Tarantino, MD, MBA

David P. Tarantino, MD, MBA, is CEO of The MD Consulting Group, LLC and president of Lifebridge Anesthesia Associates, LLC in Randallstown, Md. He can be reached at 410-521-2200 or tdoc5@aol.com

Table 1 Site Visit Results

(Based on a 0-10 scoring system, where 0 is worst and 10 is best. Numbers represent the average of scores from all sites.)

                            Best EMR        Outstanding EMR
Functionality                   8                  3
Data Standards                  8                  8
Interoperability                6                  7
Technical Infrastructure        7                  7
NPV                        $1.7 million       $2.0 million

Table 2 NPV Analysis for the “Best” EMR

                            High Functionality   Low Functionality
High Interoperability           $1,700,000          $1,100,000
Moderate Interoperability       $1,200,000            $900,000
Low Interoperability            ($500,000)           ($800,000)

Mean NPV = $600,000

Table 3 NPV Analysis for the “Outstanding” EMR

                            High Functionality   Low Functionality
High Interoperability           $2,000,000          $1,500,000
Moderate Interoperability       $1,000,000          ($500,000)
Low Interoperability           ($1,000,000)        ($1,500,000)

Mean NPV = $250,000

Figure 1 Outcomes Map for “Best” EMR

     NPV        Probability   Expected Value
 $1,700,000        0.48          $816,000
 $1,200,000        0.24          $288,000
  ($500,000)       0.08          ($40,000)
 $1,100,000        0.12          $132,000
   $900,000        0.06           $54,000
  ($800,000)       0.02          ($16,000)

Probability Adjusted Financial Impact = $1,234,000

Figure 2 Outcomes Map for “Outstanding” EMR

     NPV        Probability   Expected Value
 $2,000,000        0.21          $420,000
 $1,000,000        0.06           $60,000
($1,000,000)       0.03          ($30,000)
 $1,500,000        0.49          $735,000
  ($500,000)       0.14          ($70,000)
($1,500,000)       0.07         ($105,000)

Probability Adjusted Financial Impact = $1,010,000

COPYRIGHT 2008 American College of Physician Executives
