Fitting the Lognormal Distribution to Surgical Procedure Times*

Jerrold H. May, David P. Strum, and Luis G. Vargas
Minimum surgical times are positive and often large. The lognormal distribution has been proposed for modeling surgical data, and the three-parameter form of the lognormal, which includes a location parameter, should be appropriate for surgical data. We studied the goodness-of-fit performance, as measured by the Shapiro-Wilk p-value, of three estimators of the location parameter for the lognormal distribution, using a large data set of surgical times. Alternative models considered included the normal distribution and the two-parameter lognormal model, which sets the location parameter to zero. At least for samples with n > 30, data adequately fit by the normal had significantly smaller skewness than data not well fit by the normal, and data with larger relative minima (smallest order statistic divided by the mean) were better fit by a lognormal model. The rule “If the skewness of the data is greater than 0.35, use the three-parameter lognormal with the location parameter estimate proposed by Muralidhar and Zanakis (1992); otherwise, use the two-parameter model” works almost as well at specifying the lognormal model as more complex guidelines formulated by linear discriminant analysis and by tree induction.
Subject Areas: Hospital Management, Planning and Scheduling, Probability Models, and Statistics.
In an era of cost-constrained health care, health care institutions must schedule elective surgeries efficiently to contain the costs of surgical services and ensure their own survival. Efficient scheduling in a hospital is complicated by the variability inherent in surgical procedures; therefore, accurately modeling time distributions is the essential first step in constructing a planning and scheduling system. Modeling the nature of that variability has been of interest for the past 35 years. Rossiter and Reynolds (1963), for example, noted that the two-parameter lognormal distribution visually appears to fit a waiting time distribution. In the literature, both the normal (Barnoon & Wolfe, 1968; Dexter, 1996) and the two-parameter lognormal (Hancock, Walter, More, & Glick, 1988; Robb & Silver, 1996) distributions have been proposed for describing surgical times.
As part of a larger project, we were provided with a large set of patient data. We wanted to determine the best distribution for each procedure and type of anesthesia. Our criterion for “best distribution” is the one that gives the best overall fit, using an appropriate statistical test. The literature suggested that the normal and lognormal distributions were the only two viable candidates to consider. Scatterplots of the data suggested that the lognormal would be the superior choice. However, minimum surgical procedure times, even for the simplest procedures, are strictly positive, and very common procedures, such as cardiac bypass, require at least several hours in the operating room. A lognormal distribution with a nonzero minimum (also called the origin, threshold, or location parameter) therefore had to be considered, in addition to the usual two-parameter lognormal. At least three methods to estimate the location parameter have been proposed in the literature. Assuming that our data set is typical of data that appear in other medical contexts, at least, we recognized that a thorough analysis of the information could be used to derive rules about when to use a location parameter as part of the modeling process and, if so, which one to use.
The validity of rules extracted from an empirical study depends on the appropriateness of the data sets used in the study. Muralidhar and Zanakis (1992) used synthetic data to compare the bias in three different estimators of the location parameter: they varied the coefficient of variation in increments of 0.1 from 0.1 to 2, used sample sizes of 10, 20, 30, 50, 100, 200, and 500, held the mean and location constant, and had equal sample sizes in all cells of the design matrix. Because our research is based on actual data, the frequencies and characteristics are a function of the population of surgical procedure times. To the extent that our population (we have a census, not a sample of it) mirrors what might be encountered in real applications, the overall patterns of behavior on which our guidelines are based should be more useful than ones based on synthetic data. If surgical times are fundamentally different from those that arise in other situations, that conclusion might not follow. Muralidhar and Zanakis based their selection procedures on the coefficient of variation of the data. We found that skewness was important and that the coefficient of variation had little impact. Determining whether the difference in guidelines is due to the difference in objective (theirs being to minimize bias, ours being to maximize goodness of fit) or to the differences in the data used for empirical analysis requires further research. In the next section, we describe the surgical data set used for our investigations. Following that, we discuss the way in which we implemented the location parameter estimators. Then, we compare the normal distribution with the best of the four lognormal alternatives (the two-parameter lognormal and the three possible three-parameter lognormals), and show that, in general, a lognormal model fits our data better than the normal model does.
Having established that fact, in the following section, we analyze the behavior of the three location parameter estimators as a function of characteristics of the samples and derive decision rules for selecting which one to use, if any, in order to optimize goodness of fit.
THE DATA SET
Our data set consists of 60,643 surgical cases from a large university teaching hospital, performed from July 1, 1989, until November 1, 1995. All data were collected using a previously described computerized system (Bashein & Barna, 1985). Variables collected include the anesthetic agents used; the date and time at which anesthesia began, the patient was ready for surgery, surgery began, surgery ended, and the patient emerged from anesthesia; and the surgical procedures performed (up to three), categorized by Current Procedural Terminology (CPT) code (Kirschner, Burkett, Marcinowski, Kotowicz, Leoni, Malone, O’Heron, O’Hara, Scholten, & Willard, 1995). Of the 60,643 surgical records, 779 were omitted from analysis due to incomplete data. Exactly 46,322 cases were coded with only a single procedure code, 10,470 patients had exactly two different procedures, and 2,802 patients had exactly three procedures during surgery.
In this paper, we focus on two durations: the time between anesthesia start and end (the total time), and the time between surgery start and wound closure (the surgical time). Total time is important because it represents the amount of time the patient occupies an operating room, which we need to know in order to build an operating room schedule. Surgical time represents the amount of time the surgeon is with the patient. Because surgeons may operate sequentially on a series of patients in different operating rooms, surgical time is important for scheduling and sequencing patients. We used anesthetic codes to categorize the type of anesthesia administered into six categories: general, local, monitored, pain procedure, regional, and none. Only general, local, monitored, and regional anesthesia occurred often enough to be further analyzed. We categorized the data by procedure and by type of anesthesia. A total of 5,125 different procedure-anesthesia combinations were represented in the 46,322 cases involving exactly one procedure. Although about 13,542 cases involved two or three procedures, frequencies for such cases are typically too small to do meaningful distributional fits at the procedure-anesthesia combination level, and are therefore not discussed in this paper.
The 3,160 procedure-anesthesia combinations vary widely in coefficient of variation (the ratio of the standard deviation to the mean) and skewness, the two characteristics we later use to derive guidelines for choosing a distributional alternative. The observed values of coefficient of variation and skewness are also not equally distributed by the number of observations in each procedure-anesthesia combination, nor is skewness independent of coefficient of variation, as it might be in a designed experiment. Table 1 shows a cross-tabulation of the number of procedures in a procedure-anesthesia combination versus coefficient of variation. The coefficient of variation appears to decrease as sample size increases; the p-value from the chi-square test for independence is .0230. Table 2 shows a cross-tabulation of the number of procedures in a procedure-anesthesia combination versus skewness. Skewness strongly appears to increase with larger sample sizes; the p-value from the chi-square test for independence is less than .0001. Table 3 shows that skewness tends to increase with coefficient of variation; the p-value for the chi-square test for independence is also less than .0001.
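The sample statistics and independence tests behind Tables 1-3 can be sketched as follows; the surgical times and the cross-tabulation counts below are invented for illustration (the actual values come from the study's data set):

```python
import numpy as np
from scipy import stats

# Hypothetical times (minutes) for one procedure-anesthesia combination.
times = np.array([55, 60, 62, 70, 75, 80, 95, 120, 150, 240], dtype=float)

cv = times.std(ddof=1) / times.mean()      # coefficient of variation
skewness = stats.skew(times, bias=False)   # sample skewness

# Chi-square test of independence on a cross-tabulation, as in
# Tables 1-3; the counts here are made up for illustration.
counts = np.array([[30, 20, 10],
                   [15, 25, 35]])
chi2, p, dof, expected = stats.chi2_contingency(counts)
```

A small p-value from `chi2_contingency` leads to rejecting independence of the row and column classifications, which is how the paper concludes that skewness is not independent of sample size or of the coefficient of variation.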
LOCATION PARAMETER ESTIMATION
NORMAL VERSUS LOGNORMAL FITS
Because we want to derive our conclusions from actual data, as opposed to synthetic data from a Monte Carlo procedure, we must first establish that the data are best fit by a lognormal distribution before we can draw inferences about the best way to estimate the location parameter. Our comparison is based on p-values from a test of goodness of fit. Bratley, Fox, and Schrage (1987, pp. 133-134) strongly criticized the approach of using goodness-of-fit tests on a variety of distributions to choose a data model, so we limit our consideration to the normal and the lognormal, both of which have been previously proposed in the literature. We measure goodness of fit using the Shapiro-Wilk test, because it has been described as the best omnibus test of its type (D’Agostino, 1986, p. 406). The IMSL routine that performs the Shapiro-Wilk test can be used with a sample size as small as 3, but we did not test anything smaller than 5, because almost any model may appear to fit a sample that small.
Our data are rounded (nominally) to the nearest minute and, in some cases, appeared to be rounded to the nearest five minutes. D’Agostino (1986, p. 405) pointed out that the Shapiro-Wilk test can be affected by rounding. He noted that rounding has a significant effect if the ratio of the standard deviation of the distribution to the rounding interval is 3 or 5, but only a minimal effect when the ratio is 10. Based on a one-minute rounding scheme, the average ratio of standard deviation to rounding interval for the 3,160 distributions we studied is 54.3. In two cases, the ratio is 5 or less, and in 45 of the 3,160 distributions the ratio is less than 10. If the data are actually rounded to the nearest five minutes, then the average ratio of standard deviation to rounding interval is 10.9; in 621 cases the ratio is 5 or less; and in 1,853 of the distributions, the ratio is less than 10. Because many of the cases have recorded times at other than multiples of five minutes, we believe that the Shapiro-Wilk test is appropriate for our purposes.
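As a minimal sketch of the rounding check described above (the times below are hypothetical), the ratio of the sample standard deviation to the rounding interval can be computed directly:

```python
import numpy as np

def rounding_ratio(times, interval):
    # Ratio of the sample standard deviation to the rounding interval;
    # D'Agostino (1986) reports distortion of the Shapiro-Wilk test at
    # ratios around 3 to 5 and minimal effect at 10.
    return np.std(times, ddof=1) / interval

times = [35.0, 40.0, 45.0, 45.0, 60.0, 90.0, 120.0]  # hypothetical minutes
r_one = rounding_ratio(times, 1.0)   # one-minute rounding
r_five = rounding_ratio(times, 5.0)  # five-minute rounding
```

The five-minute ratio is always one fifth of the one-minute ratio, which is why the paper reports both scenarios for the same 3,160 distributions.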
We tested for normality using the IMSL routine SPWILK. We tested for lognormality by first determining all the candidate location parameters. If a location parameter was positive, we subtracted it from all the observed times, took the natural logs of the times, and tested the resulting series for normality. Our numerical results support the contention that the lognormal model is superior to the normal model for our data, and that the difference between the models increases as the sample size becomes larger. The second conclusion is not surprising, because goodness-of-fit tests are not particularly powerful for small sample sizes.
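The shift-then-log testing procedure described above can be sketched as follows, using scipy's Shapiro-Wilk routine in place of IMSL's SPWILK; the times and the location value of 30 are hypothetical, and the estimators A1-A3 themselves are not reproduced here:

```python
import numpy as np
from scipy import stats

def lognormal_fit_pvalue(times, location=0.0):
    # Subtract the candidate location parameter, take natural logs,
    # and test the result for normality with the Shapiro-Wilk test.
    # location = 0 corresponds to the two-parameter lognormal.
    shifted = np.asarray(times, dtype=float) - location
    if np.any(shifted <= 0):  # the location must lie below the sample minimum
        return 0.0
    _, p = stats.shapiro(np.log(shifted))
    return p

times = [42.0, 55.0, 61.0, 70.0, 88.0, 95.0, 130.0, 190.0]  # hypothetical
p_2ln = lognormal_fit_pvalue(times)        # two-parameter lognormal
p_3ln = lognormal_fit_pvalue(times, 30.0)  # shifted by a hypothetical estimate
```

Comparing such p-values across the two-parameter model and the three candidate location estimates is the core of the model comparison that follows.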
We cross-tabulated the observed Shapiro-Wilk p-values for the best lognormal model against sample size in order to see how the lognormal model’s goodness of fit changes as sample size increases. We divided p-values into four categories and sample size into five categories, as shown in Table 4. The row percentages for the column p > .1, for which a good fit by the lognormal model is strongly supported by the test, decrease from about 90% in the row for very small (30 or less) samples to about 52% in the row for large (over 200) samples. Correspondingly, the row percentages in the column for p < .01 increase with sample size.
Tables 5 and 6 show how the best of the lognormal fits performs in direct comparison with the competing model, the normal. Table 5 tabulates the goodness-of-fit p-values, categorized as before, for the best of the lognormals against the normal for the 2,664 samples with n of 30 or less; Table 6 does the same for the 496 samples with n of 31 or more.
The same pattern holds if we consider the Shapiro-Wilk p-values without categorizing them into four groups. For sample sizes of 30 and below, the best of the lognormals yields a goodness-of-fit p-value larger than that of the normal 65% of the time (1,743 out of 2,664); for sample sizes of 31 to 60, 74% of the time (191 out of 258); for sample sizes of 61 to 100, 87% of the time (90 out of 104); for sample sizes of 101 to 200, 83% of the time (71 out of 86); and for sample sizes of 201 or more, 77% of the time (37 out of 48). The numerical results strongly suggest that if we could find a way to determine which lognormal distribution to fit (two-parameter or, if a three-parameter, which estimator of the location parameter to use), the resulting model would be superior to using the normal distribution.
The samples that fall into the four corners of Table 6 provide some insight as to (1) what characteristics of the data could be associated with the model that better fits the data, and (2) how well each model fits the data. The four corners include the samples of size 31 or larger for which neither model fits well (p-values below .01 for both), those for which one model fits but the other does not (one has p > .1, the other has p < .01), and those for which both models fit well (p > .1 for both).
OVERALL PERFORMANCE OF THE LOCATION ESTIMATORS
The previous section compared the normal distribution to the best of the lognormal fits. In this section, we discuss the differences among the four different lognormal fit strategies, and the ways in which those differences are related to characteristics of the samples. In the next section, we derive a decision tree to recommend a modeling strategy as a function of sample characteristics.
First, do the four different lognormal alternatives have different goodness-of-fit performance? Looking only at the 3,160 Shapiro-Wilk p-values for each of the lognormal alternatives, we ran a one-way ANOVA of those values against the location parameter estimator that was used. That approach needs to be treated with caution, because there is no reason to presume that the p-values are normally distributed. In addition, both Cochran’s C test and Bartlett’s test for homogeneity of variances show that the four groups’ standard deviations are not the same (p is essentially zero for both tests). Nevertheless, the means and the 95% Tukey HSD interval plot shown in Figure 3, where 2LN denotes the two-parameter lognormal, show that the four groups have significantly different average goodness-of-fit behavior, overall. Surprisingly, although all the samples should have minima strictly bounded away from zero, using the three-parameter lognormal with estimator A1 or A2 appears to give poorer performance than ignoring the location parameter altogether, when no other characteristics of the data are taken into account. As shown in Figure 3, the three-parameter lognormal using estimator A3 gives the best overall fit, followed closely by the two-parameter lognormal.
A Kruskal-Wallis test on the same data shows that there may be more to the story, though. The average ranks of the four alternatives are significantly different (the p-value is essentially zero), but a box-and-whisker plot, with the median notched, the mean marked with a plus sign, and outliers indicated (displayed in Figure 4), appears to show that the behavior of the three-parameter lognormal using estimator A2, especially, may be highly related to factors not accounted for in an overall analysis.
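A sketch of the overall comparisons (one-way ANOVA, Bartlett's test for homogeneity of variances, and the Kruskal-Wallis rank test) on four groups of p-values; the beta-distributed values below are synthetic stand-ins for the study's actual Shapiro-Wilk p-values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Four synthetic groups of goodness-of-fit p-values standing in for the
# 2LN, A1, A2, and A3 alternatives (the real values come from the study).
groups = [rng.beta(a, 2.0, size=200) for a in (5.0, 2.0, 1.0, 6.0)]

f_stat, p_anova = stats.f_oneway(*groups)     # one-way ANOVA on the p-values
b_stat, p_bartlett = stats.bartlett(*groups)  # homogeneity of variances
h_stat, p_kw = stats.kruskal(*groups)         # Kruskal-Wallis rank test
```

When Bartlett's test rejects equal variances, as it does in the paper, the nonparametric Kruskal-Wallis comparison is the safer check on whether the groups differ.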
The Venn diagram in Figure 5 shows the frequency of the best fit by modeling approach, where “modeling approach” is limited to the lognormal alternatives. The effect of including the normal distribution as an alternative is discussed in the next paragraph. Each region of Figure 5 shows the number of samples for which the alternative or group of alternatives gives the best goodness of fit. For example, 345 procedure-anesthesia combinations are best fit by the three-parameter lognormal using estimator A1 alone; 121 are best fit by all four alternatives (the two-parameter lognormal and the three-parameter lognormal using estimators A1, A2, and A3); and 11 by the two-parameter lognormal, the three-parameter lognormal using estimator A2, and the three-parameter lognormal using estimator A3. Note that 2,045 times there is a unique best alternative, but only twice is it the three-parameter lognormal using estimator A2. Although the samples should be strictly bounded away from zero, 26% (829/3,160) of the time the three-parameter lognormal with any of the three estimators yields a model strictly inferior to ignoring the location parameter entirely. The single best pure strategy is to use the three-parameter lognormal with estimator A3, but it yields a best fit only 61% of the time and is almost indistinguishable from always ignoring the location parameter. Always using the three-parameter lognormal with estimator A3 results in a best fit 1,941 out of 3,160 times, as compared with ignoring the location parameter, which gives a best fit 1,918 out of 3,160 times.
We limited Figure 5 to lognormal alternatives. Without a domain-based argument to the contrary, it is plausible to believe that there is a single model that describes the stochastic process whose realizations are reflected in our data set. The analysis in the previous section demonstrates that this model is much more likely to be some form of the lognormal than the normal. From a statistical perspective, though, it is interesting to consider how Figure 5 would change if we included the normal distribution as an alternative. There are 1,021 samples best fit by the normal. The set of samples best fit by the normal does not overlap that of any of the lognormal alternatives. That is, none of the samples is best fit by both the normal and any lognormal distribution (there are seven samples for which all the methods give equally poor results; all have Shapiro-Wilk p-values essentially zero). The seven regions of the Venn diagram (A1; 2LN; 2LN & A2; 2LN & A3; 2LN & A1 & A3; 2LN & A2 & A3; 2LN & A1 & A2 & A3) affected by the explicit inclusion of the normal have their total frequency change from 2,253 (345, 829, 33, 779, 135, 11, and 121, respectively) to 1,225 (344, 359, 0, 457, 62, 2, and 1). The number of samples best fit by 2LN drops dramatically. Note, however, that the numbers of CPT-anesthesia combinations best fit by the three-parameter lognormal and only one of the estimators A1, A2, and A3 are essentially unchanged.
We next looked for guidelines that might improve the modeling process by helping to identify when to use which location parameter estimate, if any.
A MODEL SELECTION DECISION TREE AND ITS DERIVATION
We chose to base our decision tree on the coefficient of variation and the skewness for several reasons. Muralidhar and Zanakis (1992) used the coefficient of variation as the basis for recommending location parameter estimates. Both the coefficient of variation and skewness are easy to measure. The 13 different outcomes for the best distributional fit appear to differ significantly in their average coefficient of variation and skewness values. Figure 6 shows the mean and Tukey 95% HSD intervals for the coefficient of variation for the 13 groups, and Figure 7 does the same for skewness. For both one-way ANOVAs, both Cochran’s C test and Bartlett’s test yield p-values of essentially zero. The standard deviations differ by more than a factor of three to one, and sample sizes are not equal, so the p-values and significance levels of the tests may be off significantly. However, the figures do suggest that the coefficient of variation and skewness may be useful in determining modeling guidelines.
We thought that the size, relative or absolute, of the smallest order statistic might be a factor in explaining differences between the modeling approaches. We expected that the samples best fit by the two-parameter lognormal would have smallest order statistics close to zero, and that those best fit by the three-parameter lognormal using estimators A1, A2, and A3 might also differ. Figure 8 shows means and 95% Tukey HSD intervals for a one-way ANOVA of the observed minimum value (x1) by best-fitting lognormal alternative. As before, the tests for homogeneity of variances fail, so formal statistical tests may be questionable, but notice how similar the distributions are for the two-parameter lognormal category and for the three-parameter lognormal using estimators A1 and A3. The samples best fit by the three-parameter lognormal using estimator A2 do have significantly larger values of x1, but there are only two of them. Redoing the analysis displayed in Figure 8 using x1/mean instead of x1 did not further separate the three-parameter lognormal using estimator A3 from the two-parameter lognormal, the two most significant contenders.
The Venn diagram in Figure 5 illustrates the difficulty in using a technique for extracting rules for selecting a modeling approach. If the sets of CPT-anesthesia combinations best fit by a particular alternative were disjoint, the task at this point would be to find functions that best separate those sets. The sets, however, have considerable overlap. We applied two different methodologies: linear discriminant analysis (LDA) and a tree induction program, See5 (Rulequest Research, 1998). See5 is an improved version of C4.5 (Quinlan, 1993) and a descendant of ID3, a machine learning algorithm for discrete classification. See5 uses hyperplanes to define the boundaries between sets, but only chooses hyperplanes parallel to the coordinate axes. At each branch of the tree, it uses an entropy-based measure to identify a new dividing line for the region not yet classified.
Note that both LDA and See5 determine functions on the basis of which disjoint clusters may be separated. How do we deal with a situation such as ours, in which over 35% of the cases are actually best fit by at least two different alternatives? We considered two options. First, we used the extraction techniques on the 2,045 data points for which only one alternative was best, and then evaluated the resulting rules on the entire data set. Second, we assigned each point for which multiple alternatives were optimal to all alternatives that best fit it, extracted rules, and then evaluated them on the entire data set. The second option expanded the data set to 4,666 cases.
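The rule-extraction setup can be sketched with scikit-learn's LDA and decision tree standing in for the paper's LDA and See5 runs (both trees induce axis-parallel splits); the features and labels below are synthetic:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Synthetic feature matrix: column 0 plays the role of the coefficient
# of variation, column 1 the skewness; the label marks which alternative
# fits best (generated here by a skewness threshold, purely for illustration).
X = rng.normal(size=(300, 2))
y = (X[:, 1] > 0.35).astype(int)   # 0 = no shift (2LN), 1 = shifted (A3)

lda = LinearDiscriminantAnalysis().fit(X, y)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

acc_lda = lda.score(X, y)    # single linear boundary
acc_tree = tree.score(X, y)  # axis-parallel splits, as in See5
```

Because the synthetic labels here are defined by an axis-parallel threshold, the tree recovers the rule essentially exactly, while LDA's oblique boundary comes close; on the real data both methods performed similarly, as Table 7 reports.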
The performance on the total data set of the four models derived from using the two classification methods and the two representations is shown in Table 7. The LDA classifiers correctly identify fewer of the model fits than do the corresponding See5 classifiers, but they are better at recognizing samples best fit by shift A1. The representation alternatives are to ignore samples for which more than one strategy is optimal, leaving 2,045 cases for a method to analyze, and to assign such samples to all optimal strategies, resulting in 4,666 cases. The performance of both LDA and See5 is almost identical under the two representations. After manually examining the patterns in the See5 classifiers, we found that a simple rule of using no shift if the skewness is 0.35 or less, and shift A3 otherwise, does almost as well overall as the more complex rules constructed by See5. The performance of the single rule is given in the sixth column of Table 7.
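The single skewness rule can be stated in a few lines; the threshold of 0.35 comes from the paper, while the sample data below are invented, and A3 is taken to be the Muralidhar-Zanakis location estimator as the text indicates:

```python
from scipy import stats

def choose_lognormal_model(times, threshold=0.35):
    # Paper's single rule: two-parameter lognormal (no shift) when the
    # sample skewness is at or below the threshold; otherwise the
    # three-parameter lognormal with location estimator A3.
    g = stats.skew(times, bias=False)
    return "3LN-A3" if g > threshold else "2LN"

nearly_symmetric = [50, 55, 60, 65, 70, 75, 80]  # skewness 0
right_skewed = [40, 42, 45, 47, 50, 55, 180]     # large positive skewness

model_a = choose_lognormal_model(nearly_symmetric)
model_b = choose_lognormal_model(right_skewed)
```

The appeal of the rule is exactly this simplicity: one easily computed statistic and one threshold, with accuracy close to the full See5 trees.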
Based on a census of surgical times that appear to be lognormally distributed, we found that what minimizes bias also maximizes goodness of fit 61% of the time. Although our data sets should, on conceptual grounds, be strictly bounded away from zero, a strategy of leaving the location parameter out of the model entirely does almost as well as choosing the location parameter so as to minimize bias. Decision rules based on the skewness and coefficient of variation of the data can be used to identify the correct alternative 78% of the time, but do not do any better than a single rule based on the skewness.
It is possible that the existing estimators for the location parameter are also the best when goodness of fit is the criterion of interest, if we could only find the proper way of identifying which one to use. Skewness and the coefficient of variation do not appear to be adequate for that task; neither does the size of the smallest order statistic. The types of data for which the three-parameter lognormal using estimator A1 is superior to the three-parameter lognormal using estimator A3 are particularly elusive. As shown in Table 7, the See5 decision trees for the 2,045- and 4,666-case analyses correctly identify, respectively, 10 and 8 of the 345 procedure-anesthesia combinations best fit by the three-parameter lognormal using estimator A1. The single skewness rule, which is as accurate, overall, as the See5 decision trees, correctly identifies none of those combinations. It is also possible that an altogether different type of estimator should be used when goodness of fit is the criterion of interest. Because accurate data modeling is critical to our planning and reasoning systems, we welcome further work that would determine which, if either, of the above possibilities is correct. [Received: October 24, 1996. Accepted: March 15, 1999.]
*This research was supported in part by a grant from the Institute for Industrial Competitiveness. Three anonymous referees made valuable, constructive comments on this paper. We especially thank the associate editor, whose extensive and thorough recommendations played a key role in both the conceptual development and the presentation of our work.
Barnoon, S., & Wolfe, H. (1968). Scheduling a multiple operating room system: A simulation approach. Health Services Research, 3(4), 272-285.
Bashein, G., & Barna, C. (1985). A comprehensive computer system for anesthetic record retrieval. Anesthesia Analgesia, 64, 425-431.
Bratley, P., Fox, B. L., & Schrage, L. E. (1987). A guide to simulation (2nd ed.). New York: Springer-Verlag.
D’Agostino, R. B. (1986). Tests for the normal distribution. In R. B. D’Agostino & M. A. Stephens (Eds.), Goodness-of-fit techniques (pp. 367-419). New York: Marcel Dekker.
Dannenbring, D. G. (1977). Procedures for estimating optimal solution values for large combinatorial problems. Management Science, 23, 1273-1283.
Dexter, F. (1996). Application of prediction levels to OR scheduling. AORN Journal, 63(3), 1-8.
Dubey, S. D. (1967). Some percentile estimators for Weibull parameters. Technometrics, 9, 119-129.
Hancock, W. M., Walter, P. R., More, R. A., & Glick, N. D. (1988). Operating room scheduling data base analysis for scheduling. Journal of Medical Systems, 12, 397-409.
Kirschner, C. G., Burkett, R. C., Marcinowski, D., Kotowicz, G. M., Leoni, G., Malone, Y., O’Heron, M., O’Hara, K. E., Scholten, K. R., & Willard, D. M. (1995). Physicians’ Current Procedural Terminology 1995. Chicago: American Medical Association.
Muralidhar, K., & Zanakis, S. H. (1992). A simple minimum-bias percentile estimator of the location parameter for the gamma, Weibull, and log-normal distributions. Decision Sciences, 23, 862-879.
Quinlan, J. R. (1993). C4.5: Programs for machine learning. San Mateo, CA: Morgan Kaufmann.
Robb, D. J., & Silver, E. A. (1996). Scheduling in a management context: Uncertain processing times and non-regular performance measures. Decision Sciences, 24(6), 1085-1106.
Rossiter, C. E., & Reynolds, J. A. (1963). Automatic monitoring of the time waited in out-patient departments. Medical Care, 1, 218-225.
See5 (release 1.09) [Computer software]. (1998). St Ives, NSW, Australia: Rulequest Research Pty Ltd.
Jerrold H. May
Joseph M. Katz Graduate School of Business, University of Pittsburgh, Pittsburgh, PA 15260
David P. Strum
Department of Anesthesiology, Queen’s University, Kingston General Hospital, 76 Stuart St., Kingston, Ontario K7L 2V7
Luis G. Vargas
Joseph M. Katz Graduate School of Business, University of Pittsburgh, Pittsburgh, PA 15260
Jerrold H. May is a professor of decision sciences and artificial intelligence at the Katz Graduate School of Business, University of Pittsburgh, and is also the director of the Artificial Intelligence in Management Laboratory there. He has more than 60 refereed publications in a variety of outlets, ranging from management journals such as Operations Research and Information Systems Research to medical ones such as Anesthesiology and Journal of the American Medical Informatics Association. Professor May’s current work focuses on modeling, planning, and control problems, the solutions to which combine management science, statistical analysis, and artificial intelligence, particularly for operational tasks in health-related applications.
David P. Strum earned his M.D. degree from Dalhousie University, trained at the University of Toronto and the University of California, San Francisco, and is board certified in both critical care medicine and anesthesiology. Previously on the faculties of the University of Washington, the University of Pittsburgh, and the University of Arkansas, Dr. Strum is currently an associate professor of anesthesiology at Queen’s University, Ontario, Canada. Dr. Strum was also a visiting scholar at the Katz Graduate School of Business, University of Pittsburgh, from 1996 to 1997. He has published numerous papers in refereed journals such as Anesthesiology, JAMIA, Science, Anesthesia and Analgesia, and Decision Sciences. His research interests are in operations research and management for surgical services.
Luis G. Vargas is a professor of decision sciences and artificial intelligence at the Katz Graduate School of Business, University of Pittsburgh, and co-director of the AIM Laboratory. He has published over 40 papers in refereed journals such as Management Science, Operations Research, Anesthesiology, JAMIA, and EJOR, and three books on applications of the Analytic Hierarchy Process with Thomas L. Saaty. Professor Vargas’ current work focuses on the use of operations research and artificial intelligence methods in health care environments.
Copyright American Institute for Decision Sciences Winter 2000