Cautions on the use of the SERVQUAL measure to assess the quality of information systems services

Van Dyke, Thomas P

ABSTRACT

The SERVQUAL questionnaire (Parasuraman, Zeithaml, & Berry, 1988) is one of the preeminent instruments for measuring the quality of services as perceived by the customer. In a recent Decision Sciences article, Kettinger and Lee (1995) suggested the use of a modified SERVQUAL instrument to assess the quality of the services supplied by an information services provider. However, a number of problems with the SERVQUAL instrument are discussed in the literature. This article provides an illustrative example utilizing data collected from 138 executive and information systems professional customers of a multibillion dollar information services provider in order to examine the validity and reliability of Kettinger and Lee’s (1995) modified SERVQUAL instrument. Results of analyses do not confirm the findings of Kettinger and Lee. Moreover, it appears that the use of difference scores in calculating SERVQUAL contributes to problems with the reliability, discriminant validity, convergent validity, and predictive validity of the measure. These findings suggest that caution should be exercised in the use of SERVQUAL scores and that further work is needed in the development of measures for assessing the quality of information services.

Subject Areas: Management Information Systems, Quality Control, Service Operations, and Service Quality.

INTRODUCTION

A recent paper by Kettinger and Lee (1995) recommended a combination of Parasuraman, Zeithaml, and Berry’s (1985, 1988, 1991) Service Quality (SERVQUAL) questionnaire and Ives, Olson, and Baroudi’s (1983) User Information Satisfaction (UIS) instrument to measure the satisfaction of users with information services providers. Galletta and Lederer (1989) discussed the difficulties associated with the use of this UIS questionnaire to measure user satisfaction. Citing poor reliability, they cautioned against the use of the UIS instrument to evaluate the information systems (IS) function. Furthermore, although it may be true that the SERVQUAL instrument is a commonly used measure for the assessment of perceived service quality in both marketing practice and research, Kettinger and Lee acknowledged that a number of studies have identified potential difficulties related to this instrument (e.g., Carman, 1990; Babakus & Boller, 1992; Cronin & Taylor, 1992).

This note embodies a threefold purpose: (1) to summarize the theoretical and empirical difficulties with the SERVQUAL instrument, (2) to present an illustrative example utilizing a modified version of Kettinger and Lee’s (1995) SERVQUAL-based instrument, and (3) to summarize the limitations and key issues for practitioners and researchers.

THE SERVQUAL INSTRUMENT: PROBLEMS IDENTIFIED IN THE LITERATURE

The difficulties associated with the SERVQUAL measure that are identified in the literature can be grouped into four main categories: (1) the use of difference or gap scores, (2) poor predictive and convergent validity, (3) the ambiguous definition of the “expectations” construct, and (4) unstable dimensionality.

Problems with the Use of Difference or “Gap” Scores

A difference score is created by subtracting one measure from another in an attempt to create a third measure of a distinct construct. For example, in scoring the SERVQUAL instrument the expectations score is subtracted from the perceptions score to create such a “gap” measure of service quality. Several problems with the use of difference scores make them a poor choice as measures of psychological constructs (see Table 1). The described difficulties related to the use of difference measures include low reliability, poor discriminant validity, spurious correlations, and variance restrictions.
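
Written out, the subtraction described above is simply the following (the notation is ours; the scoring rule is from Parasuraman et al., 1988):

```latex
% Gap score for SERVQUAL item i, where P_i is the perception rating and
% E_i is the expectation rating for that item; the overall score is
% commonly taken as the mean gap across the 22 items.
\[ G_i = P_i - E_i, \qquad \mathit{SERVQUAL} = \frac{1}{22}\sum_{i=1}^{22} (P_i - E_i) \]
```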

Reliability Problems With Gap Scores

Many studies demonstrate that Cronbach’s (1951) alpha, a widely used method of estimating reliability, is inappropriate for difference scores (e.g., Lord, 1958; Wall & Payne, 1973; Johns, 1981; Prakash & Lounsbury, 1983; Peter, Churchill, & Brown, 1993). This is because the reliability of a difference score depends on the reliability of the component scores and the correlation between them: as the correlation between the component scores increases, the reliability of the difference score decreases. Therefore, Cronbach’s alpha tends to overestimate the reliability of difference scores when the component scores are highly correlated, as is the case with the SERVQUAL instrument (Peter et al., 1993).
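
The dependence described above can be stated explicitly. One commonly cited form of the difference-score reliability (see, e.g., Lord, 1958; Johns, 1981) is reproduced below; the notation is ours.

```latex
% Reliability of the difference score D = P - E, where r_PP and r_EE are the
% component reliabilities, r_PE is their correlation, and sigma_P, sigma_E
% are their standard deviations:
\[
r_{DD} = \frac{\sigma_P^2\, r_{PP} + \sigma_E^2\, r_{EE} - 2\, r_{PE}\, \sigma_P \sigma_E}
              {\sigma_P^2 + \sigma_E^2 - 2\, r_{PE}\, \sigma_P \sigma_E}
\]
% With equal component variances this reduces to
\[
r_{DD} = \frac{\tfrac{1}{2}(r_{PP} + r_{EE}) - r_{PE}}{1 - r_{PE}},
\]
% so the difference-score reliability falls toward zero as r_PE approaches
% the average reliability of the two components.
```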

Validity Issues

Another problem with the SERVQUAL instrument concerns the poor predictive and convergent validities of the measure. Babakus and Boller (1992) reported that perceptions-only SERVQUAL scores had higher correlations with an overall service quality measure and with complaint resolution scores than did the perception-minus-expectation scores typically used by SERVQUAL. Parasuraman et al. (1991) reported that the SERVQUAL perception-only scores produced higher adjusted R2 values (ranging from .72 to .81) compared to the SERVQUAL gap scores (ranging from .51 to .71) for each of the five dimensions. Brensinger and Lambert (1990) found evidence of the poor predictive validity of SERVQUAL, and the superior predictive and convergent validity of perception-only scores was confirmed by Cronin and Taylor (1992, 1994). Their results indicated higher adjusted R2 values for perception-only scores across four different industries. The perception component of the perception-minus-expectation scores performs better as a predictor of perceived overall quality than the difference score itself (Parasuraman et al., 1988; Cronin & Taylor, 1992, 1994; Babakus & Boller, 1992; Boulding, Kalra, Staelin, & Zeithaml, 1993).

Ambiguity of the “Expectations” Construct

Teas (1994) noted that SERVQUAL expectations have been variously defined as desires, wants, what a service provider should possess, normative expectations, ideal standards, desired service, and the level of service a customer hopes to receive (e.g., Parasuraman, Zeithaml, & Berry, 1988, 1991, 1994b; Zeithaml, Berry, & Parasuraman, 1993). These multiple definitions and corresponding operationalizations of “expectations” in the SERVQUAL literature result in a concept that is loosely defined and open to multiple interpretations (Teas, 1994). Different interpretations of “expectations” include a forecast or prediction, a measure of attribute importance, classic ideal point, and vector attribute (Teas, 1993; Parasuraman et al., 1994b). These various interpretations can result in potentially serious measurement validity problems. For example, the classic ideal point interpretation produces an inverse relationship between SERVQUAL calculated as perceptions minus expectations (P – E) and perceived service quality measured by perceptions only (P), whenever perception scores exceed expectation scores (i.e., P > E).
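
A minimal sketch of why the classic ideal point interpretation produces this inversion (the notation is ours and is not part of the SERVQUAL instrument): if the expectation E_i is read as an ideal amount of an attribute, then quality falls as perceptions move away from that ideal in either direction.

```latex
% Ideal-point reading of expectations: quality declines with distance from E_i
\[ Q_i = -\,\lvert P_i - E_i \rvert \]
% In the region P_i > E_i this becomes Q_i = -(P_i - E_i), so a larger
% P - E gap score implies lower quality under the ideal-point reading,
% whereas a perceptions-only (P) measure implies higher quality;
% the two measures move in opposite directions whenever P > E.
```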

Unstable Dimensionality of the SERVQUAL Instrument

The results of several studies have demonstrated that the five dimensions claimed for the SERVQUAL instrument are unstable (see Table 1). The unstable dimensionality of SERVQUAL, demonstrated in many domains including information services, is not just a statistical curiosity. The scoring procedure for SERVQUAL calls for averaging the P – E gap scores within each dimension (Parasuraman et al., 1988). Thus, a high expectation coupled with a low perception for one item would be cancelled by a low expectation and high perception for another item within the same dimension. This scoring method is appropriate only if all of the items in that dimension are interchangeable. However, given the unstable number and pattern of the factor structures, averaging groups of items to calculate separate scores for each dimension cannot be justified.
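
A small numerical illustration of this cancellation, using hypothetical ratings on a 7-point scale (the dimension score is the mean of the item gaps within the dimension):

```latex
% Dimension score: mean of the item gap scores within dimension d
\[ SQ_d = \frac{1}{n_d} \sum_{i \in d} (P_i - E_i) \]
% Two hypothetical items in the same dimension:
%   Item A:  E = 7, P = 3  ->  gap = 3 - 7 = -4
%   Item B:  E = 3, P = 7  ->  gap = 7 - 3 = +4
\[ SQ_d = \frac{(3 - 7) + (7 - 3)}{2} = \frac{-4 + 4}{2} = 0 \]
% The resulting dimension score of 0 is the same value that two exactly met
% expectations would produce, even though the two experiences differ sharply.
```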

AN ILLUSTRATIVE EXAMPLE

The Sample

A study was conducted to examine the SERVQUAL instrument as modified by Kettinger and Lee (1995) for assessing the service quality of information services providers. Data were gathered from customers of a single, large, international provider of information services with multibillion dollar 1995 gross revenues. The data were collected by the IS provider for commercial purposes. The authors were given only post hoc access to the survey results. Thus, details of the sample population and the response rates were not made available to the researchers. Due to the competitive nature of the IS industry, the IS provider placed certain nondisclosure provisions on the authors in exchange for access to the survey results. While recognizing that these limitations preclude the reporting of some information regarding the administration of this survey, we believe that the value of the data set, being a nationwide sample of purchasers of IS services, outweighs its limitations. The respondents to this survey were all information systems managers and/or executives responsible for purchasing information system services. The respondents represent 112 different organizations in 33 different industries from every region of the United States. Seventeen of the organizations provided multiple responses. These organizations contained multiple business units that contracted separately for IS services. A total of 138 responses were collected from a voluntary sample.

There are two noteworthy differences between this data set and Kettinger and Lee’s (1995). One is the sample and the other is the information services provider. The Kettinger and Lee study used student subjects. Expectations are vital to the SERVQUAL model. These expectations come mainly from past experience with similar services (Carman, 1990). The expectations of students will likely be different than those of professional users of information systems services. The fact that there is a difference between the expectations of various sample groups does not, by itself, constitute a threat to the validity of making comparisons of the psychometric properties of an instrument across sample populations. However, differences in the internal consistency of responses will affect the results of factor analysis and reliability calculations.

The second difference between the two studies is with the provider of the information services. Kettinger and Lee (1995) utilized an internal information services function rather than an external provider of information services. Because their rationale for considering SERVQUAL in the context of information systems was the fact that the “information services function is now faced with serving customers that possess substantial discretion in their use and purchase of information systems (IS) services” (p. 737), it is believed that the instrument must also be evaluated with respect to the external-provider competitors of internal IS functions. Moreover, to be useful for this purpose, a valid and reliable instrument should perform comparably, and exhibit similar psychometric properties, in both situations.

The Measures

Kettinger and Lee (1995, p. 747) slightly modified the 22 pairs of SERVQUAL questions to make them more appropriate for an information systems context and then pretested this IS-version of SERVQUAL through interviews with IS professionals and graduate students. The questions and instructions were refined and data were then collected. Several confirmatory factor analyses were conducted on the five-dimensional SERVQUAL model using gap scores. This resulted in several pairs of items and one dimension being dropped. A second-order confirmatory factor analysis was conducted for this four-dimensional model using gap scores calculated from the 13 pairs of remaining items, and this produced satisfactory results.

The company in the current study utilized a proprietary instrument which contained slightly modified versions of all of the questions from Kettinger and Lee (1995). In order to render the instrument appropriate for use with a commercial organization, three general modifications were made. In the “Expectations” questions, the phrase “excellent college computing services” was replaced by “excellent computer companies.” In the “Performance” questions, the phrase “our college’s computing services” was replaced with the name of the company conducting the survey. Throughout the survey the term “student” was replaced with the term “customer.” The most significant difference between the Kettinger and Lee instrument and the one utilized in the current study was the use of a 5-point semantic differential scale instead of the 7-point scale used by Kettinger and Lee. This change was made prior to the involvement of the authors and no reason for the change was provided. It is possible that the use of a 5-point scale may cause the data from this sample to exhibit less variance. In addition, data were collected for separate measures of overall satisfaction and overall service quality. Each item utilized a 5-point semantic differential scale.

Analysis and Findings

These analyses and findings are presented in four parts. First, the unidimensionality of the individual subscales is examined for both performance-only and gap scoring methods. Second, the reliability of Kettinger and Lee’s (1995) 13-paired-item IS-version of SERVQUAL is assessed. Third, the convergent and predictive validities of the modified SERVQUAL measure are examined. Finally, the dimensionality of the instrument and the discriminant validity of its items with respect to the theorized four dimensions are assessed.

Unidimensionality of the Subscales

Items in a unidimensional scale (or subscale) measure a single construct. Only with evidence of unidimensionality should a single number be used to represent the value of the scale (Venkatraman, 1989). Unidimensionality is also a necessary condition for the use of coefficient alpha in reliability analysis (Anderson & Gerbing, 1991). Several recognized methods for assessing the unidimensionality of a scale are presented in Table 2. Analysis of these findings supports the assumption of unidimensionality for all of the subscales of IS-SERVQUAL, with the exception of the Empathy subscale under the difference or “gap” scoring. Thus, coefficient alpha was adjudged appropriate for use in the reliability analysis of IS-SERVQUAL.
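
As one rough, illustrative screen of subscale unidimensionality (a sketch of the general idea, not necessarily the specific procedures summarized in Table 2), a single dominant eigenvalue of a subscale's interitem correlation matrix is often taken as evidence consistent with a one-factor structure. A minimal sketch, assuming the subscale's item responses are held in a NumPy array; the simulated data are purely hypothetical, since the study's survey data are proprietary.

```python
import numpy as np

def unidimensionality_screen(items: np.ndarray) -> dict:
    """Rough unidimensionality screen for one subscale.

    items: (n_respondents, n_items) array of ratings (or gap scores) for the
    subscale. Returns the eigenvalues of the interitem correlation matrix;
    a single eigenvalue greater than 1 (Kaiser, 1960) and a large
    first-to-second eigenvalue ratio are consistent with, though they do not
    prove, unidimensionality.
    """
    corr = np.corrcoef(items, rowvar=False)              # interitem correlations
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]    # descending eigenvalues
    return {"eigenvalues": eigvals,
            "first_to_second_ratio": eigvals[0] / eigvals[1],
            "n_eigenvalues_gt_1": int((eigvals > 1.0).sum())}

# Hypothetical demonstration with simulated one-factor data:
rng = np.random.default_rng(0)
common = rng.normal(size=(138, 1))                       # one underlying factor
subscale = common + 0.7 * rng.normal(size=(138, 4))      # four noisy items
print(unidimensionality_screen(subscale))
```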

Reliability of the Modified SERVQUAL Instrument

Kettinger and Lee (1995) utilized Cronbach’s (1951) alpha to examine the reliability of their modified SERVQUAL instrument. Their analysis was conducted for each of the four dimensions of the measure. As noted above, however, Cronbach’s alpha is not the appropriate measure of reliability for a difference score (Peter et al., 1993; Cronbach & Furby, 1970). Nevertheless, for purposes of comparison only, Cronbach’s alpha was first calculated in this analysis. Next, the modified alpha formula for calculating the reliability of a difference score was utilized (Stanley, 1967). The results of this reliability analysis are summarized in Table 3, which also reports Kettinger and Lee’s findings.
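
To make the two calculations concrete, a minimal sketch of both is given below, assuming the item responses are available as NumPy arrays: standard Cronbach's alpha, and the reliability of a difference score computed from the component reliabilities, their correlation, and their standard deviations (a general form discussed by Lord, 1958, and Johns, 1981; the function names and illustrative numbers are ours).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's (1951) alpha for an (n_respondents, n_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def difference_score_reliability(r_pp: float, r_ee: float, r_pe: float,
                                 sd_p: float, sd_e: float) -> float:
    """Reliability of D = P - E from the component reliabilities (r_pp, r_ee),
    their correlation (r_pe), and their standard deviations (sd_p, sd_e)."""
    num = sd_p**2 * r_pp + sd_e**2 * r_ee - 2 * r_pe * sd_p * sd_e
    den = sd_p**2 + sd_e**2 - 2 * r_pe * sd_p * sd_e
    return num / den

# Hypothetical illustration: two reasonably reliable component scores that are
# highly correlated yield a noticeably less reliable difference score.
print(difference_score_reliability(r_pp=0.85, r_ee=0.80, r_pe=0.60,
                                   sd_p=1.0, sd_e=1.0))  # approximately 0.56
```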

As can be seen in Table 3, the calculated alpha values are lower than those reported by Kettinger and Lee (1995) for each of the four subscales. Based on the greater-than-.80 rule of thumb for adequate Cronbach’s alpha values (Crano & Brewer, 1973; Nunnally, 1978), none of the four dimensions seemed to be measured with adequate reliability. Further analyses, using the more appropriate formula for reliability values of difference scores (Lord, 1958; Johns, 1981; Peter et al., 1993), were less impressive, with reliabilities ranging from .353 to a high of only .652. The reliability coefficients for the perception-only scores, for which Cronbach’s alpha is appropriate, were also calculated and are reported in Table 3. Although higher than the calculated alphas for the gap scores on all four dimensions, adequate reliability is indicated on only two of the subscales.

Predictive and Convergent Validity of the Modified SERVQUAL Instrument

Convergent validity was assessed by examining the ability of both gap and perceptions-only scores to explain variations in overall service quality. This was accomplished by regressing the respondent’s perception of overall service quality on all 13 individual gap scores of the IS-version of the SERVQUAL instrument. A second regression was then conducted using all of the perception-only scores. Predictive validity was assessed using the same technique, substituting overall customer satisfaction as the dependent variable. Results of these regressions are presented in Table 4.
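
A minimal sketch of the regressions described above, assuming the survey responses sit in a pandas DataFrame with hypothetical column names (p1..p13 for the perception items, e1..e13 for the expectation items, plus overall quality and satisfaction ratings); the adjusted R2 values from the two scoring methods are what Table 4 compares.

```python
import pandas as pd
import statsmodels.api as sm

def adjusted_r2(y: pd.Series, X: pd.DataFrame) -> float:
    """Adjusted R-squared from an OLS regression of y on the columns of X."""
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared_adj

def compare_scorings(df: pd.DataFrame) -> pd.DataFrame:
    """Compare gap and perception-only scorings as predictors of overall
    service quality (convergent validity) and satisfaction (predictive validity)."""
    perceptions = df[[f"p{i}" for i in range(1, 14)]]
    expectations = df[[f"e{i}" for i in range(1, 14)]]
    gaps = pd.DataFrame(perceptions.values - expectations.values,
                        columns=[f"gap{i}" for i in range(1, 14)], index=df.index)
    return pd.DataFrame({
        "overall_quality": {
            "gap_scores": adjusted_r2(df["overall_quality"], gaps),
            "perception_only": adjusted_r2(df["overall_quality"], perceptions),
        },
        "overall_satisfaction": {
            "gap_scores": adjusted_r2(df["overall_satisfaction"], gaps),
            "perception_only": adjusted_r2(df["overall_satisfaction"], perceptions),
        },
    })
```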

The above results conform to the pattern of several earlier studies which indicate that the perception-only scores capture more of the variation in both overall satisfaction and overall perceived service quality (Parasuraman, Zeithaml, & Berry, 1991, 1994a; Cronin & Taylor, 1994; Babakus & Boller, 1992; Boulding et al., 1993). The indication is that the perception scores exhibit higher predictive and convergent validity than the gap scores.

Dimensionality of the Modified SERVQUAL Instrument

The dimensionality of the IS-version of the SERVQUAL instrument was assessed in three ways utilizing gap scores. First, replicating the work of Babakus and Boller (1992), the interitem correlations between all pairs of scores were examined utilizing the guidelines proposed by Bagozzi (1981) for convergence and discrimination in measurement. The correlations of the modified SERVQUAL items are presented in Table 5. An examination of the correlations reveals that Bagozzi’s guidelines do not hold for any of the four dimensions proposed by Kettinger and Lee (1995) and, thus, the four-dimensional model is not supported.
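
One informal way to operationalize Bagozzi's (1981) convergence and discrimination guidelines is to check that each item correlates more strongly, on average, with the other items of its own dimension than with the items of the remaining dimensions. A minimal sketch of that check, assuming the interitem correlation matrix of Table 5 is available as a pandas DataFrame; the function and dimension labels are ours.

```python
import numpy as np
import pandas as pd

def within_vs_between(corr: pd.DataFrame, dims: dict) -> pd.DataFrame:
    """Compare mean within-dimension and between-dimension interitem correlations.

    corr: square interitem correlation matrix (items x items).
    dims: mapping of dimension name -> list of that dimension's item labels.
    """
    rows = {}
    for name, items in dims.items():
        others = [c for c in corr.columns if c not in items]
        off_diag = ~np.eye(len(items), dtype=bool)          # drop the 1.0 diagonal
        within = corr.loc[items, items].where(off_diag).stack().mean()
        between = corr.loc[items, others].stack().mean()
        rows[name] = {"mean_within": within, "mean_between": between,
                      "within_exceeds_between": within > between}
    return pd.DataFrame(rows).T
```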

Second, a LISREL-based confirmatory factor analysis of the proposed four-dimensional model was conducted. Fornell and Larcker (1981) suggested the following cutoff scores as an indication of good fit: GFI >= .90, AGFI >= .80, and a low RMR. The fit indices obtained for the four-dimensional model fell short of these criteria, indicating a poor fit.

Third and finally, because of the poor fit of the confirmatory factor analysis for the four-dimensional structure, an exploratory factor analysis was performed utilizing principal-axis extraction with an oblique rotation, replicating the technique used by Parasuraman et al. (1988). The results are presented in Table 6. The percentage of variance “explained” by the four-factor solution was 66.9. Nunnally (1978) regarded a factor loading of .30 or higher on a secondary factor as inappropriately high; we utilized a more stringent .45 cutoff. An examination of Table 6 reveals that the expected factor loadings of a forced four-factor solution did not emerge. In fact, only seven items (i.e., question numbers Q7, Q8, Q12, Q13, Q17, Q21, and Q22) loaded cleanly on their respective factors. The other six items either split among multiple dimensions or loaded cleanly on the wrong dimension entirely. The indication is that these items have low discriminant validity.
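
For readers who wish to replicate this step on their own data, a minimal sketch using the third-party factor_analyzer package is shown below; the package choice is an assumption on our part, and any principal-axis factoring routine with an oblique rotation would serve.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumed third-party dependency

def four_factor_solution(gap_scores: pd.DataFrame, cutoff: float = 0.45) -> pd.DataFrame:
    """Forced four-factor solution with principal-axis extraction and an
    oblique (oblimin) rotation, flagging loadings at or above the cutoff.

    gap_scores: DataFrame of respondents x 13 P - E gap-score items.
    """
    fa = FactorAnalyzer(n_factors=4, method="principal", rotation="oblimin")
    fa.fit(gap_scores)
    loadings = pd.DataFrame(fa.loadings_, index=gap_scores.columns,
                            columns=[f"F{i + 1}" for i in range(4)])
    flags = loadings.abs() >= cutoff
    out = loadings.round(2)
    # An item "loads cleanly" if it meets the cutoff on exactly one factor.
    out["loads_cleanly"] = flags.sum(axis=1) == 1
    return out
```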

Limitations

There are three notable limitations inherent in this study. The first is the paucity of information regarding the administration of the survey, including sample selection, return rates, and detailed demographic information on the respondents. As is often the case in practical research, the authors were forced to sacrifice some measure of control, and thus some measure of internal validity, for greatly improved external validity or generalizability. A second limitation is the use of a 5-point rather than a 7-point scale for the response format. It is possible that this change could result in reduced variance for the instrument used in this study. A third limitation concerns the differences in the sample populations of the two studies. This presents a possible confounding variable when analyzing differences between the results of Kettinger and Lee (1995) and the current study, as well as an alternative plausible explanation for the improved reliability of the performance-only scoring method.

DISCUSSION AND CONCLUSIONS

The results of this study indicate that many of the difficulties that have been identified for the SERVQUAL instrument also apply to the 13-paired-item IS-version of the SERVQUAL instrument recommended by Kettinger and Lee (1995). This modified SERVQUAL instrument, much like Parasuraman et al.’s (1988) original instrument, suffers from unstable dimensionality, poor predictive and convergent validity, and inadequate reliability. The literature, as well as the findings of this study, provides compelling evidence that the use of perception-minus-expectation gap scores is problematic. Practitioners who want to measure IS service quality should therefore exercise caution. We recommend that practitioners who utilize IS-SERVQUAL use the perceived-performance-only scoring method, which shows superior reliability and predictive validity.

Future Research

This study provides several implications for researchers. First, there is still a need for a more reliable measure of IS service quality. Given the problems cited above, the psychometric properties of an IS-SERVQUAL scale modified to utilize an alternative, non-difference response format as suggested by Carman (1990) and Brown et al. (1993) should be investigated for both internal and external IS customers. Such an instrument would result in one-half as many questions as the current version, and would eliminate the many psychometric, methodological, and statistical problems associated with the use of difference scores. However, unlike the use of a perceived-performance-only scoring method, the new instrument would maintain the disconfirmation-of-expectations construct for perceived service quality.

The results of this study suggest questions that go beyond simply altering the response format of IS-SERVQUAL. Two separate findings from the empirical investigation raise serious doubts about the IS-SERVQUAL measure and its underlying theory. First, the failure to support the four-factor model proposed by Kettinger and Lee (1995) suggests that either the theory is incorrect in specifying a four-component model, or the instrument is incapable of accurately capturing those four components. A second important theoretical question concerns the differentiation between service quality and customer satisfaction. The original SERVQUAL authors, Parasuraman et al. (1988), insisted that SERVQUAL measures service quality and not customer satisfaction. However, our results indicate that the IS-version of the SERVQUAL instrument was a better predictor of overall satisfaction than of overall service quality. Moreover, the perceived-performance-only model of scoring was a better predictor of both overall satisfaction and overall service quality than was the traditional “gap” scoring method.

It is clear that more research needs to be conducted into the true form of the IS service quality construct. This research might begin with a focus on expectations. Little is known about the expectations of IS service customers and users. A better understanding of expectations would help increase our understanding of the service quality construct. Research should focus on different types of study populations such as internal versus external customers, purchasers versus end users, and differences based on varying levels of experience with the IS function. Such research would constitute an important step in the development of an improved measure of IS service quality.

Conclusion

There is a need for valid and reliable measures of the service quality of information services providers, both internal and external to the organization. Kettinger and Lee (1995) made an important contribution to this effort with their suggestion of combining the UIS with a modified IS version of the SERVQUAL instrument. However, earlier studies raised several important questions concerning the SERVQUAL instrument (e.g., Carman, 1990; Babakus & Boller, 1992; Cronin & Taylor, 1992; Teas, 1994, 1995; Peter et al., 1993). In addition, the findings of the current study indicate that the revised IS-SERVQUAL (Kettinger & Lee, 1995) instrument is neither a valid nor a reliable measure of perceived service quality in IS. Those choosing to use any version of the SERVQUAL instrument are cautioned. Scoring problems aside, the consistently unstable dimensionality of the SERVQUAL instrument suggests that further research is needed to determine the dimensions underlying the construct of service quality. Moreover, there is the question of the content validity of all current versions of the SERVQUAL instrument. Given the importance of the service quality concept in IS theory and practice, the development of improved measures of service quality for an IS provider deserves attention in further theoretical and empirical research. [Received: July 24, 1995. Accepted: September 4, 1998.]

REFERENCES

Anderson, J., & Gerbing, D. (1991). Predicting the performance of measures in a confirmatory factor analysis with a pretest assessment of their substantive validities. Journal of Applied Psychology, 76(5), 732-740.

Babakus, E., & Boller, G. W. (1992). An empirical assessment of the SERVQUAL scale. Journal of Business Research, 24(3), 253-268.

Bagozzi, R. P. (1981). Evaluating structural equation models with unobservable variables and measurement error: A comment. Journal of Marketing Research, 18, 375-381.

Boulding, W., Kalra, A., Staelin, R., & Zeithaml, V. A. (1993). A dynamic process model of service quality: From expectations to behavioral intentions. Journal of Marketing Research, 30(1), 7-27.

Brensinger, R. P., & Lambert, D. M. (1990). Can the SERVQUAL scale be generalized to business-to-business services? In Knowledge development in marketing, AMA’s Summer Educators Conference Proceedings, Boston, MA, 289.

Brown, T. J., Churchill, G. A., & Peter, J. P. (1993). Improving the measurement of service quality. Journal of Retailing, 69(1), 127-139.

Bynner, J. (1988). Factor analysis and the construct indicator relationship. Human Relations, 41(5), 389-405.

Carman, J. M. (1990). Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions. Journal of Retailing, 66(1), 33-55.

Crano, W. D., & Brewer, M. B. (1973). Principles of research in social psychology. New York: McGraw-Hill.

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.

Cronbach, L. J., & Furby, L. (1970). How we should measure change-Or should we? Psychological Bulletin, 74(July), 68-80.

Cronin, J. J., & Taylor, S. A. (1992). Measuring service quality: A reexamination and extension. Journal of Marketing, 56(3), 55-68.

Cronin, J. J., & Taylor, S. A. (1994). SERVPERF versus SERVQUAL: Reconciling performance-based and perceptions-minus-expectations measurements of service quality. Journal of Marketing, 58(1), 125-131.

Finn, D. W., & Lamb, C. W. (1991). An evaluation of the SERVQUAL scales in a retailing setting. Advances in Consumer Research, 18, 338-357.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50.

Galletta, D. F., & Lederer, A. L. (1989). Some cautions on the measurement of user information satisfaction. Decision Sciences, 20(3), 419-439.

Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26(10), 785-793.

Johns, G. (1981). Difference score measures of organizational behavior variables: A critique. Organizational Behavior and Human Performance, 27, 443-463.

Kaiser, H. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141-151.

Kettinger, W. J., & Lee, C. C. (1995). Perceived service quality and user satisfaction with the information services function. Decision Sciences, 25(5), 737-766.

Lord, F. M. (1958). The utilization of unreliable difference scores. Journal of Educational Psychology, 49(3), 150-152.

Nunnally, J. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49(4), 41-50.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12-40.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1991). Refinement and reassessment of the SERVQUAL scale. Journal of Retailing, 67(4), 420-450.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1994a). Alternative scales for measuring service quality: A comparative assessment based on psychometric and diagnostic criteria. Journal of Retailing, 70(3), 201-229.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1994b). Reassessment of expectations as a comparison standard in measuring service quality: Implications for further research. Journal of Marketing, 58(1), 111-124.

Peter, J. P., Churchill, G. A., & Brown, T. J. (1993). Caution in the use of difference scores in consumer research. Journal of Consumer Research, 19(1), 655-662.

Prakash, V., & Lounsbury, J. W. (1983). A reliability problem in the measurement of disconfirmation of expectations. In R. P. Bagozzi, & A. M. Tybout (Eds.), Advances in consumer research (Vol. 10). Ann Arbor, MI: Association for Consumer Research, 244-249.

Stanley, J. C. (1967). General and special formulas for reliability of differences. Journal of Educational Measurement, 4(4), 249-252.

Teas, R. K. (1993). Expectations, performance evaluation and consumer’s perception of quality. Journal of Marketing, 57(4), 18-34.

Teas, R. K. (1994). Expectations as a comparison standard in measuring service quality: An assessment of a reassessment. Journal of Marketing, 58(1), 132-139.

Venkatraman, N. (1989). Strategic orientation of business enterprises: The construct, dimensionality, and measurement. Management Science, 35(8), 942-962.

Wall, T. D., & Payne, R. (1973). Are deficiency scores deficient? Journal of Applied Psychology, 58(3), 322-326.

Zeithaml, V., Berry, L., & Parasuraman, A. (1993). The nature and determinants of customer expectations of service. Journal of the Academy of Marketing Science, 21(1), 1-12.

Thomas P. Van Dyke

Department of Management (MIS), College of Business and Economics, University of Nevada Las Vegas, 4505 Maryland Parkway, Las Vegas, NV 89154-6009, e-mail: vandyke@ccmail.nevada.edu

Victor R. Prybutok and Leon A. Kappelman

Business Computer Information Systems Department, College of Business Administration, University of North Texas, Denton, TX 76203-3677, e-mail: kapp@unt.edu

Thomas P. Van Dyke is an assistant professor of management information systems at the University of Nevada, Las Vegas. He has published articles in MIS Quarterly, and the Journal of Computer Information Systems. His current research interests include MIS service quality, the productivity of software developers, and human factors related to computer-supported decision making.

Victor R. Prybutok is the director of the University of North Texas Center for Quality and Productivity and a professor of management science. He received a PhD in environmental analysis and applied statistics in 1984 from Drexel University, is an ASQC certified quality engineer, a certified quality auditor, a certified quality manager, and a Texas Quality Award Examiner (1993). His numerous presentations include the keynote address to the American Osteopathic Hospital Association Trustee Forum. He has published over 50 conference papers and over 50 refereed journal articles. Journals in which his articles have appeared include MIS Quarterly, Data Base, Communications in Statistics, Operations Research, and The American Statistician. In addition, he is in Who’s Who in American Education and Who’s Who in the South and Southwest.

Leon A. Kappelman is an associate professor of business computer information systems at the University of North Texas, associate director of the Center for Quality and Productivity, and co-chair of the Society for Information Management’s (SIM) Year 2000 Working Group. He was recently appointed SIM’s Senior Advisor for Issues Advocacy, is a founding member of the three-person steering committee of the UN- and World Bank-sponsored YES (Y2K Expert Service) Volunteer Corps of the International Y2K Cooperation Center, and has testified before the U.S. Congress on high-tech related issues. His professional expertise includes the management of information assets, information systems development and maintenance, management of change and technology transfer, project management, and information systems assessment and evaluation. He has published several books and over 50 articles, and his work has appeared in the Communications of the ACM, MIS Quarterly, Information Week, and Computerworld. He authored Information Systems for Managers (McGraw-Hill, 1993) and edited Year 2000 Problem: Strategies and Solutions from the Fortune 100 (International Thomson Press, 1997).
