A note on SERVQUAL reliability and validity in information system service quality measurement

James J. Jiang, Gary Klein, and Suzanne M. Crampton

ABSTRACT

Today’s information system function includes a large service component. Recent research has examined the SERVQUAL instrument as a possible measure to assist managers and researchers in evaluating service quality. To further examine the appropriateness of the SERVQUAL measure, a large industry sample serves to verify the anticipated structure of the instrument. In addition, a high correlation with a common measure of user satisfaction indicates that the SERVQUAL metric may indeed represent accurate views of user perception. As such, the SERVQUAL instrument can serve as a useful indicator for information system managers attempting to identify areas of needed service improvement and to researchers seeking a success measure of information system services.

Subject Areas: Management Information Systems and Statistics.

INTRODUCTION

The measurement of information systems (IS) service quality is critical to IS managers for evaluating and maintaining consistently high quality IS service (Watson, Pitt, & Kavan, 1998). Service quality is sometimes defined as a comparison between consumer expectations of service and consumer perceptions of the service level provided (Parasuraman, Zeithaml, & Berry, 1985). IS researchers suggest that gauging the magnitude of difference between users’ expectations and perceptions provides a superior indicator of IS service quality (Pitt, Watson, & Kavan, 1995; Watson et al., 1998). Due to its “gap-type” nature, a modified SERVQUAL instrument, as shown in Appendix A, has received attention from IS researchers and practitioners as a measure for IS service quality and a diagnostic tool for uncovering areas of service quality strengths and shortfalls (Kettinger & Lee, 1994, 1997, 1999; Pitt et al., 1995; Watson et al., 1998; Van Dyke, Kappelman, & Prybutok, 1997; Van Dyke, Prybutok, & Kappelman, 1999).

Kettinger and Lee’s (1994) study helped pioneer the use of the SERVQUAL instrument in the IS context. In their paper, using business students as subjects, they explored the dimensionality and validity of the instrument. Kettinger and Lee found support for four dimensions and a significant negative correlation between the perceived quality gap (G) and User Information Satisfaction (UIS) (Ives, Olson, & Baroudi, 1983). Of the five dimensions in the SERVQUAL instrument, Kettinger and Lee found Reliability, Responsiveness, Assurance, and Empathy to be present; missing was the Tangibles dimension. To further cross-validate the SERVQUAL instrument, student samples from Korea, Hong Kong, and the Netherlands served to replicate the earlier results (Kettinger, Lee, & Lee, 1995). The same four dimensions were found in the Netherlands sample, but not in the Korean or Hong Kong samples.

Other researchers independently analyzed SERVQUAL data (Pitt et al., 1995). Using principal components and maximum likelihood methods, this study found three-, five-, and seven-factor solutions at three different sample sites; no structure closely resembled the original five SERVQUAL dimensions. Based on their findings, Pitt et al. (1995, p. 181) concluded that “SERVQUAL does not always clearly discriminate among the dimensions of service quality.” Given these mixed results, it is important to continue comparing factor structures across different samples (Chin & Todd, 1995).

Recently, further discussion concerning the conceptual validity of the SERVQUAL measurement has appeared in the literature. Some critiques challenge the IS-adapted SERVQUAL on conceptual and empirical grounds (Van Dyke et al., 1997, 1999). Van Dyke et al.’s criticisms include (1) the difference score invoked to operationalize service quality, (2) the ambiguity of the expectation construct, and (3) its suitability in IS settings. Counterarguments from the marketing literature serve to balance these conceptual difficulties (Pitt, Watson, & Kavan, 1997).

The empirical difficulties of SERVQUAL include (1) reduced reliability, (2) poor convergent validity, and (3) the unstable dimensionality of the SERVQUAL instrument (Van Dyke et al., 1997, 1999). However, limited empirical evidence indicates the reliability problem with the difference scores is not serious (Pitt et al., 1997). In addition, based upon student samples, SERVQUAL demonstrated an acceptable level of convergent validity (Kettinger & Lee, 1997). Thus, though perception-only measures of service quality may have marginally higher convergent and predictive validities than difference scores, the loss of diagnostic capabilities calls for continued use and further examination of the issues related to the rigor and relevancy of SERVQUAL.

While these pioneering studies provide an important starting point in adapting SERVQUAL to the IS setting, additional research is needed before we can confidently accept the IS-adapted SERVQUAL format. We add to the literature on the appropriateness of SERVQUAL by:

1. Examining the factor structure using a large sample that differs from those of previous studies, to address the dimensionality problem of the IS-adapted SERVQUAL instrument. This serves to cross-validate claims of external validity. To this purpose we conduct a confirmatory factor analysis on SERVQUAL’s 22 items (Kettinger & Lee, 1997).

2. Testing the proposed use of SERVQUAL as an IS diagnostic tool (Watson et al., 1998; Pitt et al., 1995; Kettinger & Lee, 1997). Here, we examine the relationship between the more accepted IS success measure, User Information Satisfaction (UIS), and the IS-adapted SERVQUAL. A significant relationship between these two constructs indicates that service quality gap scores could serve as a measure of IS success (Cronin & Taylor, 1994; Kettinger & Lee, 1994; Teas, 1994).

3. Focusing on the gap measure properties of reliability and validity. Adjustments for gap scores are made for reliability concerns. Discriminant and convergent validity properties of the gap measure are examined.

RESEARCH METHODOLOGY

The basic research method was to apply the modified SERVQUAL instrument to a sample and then validate the measurement model. Thorough checks were made on reliability and validity for both SERVQUAL and the user satisfaction scale.

IS Version of the SERVQUAL Instrument

Underlying the 22 items in SERVQUAL are five dimensions that are used by customers when evaluating service quality (Parasuraman, Zeithaml, & Berry, 1988, 1991, 1994). The dimensions include: (1) Tangibles: the appearance of physical facilities, equipment, and personnel; (2) Reliability: the ability to perform the promised service dependably and accurately; (3) Responsiveness: the willingness to help customers and provide prompt service; (4) Assurance: the knowledge and courtesy of employees and their ability to inspire trust and confidence; and (5) Empathy: providing caring, individualized attention to customers. Service quality for each dimension is captured by a difference score, G = P – E, where G is the perceived quality, P is the perception of delivered service, and E is the expectation of service.

The 1991 SERVQUAL instrument (Parasuraman et al., 1991) was slightly modified to apply to the IS setting (Kettinger & Lee, 1994). The SERVQUAL instrument consists of two sections. Section I measures the user’s expected service level, and Section II measures the user’s perceived service level (see Appendix A). Resulting gap scores are produced by subtracting the 22 expected items in Section I from the 22 perceived items in Section II. The instrument used to measure user satisfaction is from Baroudi and Orlikowski (1988). Appendix B shows the items in the UIS scale.
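To make the gap-score construction concrete, the following is a minimal sketch, assuming survey responses in a pandas DataFrame with hypothetical columns E1..E22 (Section I expectation items) and P1..P22 (Section II perception items). The item-to-dimension mapping shown follows the standard layout of the 1991 SERVQUAL instrument and should be checked against Appendix A.

```python
# A sketch of gap-score construction: Gap_i = P_i - E_i, then dimension means.
import pandas as pd

def gap_scores(df: pd.DataFrame, n_items: int = 22) -> pd.DataFrame:
    """Compute item-level gaps Gap_i = P_i - E_i for each respondent."""
    gaps = pd.DataFrame(index=df.index)
    for i in range(1, n_items + 1):
        gaps[f"Gap{i}"] = df[f"P{i}"] - df[f"E{i}"]
    return gaps

# Hypothetical item-to-dimension assignment (verify against Appendix A).
DIMENSIONS = {
    "Tangibles":      [1, 2, 3, 4],
    "Reliability":    [5, 6, 7, 8, 9],
    "Responsiveness": [10, 11, 12, 13],
    "Assurance":      [14, 15, 16, 17],
    "Empathy":        [18, 19, 20, 21, 22],
}

def dimension_gaps(gaps: pd.DataFrame) -> pd.DataFrame:
    """Average the item gaps within each SERVQUAL dimension."""
    return pd.DataFrame({
        dim: gaps[[f"Gap{i}" for i in items]].mean(axis=1)
        for dim, items in DIMENSIONS.items()
    })
```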

Sampling and Data Collection

A questionnaire was mailed to 200 IS users in U.S.-based organizations. The organizations were randomly selected from a call list of publicly held U.S. corporations. The original call list contained about 800 industry contacts from an economic analysis database developed at a major university research center. The list contained no more than two contacts per company. The IS users were first contacted by telephone by the authors or graduate assistants in December 1998. A total of 280 contacts were made from the call list to secure the 200 volunteers. The organizations were primarily manufacturing (70%) as opposed to service providers (30%). The 200 IS users receiving the mailing were those who agreed to participate. Self-addressed return envelopes for each questionnaire were enclosed. All the respondents were assured that their responses would be kept confidential. A total of 193 questionnaires were returned (calls were made if subjects did not mail the questionnaire back within three weeks). The demographic information of these respondents is shown in Table 1.

External validity refers to the extent to which findings can be generalized to or across times, persons, and settings (Cook & Campbell, 1979). External validity of the findings is threatened if the sample itself is systematically biased; for example, if the responses came predominantly from small or from large organizations. The data showed that organizational size ranged from fewer than 50 to over 2,500 employees. The characteristics of organizational size were mean (200), median (275), skewness (1.73), and kurtosis (2.31). The responses had a good distribution on organizational size since the mean and median were similar, skewness was less than 2, and kurtosis was less than 5 (Ghiselli, Campbell, & Zedeck, 1981). Overall, organizational size-related bias seemed unlikely because of the considerable variation.
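The distributional screen just described can be reproduced directly; the following is a small sketch, assuming `sizes` holds one organizational-size value per responding firm (scipy reports excess kurtosis by default).

```python
# Summary statistics used above to judge distributional bias in the sample.
import numpy as np
from scipy import stats

def bias_screen(sizes: np.ndarray) -> dict:
    return {
        "mean": float(np.mean(sizes)),
        "median": float(np.median(sizes)),
        "skewness": float(stats.skew(sizes)),      # < 2 taken as acceptable
        "kurtosis": float(stats.kurtosis(sizes)),  # excess kurtosis; < 5 acceptable
    }
```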

Likewise, external validity is threatened if there is a systematic bias with regard to user service quality “gap” scores; for example, if responses were obtained largely from organizations with relatively large or small gaps, that is, with IS departments that are relatively ineffective or effective. The mean gaps for the SERVQUAL dimensions (reliability, responsiveness, assurance, and empathy) ranged from .46 to .82, the medians ranged from .42 to .95, skewness ranged from .45 to .97, and kurtosis ranged from -.21 to .80. Overall, this indicated that service quality-related bias was unlikely. Similarly, the mean user information satisfaction measures (IS staff relation, knowledge and involvement, IS product quality) ranged from 3.24 to 3.73, the medians ranged from 3.17 to 3.84, skewness ranged from -.04 to .40, and kurtosis ranged from -.59 to -.06. Overall, this indicated that user satisfaction-related bias was unlikely.

Additional threats to external validity could occur if the sample showed other systematic biases in terms of demographics, such as age, gender, managerial position, and experience. An ANOVA was conducted with user information satisfaction as the dependent variable against each demographic category as the independent variable. Results did not indicate any significant relationship. Similar results held with each SERVQUAL gap as the dependent variable.
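This check reduces to a one-way ANOVA per demographic category; a minimal sketch, assuming a DataFrame with a hypothetical satisfaction score column `uis` and a demographic column such as `position`:

```python
# One-way ANOVA of a dependent variable across a demographic category.
import pandas as pd
from scipy import stats

def demographic_anova(df: pd.DataFrame, dv: str, factor: str):
    groups = [g[dv].dropna().values for _, g in df.groupby(factor)]
    return stats.f_oneway(*groups)  # returns (F statistic, p-value)

# e.g. demographic_anova(df, dv="uis", factor="position");
# a p-value above .05 is read as no evidence of systematic bias.
```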

Assessing Fit between Model and Data

In an effort to achieve strong validity and reliability, second-order confirmatory factor analysis (CFA) was used (Marsh & Hocevar, 1985). The CALIS procedure of SAS (version 6.12) was utilized as the analytical tool for the estimation of the measurement and structural equation models discussed below. When conducting a CFA, if the model provides a good approximation to reality, it provides a good fit to the data. Goodness-of-fit indices in this study include the ratio of chi-square to degrees of freedom (chi-square/df), along with related overall fit measures.
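The models here were estimated with SAS PROC CALIS; purely as an illustration, a comparable second-order CFA can be specified in the open-source semopy package (an assumption of this sketch, not the tool used in the study). Item names are placeholders for gap-score columns, and the specification mirrors the four-dimension structure retained below.

```python
# A second-order CFA sketch in semopy, using lavaan-style model syntax.
import pandas as pd
import semopy

MODEL_DESC = """
Reliability    =~ Gap5 + Gap6 + Gap7 + Gap8 + Gap9
Responsiveness =~ Gap10 + Gap11 + Gap12
Assurance      =~ Gap14 + Gap15 + Gap16
Empathy        =~ Gap18 + Gap19 + Gap20 + Gap21 + Gap22
ServiceQuality =~ Reliability + Responsiveness + Assurance + Empathy
"""

def fit_second_order_cfa(gaps: pd.DataFrame) -> semopy.Model:
    """Fit the second-order measurement model and report fit statistics."""
    model = semopy.Model(MODEL_DESC)
    model.fit(gaps)
    print(semopy.calc_stats(model))  # chi-square, df, and common fit indices
    return model
```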

The results are shown in Figure 1. Two items in the responsiveness and assurance dimensions with nonsignificant loadings (Gap 13 on never too busy and Gap 17 on job knowledge) were dropped from the final model. Likewise, three items of the tangible dimension were found to be nonsignificant and were dropped. These three items were Gap 2 on facility appearance, Gap 3 on IS staff appearance, and Gap 4 on the match between facility appearance and services provided. This left only a single item in the tangible dimension (Gap 1 on up-to-date hardware) with no reliability measure, resulting in the complete deletion of the tangible dimension. These results are consistent with the previous Kettinger and Lee (1994) study. Pitt et al. (1995) also found that the up-to-date hardware item formed a separate factor. In addition, a low reliability for the tangible dimension has been shown in the marketing literature (Cronin & Taylor, 1994; Parasuraman et al., 1991). The remaining items showed a good model fit.

One possible reason for the lack of significance in the tangible dimension on the IS SERVQUAL instrument is the reliance on appearance. Three of the four items ask about expectations and perceptions of appearance. A user’s physical view of the information facilities may be limited to one’s own workstation, creating an environment where judgment is difficult. In any organization, there may be a great deal of the IS function behind some “line of visibility” that prohibits consistent evaluation of these items. Appearance of IS personnel may also be confounding. It may not be clear to many users just what dress should be expected of a technical person.

While the validity and reliability of the UIS measure are widely recognized in the IS field, the UIS measure was subjected to a second-order confirmatory factor analysis in the same fashion as the SERVQUAL measure. The initial model’s results did not support an adequate model fit, and two items were dropped from the model. These two items involved quality of the IS product (item S7 on relevancy of output, and item S9 on accuracy of information). The resulting UIS model includes four latent variables (three UIS dimensions as first-order factors and one second-order factor) and 11 indicators, as shown in Figure 2.

The UIS items segmented into three factors. These closely follow the three found by Baroudi and Orlikowski (1988) and reported by Kettinger and Lee (1994). The first factor concerns the user, in terms of the impact on schedules and the amount of involvement, and is termed User Impact and Involvement in the figures and text. Likewise, the second factor represents the interpersonal communication relationships between the IS staff and the user, and is called IS Staff Communication. The last is composed of common concepts of system quality and is termed Information Product Quality for the purposes of this study.

Assessing Reliability and Validity of Constructs

A construct is reliable if it provides essentially the same set of scores for a group of subjects upon repeated testing. Validity, on the other hand, refers to the extent to which an instrument measures what it is intended to measure. There are a number of different ways that reliability and validity can be examined. In the present study, the composite reliability, Johns’s (1981) difference score reliability, variance extracted estimates, convergent validity, and discriminant validity were examined.

Composite reliability is analogous to the Cronbach (1951) coefficient alpha for measuring the reliability of a multiple-item scale. Composite reliability reflects the internal consistency of the indicators measuring a given factor (Fornell & Larcker, 1981). The composite reliability values for each SERVQUAL dimension are shown in Table 2. As shown, the composite reliability score for each dimension is relatively high (> .70). In addition, the traditional Cronbach’s alpha values for each of the SERVQUAL dimensions are shown in Table 2 for comparison, as well as the reliability measures corrected for difference scores (Johns, 1981). All values exceed the recommended value of .70 (Nunnally, 1978). The results are similar to those of Kettinger and Lee (1994).
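Both estimates are straightforward to compute; a minimal sketch, assuming `loadings` and `errors` are standardized loadings and error variances from the fitted CFA and `items` is a respondents-by-items score matrix for one dimension:

```python
# Composite reliability (Fornell & Larcker, 1981) and coefficient alpha
# (Cronbach, 1951) for a single multiple-item dimension.
import numpy as np

def composite_reliability(loadings: np.ndarray, errors: np.ndarray) -> float:
    """(sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum()
    return s**2 / (s**2 + errors.sum())

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)
```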

Variance extracted estimates, as discussed by Fornell and Larcker (1981), assess the amount of variance that is captured by an underlying factor in relation to the amount of variance due to measurement error. Fornell and Larcker (1981) suggested that it is desirable that the construct exhibit estimates of .50 or larger. However, this test is quite conservative; very often variance extracted estimates will be below .50, even when reliabilities are acceptable. The variance extracted estimates for each dimension of SERVQUAL are also shown in Table 2. Only empathy falls below .50 at .48.
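The variance extracted estimate follows from the same CFA quantities; a sketch under the assumptions above:

```python
# Variance extracted (Fornell & Larcker, 1981): the share of variance captured
# by the factor relative to variance due to measurement error.
import numpy as np

def variance_extracted(loadings: np.ndarray, errors: np.ndarray) -> float:
    """Sum of squared loadings / (sum of squared loadings + sum of errors)."""
    sq = (loadings**2).sum()
    return sq / (sq + errors.sum())

# Estimates of .50 or larger are desirable, though the test is conservative.
```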

Convergent validity is demonstrated when different instruments are used to measure the same construct and scores from these different instruments are strongly correlated (Campbell & Fiske, 1959). Convergent validity can be assessed by reviewing the t-tests for the factor loadings (loadings greater than twice their standard error) (Anderson & Gerbing, 1988). The t-tests for each indicator loading are shown in Table 2. The results show that the constructs demonstrated high convergent validity, since all t-values are significant at the .01 level.

Discriminant validity is inferred when measures of each construct converge on their respective true scores, which are unique from the scores of other constructs (Churchill, 1979). Discriminant validity is assessed by (1) the confidence interval test and (2) the variance extracted test (Fornell & Larcker, 1981; Anderson & Gerbing, 1988). The confidence interval test to assess the discriminant validity between two factors involves calculating a confidence interval of plus or minus two standard errors around the correlation between the factors and determining whether this interval includes 1.0. If it does not include 1.0, discriminant validity is demonstrated (Anderson & Gerbing, 1988). The results for each pair of dimensions in the SERVQUAL construct are shown in Table 3. Discriminant validity for SERVQUAL is supported since no range includes the value 1.0. With the variance extracted test, the variance extracted estimates (as described previously) for any pair of factors are compared to the square of the correlation between the two factors. Discriminant validity is demonstrated if both variance extracted estimates are greater than this squared correlation. The results of the variance extracted tests for the SERVQUAL construct are shown in Table 4. Again, discriminant validity is supported since each squared correlation is less than both applicable variance extracted estimates.
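Both checks reduce to simple comparisons once the CFA estimates are in hand; a sketch assuming the inter-factor correlation, its standard error, and the two variance extracted estimates are available:

```python
# Discriminant validity checks for a pair of factors.

def confidence_interval_test(corr: float, se: float) -> bool:
    """Supported if the interval corr +/- 2*SE excludes 1.0."""
    low, high = corr - 2 * se, corr + 2 * se
    return not (low <= 1.0 <= high)

def variance_extracted_test(corr: float, ave_a: float, ave_b: float) -> bool:
    """Supported if both variance extracted estimates exceed corr squared."""
    return ave_a > corr**2 and ave_b > corr**2
```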

User Information Satisfaction (UIS) Measurement

Significant loading coefficients of the UIS model indicated convergent validity of the three UIS dimensions (see Table 5). Composite reliability, Cronbach’s alpha, and variance extracted estimates were also computed for Table 5. Next, a formal test of discriminant validity was performed in the same way as for the SERVQUAL measurement model. The confidence interval tests are shown in Table 6, and the extracted variance tests are shown in Table 7. Results indicated a high degree of reliability and validity: the reliability measures were all above .70, no range in the confidence interval test included the value of 1.0, all the squared correlations were less than the associated variance extracted estimates, and only the User Impact and Involvement dimension fell below the recommended .50 for the variance extracted estimate.

A second-order CFA was conducted to determine the correlation between the SERVQUAL and UIS second-order latent factors, as illustrated in Figure 3. The two concepts are significantly negatively associated, with a coefficient of -.47 (Figure 3). The results of this combined model showed an adequate fit to the data, along with the high correlation between the two second-order latent variables. The negative relationship was anticipated due to the nature of “gap scores” in the SERVQUAL measurement, where a large gap indicates deviation from expectations. The direct implications are clear: As the gap between perceived service delivery and the user’s expectations of service grows, satisfaction declines. It is important to understand the expectations of the user and strive to meet those expectations.

CONCLUSIONS

The purpose of this study was to examine the SERVQUAL measurement in the IS context in light of the increased importance of IS service quality and the limited empirical findings in the IS field. The relationship of SERVQUAL to the traditional UIS measure was examined. This was accomplished by administering both the UIS and SERVQUAL instruments to the same sample of IS users and then deriving unique dimensions of SERVQUAL.

The contributions and results of this study include:

1. The first empirical SERVQUAL study in the IS context using a variety of subjects in different organizations.

2. A strong examination of the IS-adapted SERVQUAL instrument’s reliability and validity in the IS field.

3. Adding to the evidence of four dimensions for the IS-adapted SERVQUAL instrument. The results support the earlier findings of Kettinger and Lee (1994).

4. Identifying a significant link between SERVQUAL gap scores and UIS. This result supports the argument that SERVQUAL could be a useful diagnostic tool for service quality, since it is perceived as related to overall satisfaction.

5. Providing significant evidence for the ongoing debate on the use of SERVQUAL. This study demonstrated that the revised SERVQUAL, after deletion of the tangible dimension, has a high level of reliability and validity.

Analysis of SERVQUAL results within a single organization may direct an IS manager to dedicate more resources toward improving the dimensions where the gaps are largest. This would require an action plan to collect preliminary data from appropriate users with the IS-modified SERVQUAL instrument. IS managers may wish to identify categories of users to assist in the process, perhaps along the lines of management versus managerial support staff. From the data, the gaps should be calculated along the four dimensions discussed. Large gaps indicate dimensions where the users perceive the service to be much less than their expectations, and relate strongly to higher user dissatisfaction. These dimensions, if important to the organization, then provide targets for improvement.

Resources can be allocated to target a specific dimension that has a large gap. A gap in reliability would indicate further attention to methods that improve the IS group’s ability to produce a product or service when promised, such as project management techniques. A large responsiveness gap may be closed with policies or office restructuring that increase communication between users and IS staff. Assurance gaps may be addressed through training of IS staff, or even training of users to manage the perception of service delivery. An empathy gap may be lessened by improving access to facilities and support. Once a plan of improvement is implemented, follow-up surveys need to be employed to measure the improvements. Development of a complete strategy is critical, but the specific actions will depend on each organization’s culture and ability to respond.

The SERVQUAL instrument is designed to provide managers with deeper insights concerning the dimensions of service quality. The results of this study support the credibility of the instrument for use in the field for four of the five dimensions. Knowledge of these dimensions can provide practitioners with potentially useful diagnostics. This managerial value adds urgency to further exploration of the SERVQUAL instrument for use in the evaluation of information system service quality. In particular, exploration of reasons for inconsistencies in the tangible dimension would be of interest. Likewise, an exploration of each dimension’s impact on various measures of success would aid in interpretation and allow for use of SERVQUAL in future studies on IS success. [Received: September 24, 1999. Accepted: October 9, 2000.]

REFERENCES

Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411-423.

Baroudi, J. J., & Orlikowski, W. J. (1988). A short-form measure of user information satisfaction: A psychometric evaluation and notes on use. Journal of Management Information Systems, 4(4), 44-59.

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81-105.

Chin, W. W., & Todd, P. A. (1995). On the use, usefulness, and ease of use of structural equation modeling in MIS research: A note of caution. MIS Quarterly, 19(2), 237-246.

Churchill, G. A. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16, 64-73.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation. Boston: Houghton Mifflin.

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.

Cronin, J. J., & Taylor, S. A. (1994). SERVPERF versus SERVQUAL: Reconciling performance-based and perceptions-minus-expectations measurements of service quality. Journal of Marketing, 58(1), 125-131.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39-50.

Ghiselli, E. E., Campbell, J. P., & Zedeck, S. (1981). Measurement theory for the behavioral sciences. San Francisco: Freeman.

Ives, B., Olson, M., & Baroudi, J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26(10), 785-793.

Johns, G. (1981). Difference score measures of organizational behavioral variables: A critique. Organizational Behavior and Human Performance, 27, 443-463.

Kettinger, W. J., & Lee, C. C. (1994). Perceived service quality and user satisfaction with the information services function. Decision Sciences, 25(5-6), 737-766.

Kettinger, W. J., & Lee, C. C. (1997). Pragmatic perspectives on the measurement of information systems service quality. MIS Quarterly, 21(2), 223-240.

Kettinger, W. J., & Lee, C. C. (1999). Replication of measures in information systems research: The case of IS SERVQUAL. Decision Sciences, 30(3), 893-899.

Kettinger, W. J., Lee, C. C., & Lee, S. (1995). Global measures of information services quality: A cross-national study. Decision Sciences, 26(5), 569-588.

Marsh, H. W., & Hocevar, D. (1985). Application of confirmatory factor analysis to the study of self-concept: First and higher order factor models and their invariance across groups. Psychological Bulletin, 97(3), 562-582.

Nunnally, J. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49, 41-50.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12-40.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1991). Refinement and reassessment of the SERVQUAL scale. Journal of Retailing, 67, 420-450.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1994). Reassessment of expectations as a comparison standard in measuring service quality: Implications for future research. Journal of Marketing, 58(1), 111-124.

Pitt, L. F., Watson, R. T., & Kavan, C. B. (1995). Service quality: A measure of information systems effectiveness. MIS Quarterly, 19(2), 173-187.

Pitt, L. F., Watson, R. T., & Kavan, C. B. (1997). Measuring information systems service quality: Concerns for a complete canvas. MIS Quarterly, 21(2), 209-221.

Teas, R. K. (1994). Expectations as a comparison standard in measuring service quality: An assessment of a reassessment. Journal of Marketing, 58(1), 132-139.

Van Dyke, T. P., Kappelman, L. A., & Prybutok, V. R. (1997). Measuring information systems service quality: Concerns on the use of the SERVQUAL questionnaire. MIS Quarterly, 21(2), 195-208.

Van Dyke, T. P., Prybutok, V. R., & Kappelman, L. A. (1999). Cautions on the use of SERVQUAL measure to assess the quality of information systems services. Decision Sciences, 30(3), 877-891.

Watson, R. T., Pitt, L. F., & Kavan, C. B. (1998). Measuring information systems service quality: Lessons from two longitudinal case studies. MIS Quarterly, 22(1), 61-79.

James J. Jiang

Department of Computer Information Systems, College of Administration and Business, Louisiana Tech University, Ruston, LA 71272, email: jiang@cab.latech.edu

Gary Klein

College of Business and Administration, The University of Colorado, Colorado Springs, 1420 Austin Bluffs Parkway, Colorado Springs, CO 80933-7150, email: gklein@mail.uccs.edu

Suzanne M. Crampton

Department of Management, Seidman School of Business, Grand Valley State University, 301 W. Fulton, Grand Rapids, MI 49504, email: cramp@gvsu.edu

James J. Jiang is the Max Watson Professor of Computer Information Systems at Louisiana Tech University. His PhD in computer information systems was awarded by the University of Cincinnati in 1992. His current research interests include project management, decision modeling, information system personnel, and the development of consonant expectations among information system stakeholders.

He has written more than 70 academic articles for journals such as IEEE Transactions on Systems, Man, and Cybernetics; IEEE Transactions on Engineering Management; Communications of the ACM; Decision Support Systems; Decision Sciences; and Project Management Journal. He is an active member of IEEE, ACM, and the Decision Sciences Institute.

Gary Klein is the Couger Professor of Information Systems at the University of Colorado in Colorado Springs. He obtained his PhD in management science at Purdue University. Before that time, he served with Arthur Andersen & Company in Kansas City and was director of the information systems department for a regional financial institution. He was previously on the faculty at the University of Arizona, Southern Methodist University, and Louisiana Tech University, and served as dean of the School of Business at the University of Texas of the Permian Basin. His interests include project management, knowledge management, system development, and mathematical modeling. He is a member of IEEE, ACM, AIS, PMI, INFORMS, and the Decision Sciences Institute.

Suzanne M. Crampton is an associate professor of management at Grand Valley State University’s Seidman School of Business. She received her PhD from Michigan State University and has published articles on a variety of management topics that cover organizational behavior, human resource management, and technology management issues.

Copyright American Institute for Decision Sciences Summer 2000
