Utility analysis: Its evolution and tenuous role in human resource management decision making
Skarlicki, Daniel P
Industrial/organizational (I/O) psychologists have long considered it a worthwhile endeavour to try to quantify the value of the contribution of I/O psychology to organizational effectiveness. Utility analysis is a technique that attempts to achieve this objective by providing a way to forecast the net financial benefits of scientifically-based human resource (HR) initiatives. More generally, utility analysis “provides a way of thinking about HR decisions that makes facts, assumptions, and beliefs behind decisions more explicit, systematic, and rational” (Boudreau, 1991, p. 126). As such, utility analysis is considered by many to be a useful tool for management in deciding whether to implement HR decisions and initiatives (Kendrick, 1984; Kopelman, 1986).
Although studies of utility analysis have been applied most frequently to selection procedures (e.g., Cascio, 1991; Cascio & Ramos, 1986; Cascio & Sibley, 1979; Cronshaw, 1986; Cronshaw & Alexander, 1985; Schmidt, Hunter, McKenzie, & Muldrow, 1979), researchers have applied similar cost/benefit analyses to other HR interventions, including performance feedback (Florin-Thuma & Boudreau, 1987; Landy, Farr, & Jacobs, 1982), training (Cascio, 1991; Mathieu & Leonard, 1987; Schmidt, Hunter, & Pearlman, 1982), promotion (Cascio & Ramos, 1986), recruitment (Boudreau & Rynes, 1985), and turnover/layoff management (Boudreau & Berger, 1985; Cascio, 1991). These and other studies purport to identify with a high degree of precision the financial payback to be realized through investments designed to improve employee productivity. Few firms, however, appear to use utility analysis in deciding whether to implement new HR policies.
A primary objective of this article is to identify reasons why utility analysis is used infrequently as a managerial decision-making aid. To accomplish this objective, this article first identifies intractable issues that have arisen as utility analysis has evolved. Second, if utility analysis is to be used by managers, they must be convinced that it is relatively accurate. Hence, the reliability and validity of utility analysis are discussed. Specifically, this section examines data that are either included or excluded in the calculation of utility, such as the standard deviation of human performance (SDy) in the former case or information concerning external labour markets and the nonhuman contributions to performance in the latter. Assumptions and implications underlying the concept of human performance measured as an “asset” of the firm are also discussed. Finally, research on decision making is reviewed as it relates to managerial perceptions of the usefulness of utility estimates for decision making. This section identifies reasons why managers are likely to discount the results of a utility analysis when making decisions.
THE EVOLUTION OF UTILITY ANALYSIS
Cost/benefit analyses of HR initiatives first appeared in the scientific literature over forty years ago. Yet a review of research on the intellectual ancestry of utility analysis, namely, the dollar criterion and human resource accounting, reveals issues that continue to be unresolved to this day. These issues were, in part, responsible for the eventual abandonment of both the dollar criterion and human resource accounting, and continue to plague utility analysis as a direct descendant of these two techniques.
THE DOLLAR CRITERION
Utility analysis has its roots in the pioneering research of Brogden and Taylor (1950) on the dollar criterion. They argued that the ultimate criterion for any business firm in evaluating an employee’s effectiveness is financial. Dollars and cents were said to be a meaningful metric that is common to employees in most if not all organizations. As Blum and Naylor (1968) noted, however, there are at least two difficulties with this approach. First, many indices of job effectiveness do not lend themselves to a monetary value. Second, it is difficult to determine the proportion of the total profit of a unit that can be allocated to each individual employee. In addition, a unitary measure of employee performance is often inappropriate because an employee’s performance is multidimensional (Dunnette, 1963; Ronan, 1963). These difficulties were not resolved, and as a result interest in the dollar criterion died.
HUMAN RESOURCE ACCOUNTING
A second form of human resource cost/benefit analysis to be developed was human resource accounting (HRA). Proponents of HRA (e.g., Flamholtz, 1974; Likert, 1967, 1973a, 1973b; Pyle, 1970) attempted to address shortcomings of conventional information and accounting systems as applied to human resources. First, they argued that accounting practices treat costs relating to the management of human resources (e.g., selection and training) as short-term expenses rather than long-term investments. These expenses are usually charged against income in the year paid, which tends to overstate operating expenses and understate profitability (Pyle, 1973). In addition, traditional cost-cutting efforts that result in employee layoffs and terminations ignore the expense of future workforce replacement.
Second, HRA proponents criticized traditional accounting systems for focusing managerial attention on the financial, material, and technological aspects of the business while ignoring HR issues. In a review of several hundred studies, Likert (1973a) reported that between 40 and 60% of the variation in an organization’s productivity and profitability could be explained by variation among systems of management. He concluded that one way to improve an organization’s effectiveness was to improve its accounting systems.
HRA was designed to measure systematically both the asset value of labour and the amount of asset creation that could be attributed to personnel activities (Friedman & Lev, 1974). Advocates of HRA argued that if matters of employee development, employee satisfaction, and interpersonal effectiveness were included in accounting systems, managers would increase their attention to them and thus manage the organization more effectively (Cammann, 1974).
Research on HRA followed three parallel lines. One line can be described as a cost approach, in which employee hiring and training costs were treated as an investment to be amortized rather than as a current expense (e.g., Pyle, 1970). The value of an employee is calculated by totalling the historical expenditures on an individual’s recruitment, training, and orientation. Adjustments reflecting information about human resources such as replacement cost of personnel, expected tenure of employees, and the performance and/or potential of personnel can be made to the data.
A second line, followed by Likert (1973a, 1973b), assesses the productivity of personnel by focusing on the social-psychological characteristics of organizations. Likert proposed the measurement of causal variables such as organizational climate and managerial leadership, intervening variables such as employee loyalty, satisfaction, attitudes, perceptions, and motivations, and end result variables such as productivity and earnings. The linkages among these variables, he said, were critical to a manager’s understanding of the productivity of the firm’s human resources. In a study of a continuous-process plant of over 500 employees (Likert & Pyle, 1971), Likert showed that by measuring key causal and intervening variables, managers could predict trends in employee productivity and performance.
A third approach to HRA (Flamholtz, 1974) argues for the use of compensation data, replacement cost, and performance measures as surrogates of an individual’s value to the organization. The value of the organization’s human resources and its changes could then be measured to the extent that they increased, decreased, or remained unchanged over time.
The first published financial statements to include human resource data are found in R.G. Barry Corp.’s 1969 Pro Forma Annual Report. The report was designed to improve internal management, and was provided as a matter of interest to shareholders. By the 1970s, major corporations such as Texas Instruments, Celanese Corp., General Telephone & Electronics Laboratories, Touche Ross & Co., Sherwin Williams, General Motors, and General Electric had implemented HRA for a variety of purposes (Rohan, 1972).
The Decline of HRA. HRA was abandoned in the late 1970s, in part because public accounting standards were too stringent to allow the direct reporting of human asset value in financial statements (Tsay, 1977). Specifically, for employees to be considered an asset according to public accounting standards, they had to show verifiable service potential to the firm for more than a year beyond the investment period (Dittman, Juris, & Revsine, 1976; Flamholtz, 1985).
Even if HRA had been able to meet public accounting standards, enthusiasm for HRA was beginning to wane. For example, Rohan (1972) reported that HRA was expensive to install and maintain, making it difficult to sell to HR managers, controllers, and stockholders. HRA also tended to equate investment in people with their competence. Rohan (1972) noted that this was a problem because competence varies across individuals, from task to task, and over time. Moreover, the benefits of HRA were not immediate. Likert (1973a) estimated that managers should not expect positive results from HRA-based decisions for at least three years, and even longer in poorly managed plants.
It was also noted that HRA data had limited significance for assessing how social-psychological variables might be changing through time. For example, rate of growth and subsequent ability to contribute to the organization was not measured, even though it varied significantly among individuals (Pyle, 1973). Moreover, Pyle (1970) acknowledged the difficulty in identifying that portion of the return on investment that could be attributed to human components. As an example of the complications, he pointed out that productivity gains based upon normal technological improvements could easily mask deterioration in human performance.
Robinson (1973) pointed out that one of the greatest obstacles to industry acceptance of HRA was the unwillingness of financial people to use measurement techniques unique to human resources. He concluded that “HRA cannot make a full contribution to enhancing management efficiency until accountants and personnel specialists can agree on methods which are consistent with existing conventions within both disciplines” (p. 32).
Finally, Rhode and Lawler (1973) offered a fourfold explanation of the problems underlying HRA. First, they reported that the literature offered no agreement on precisely what HRA measured. Second, they argued that human resources in most industrial organizations did not qualify as assets because they were not normally bought and sold. Hence, it is inappropriate to place a monetary value on an individual. Third, it is problematic to measure the benefit of a training program to an employee because the effects of training vary among individuals, and improvements may be caused by variables outside the program. Fourth, they noted that among the greatest barriers to HRA acceptance were managers themselves, who often perceived HRA as limiting their freedom of action.
This final concern in particular undercut the implicit theory underlying HRA, namely, that managers would use improved accounting information to make better decisions. Considerable research (e.g., Argyris, 1952; Hofstede, 1967; Jasensky, 1956) has shown that managers tend to misuse accounting systems. They distort information that goes into the system and build slack into the goals and standards in such a way as to make the accounting measures and hence themselves look better. In a study of 357 managerial personnel of a utility company, Cammann (1974) found that if managers can gain extrinsic rewards by using defensive behaviours to improve their accounting measures, they will do so.
In conclusion, earlier approaches to cost/benefit analysis in the HR context met with limited success and subsequent decline. The cycle has repeated itself, however, as there is renewed interest in the literature in the economic utility of HR procedures. The third attempt by I/O psychologists to understand an employee’s effectiveness in economic terms has arisen phoenix-like from the ashes of the dollar criterion and HRA. It is utility analysis.
Utility analysis summarizes and identifies key variables that describe the consequences of HR programs. Its underlying logic is that HR decision making will be improved by using a decision-support framework that explicitly considers the costs and benefits of HR decisions (Boudreau, 1991).
This section examines technical issues associated with utility analysis that might affect its reliability and validity. In addition, this section questions the assumptions underlying utility analysis and identifies potentially important variables that have been omitted.
Although modifications to the original formulation of utility analysis have been proposed, most approaches follow the Brogden-Cronbach-Gleser model (Cronbach & Gleser, 1965) for evaluating the dollar value of selection systems. The model is:
U = (N)(T)(r_xy)(Z_x)(SDy) - C, (1)

where:

U = the increase in average dollar-value payoff that results from selecting N employees using the predictor (x) instead of selecting randomly;
N = the number of applicants selected;
T = the average tenure of the selected cohort;
r_xy = the correlation coefficient (among prescreened applicants) between predictor score (x) and scores on a measure of job performance (y);
Z_x = the average standard predictor score of the selected cohort;
SDy = the standard deviation of a dollar-valued measure of job performance or outcomes (y); and
C = the total selection cost for all applicants.
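Once estimates for each parameter are in hand, the model reduces to simple arithmetic. The following sketch computes U for a hypothetical selection program; all figures are invented for illustration and are not drawn from any study cited here.

```python
def utility(n_hired, tenure_yrs, validity, mean_z, sd_y, total_cost):
    """Brogden-Cronbach-Gleser utility: U = N * T * r_xy * Z_x * SDy - C."""
    return n_hired * tenure_yrs * validity * mean_z * sd_y - total_cost

# Hypothetical program: 50 hires averaging 5 years' tenure, a predictor with
# validity .35, selectees averaging 1 SD above the applicant mean on the
# predictor, SDy estimated at $10,000, and $100,000 in total selection costs.
u = utility(n_hired=50, tenure_yrs=5, validity=0.35, mean_z=1.0,
            sd_y=10_000, total_cost=100_000)
print(f"Estimated net gain: ${u:,.0f}")  # 50 * 5 * .35 * 1.0 * 10000 - 100000
```

Note how the multiplicative structure magnifies any error in SDy: with N, T, r_xy, and Z_x held at the values above, each dollar of SDy estimation error moves U by $87.50.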
Standard Deviation of Performance (SDy)
The most difficult parameter of the utility model to estimate, as was the case with the dollar criterion and HRA, is SDy. Many researchers (Becker & Huselid, 1992; Boudreau, 1991; Cascio, 1991; Cascio & Ramos, 1986; Greer & Cascio, 1987; Landy, Farr, & Jacobs, 1982; Raju, Burke, & Normand, 1990) have suggested that advances in utility analysis have been limited by an inability to develop a workable, defensible, and agreed-upon method to estimate the dollar value of job performance and its standard deviation. The estimation of SDy continues to be a controversial and complex calculation because of the uncertainty around task characteristics and task relationships. For example, high levels of task nonroutineness (Perrow, 1970) or task independence (Thompson, 1967) make it difficult to monitor and evaluate human inputs, actions, or the outcomes of those actions (Jones & Wright, 1992).
A number of approaches have been proposed to calculate SDy, including cost accounting (e.g., Cronshaw & Alexander, 1991), SDy as 40% of wages (Schmidt et al., 1982), Cascio-Ramos’ estimate of performance in dollars (CREPID) (Cascio & Ramos, 1986) and the global estimate method (GEM) (Schmidt, Hunter, McKenzie, & Muldrow, 1979). As Goldsmith (1990) noted, each method has been criticized in the research literature and no method is clearly superior to the others.
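A rough sketch illustrates how sharply these rules of thumb can diverge for the same job. The 40%-of-wages heuristic ties SDy to salary, while a GEM-style estimate derives it from supervisors' judgments of the dollar value of performance at the 50th and 85th percentiles (which differ by approximately one standard deviation under normality). The figures below are hypothetical.

```python
# Schmidt et al.'s (1982) rule of thumb: SDy is roughly 40% of mean salary.
def sdy_wage_rule(mean_salary, fraction=0.40):
    return fraction * mean_salary

# GEM-style estimate: supervisors judge the dollar value of output of an
# average (50th percentile) and a superior (85th percentile) performer; the
# difference approximates SDy if performance value is normally distributed.
def sdy_global_estimate(value_at_50th, value_at_85th):
    return value_at_85th - value_at_50th

print(sdy_wage_rule(40_000))                # wage rule, $40,000 mean salary
print(sdy_global_estimate(60_000, 80_000))  # GEM with hypothetical judgments
```

With these invented inputs the two methods yield $16,000 and $20,000 respectively, a 25% disagreement that, given the multiplicative utility formula, translates directly into a 25% disagreement in estimated benefits.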
Several studies have compared the different methods of estimating SDy (Bobko, Karren, & Parkington, 1983; Burke & Frederick, 1986; Greer & Cascio, 1987; Weekley, Frank, O’Connor, & Peters, 1985). These studies, however, are silent with respect to validity because there are seldom useful data against which to validate the SDy measures (Florin-Thuma & Boudreau, 1987). Studies such as the one conducted by Schmidt, Hunter, Outerbridge, and Trattner (1986) are needed that investigate the relationship between utility estimates and actual productivity.
There are at least three problems with regard to the reliability and validity of performance estimates. First, there is debate (Cascio, 1992; Raju et al., 1990) as to whether personnel data meet the underlying assumptions of a linear homoscedastic model required to estimate SDy. Schmidt et al. (1979) suggested that personnel selection data often do meet these assumptions and that deviations from specific assumptions such as normality have trivial consequences. Other scholars (e.g., Cascio & Ramos, 1986), however, disagree with this conclusion. As a result, some researchers (e.g., Raju et al., 1990) have proposed alternative approaches to utility analysis to avoid the estimation of SDy.
A related concern is that studies demonstrate high variability in SDy estimates (e.g., Bobko et al., 1983; Bobko, Shetzer, & Russell, 1991; Burke & Frederick, 1984, 1986; Greer & Cascio, 1987; Rich & Boudreau, 1987; Schmidt et al., 1979; Weekley et al., 1985). Some researchers (e.g., Rich & Boudreau, 1987) have suggested that despite similar job titles, SDy variability might be due to true situational differences rather than measurement error. Factors such as supply and demand as well as seniority influence salary decisions, and therefore may also affect the shape of the salary distribution. This has a direct impact on the estimation of SDy because the salary estimates used may not match the dollar-valued performance ratings for the same workers (Cascio & Ramos, 1986).
Practitioners and researchers may also have difficulty obtaining the information required for a utility analysis (Schmidt et al., 1979). This is particularly true when the purpose of the analysis is to determine whether a new selection procedure, for which no data yet exist, should be implemented (Goldsmith, 1991). Moreover, applications of utility analyses may be limited if there is difficulty in assigning dollar values to outcomes whose outputs cannot be counted (Cascio & Ramos, 1986), as may be the case in capital-intensive firms (Flamholtz, 1985).
Another fundamental difficulty lies in the judgments to be made with respect to the work behaviours that should be included in SDy estimates. In a study of supervisors in a large medical supply corporation, Orr, Sackett, and Mercer (1989) concluded that nonprescribed behaviours, commonly referred to as organizational citizenship behaviours (OCB; Organ, 1988), are relevant to dollar-valued work behaviour. However, they found that most supervisors do not take OCB into account when making dollar judgements of work performance. This may in turn contribute to the variability among performance estimates.
Other factors have also been shown to influence SDy estimation. For example, studies (e.g., Bobko et al., 1991; Shetzer & Bobko, 1987) investigating the effects of frame and presentation order on SDy estimation found significant main effects for framing. Roth (1993) reported that content knowledge, availability of feedback, and the amount of uncertainty affect an individual’s estimate of SDy. Boudreau (1984) and Florin-Thuma and Boudreau (1987), however, argued that inaccuracies in SDy estimates should not seriously affect decision quality in many situations because, in theory, any positive SDy value should result in the same decision.
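Boudreau's break-even argument can be made concrete by solving the utility equation for the SDy value at which U = 0 and comparing it to even the most conservative SDy estimate. The sketch below uses the same invented figures as before; none come from the studies cited here.

```python
def breakeven_sdy(n_hired, tenure_yrs, validity, mean_z, total_cost):
    """SDy value at which the program exactly pays for itself (U = 0)."""
    return total_cost / (n_hired * tenure_yrs * validity * mean_z)

# With 50 hires, 5-year average tenure, validity .35, Z_x = 1.0, and $100,000
# in costs, the program breaks even at an SDy of roughly $1,143.
print(round(breakeven_sdy(50, 5, 0.35, 1.0, 100_000), 2))
```

If every competing SDy method yields an estimate far above the break-even value, the adopt/reject decision is the same regardless of which method is used, which is the sense in which SDy estimation error need not matter for decision quality.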
ASSUMPTIONS UNDERLYING UTILITY ANALYSIS
A fundamental assumption of utility analyses is that: “all aspects of human resource management (including morale) can be measured and quantified in the same manner as any operational function” (Cascio, 1991, p. 8). Past research suggests several reasons why this assumption may be incorrect. First, the economic value generated by an individual is dependent on a combination of both human and nonhuman inputs (Flamholtz, 1985; Friedman & Lev, 1974), including market and institutional variables (Steffy & Maurer, 1988). Second, utility formulas assume that only direct and immediate costs need to be considered when evaluating the utility of HR programs. Jones and Wright (1992) argued cogently that utility analysis underestimates the bureaucratic costs of HR programs and that utility estimates are perceived as inflated by managers relative to the true value of such programs. Third, external labour market conditions are largely ignored by utility analyses. Becker (1989) stated that utility analysis makes an implicit assumption that utilities are “invariant across changing labour market conditions and that employees (both current and prospective), as well as employers, are unresponsive to these forces” (1989, p. 531). Fourth, performance may differ greatly between current and potential employees (Steffy & Maurer, 1988). New employees experience different learning curves that create variability in their performance (Cascio, 1992). Paradoxically, research on utility analysis has typically ignored the institutional and market context of an HR intervention that might influence an employee’s job performance.
These findings indicate that dollar estimates of an employee’s value are affected by many factors that may limit the relevance, reliability, validity, and practicality (i.e., acceptance by managers) of utility analyses for HR decisions. As a result, concerns about utility analysis are essentially no different from those raised by researchers decades ago regarding the dollar criterion and HRA. The limitations of utility analysis, however, are insufficient to explain why managers are reluctant to use it. For that we need to turn to research on individual and organizational decision making.

UTILITY ANALYSIS AS A THEORY OF MANAGERIAL DECISION MAKING
The primary purpose of utility analysis is to assist managers in HR decision making. Yet, for reasons to be discussed below, it would appear that utility analysis is likely to have at best a marginal influence on the HR decisions that managers make.
An implicit assumption of utility analysis is that organizations make independent and rational decisions to adopt the most technically efficient human resource management practices. In other words, implicit in utility analysis as an instrument of persuasion is the view that the truth will prevail if it is appropriately presented. Rauschenberger and Schmidt (1987), for example, suggested that utility analysis is ignored by managers largely because I/O psychologists and HR professionals have failed to communicate it in a manner that is clear and credible to organizational decision makers. This assumption, however, is often inconsistent with the way that managers actually make decisions about the class of innovations to which utility analysis can be applied. It also appears to be inconsistent with the very way that managers typically approach a decision-making task.
If managers do not make decisions to adopt HR innovations on the basis of utility analysis-type cost/benefit analyses, then how are such decisions made? Johns (1993) suggested that objective high-quality personnel practices may not be adopted for several reasons, including the intervention of crises, organizational politics, competing sources of innovation, government regulation, and institutional factors. The influence of these factors increases as the extent of perceived uncertainty about the innovation increases. Technical merit is consequently only one of many factors that determine whether firms adopt state-of-the-art HR practices, and therefore does not explain much of the variance in the rate of adoption and diffusion of HR innovation. As an example, Johns noted that although empirical research (Wiesner & Cronshaw, 1988) shows that structured employment interviews are more valid than unstructured interviews, the former are less commonly used than the latter.
The decision to adopt a given human resource policy may also reflect a tendency to do as other firms do. Imitation may therefore drive change regardless of whether the adoption of a policy or procedure is justified. The symbolic aspects of organizational decision making, as well as a tendency to imitate others, are reflected in the popularity of downsizing. Downsizing shows no sign of abatement even in the face of strong evidence that it falls far short of its intended objectives (Cascio, 1993). Thus, even though most managers have high incentives to make good decisions, these incentives provide no guarantee that rational decisions will be made.
Strict bottom line considerations do not determine the occurrence of HR innovation because uncertainty, at least in managers’ eyes, appears to characterize the potential benefits of most HR practices (Johns, 1993). Uncertainty provokes political behaviour in organizations, which in turn produces decisions that are inconsistent with the advice generated by an objective analysis (Pfeffer, 1992). Analysis of some form will often precede the resource allocation decisions required for HR innovation to occur, but so too will political behaviour, bargaining, and persuasion (Lax & Sebenius, 1986).
A variety of other explanations can be invoked to predict a weak link between the outcomes of a utility analysis and managerial decision-making. For example, by whom and for what purpose is utility analysis conducted? If the utility analysis is conducted by external consultants in support of a change that they will be paid to implement, there is an apparent as well as a real conflict of interest. Moreover, this conflict is so obvious that it is likely to undercut the credibility of the results of the utility analysis as well as the credibility of the consultants who perform it (Latham & Whyte, 1994).
Managerial decision making implies the existence of accountability in decision making (Tetlock, 1995). If that is the case, then it is relevant to ask to what extent the information contained in a utility analysis allows managers to compete effectively for funds and feel secure in justifying their decisions in management meetings. Utility analysis would seem to require high face validity, something that Tenopyr (1987) found it lacks, before it can be used by managers as credible support for the view that a change in human resource policy is required. Further, to what extent does support for a manager’s decision in the form of a utility analysis shield the manager from criticism or blame if the decision subsequently fails? Anecdotal evidence suggests that a utility analysis would provide very little, if any, protection in the event that things don’t turn out as predicted.
Although there is little research on the value of utility analysis for decision-making purposes, there is related research that is relevant to this issue. Mintzberg (1975) found that a manager’s job consists of several integrated roles, including that of decision maker. Decision making involves allocating resources by authorizing important decisions before they are implemented. According to Mintzberg, an interesting aspect about this process is that “despite the widespread use of capital budgeting procedures--a means of authorizing various capital expenditures at one time--executives in my study made a great many authorization decisions on an ad hoc basis. Apparently, many projects cannot wait or simply do not have the quantifiable costs and benefits that capital budgeting requires” (1975, p. 59).
What factors determine resource allocation decisions, if not the results of a financial analysis? Mintzberg found that managers face very complex choices involving more than just the technical details of the issue at hand. For example, managers have to be aware of more than just the viability of a proposal and its potential net benefits; they also have to consider factors such as the proposal’s timing, acceptability, and impact on the organization’s strategy, resources, and other decisions. Frequently, in order to make project approval decisions in complex environments, the managers that Mintzberg studied would “pick the man instead of the proposal. That is, the manager authorizes those projects presented to him by people whose judgment he trusts” (1975, p. 58).
To report, as Mintzberg does, that managers rely heavily on intuition in decision making, and do not display a high regard for analysis, is not to endorse this approach to decision making. Intuition is simply less costly to a manager in terms of time than utility analysis. Managers operate at a hectic pace that demands action; there is usually little time for reflection and analysis. Quantitative analysis works well with structured decision problems. But with unstructured problems, when the nature and causes of problems are ambiguous and critical pieces of information are missing, quantitative analysis alone does not produce good decisions.
Feldman (1989) conducted a study on the relationship between information and decision making that supports Mintzberg’s findings. Specifically, Feldman reported the paradox of a U.S. government department that allocates considerable resources for policy analysis that is ultimately not used or even discussed. The absence of a tight link between information and decision making is all the more striking because of Feldman’s choice of research site, namely the U.S. Department of Energy (DOE). Previous studies that have found a weak relationship between information and decision making have examined organizations such as educational institutions that have used soft technologies (e.g., Cohen & March, 1986). The DOE, however, deals with a technology that can be described as hard. In this case, relationships between cause and effect are more certain and the outcomes of decisions are readily measurable (e.g., in production units). Thus the tenuous connection between analysis and decision making does not appear to be affected by the measurability of outcomes.
That the behaviour of decision makers often departs from the recommendations of decision theory has also been found by March (1978, 1987). A primary reason for this discrepancy is that the portrayal of decision making implied by decision aids such as utility analysis understates the degree of ambiguity that decision makers face. Managers who ignore the recommendations of decision theory may therefore be justified in doing so (March, 1978). That is, the fact that managerial behaviour deviates from the prescriptions of rational analysis does not mean that such behaviour is necessarily irrational. For example, the sheer size of the benefits estimated by a utility analysis has, on occasion, dwarfed the U.S. GDP, and such analyses frequently indicate an ROI of over 100% (Cascio, 1992). In the absence of research that has verified the accuracy of these estimates, managers are likely to react to them with the attitude that they are too good to be true. Decision theory may not recognize the appropriateness of such a reaction, but many managers and others will.
Evidence that managers will not be guided by the results of a utility analysis when they make decisions regarding human resource policies was also found in a study by Latham and Whyte (1994). Managers were more likely to adopt an I/O psychologist’s advice to implement new and improved selection procedures when that advice was accompanied by an explanation of standard validation procedures than when that advice was accompanied by an explanation of validation procedures plus a highly positive utility analysis. In other words, utility analysis actually reduced the support of managers for implementing validated HR selection procedures. One explanation for these results is the large size of the benefits claimed, which undercut support for the proposed changes by damaging the psychologist’s credibility.
Another reason why utility analysis is unlikely to influence managerial judgment is the metric by which utility analysis evaluates potential changes in HR policy. Utility analysis determines the net dollar gain that will accrue, if any, to an organization as the result of a change in HR practice. For example, if a change in selection procedures is proposed and the change is a good one, utility analysis will provide the manager with a calculation of the financial benefits of the new procedure as compared to the existing one.
There is a stronger way than this to state the case for change. In general, individuals are loss-averse; they find it relatively easier to forego benefits than to incur costs or suffer losses (Kahneman & Tversky, 1982). Therefore, one way to increase the impact of utility analysis on those it is intended to influence is to redescribe the outcome of a utility analysis of a technically meritorious innovation as the costs or losses that will be absorbed if the innovation is not adopted. For example, managers might be shown the potential costs of not adopting a legally-defensible employment interview or performance appraisal. Reframing the outcome of a decision not to adopt I/O-based HR practices as suffering losses rather than foregoing gains could cause individual-level preferences to reverse even though the objective nature of the situation remains unchanged. Whether this approach would substantially strengthen the case made by utility analysis, however, has not to our knowledge been investigated.
On the basis of this discussion, what can we conclude about the future of utility analysis as a decision-making technique? To the extent that utility analysis is practiced in organizations, it seems destined to go the way of its forebears, namely, HRA and the dollar criterion. At present, there is no compelling empirical evidence that human choice is improved by the application of utility analysis. Moreover, there is evidence (Latham & Whyte, 1994) that utility analysis influences managers in a way that was not intended by advocates of this technique. The most appropriate way to determine the usefulness of utility analysis in decision making is to determine the extent of the willingness of clients to purchase the services of experts with skills in this area. This would be an interesting and telling project for future research.
Argyris, C. (1952). The impact of budgets on people. Ithaca, NY: Cornell University Press.
Becker, B.E. (1989). The influence of labor markets on human resources utility estimates. Personnel Psychology, 42, 531-546.
Becker, B.E., & Huselid, M.A. (1992). Direct estimates of SDy and the implications for utility analysis. Journal of Applied Psychology, 77, 227-233.
Blum, M.L., & Naylor, J.C. (1968). Industrial psychology: Its theoretical and social foundations (rev. ed.) New York: Harper & Row.
Bobko, P., Karren, R., & Parkington, J.J. (1983). Estimation of standard deviation in utility analysis: An empirical test. Journal of Applied Psychology, 68, 170-176.
Bobko, P., Shetzer, L., & Russell, C. (1991). Estimating the standard deviation of professors’ worth: The effects of frame and presentation order in utility analysis. Journal of Occupational Psychology, 64, 179-188.
Boudreau, J.W. (1984). Decision theory contributions to HRM research and practice. Industrial Relations, 23, 198-217.
Boudreau, J.W. (1991). Utility analysis for decisions in human resource management. In M.D. Dunnette & L.M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 2, pp. 621-745). Palo Alto, CA: Consulting Psychologists Press.
Boudreau, J.W., & Berger, C.J. (1985). Decision-theoretic utility analysis applied to employee separations and acquisitions (Monograph). Journal of Applied Psychology, 70, 581-612.
Boudreau, J.W., & Rynes, S.L. (1985). Role of recruitment in staffing utility analysis. Journal of Applied Psychology, 70, 354-366.
Brogden, H.E., & Taylor, E.K. (1950). The dollar criterion – Applying the cost accounting concept to criterion construction. Personnel Psychology, 3, 133-154.
Burke, M.J., & Frederick, J.T. (1984). Two modified procedures for estimating standard deviations in utility analysis. Journal of Applied Psychology, 69, 482-489.
Burke, M.J., & Frederick, J.T. (1986). A comparison of economic utility estimates for alternative SDy estimation procedures. Journal of Applied Psychology, 71, 334-339.
Cammann, C. (1974). Can accounting systems produce change? Paper presented at The American Psychological Association Meeting. New Orleans, LA.
Cascio, W.F. (1991). Costing human resources: The financial impact of behavior in organizations (3rd ed.). Boston, MA: Kent.
Cascio, W.F. (1992). Assessing the utility of selection decisions: Theoretical and practical considerations. In N. Schmitt, W.C. Borman, & Associates (Eds.), Personnel selection in organizations (pp. 310-340). San Francisco, CA: Jossey Bass.
Cascio, W.F. (1993). Downsizing: What do we know: What have we learned? The Executive, 7, 95-104.
Cascio, W.F., & Ramos, R.A. (1986). Development and application of a new method for assessing job performance in behavioral/economic terms. Journal of Applied Psychology, 71, 20-28.
Cascio, W.F., & Sibley, V. (1979). Utility of the assessment center as a selection device. Journal of Applied Psychology, 64, 107-118.
Cohen, M.D., & March, J.G. (1986). Leadership and ambiguity: The American college president (2nd ed.). Boston, MA: Harvard Business School Press.
Cronbach, L.J., & Gleser, G.C. (1965). Psychological tests and personnel decisions (2nd ed.). Urbana, IL: University of Illinois Press.
Cronshaw, S.F. (1986). The utility of employment testing for clerical/administrative trades in the Canadian military. Canadian Journal of Administrative Sciences, 3(2), 376-385.
Cronshaw, S.F., & Alexander, R.A. (1985). One answer to the demand for accountability: Selection utility as an investment decision. Organizational Behavior and Human Decision Processes, 35, 102-118.
Cronshaw, S.F., & Alexander, R.A. (1991). Why capital budgeting techniques are suited for assessing the utility of personnel programs: A reply to Hunter, Schmidt, and Coggin (1988). Journal of Applied Psychology, 76, 454-457.
Dittman, D.A., Juris, H.A., & Revsine, L. (1976). On the existence of unrecorded human assets: An economic perspective. Journal of Accounting Research, 14, 49-65.
Dunnette, M.D. (1963). A note on the criterion. Journal of Applied Psychology, 47, 251-254.
Feldman, M.S. (1989). Order without design: Information production and policy making. Stanford, CA: Stanford University Press.
Flamholtz, E.G. (1974). Human resource accounting. Encino, CA: Dickinson.
Flamholtz, E.G. (1985). Human resource accounting. San Francisco, CA: Jossey-Bass.
Florin-Thuma, B.C., & Boudreau, J.W. (1987). Performance feedback utility in a small organization: Effects on organizational outcomes and managerial decision processes. Personnel Psychology, 40, 693-713.
Friedman, A., & Lev, B. (1974). A surrogate measure of the firm’s investment in human resources. Journal of Accounting Research, 11, 235-250.
Goldsmith, R.E. (1990). Utility analysis and its application to the study of the cost-effectiveness of the assessment center method. In K.R. Murphy & F.E. Saal (Eds.), Psychology in organizations: Integrating science and practice (pp. 95-110). Hillsdale, NJ: Lawrence Erlbaum Associates.
Greer, O.L., & Cascio, W.F. (1987). Is cost accounting the answer? Comparison of two behaviorally-based methods for estimating the standard deviation of job performance in dollars with a cost-accounting-based approach. Journal of Applied Psychology, 71, 588-595.
Hofstede, G.H. (1967). The game of budget control. Assen, The Netherlands: Van Gorcum.
Jasinski, F.J. (1956). Use and misuse of efficiency controls. Harvard Business Review, 34, 105-112.
Johns, G. (1993). Constraints on the adoption of psychology-based personnel practices: Lessons from organizational innovation. Personnel Psychology, 46, 569-612.
Jones, G.R., & Wright, P.M. (1992). An economic approach to conceptualizing the utility of human resource management practices. Research in Personnel and Human Resource Management, 10, 271-299.
Kahneman, D., & Tversky, A. (1982). The psychology of preferences. Scientific American, 246, 160-171.
Kendrick, J.W. (1984). Improving company productivity. Baltimore, MD: Johns Hopkins University Press.
Kopelman, R.E. (1986). Managing productivity in organizations. New York: McGraw-Hill.
Landy, F.J., Farr, J.L., & Jacobs, R.R. (1982). Utility concepts in performance measurement. Organizational Behavior and Human Performance, 30, 15-40.
Latham, G.P., & Whyte, G. (1994). The futility of utility analysis. Personnel Psychology, 47, 31-46.
Lax, D.A., & Sebenius, J.K. (1986). The manager as negotiator: Bargaining for cooperation and competitive gain. New York: Free Press.
Likert, R. (1967). The human organization: Its management and value. New York: McGraw-Hill.
Likert, R. (1973a). An evolving concept of human resources accounting. Paper presented at The American Psychological Association meeting, Montreal, Quebec, Canada.
Likert, R. (1973b). Human resource accounting: Building and assessing productive organizations. Personnel, 3, 8-24.
Likert, R., & Pyle, W.C. (1971). Human resource accounting. Financial Analysts Journal, January-February, 1-9.
March, J.G. (1978). Bounded rationality, ambiguity, and the engineering of choice. The Bell Journal of Economics, 9, 587-608.
March, J.G. (1987). Ambiguity and accounting: The elusive link between information and decision making. Accounting, Organizations and Society, 12, 153-168.
Mathieu, J.E., & Leonard, R.L., Jr. (1987). Applying utility concepts to a training program in supervisory skills: A time-based approach. Academy of Management Journal, 30, 316-335.
Mintzberg, H. (1975). The manager’s job: Folklore and fact. Harvard Business Review, July-August, 49-61.
Organ, D.W. (1988). Organizational citizenship behavior: The good soldier syndrome. Lexington, MA: DC Heath.
Orr, J.M., Sackett, P.R., & Mercer, M. (1989). The role of prescribed and nonprescribed behaviors in estimating the dollar value of performance. Journal of Applied Psychology, 74, 34-40.
Perrow, C. (1970). Organizational analysis: A sociological view. Belmont, CA: Wadsworth.
Pfeffer, J. (1992). Managing with power: Politics and influence in organizations. Boston: Harvard Business School Press.
Pyle, W.C. (1970). Human resource accounting. Financial Analysts Journal, 5, 1-10.
Pyle, W.C. (1973). Investment/effectiveness measurements for planning and evaluating major program legislation. Testimony Presented to the Ninety-Third Congress, Committee on Government Operations, United States Senate.
Raju, N.S., Burke, M.J., & Normand, J. (1990). A new approach for utility analysis. Journal of Applied Psychology, 75, 3-12.
Rauschenberger, J.M., & Schmidt, F.L. (1987). Measuring the economic impact of human resource programs. Journal of Business and Psychology, 2, 50-59.
Rhode, J.G., & Lawler, E.E. (1973). Auditing change: Human resource accounting. In M. Dunnette (Ed.), Work and nonwork in the year 2001 (pp. 153-177). Belmont, CA: Wadsworth.
Rich, J.R., & Boudreau, J.W. (1987). The effects of variability and risk in selection utility analysis: An empirical comparison. Personnel Psychology, 40, 55-84.
Robinson, D. (1973). Human asset accounting. Personnel Management, March, 31-43.
Rohan, T.M. (1972, Nov. 6). Who’s worth what around here? Industry Week, pp. 28-36.
Ronan, W.W. (1963). A factor analysis of eleven job performance measures. Personnel Psychology, 16, 255-267.
Roth, P.L. (1993). Research trends in judgement and their implications for the Schmidt-Hunter Global Estimation Procedure. Organizational Behavior and Human Decision Processes, 54, 299-319.
Schmidt, F.L., Hunter, J.E., McKenzie, R.C., & Muldrow, T.W. (1979). Impact of valid selection procedures on work-force productivity. Journal of Applied Psychology, 64, 609-626.
Schmidt, F.L., Hunter, J.E., Outerbridge, A.N., & Trattner, M.H. (1986). The economic impact of job selection methods on size, productivity, and payroll costs of the federal work force: An empirically based demonstration. Personnel Psychology, 39, 1-30.
Schmidt, F.L., Hunter, J.E., & Pearlman, K. (1982). Assessing the economic impact of personnel programs on workforce productivity. Personnel Psychology, 35, 333-347.
Shetzer, L., & Bobko, P. (1987). The effects of frame and anchoring on estimates of overall worth in utility analysis. Paper presented at the Annual Meeting of the Academy of Management, New Orleans, LA.
Steffy, B.D., & Maurer, S.D. (1988). Conceptualizing and measuring the economic effectiveness of human resource activities. Academy of Management Review, 13, 271-286.
Tenopyr, M.L. (1987). Policies and strategies underlying a personnel research operation. Paper presented at the Annual Conference of the Society of Industrial and Organizational Psychology, Atlanta, GA.
Tetlock, P. (1985). Accountability: The neglected social context of judgement and choice. In L.L. Cummings & B.M. Staw (Eds.), Research in organizational behavior (Vol. 7, pp. 297-332). Greenwich, CT: JAI Press.
Thompson, J. (1967). Organizations in action. New York: McGraw-Hill.
Tsay, J.J. (1977). Human resource accounting: A need for relevance. Management Accounting, 58, 33-36.
Weekley, J.A., Frank, B., O’Connor, E.J., & Peters, L.H. (1985). A comparison of three methods of estimating the standard deviation of performance in dollars. Journal of Applied Psychology, 70, 122-126.
Wiesner, W.H., & Cronshaw, S.F. (1988). A meta-analytic investigation of the impact of interview format and degree of structure on the validity of the employment interview. Journal of Occupational Psychology, 61, 275-290.
The authors wish to thank John Boudreau for his helpful comments on an earlier draft of this paper.
Preparation of this article was funded by a Social Sciences and Humanities Research Council grant to the second author. Address all correspondence to Daniel Skarlicki, Department of Psychology, University of Calgary, 2500 University Drive N.W., Calgary, AB, Canada, T2N 1N4.
Copyright Administrative Sciences Association of Canada Mar 1996