Cognitive processes and computer advances in job evaluation: Innovation in reverse?

Korukonda, Appa Rao

Perhaps no other topic in human resources has been the subject of as much ideological acclaim, lip service, and public and private damnation as job evaluation. Job evaluation has been hailed as a means of allowing salary decisions to be delegated more effectively, of supporting systematic gathering of data, and of generally providing employees with a better understanding of how their pay is determined (McMillan & Williams, 1982). At the other extreme, job evaluation is branded as the "single most effective device, apart from blatantly biased selection techniques, by which organizations can retain and create discriminatory pay practices" (Thomsen, 1981, p. 348). A few quotes from a survey of employee unions on job evaluation illustrate this point even more succinctly (Janes, 1979, p. 85).

Job and wage evaluation seems to confuse our membership, and this adds to the communication problem between management and our union.

We believe that free collective bargaining cannot exist if limited by the restrictions imposed by job evaluation.

Job evaluation lends itself to management as an instrument to resist legitimate union demands and offers. In fact, management can, and does disavow responsibility for its actions by constantly referring to the evaluation system.

Job evaluation tends to dampen bargaining where wage and rate problems are raised to pseudotechnical levels, and trained management personnel find it easier to operate and mystify workers with their jargon, in order that the workers refrain from pressing legitimate grievances.

JOB EVALUATION: AN OVERVIEW

Job evaluation is an attempt to assess and evaluate a specific job or category of jobs, as opposed to the person holding the job (Leap & Crino, 1989). While some non-quantitative approaches to job evaluation continue to be used (e.g., the ranking method and the classification method), an emerging preference for quantitative methods over the last several years is clearly discernible. Two major methods representing the quantitative approach are the factor comparison method and the point method (Arvey, 1986; De Cenzo & Robbins, 1994; Henderson, 1989). The factor comparison method involves selecting benchmark jobs and placing a dollar value on compensable factors within those jobs. (Benchmark jobs are those which meet two important criteria: they have a stable set of tasks, duties, and responsibilities, and their pay rate, as determined by wage and salary surveys, is commensurate with the criteria of external equity.) The point method, by contrast, does not place a direct dollar value on a job. Rather, each job is first assigned a total point value, and the aggregate point value is then converted into a dollar figure (Lawler, 1986; Leap & Crino, 1989).
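To make the mechanics of the point method concrete, here is a minimal sketch in Python. The factor names, point values, and pay-policy line are invented for illustration only; they are not drawn from any of the plans cited above.

```python
# A toy illustration of the point method: a job is scored on several
# compensable factors, the points are summed, and the total is converted
# to pay. All numbers here are hypothetical.
factor_points = {
    "skill": 120,
    "effort": 60,
    "responsibility": 90,
    "working conditions": 30,
}
total_points = sum(factor_points.values())      # 300 points for this job

# Convert points to pay with a hypothetical linear pay-policy line:
# $8.00 per hour base plus $0.02 per point.
hourly_pay = 8.00 + 0.02 * total_points         # $14.00 per hour
print(total_points, hourly_pay)
```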

There are a number of similarities and areas of overlap between these two basic methods, and there exist almost innumerable variations and combinations. The underlying theme, however, is the same: selecting a set of compensable factors and benchmark jobs, and then determining a given job's worth relative to the benchmark jobs. The benchmark jobs thus provide an overall context for the rating process and for the linkage between internal and external market forces (for a detailed discussion, see Milkovich and Boudreau, 1988, pp. 703-744).

LINEAR PROGRAMMING APPROACH TO JOB EVALUATION

The subjectivity inherent in the rating process lies at the core of the criticism directed at many job evaluation systems. Concern over this issue has, over the years, led to strong advocacy of quantitative and highly sophisticated computer models as aids to improving objectivity in job evaluation. For example, Dertien states emphatically: "The key to the design and utilization of a job evaluation plan is utilizing quantitative factor measurements over which very little subjective discretion can be exercised" [emphasis added] (1981, p. 566). Thomsen (1981) similarly argues that subjectivity and personal bias enter many job evaluation systems "through the manipulation of measurements before they are weighted by the model" (p. 352). A direct quote from a union official is quite revealing: "In our opinion, all job evaluation plans are subjective schemes and fail to provide a clear, objective way for determining our employees' real worth" (Janes, 1979, p. 85).

Quantitative programs and computer models clearly lend an aura of invincible objectivity. Whether they actually add to the objectivity of the underlying process is, however, always questionable. Beyond that, in this paper I will argue that a preoccupation with quantitative programs can lead to atheoretical, illogical, and literally absurd results. I will illustrate my argument using a recent publication (Ahmed, 1989) that advocates the use of linear programming techniques to add to the objectivity of the job evaluation process.

I will first review the originally proposed model of job evaluation and then illustrate how the use of this algorithm runs counter to the essential logic of any rating process. In a more general sense, it is hoped that this discussion will stimulate a critical and fundamental look at the uncritical use of quantitative techniques.

THE MODEL REVIEWED

As discussed earlier, job evaluation can be broadly described as the process of assessing the worth of jobs as a basis for deciding pay. The modeling approach in question was intended to aid this process of job evaluation. It is best reviewed using the example provided in the original article.

The linear programming model is based on the following assumptions:

(a) There are four compensable factors (A, B, C, and D) representing: (i) the complexity of duties; (ii) education; (iii) necessity of supervision; (iv) mental/visual demands.

(b) There are six levels for each factor, denoted by subscripts ($A_1$ through $A_6$, $B_1$ through $B_6$, $C_1$ through $C_6$, $D_1$ through $D_6$), subject to the following conditions:

(i) a floor of 5 and a ceiling of 35 points for each factor (in other words, $A_1, B_1, C_1, D_1 \ge 5$ and $A_6, B_6, C_6, D_6 \le 35$);

(ii) a minimum difference of 2 points between two successive levels of any given factor, i.e., $A_{i+1} - A_i \ge 2$, $B_{i+1} - B_i \ge 2$, $C_{i+1} - C_i \ge 2$, and $D_{i+1} - D_i \ge 2$, where $i$ may be any integer from 1 to 5.

(c) There are five benchmark jobs whose relative point scores, based on the wage rates, are 100, 88.4, 73.4, 66.0, and 58.9.

(d) The benchmark jobs, in descending order of pay, are rated as follows on the four factors (these ratings appear on the left-hand sides of the equality constraints in the model below): job 1 at $A_6$, $B_5$, $C_6$, $D_6$; job 2 at $A_5$, $B_5$, $C_4$, $D_3$; job 3 at $A_4$, $B_4$, $C_4$, $D_3$; job 4 at $A_3$, $B_3$, $C_3$, $D_3$; and job 5 at $A_1$, $B_2$, $C_3$, $D_2$.

(e) The discrepancy variables ($X_1, X_2, \ldots, X_5$), defined as the amount of inequity in each benchmark job, are assumed to be no more than 2 percent of the corresponding job's point score. (Note that the equitable pay for a given job is determined by its composite ratings in comparison to those of the benchmark jobs. Since the benchmark jobs represent a reference point, it is assumed that they are selected such that they do not themselves carry an undue amount of inequity.)

The linear programming model is now written as:

Minimize $Z = X_1 + X_2 + X_3 + X_4 + X_5$

subject to

$A_6 + B_5 + C_6 + D_6 + X_1 = 100$

$A_5 + B_5 + C_4 + D_3 + X_2 = 88.4$

$A_4 + B_4 + C_4 + D_3 + X_3 = 73.8$

$A_3 + B_3 + C_3 + D_3 + X_4 = 66.0$

$A_1 + B_2 + C_3 + D_2 + X_5 = 58.9$

$A_1, B_1, C_1, D_1 \ge 5$

$A_6, B_6, C_6, D_6 \le 35$

$A_{i+1} - A_i \ge 2$ for $i = 1$ to 5

$B_{i+1} - B_i \ge 2$ for $i = 1$ to 5

$C_{i+1} - C_i \ge 2$ for $i = 1$ to 5

$D_{i+1} - D_i \ge 2$ for $i = 1$ to 5

$0 \le X_j \le 0.02 \times (\text{point score of benchmark job } j)$ for $j = 1$ to 5

The following optimal values of factor weights and discrepancy variables are arrived at by using a linear programming algorithm:

(table of optimal values omitted)
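For readers who wish to reproduce the computation, the following is a minimal sketch of the formulation in Python using scipy.optimize.linprog. The variable layout, the choice of solver, and the reading of assumption (e) as an upper bound of 2 percent of each point score on the discrepancy variables are my own; the original article does not specify an implementation, and the solver may return a different optimal vertex than the one reported there, since linear programs can have multiple optima.

```python
# A sketch of the linear program above, under the assumptions stated in the
# lead-in (variable layout, scipy, and the 2% cap on each X_j).
import numpy as np
from scipy.optimize import linprog

# Benchmark point scores (right-hand sides of the equality constraints).
# 73.8 follows the constraint as printed; assumption (c) lists 73.4.
scores = [100.0, 88.4, 73.8, 66.0, 58.9]

# Variable layout: A1..A6 -> 0..5, B1..B6 -> 6..11, C1..C6 -> 12..17,
# D1..D6 -> 18..23, X1..X5 -> 24..28
n = 29
A, B, C, D, X = 0, 6, 12, 18, 24

# Objective: minimize Z = X1 + X2 + X3 + X4 + X5
c = np.zeros(n)
c[X:X + 5] = 1.0

# Each benchmark job's rated levels plus its discrepancy variable equal its score
ratings = [(6, 5, 6, 6), (5, 5, 4, 3), (4, 4, 4, 3), (3, 3, 3, 3), (1, 2, 3, 2)]
A_eq = np.zeros((5, n))
for j, (a, b, c_lvl, d) in enumerate(ratings):
    A_eq[j, [A + a - 1, B + b - 1, C + c_lvl - 1, D + d - 1, X + j]] = 1.0
b_eq = scores

# Ordering constraints F_{i+1} - F_i >= 2, rewritten as F_i - F_{i+1} <= -2
A_ub, b_ub = [], []
for base in (A, B, C, D):
    for i in range(5):
        row = np.zeros(n)
        row[base + i], row[base + i + 1] = 1.0, -1.0
        A_ub.append(row)
        b_ub.append(-2.0)

# Bounds: floor of 5 on level 1 and ceiling of 35 on level 6 of each factor;
# each X_j is nonnegative and capped at 2 percent of its job's point score
bounds = [(0, None)] * n
for base in (A, B, C, D):
    bounds[base] = (5, None)
    bounds[base + 5] = (0, 35)
for j, s in enumerate(scores):
    bounds[X + j] = (0, 0.02 * s)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.message, round(res.fun, 3))   # total discrepancy at the optimum
print(res.x[:24].round(2))              # point values for the 24 factor levels
```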

Now we are ready to apply the above results. As Ahmed (1989, p. 4) states, "for any new job which may require different levels of factors and which is not the same as the benchmark job, the total points can readily be obtained by adding the points for levels of different factors in that job. Later, the total points can be directly converted into the salary structure." Thus, if one has to evaluate another nonbenchmark job with ratings of, say, $A_2$, $B_4$, $C_5$, and $D_2$, all that needs to be done, according to the author, is to compute its factor score as 7 + 11 + 33 + 17, or 68.0. This would mean that if our top benchmark job (i.e., the benchmark job with a point score of 100) commands a rate of $20.00 per hour, the job in question can be rated at 20 x 0.68, or $13.60. (Recall that the value of the corresponding discrepancy variable, $X_1$, is zero.)
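The conversion itself is simple arithmetic. The snippet below assumes the point values the text reports for the relevant levels and the $20.00 rate used above.

```python
# Worked example from the passage above, assuming the reported point values
# A2 = 7, B4 = 11, C5 = 33, D2 = 17 and a $20.00/hour top benchmark rate.
points = 7 + 11 + 33 + 17            # A2 + B4 + C5 + D2 = 68.0
hourly_rate = 20.00 * points / 100   # scale the top benchmark's hourly rate
print(points, hourly_rate)           # 68 13.6
```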

WHAT IS WRONG?

Clearly, the above model rests on a number of subjective assumptions regarding the maximum and minimum point values for each factor level, the allowable limits of inequity for the benchmark jobs, and so on. The major problem with this approach, however, lies elsewhere. What we are essentially doing here is first assigning factor levels to each of the benchmark jobs, by saying, for example, that benchmark job 1 is at level 6 on factor A, level 5 on factor B, level 6 on factor C, and level 6 on factor D. We assign similar ratings to the other benchmark jobs as well, before we begin the modeling process. However, it is not until after the solution is obtained that we learn that $A_4$ is much, much farther away from $A_5$ than, say, $A_5$ is from $A_6$; in fact, the interval between $A_4$ and $A_5$ is more than 10 times the interval between $A_5$ and $A_6$. The same holds for the $C_2$ to $C_3$ interval versus the $C_4$ to $C_5$ interval. Similarly, the interval between $D_1$ and $D_2$ is six times the interval between $D_2$ and $D_3$.
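To see this concretely, one can continue the earlier sketch and read off the spacing of each factor scale after the solver returns. The intervals printed are whatever the optimizer happens to produce, not values from the original article.

```python
# Continuing the sketch above (requires res, A, B, C, D from that block):
# print the gap between successive levels of each solved factor scale. The
# rater had to assume some spacing when assigning the benchmark ratings, yet
# these intervals become known only after the optimization.
for name, base in (("A", A), ("B", B), ("C", C), ("D", D)):
    levels = res.x[base:base + 6]
    gaps = [round(levels[i + 1] - levels[i], 2) for i in range(5)]
    print(name, "levels:", levels.round(2), "successive gaps:", gaps)
```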

How can one logically reconcile this post-hoc, albeit “optimal,” scale structure with the cognitive scale used to assign the initial rating for the benchmark jobs? In order for the initial rating to be meaningful and logically consistent in the rater’s own mind, we must, of necessity, hypothesize the existence of an a priori cognitive scale with its own internal structure. What theoretical, empirical, or logical meaning can then be assigned to the factor weights derived after the fact?

A little reflection will reveal that our elaborate modeling trip has not really taken us far. In fact, it forces us to run against the grain, so to speak, of our own initial assumptions. It cannot, therefore, be expected to systematize, much less add objectivity to, the process of job evaluation.

THE CASE OF THE QUANTUM FAUCET

One might argue that the factor weights help the raters assign empirical and specific operational meaning to what might have been somewhat abstract labels used in the initial rating process. Might it not be possible, then, one might ask, that repeated and recursive use of the model would help achieve successive refinements?

This seems a reasonable argument at first glance. However, closer scrutiny will reveal that it does not hold much water. The basic incongruency here can be traced to assigning meaning to assumptions based on conclusions.

Hofstadter provides a delightful example of a quantum water faucet to illustrate Heisenberg's Uncertainty Principle, and it is useful here in putting the central argument in perspective.

Imagine a water faucet with two knobs, one labeled ‘H’ and one labeled ‘C’, each of which you can twist continuously. Water comes streaming out of the faucet, but there is a strange property to this system: the water is always either totally hot or totally cold; there is no in-between. These are called the two temperature eigenstates of the water…The only way you can tell which eigenstate the water is in is by sticking your hand in and feeling it. Actually, in orthodox quantum mechanics it is trickier than that. It is the act of putting your hand in the water that throws the water into one or the other eigenstate. Up till that very instant, the water is said to be in a superposition of states (or more accurately, a superposition of eigenstates)…As long as no measurement is made of the system, a physicist cannot know which eigenstate the system is in. Indeed, it can be shown that in a very fundamental sense, the system itself does not “know” which eigenstate it is in, and only decides–at random –at the moment the observer’s hand is stuck in to “test the water”, so to speak. Up to the moment of observation, the system acts as if it were not in an eigenstate. For all practical purposes, for all theoretical purposes–in fact for all purposes–the system is not in an eigenstate. (1986, pp. 466-467)

The point here is that it is all too easy for one to fall into this trap of argument and use the notorious uncertainty principle to justify the attribution of post-hoc meanings to measurements. As Hofstadter amply demonstrates, “It is a total misinterpretation of Heisenberg’s uncertainty principle to suppose that it applies to macroscopic observers making macroscopic measurements” (1986, p. 464). Thus, impressive as it might seem, the uncertainty principle cannot be invoked in defense of the basic incongruency involved in the present application of linear programming technique to job evaluation.

Problem formulation involves imposing a structure on a problem. This essentially requires strategies and approaches for factoring the complexity into manageable components.

KEY CRITERIA IN COMPUTER MODELS OF JOB EVALUATION: A CONCLUDING NOTE

Subjectivity is an inherent and essential feature of any process and outcome based on judgment. The problem of subjectivity is well known in ratings of job characteristics (Bernardin & Pence, 1980; Cellar, Kernan, & Barrett, 1985), job analysis (Ash & Levine, 1980), and performance appraisal (Weekley & Gier, 1989), as well as in job evaluation. This is one reason why attempts are made every so often to apply well-established quantitative techniques in an effort to improve the objectivity, precision, and accuracy of these processes. Such techniques usually have the effect of masking the soft and subjective side of their assumptions and creating a false sense of objectivity. Following Whitehead (1929), this might be termed the fallacy of misplaced concreteness. Such a fallacy is perhaps endemic, in some form or other, to many attempts to use quantitative approaches in management.

Advances in computer technology and the introduction of user-friendly algorithms have brought sophisticated mathematical tools within the reach of practicing managers, in terms of both cost and simplicity of operation. In such applications, the first issue usually addressed is the bottom-line consideration: that the use of an algorithm should result in some marginal gain in precision, uniformity, and systematization. In choosing these algorithms, the bottom line tends to be overemphasized for a number of reasons, including considerations of cost, equity, and efficiency. Second, and somewhat less recognized, is the issue of the false security of numbers. This can generally be addressed, at least over the long run, through a program of education and sensitization. Third, and equally important though much less recognized, is the necessity for the algorithm to be logically and theoretically consistent with the process it is trying to capture. It is this third criterion, what we might call the process consistency criterion, that the present article attempts to highlight. As shown here, if this criterion is violated, the use of a computer algorithm can boil down to a futile exercise that is potentially contradictory and logically inconsistent. Needless to say, such computer models, however advanced and sophisticated the technique, have the potential to play havoc with key human resources decisions.

REFERENCES

Ahmed, N.U. (1989). An analytical technique to evaluate factor weights in job evaluation. The Mid-Atlantic Journal of Business, 25, 16.

Arvey, R.D. (1986). Sex bias in job evaluation procedures. Personnel Psychology, Summer, 316-318.

Ash, R.A., & Levine, E.L. (1980). A framework for evaluating job analysis methods. Personnel, 57, 53-59.

Bernardin, H.J., & Pence, E.D. (1980). Effects of rater training: Creating new response sets and decreasing accuracy. Journal of Applied Psychology, 65, 60-66.

Cellar, D.F., Kernan, M.C., & Barrett, G.V. (1985). Conventional wisdom and ratings of job characteristics: Can observers be objective? Journal of Management, 11, 131-138.

De Cenzo, D.A., & Robbins, S.P. (1994). Human resource management. New York: Wiley.

Dertien, M.G. (1981). The accuracy of job evaluation plans. Personnel Journal, 60, 556-570.

Henderson, R. (1989). Compensation management: Rewarding performance. Englewood Cliffs, NJ: Prentice-Hall.

Hofstadter, D.R. (1986). Metamagical themas: Questing for the essence of mind and pattern. New York: Bantam Books.

Janes, H. (1979). Union views on job evaluation: 1971-1978. Personnel Journal, February, 80-85.

Lawler, E.E., III (1986). What's wrong with point-factor job evaluation. Compensation and Benefits Review, March-April, 29-40.

Leap, T.L., & Crino, M.D. (1989). Personnel/human resource management. New York: Macmillan.

McMillan, J.D., & Williams, V.C. (1982). The elements of effective salary administration programs. Personnel Journal, 61, 832-838.

Milkovich, G.T., & Boudreau, J.W. (1988). Personnel/human resource management: A diagnostic approach. Plano, TX: BPI.

Thomsen, D.J. (1981). Compensation and benefits. Personnel Journal, 60, 348-354.

Weekley, J.A., & Gier, J.A. (1989). Ceilings in the reliability and validity of performance ratings: The case of expert raters. Academy of Management Journal, 32, 213-222.

Whitehead, A.N. (1929). Process and reality. New York: Macmillan.

Copyright Administrative Sciences Association of Canada Mar 1996
