
A Hedonic Model – 1995 NRC Ratings of Doctoral Programs

Ronald G. Ehrenberg and Peter J. Hurst

The 1995 National Research Council (NRC) rankings of doctoral programs in 41 arts and humanities, biological sciences, engineering, physical sciences and mathematics, and social and behavioral sciences fields have drawn considerable attention. Because the study–titled Research-Doctorate Programs in the United States: Continuity and Change–is the first major assessment of doctoral education undertaken in over a decade, the rankings undoubtedly will be used not only by potential graduate students making application and acceptance decisions, but also by university administrators making resource allocation decisions. Thus, the study will influence doctoral programs at universities across the nation–sometimes positively and sometimes negatively. (See article by David Webster and Tad Skinner on page 22.)

These rankings were obtained from a 1993 survey of over 16,700 graduate faculty members who were asked to rate each PhD program in their disciplines on a scale of 0 (“Not sufficient for doctoral education”) to 5 (“Distinguished”). Each rater was provided with lists of the faculty members associated with each of 50 randomly chosen programs in the discipline and the number of new doctorates produced by each of the programs over the previous five-year period. Raters were asked to rate both the scholarly quality of each program’s faculty and each program’s effectiveness. The response rate to the survey was about 50 percent, with the programs in each discipline rated by at least 200 faculty members.

The NRC also collected a set of objective statistics about the seniority, research productivity, and productivity in conferring doctoral degrees of program faculty. These data were not provided to raters. If one assumes, however, that raters were sufficiently knowledgeable about their professions that in making their ratings they acted as if they knew the objective measures, then multivariate regression models can be used to estimate the extent to which variations in the objective measures influence raters’ decisions. That is, it is possible to estimate models of the determinants of departmental ratings. The resulting estimates can then be used to help guide resource allocation decisions at universities.

Why do we need such estimates? After all, the published NRC volume presented simple correlations between some of the objective measures and the subjective ratings of the raters. For example, program size, as measured either by the number of faculty associated with the program or the number of doctoral degrees granted by the program over the past five years, was shown to be positively correlated with the subjective ratings in most fields. However, when the objective measures are themselves correlated, as faculty size and degrees granted are, simple correlations do not permit us to learn the partial correlation of each objective variable with the subjective ratings. For example, they do not provide information about whether increasing faculty size, while holding constant the number of degrees granted, would be associated with a higher subjective rating. To answer such a question requires a multivariate analysis.
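The difference between a simple and a partial correlation is easy to see in a small numerical sketch. The fragment below uses synthetic data generated with Python and numpy, not the NRC data, and its coefficients are invented purely for illustration.

```python
# Illustrative only: synthetic data in which faculty size and degrees granted
# are correlated, so a simple correlation overstates the pure effect of size.
import numpy as np

rng = np.random.default_rng(0)
n = 200
faculty = rng.normal(30, 8, n)                    # hypothetical faculty sizes
degrees = 0.8 * faculty + rng.normal(0, 4, n)     # degrees granted, correlated with size
rating = 0.02 * faculty + 0.06 * degrees + rng.normal(0, 0.3, n)

# The simple correlation between rating and faculty size is large...
print("simple correlation:", np.corrcoef(rating, faculty)[0, 1])

# ...but a multiple regression shows that, holding degrees granted constant,
# the partial effect of adding a faculty member is much smaller
# (roughly 0.02, by construction of the synthetic data).
X = np.column_stack([np.ones(n), faculty, degrees])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
print("partial effect of faculty size:", coef[1])
```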

In this article, we describe how to use multivariate regression models and the NRC data to analyze how measures of program size, faculty seniority, faculty research productivity, and faculty productivity in producing doctoral degrees influence raters’ subjective ratings of doctoral programs in various academic fields. Using data for one of the fields, economics, we then indicate how university administrators can use the models to compute what the impact of adding one faculty position would be on the ranking of an economics department. The department chosen to illustrate the methodology is the Cornell economics department in which one of us has an appointment.

We also illustrate how administrators can “decompose” the differences between a department’s rating and the ratings of a group of higher-rated departments in the field into differences due to faculty size, faculty seniority, faculty research productivity, and faculty productivity in producing new doctorates. To illustrate the methodology, we compare the Cornell economics department with the top 10 economics programs. This decomposition suggests the type of questions that the department and university should be addressing if they are serious about wanting to improve the department’s ranking.

The methodology that we describe in the next section, statistically relating an outcome (in this case the subjective rating) to a set of objective measures and then inferring the effect of a change in any of the latter on the outcome, has a long tradition in economics where it is commonly referred to as the “hedonic function” or “implicit price” approach. This approach assumes that causality can be inferred from such cross-section estimates. While there are other methodological approaches that are more appropriate for trying to infer causality when longitudinal or panel data are available, in the absence of these types of data we are restricted to the approach presented here. Hence, in what follows, we will assume in places that we can infer how a given change in any one of the objective measures would quantitatively influence a program’s subjective rating. Readers unhappy with this assumption can view our work as purely descriptive.

Our discussion here is necessarily nontechnical in nature. A technical version of the paper, which contains footnotes, references, and qualifications, is available on the World Wide Web at http://www.ipr.cornell.edu. In addition to reporting tables of empirical results, the technical version stresses the need to test the sensitivity of the analyses to alternative functional forms and indicates that, at least for the field of economics, the results do appear to be relatively robust.

METHODOLOGY

Table 1 presents data for Cornell University’s doctoral program in economics on its scholarly quality of program faculty rating, as well as some of the objective measures collected by the NRC. On a scale of 0 to 5, the mean rating of Cornell’s economics department in terms of scholarly quality of the faculty was 3.56 in 1993, which ranked it as the 18th best department in the nation. (Economics is one of Cornell’s lower-ranked doctoral programs. Most of its humanities, engineering, and physical science programs are ranked in the top 10 nationwide in their fields.) Its 30 program faculty members, however, made it only the 29th largest economics department.

Table 1

1993 NRC Cornell Economics Program Characteristics
(Cornell’s Rank Among Institutions)

Characteristic    Value    Rank
FACQUAL           3.56     (18)
FACULTY           30       (29)
FULL              70%      (13)
RESEARCH          33%      (9)
PUBFAC            3.1      (37)
GINIPUB           5.0      (8)
CITPUB            6.6      (26)
PHDFAC            2.0      (20)
PHDSTU            0.8      (23)
MEDTIME           7.4      (15)
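For each field, the rating of program $i$ is related to its objective characteristics through a regression of roughly the following form (the precise specification is given in the technical version of the paper on the Web):

(1) $R_i = a_0 + a_1 F_i + a_2 F_i^2 + \sum_j a_j x_{ji} + e_i$

where $R_i$ is the program’s mean scholarly quality rating, $F_i$ is the number of faculty associated with the program, the $x_{ji}$ are the other objective measures described below, $e_i$ is a random error term, and the $a$’s are parameters to be estimated.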

The estimated values of these parameters provide estimates of the marginal impact of a one-unit change in each objective variable on the subjective rating the doctoral program received, holding constant all other variables. The squared faculty size variable permits the relationship between the rating and faculty size to be nonlinear. In particular, the marginal effect on a program’s rating of increasing faculty size by one, holding all the other variables in the model constant, is given by

(2) $a_1 + 2a_2 F_i$

As a result, if $a_1$ proves to be positive and $a_2$ proves to be negative, the program rating will first increase as faculty size increases, but at a declining marginal rate. The rating will eventually reach a maximum at a faculty size equal to $-a_1/(2a_2)$. Finally, the rating will decline as program size continues to increase beyond this faculty size.
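Equivalently, setting the marginal effect in (2) equal to zero gives the rating-maximizing faculty size:

$a_1 + 2a_2 F_i = 0 \quad \Rightarrow \quad F_i^{*} = -a_1/(2a_2)$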

The other variables (the x’s) included in the analyses are the percentage of full professors (FULL); the percentage of program faculty with external research support (RESEARCH); publications per faculty member (PUBFAC); a measure of the dispersion of publications per faculty member, the GINI coefficient for publications per faculty member (GINIPUB); the number of citations per faculty publication (CITPUB); the number of PhD degrees granted per faculty member (PHDFAC); the number of PhDs granted per enrolled graduate student (PHDSTU); and the median number of years that it took new doctorates to receive their degrees (MEDTIME).

Publication data were not collected by the NRC for faculty in the arts and humanities. Hence, for these fields PUBFAC, GINIPUB, and CITPUB do not appear in the analyses. Instead, the NRC collected data for these fields on the total number of prestigious awards and honors won per faculty member (AWARDF), and this variable is included in the models for these fields. Implicit in the set of variables included in the model is the belief that the raters’ subjective ratings of a program are determined by the program’s size, the seniority distribution of its faculty, its faculty members’ research productivity, and their doctoral production productivity.
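As a concrete illustration of how a model of the form of equation (1) can be fit to data of this kind, the sketch below runs ordinary least squares with numpy for a single field. Apart from Cornell’s own row from Table 1, the numbers are hypothetical placeholders, and a real estimation would include every rated program in the field; the sketch shows only the mechanics, not our actual estimates.

```python
# Hedonic regression sketch for one field, in the spirit of equation (1).
# All data except Cornell's Table 1 row are hypothetical placeholders.
import numpy as np

# Each row: one program's objective measures followed by its mean rating.
# Columns: FACULTY, FULL, RESEARCH, PUBFAC, GINIPUB, CITPUB, PHDFAC, PHDSTU, MEDTIME, FACQUAL
data = np.array([
    [30, 70, 33, 3.1, 5.0, 6.6, 2.0, 0.8, 7.4, 3.56],   # Cornell's row from Table 1
    [45, 80, 55, 6.2, 4.1, 9.8, 2.9, 1.1, 6.0, 4.70],   # hypothetical larger, higher-rated program
    [22, 60, 25, 2.4, 5.5, 5.1, 1.5, 0.6, 8.1, 2.90],   # hypothetical smaller program
    # ... in practice, every rated program in the field would appear here
])

faculty = data[:, 0]
others = data[:, 1:9]          # FULL through MEDTIME
rating = data[:, 9]            # FACQUAL, the dependent variable

# Design matrix: intercept, faculty size, faculty size squared, other measures.
X = np.column_stack([np.ones(len(rating)), faculty, faculty**2, others])

# Ordinary least squares estimates of a_0, a_1, a_2, and the remaining a_j's.
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)

# Marginal effect of one more faculty member at Cornell's size, expression (2).
marginal_at_30 = coef[1] + 2 * coef[2] * 30
print(marginal_at_30)
```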

EMPIRICAL FINDINGS

We estimated equation (1) separately for each of 35 of the 41 fields for which the NRC conducted rankings, excluding from the analyses the six fields in which Cornell had no PhD program. Tables of regression coefficients for each field are available in the longer, more technical version of our paper on the World Wide Web. Here, we summarize the major findings.

In all fields but one, we found a positive relationship between program rating and faculty size (FACULTY). Typically, however, the coefficient of the square of faculty size was negative, implying that beyond some faculty size, further growth had a negative effect on ratings. In most fields (but not in the majority of the biological sciences), increases in the proportion of faculty who were full professors (FULL) led to higher ratings, presumably because cumulative accomplishments and name recognition are higher for full professors.

The three measures of faculty research productivity (RESEARCH, PUBFAC, and CITPUB) all tended to be positively associated with the subjective ratings. In contrast, the dispersion in faculty productivity, GINIPUB, showed a statistically significant negative association with ratings in the majority of fields. Since an increase in GINIPUB means an increase in dispersion, this implies that simultaneously hiring a “star” and a “lemon,” whose average productivity is the same as that of existing faculty, may decrease program ratings in these fields! In the arts and humanities, the objective measure of faculty productivity (AWARDF) was also positively associated with the subjective ratings.

Measures of doctoral program success also mattered. In about two-thirds of the fields, the greater the number of doctoral degrees produced per faculty member (PHDFAC), the higher the ratings tended to be. This implies that increasing the number of faculty associated with a program, without also increasing the number of degrees granted, will indirectly have a negative effect on program ratings in these fields. Finally, longer median times-to-degree (MEDTIME), which within a field typically are associated with programs with less financial support per graduate student, were also associated with more poorly rated programs.

IMPLICATIONS

We return to the field of economics, and in particular Cornell’s program, to illustrate how the estimates we obtained may be used to help guide university decision-making. As Table 2 indicates, Cornell’s economics program included 30 faculty in 1993, while the average program size of the top 10 departments in economics was 38 that year. Not unexpectedly, the Cornell department has argued over the years that it needs more faculty positions if it is to improve its ranking. Faced with tight budgets, the institution as a whole wonders, conversely, how much the department would be hurt if it lost a position.

[TABULAR DATA 2 OMITTED]

It is possible, in fact, to simulate what impact increasing–or decreasing–the faculty resources devoted to the Cornell economics program by one faculty member, all other variables held constant (including PhDs produced per faculty member), would have on Cornell’s ranking in the field. The answer depends upon the magnitudes of the estimated coefficients of faculty size and faculty size squared and upon the number of faculty currently associated with the program, which together, through the expression in (2), determine the predicted change in Cornell’s absolute rating. How tightly bunched the ratings of other programs are around the Cornell program then determines how this change would translate into a change in the Cornell program’s relative ranking.
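The mechanics of such a simulation are straightforward to sketch in code. In the fragment below, the coefficient values and the ratings of the other economics programs are hypothetical placeholders (chosen so that the example reproduces a one-place improvement of the kind reported below); only the logic (apply expression (2) to get the change in the rating, then re-rank) follows the text.

```python
# Sketch of the faculty-size simulation described above.
# Coefficients and competitor ratings are hypothetical placeholders.
a1, a2 = 0.060, -0.0004          # assumed estimates of a_1 and a_2 for economics
cornell_faculty = 30
cornell_rating = 3.56

# Predicted change in the absolute rating from adding one faculty member,
# holding everything else constant (expression (2)).
delta = a1 + 2 * a2 * cornell_faculty
new_rating = cornell_rating + delta

# Hypothetical ratings of the other rated economics programs (abbreviated list).
other_ratings = [4.9, 4.8, 4.7, 4.6, 4.5, 4.4, 4.3, 4.2, 4.0, 3.9,
                 3.8, 3.7, 3.68, 3.65, 3.62, 3.60, 3.58, 3.50, 3.4, 3.3]

def rank(rating, others):
    """Rank = 1 + number of programs rated strictly higher."""
    return 1 + sum(r > rating for r in others)

print("current rank:", rank(cornell_rating, other_ratings))
print("rank with one more position:", rank(new_rating, other_ratings))
```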

When these simulations were conducted, they indicated that Cornell economics would improve its relative rank by one if it received an additional faculty line and would similarly reduce its relative rank by one if it lost a position. These simulations assume, of course, that all of the other “explanatory” variables in the model would remain unchanged. If any of them would change as a result of a change in faculty size, the impact of these changes would have to be included in the simulations. For example, if the number of PhDs granted per faculty member declined when faculty size increased because there was no increase in support for graduate students, this indirect negative impact would also have to be included in the computation of the change in the ranking.

It should also be noted that larger changes in faculty size will not necessarily lead to proportionately larger simulated changes in a department’s relative ranking. This is because the predicted change in the ranking depends both on the predicted change in the rating and on the number of schools whose ratings are “closely bunched” around the department. Unless the program ratings are uniformly distributed, the change in a program’s ranking will not necessarily be proportionate to the change in its rating.

The estimated coefficients for economics can also be used to estimate what percentage of the difference between the average absolute rating of the top 10 economics departments and the absolute rating of Cornell’s economics department can be “explained” by each of the variables in the model. Given the estimated coefficients $a_j$ for a field, the predicted absolute rating of Cornell’s program, $R_c$, is given by

(3) $R_c = a_0 + a_1 F_c + a_2 F_c^2 + \sum_j a_j x_{jc}$

Similarly, given the means of the characteristics of the top 10 schools, the predicted absolute rating for the mean of the top 10 schools, $R_m$, is given by

(4) $R_m = a_0 + a_1 F_m + a_2 F_m^2 + \sum_j a_j x_{jm}$

Here the subscripts c and m refer to the values for Cornell and the mean of the top 10 schools, respectively, of each of the variables.

The predicted absolute difference in the rating of Cornell and of the mean of the top 10 schools that is due to differences in faculty size is thus given by

(5a) $a_1(F_m - F_c) + a_2(F_m^2 - F_c^2)$

Similarly, the predicted absolute difference due to differences in any of the other explanatory variables is given by

(5b) $a_j(x_{jm} - x_{jc}), \quad j = 3, 4, \ldots, 8$

To obtain an estimate of the percentage of the actual difference that is “due” to each explanatory variable, one simply divides the estimates from (5a) or (5b) by the actual observed difference, $R_m - R_c$, and then multiplies the result by 100. The percentage of the actual difference that is due to all of the explanatory variables together is obtained by summing the percentages due to each variable.
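Carried out numerically, the decomposition in (3) through (5b) amounts to a few vector operations. In the sketch below, the coefficient values, the top 10 characteristic means, and the top 10 mean rating are hypothetical placeholders rather than our actual estimates; only the arithmetic mirrors the expressions above.

```python
# Sketch of the rating decomposition in expressions (3)-(5b).
# Coefficients and top-10 characteristics are hypothetical placeholders.
import numpy as np

labels = ["FULL", "RESEARCH", "PUBFAC", "GINIPUB", "CITPUB", "PHDFAC", "PHDSTU", "MEDTIME"]

a1, a2 = 0.060, -0.0004                       # assumed coefficients on FACULTY and FACULTY^2
a_other = np.array([0.004, 0.006, 0.08, -0.05, 0.03, 0.15, 0.20, -0.04])  # assumed a_j

F_c, F_m = 30.0, 38.0                         # faculty size: Cornell vs. mean of the top 10
x_c = np.array([70, 33, 3.1, 5.0, 6.6, 2.0, 0.8, 7.4])    # Cornell's values (Table 1)
x_m = np.array([75, 50, 5.8, 4.3, 9.5, 2.7, 1.0, 6.3])    # hypothetical top-10 means

actual_gap = 4.50 - 3.56                      # R_m - R_c (top-10 mean rating is assumed here)

# (5a): part of the gap attributable to faculty size.
size_part = a1 * (F_m - F_c) + a2 * (F_m**2 - F_c**2)

# (5b): part attributable to each of the other characteristics.
other_parts = a_other * (x_m - x_c)

print(f"faculty size: {100 * size_part / actual_gap:5.1f}% of the gap")
for name, part in zip(labels, other_parts):
    print(f"{name:<12} {100 * part / actual_gap:5.1f}% of the gap")
print(f"explained in total: {100 * (size_part + other_parts.sum()) / actual_gap:.1f}%")
```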

The estimates obtained from performing such calculations for the Cornell field of economics are summarized in rows 14 and 15 of Table 2. Row 15 indicates that 87 percent of the difference between the average rating of the top 10 programs in economics and the Cornell economics department’s program rating was explained by all of the variables in the model.

Row 14 indicates how this explained percentage can be divided across sets of explanatory variables. About one-third of the explained difference, or 28 percent of the total, is due to Cornell’s having a smaller faculty size than the average of the top 10 programs. Thirty-seven percent is due to Cornell’s economists having lower research productivity than the economists at the top 10 programs, as measured by the percentage of faculty with research grants, publications per faculty member, and citations per publication. Finally, 23 percent is due to Cornell’s faculty having lower productivity in doctoral student production, as measured by fewer doctoral students produced per faculty member and longer times-to-degree per student. Crucially, although a smaller faculty size is one of the contributing factors to Cornell’s not being ranked among the top 10 economics departments, it is not the major factor. Rather, the major factor is the lower research productivity of its faculty.

The fact that Cornell’s economists’ research productivity–as measured primarily by publications per faculty member and citations per publication–was lower on average than the research productivity of faculty at the top 10 economics programs should, of course, be of concern to the university. Does it reflect an inability to attract the very best young scholars? Does it reflect that Cornell’s faculty in economics spend more time on teaching and less time on research than their colleagues do at the top 10 competitor institutions? Does it reflect a propensity by Cornell to promote a much higher proportion of assistant professors than its competitors in economics, and thus the need to revise tenure standards? If it is a failure of Cornell to attract the very best faculty in economics, is this because of the relatively low level of salaries that Cornell pays its senior faculty or because of its unwillingness to pay a compensating wage differential to attract top faculty to relatively weaker departments in the university, such as economics? A university interested in improving the ranking of a department needs to know the answers to such questions.


Ronald G. Ehrenberg is Vice President for Academic Programs, Planning and Budgeting, and Irving M. Ives Professor of Industrial and Labor Relations and Economics, at Cornell University. Peter J. Hurst is Senior Research and Planning Associate, Office of Institutional Research and Planning, Cornell University. The report containing the National Research Council rankings is Research-Doctorate Programs in the United States: Continuity and Change, Marvin L. Goldberger, Brendan A. Maher, and Pamela Ebert Flattau, eds., Washington, DC: National Academy Press, 1995. The authors wish to thank numerous colleagues at Cornell for their comments on an earlier draft.

COPYRIGHT 1996 Heldref Publications
