Organizational Report Cards, by William T. Gormley, Jr., and David L. Weimer. Cambridge, MA: Harvard University Press, 1999. 272 pp. $39.95.
Reviewed by Richard E. Boyatzis
The anxiety of waiting for your grades in high school and taking them home to show your parents has been replaced for many of us by various comparative assessments of our organizations. Professional schools are now ranked in the media.
Despite our feigned or real professorial indifference, the results affect enrollments, donors, alumni, and the mood of deans, not to mention faculty recruiting. A sudden drop in a school’s rank precipitates various attempts at reform, innovation, or blame. Are these organizational report cards a good thing for our society?
In their new book, Gormley and Weimer present a critical, conceptual view of all forms of organizational report cards. They claim that a report card is “a powerful instrument because of its capacity to draw distinctions across subjects, across time, and across persons; because of its capacity to enlighten and embarrass; and because of its capacity to propel students forward” (p. 1). Organizational report cards can serve many stakeholders. They can be used to monitor performance and allow comparisons for customers, provide accountability for regulatory agencies, and offer insight for policy makers. They are external assessments that transform available information so as to make it more useful to these various audiences. The authors analyze other forms of organizational scorekeeping, such as the balanced scorecard, benchmarking, and public rankings of institutions. They focus on the importance of using outcomes as the key variables. The majority of the book is an in-depth discussion of validity and comprehensiveness (chap. 4), comprehensibility and relevance (chap. 5), and reasonableness and functionality (chap. 6). They weave in and out of conceptual overview and technical specifics, such as the importance of considering cohort differences in longitudinal assessment of an organization’s performance.
In tackling report cards as policy instruments, the authors show how report cards can reduce information asymmetry and increase accountability and public access. They make a strong case for report cards as better than regulation because they are less coercive and tend to provide more accurate information on non-trivial variables. Although they use many examples throughout the book, the thorny issues of health care and education are often used for exploration. Admittedly, report cards can become a political tool, but the authors argue that they can be a democratizing tool and facilitate constructive action to improve institutions and organizations. Information, they contend, can be a public good. The government or public sector probably has a greater capability to design a fair and accurate instrument, but the private sector has a greater capability to disseminate the information engagingly. The media, they point out, is in the private sector. They make a case for private and public sector responsibility in creating and maintaining these report cards.
A minor criticism of their thoughtful discussion of audience demands and needs is that it takes a decidedly third-party approach. Executives and leaders of institutions are portrayed in an almost reactive mode. The greatest utility of organizational report cards may be as useful information to executives of organizations, as an aid in organizational improvement, as a gauge for measuring continuous improvement, and as a catalyst for organization development or transformation. The authors offer a discussion of functional organizational responses, such as process improvement, input reallocation, managerial focusing, and mission enhancement. They also discuss dysfunctional organizational responses, such as self-selection, cream-skimming, teaching to the test, deception, and blaming the messenger. Those of us involved in professional and higher education’s emerging arena of “outcome assessment” can offer numerous examples of the intellectually stimulating and organizationally energizing impact of such assessments of how our students change and/or what they learn in our programs (Banta, 1993; Boyatzis, Cowen, and Kolb, 1995).
“Effective report cards compare organizations in terms of their performance. Measuring performance generally requires that some outcomes, or their proxies, be adjusted to take account of the difficulty of bringing them about. The task, commonly referred to as risk adjustment, ranges from the relatively simple calculation of ratios … to the complex construction of counterfactuals” (pp. 205-206). The utility of comparative assessments of institutions rests on such risk adjustments. The obstacles to appropriate risk adjustment may be technical, such as errors arising from oversimplification in creating a unidimensional measure. The authors contend that the obstacles are more often political, such as the convenience and cost advantages of using standardized tests such as SATs and GMATs to assess academic progress. These tests are easy to administer but are subject to social class bias and are valid predictors of little other than grades (McClelland, 1973, 1994).
Even though school rankings may be composed of a variety of variables, readers do not typically make detailed distinctions but use the school’s rank as an overall, unidimensional measure, thereby violating many of the qualifications and issues of validity that the authors discuss. Some regulatory agencies, such as the AACSB, which accredits schools of management and business, have attempted to respond to potential abuse of unidimensional measures, such as school rankings. They have anchored the accreditation review process on a school’s specific mission. These dynamics of intentional or unintentional unidimensional measures also place organizational report cards in the realm of extrinsic rewards and motivators versus emphasizing the arousal of intrinsic motivators, such as the intellectual curiosity of academics as to whether or not their students are learning.
The authors seem to assume that better use of organizational report cards would help free market forces, which would lead to better services. The recommendation for more organizational tournaments in which rewards might be more proportional to performance sounds intriguing but opens up even more institutions to pressures analogous to free-market forces. Such pressures may or may not be fair or responsive to the social-value-maintenance role of our institutions in society. Such activities could create positive feedback loops, or self-fulfilling prophecies, in which strong institutions survive and only successful ones succeed. There is little forgiveness or second chance for improvement in such a society, nor are institutions providing unique or small-market-niche services protected by larger forces.
This is a conceptual and thought-provoking book. There is no sex, no violence, and only minor allusions to treachery. The assumptions about the benefit of comparative assessment and the utility of measurement will appeal to many, but post-modernists stay away! The best audiences for the book are (1) scholars in organizational research (e.g., in fields such as strategy or organizational behavior); (2) policy makers and advisors (i.e., consultants); (3) organizational leaders and innovators who want to build creative and energizing visions; and (4) corporate planners. Perhaps the most compelling application of these ideas other than for research is to encourage organizations to develop and promote their own report cards and then challenge their colleagues in other institutions to join them. This would be far better than to wait for someone to get sufficiently angry or self-righteously indignant to impose regulations and compliance-driven processes.
REFERENCES
Banta, T.
1993 Making a Difference: Outcomes of a Decade of Assessment in Higher Education. San Francisco: Jossey-Bass.
Boyatzis, R. E., S. C. Cowen, and D. A. Kolb
1995 Innovation in Professional Education: Steps on a Journey from Teaching to Learning. San Francisco: Jossey-Bass.
McClelland, D.C.
1973 “Testing for competence rather than intelligence.” American Psychologist, 28 (1): 1–14.
1994 “The knowledge-taking-educational complex strikes back.” American Psychologist, 49 (1): 66–69.
COPYRIGHT 2000 Cornell University, Johnson Graduate School