The lid on the garbage can: institutional constraints on decision making in the technical core of college-text publishers
In a study of college physics and sociology textbook publishers, coercive, mimetic, and normative forces in the institutional environment are shown to order the decision and access structures of garbage can systems and to account for a uniformity of outcomes that is unexpected from garbage can decision models. Interviews with editors established that decision making in textbook publishing conforms with the garbage can model and helped us determine the ten best-selling introductory texts in each field. Optimal matching, a quantitative technique for content analysis, was used to demonstrate that differences in the homogeneity of contents and sequencing of material in these textbooks are determined by the degree of development of paradigms in the academic discipline. We show that, in contrast to Thompson’s (1967) model, organizations with ambiguous core technologies can benefit from opening their technical cores to be shaped by the institutional environment.
The garbage can model of organizational choice (Cohen, March, and Olsen, 1972) has been a mainstay of the literature on organizational decision making for over fifteen years. According to this model, many decision processes within organizations do not operate according to rational choice models. Rather, confounding situational elements further limit the cognitive capacities of organizational participants (Cohen, March, and Olsen, 1972; March and Olsen, 1986). Streams of loosely coupled problems, solutions, participants, and choice opportunities flow into the organization at different rates and connect or decouple elements according to a temporal rather than a causal logic. Garbage can processes “depend on a relatively complex intermeshing of elements, [including] the mix of problems that have access to the organization, the mix of solutions looking for problems, and the outside demands on the decision makers” (Cohen, March, and Olsen, 1972: 16). Thus, solutions may seek problems, both problems and solutions may await opportunities for decisions, and participant energy is likely to be distributed according to the overall load and arrival time of the various streams rather than by any “objective” criteria determining the relative importance of a particular issue. In garbage can systems, decisions are often made by flight or oversight rather than by calculation.
Two aspects of organizational structure are elaborated in the garbage can model. The first is the decision structure, the mapping of choices onto decision makers. The second is the access structure, the mapping of problems onto choices. This paper integrates the garbage can model with institutional theory to explain how different aspects of the institutional environment lead to different consequences for the decision and access structures of college-textbook publishers than might be expected from the garbage can decision world they inhabit.
The garbage can model of organizational choice implies that random or heterogeneous outcomes should be expected, because the connections between decisions and outcomes are determined by temporal factors, such as time of arrival or overall load on the system, rather than by causal connections between decisions and outcomes. In this paper, however, we show that garbage can decision processes in the technical core (Thompson, 1967) of organizations–the structures that process the “products” of the organization (Scott, 1981: 97)–may result in homogeneous outputs. We demonstrate that the institutional environment, which surrounds the garbage can decision processes of the technical core and limits the range of problems, solutions, and choice opportunities flowing into the core, is a key source of this ordering of the technical core and its outputs. The institutional environment thereby constrains–or puts the lid on–the garbage can processes.
This notion that external institutional constraints homogenize the outputs of the technical core extends research following Meyer and Rowan (1977) and DiMaggio and Powell (1983), which suggests that the institutional environment can homogenize the institutional and administrative structures of organizations, respectively (see Scott, 1987, and Zucker, 1987, for reviews). This paper demonstrates that the institutional environment can also homogenize the outputs of the technical core when that technical core is characterized by garbage can processes. Thus, in the case of organizations in which the core technology is ambiguous, the logic of Thompson’s (1967) model should be reversed: By not sealing off the technical core from the perturbations and influences of the external environment, organizations derive orderliness in the outputs of the technical core from the constraints imposed by the institutional parts of that environment.
We explore the relationship between the institutional environment and garbage can processes in the technical core in a study of editorial decision making, a key feature of the technical core of the college textbook-publishing industry (Powell, 1985). It is generally accepted that college textbooks in the various disciplines are relatively homogeneous with respect to topics and the order in which topics are presented. We show that the institutional environment, particularly the level of development of paradigms in the academic disciplines, shapes the organization and content of introductory textbooks in physics and sociology.
THE GARBAGE CAN MODEL AND COLLEGE-TEXT PUBLISHING
Garbage can models “describe a portion of almost any organization’s activities, but not all” (Cohen, March, and Olsen, 1972: 1). The garbage can model has been successfully applied to educational (Clark et al., 1980), public (Sproull, Weiner, and Wolf, 1978), and military (Bromiley, 1985) organizations, as well as other “organized anarchies” (Cohen, March, and Olsen, 1972) with chaotic, theater-of-the-absurd decision worlds. Our study of college-textbook publishing showed that the garbage can model is also applicable there.
In performing structured, open-ended interviews with editors of the ten best-selling introductory textbooks in physics and sociology, we were struck by the way the editors consistently described their work in gambling terms, such as “a lottery with bad odds,” “an attempt to hedge one’s bets,” or “a crapshoot.” We were particularly surprised to hear this from college-text editors, because we expected college-text publishing to be the most rationalized segment of the publishing industry (Coser, Kadushin, and Powell, 1982; Powell, 1985). We were informed, however, that college-text publishing simply represents “the poker game with the highest ante because of the high costs of production” and that the procedures for decision making are best described as “guesswork, intuition, and opinion.” The sense of confusion experienced by participants inhabiting this haphazard and unpredictable universe is captured in the following comment from a sociology editor: “Editors can become schizophrenic. You think a manuscript is good and it doesn’t make money. Then you get a manuscript that you think is bad, and it makes money–but not always.”
Because postindustrial Western culture places such a high value on rationality, which implies theories of cause and effect, participants in ambiguous organizational environments find themselves in the position of having to make sense of a world that is not eminently sensible. Editors are surprisingly willing to admit the bewilderment and the lack of control that accompanies the interpretation and evaluation of outcomes in their business:
This is a high-risk business, and it’s also a demoralizing business. You think a book is solid and well-written, and it doesn’t do well. It never got a chance, and you don’t know why. Maybe the jacket was the wrong color. You think of it as the luck of the game. We have little control. You have to accept the things you don’t have control over.
This lack of routinization is not sufficient evidence that textbook publishing conforms to the garbage can model, because ambiguity and variability may result from numerous organizational and environmental characteristics, including charismatic leadership (Weber, 1968), ideologies (Swidler, 1979), and unstable consumer demand (Hirsch, 1972). However, many aspects of the decision-making processes in college-textbook publishing are well characterized by the garbage can model (Powell, 1978), because many of these decisions reflect the importance of timing, load on the system, and serendipity. For example, timing frequently determines the chance of a given project coming to fruition. Many, but not all, introductory texts sell better in the second or third edition, but the likelihood of producing and marketing a second edition depends on whether the publishing house is planning other new entries into a particular market niche at that time. Load on the decision maker is frequently a factor in determining how books are treated, because editors typically work on approximately twenty books at any given time.
Serendipitous events (such as hearing about a potential manuscript while consulting an academic about a project in hand) are built into editors’ expectations concerning the acquisition of manuscripts, although the timing of such fortuitous events cannot be planned. Editors consistently stress the importance of “being out there” and maintaining strong networks with academics. Scanning the environment to increase the possibility of serendipitous outcomes is frequently effective, as in the case of the psychology editor who reported that he acquired his most successful manuscript on the topic of human sexuality through a chance meeting with a psychologist while he was on vacation in Acapulco.
There is also evidence of problems and solutions tracking each other temporally. For example, academics in the field of artificial intelligence claim that they would teach introductory courses if there were a suitable text, while editors argue that they would produce texts if there were courses.
In addition to having a temporal logic, garbage can decision contexts are characterized by unclear preferences, that is, confusion concerning the interpretation and definition of success versus failure; an ambiguous technology, where procedures for achieving a given outcome are unclear; and fluid participation stemming from the sometimes unpredictable exits and entrances of participants (Cohen, March, and Olsen, 1972). The decision processes of college-textbook publishing frequently exhibit some or all of these three characteristics.
Unclear preferences. It seems reasonable to assume that under all imaginable circumstances, organizational participants will prefer success over failure. In garbage can systems, such as college-textbook publishing, the distinction between, and therefore the interpretation of, success and failure becomes equivocal, malleable, and profoundly complicated. Although there is a clear preference for a strong bottom line in this business, editors often have difficulty interpreting success within this framework because short-term and long-term success are often unconnected or even contradictory. An editor’s success is often evaluated in comparison with his or her previous performance level, with the following consequences for a sociology editor:
See, last year, I had this manuscript that I didn’t think was very good, and it did brilliantly. The problem is that there’s an implicit quota system round here. Now, I have no idea why it did so well, and there’s no way I’m going to get that lucky twice in a row. So, this year, it’s going to look like my performance is declining.
Similarly, failures can be transformed interpretatively into a vision of success. When asked about a specific book that did very poorly, a physics editor reported:
The reason why the “Smith and Jones” [pseudonyms] text failed is that Smith died before the project was completed, and Jones is impossible to deal with. But I don’t really think of it as a failure because at that point in time it was important for the company to have an entry, any entry, in the physics market.
The confusion concerning preferences, which derives from the difficulty in distinguishing sharply between success and failure, is succinctly captured in a classic story from editorial folklore. The standard format of the story describes an editor who “signs” a Pulitzer-Prize or another distinguished award-winning manuscript and then gets fired because the book does not break even. Variations on this theme were encountered in talking to editors and in the literature on the publishing industry (e.g., Coser, Kadushin, and Powell, 1982; Powell, 1985).
Ambiguous technology. The ambiguity of the technology derives from unclear connections between means and ends. Editors claim that it is not possible to specify the procedures leading to any given outcome because there is no agreed-upon formula for producing a successful textbook. This makes it difficult for participants to operate under what Thompson (1967) has termed norms of rationality.
Editors perceive attempts to rationalize the decision process to be of limited use. In the majority of cases, the failure of rationalization is attributable to a disjunction or loose connection between information and action. For example, editors believe that market research, a common rationalization strategy (Beniger, 1986), is a useful guideline but, in the final analysis, editorial judgment must prevail because of the nature of the business:
This business is not like doing market research for soft drinks. For example, a certain diet drink was not selling very well, and market research showed that males will not drink a beverage in a pink can. In publishing, it’s not black and white. You have to remember that the market for textbooks is like the market for dog food because purchasing decisions are not made by the ultimate consumer.
Another cause of the ambiguity between means and ends is the disjunction between knowledge and action: Editors often work in a discipline in which they were not trained. A physics editor complained about reviewers: “Physicists . . . say things like, ‘Rotational bodies in the second dimension needs fixing.’ I say, ‘Give me a break. I don’t even know what rotational bodies are.'” However, there is also virtue in ignorance, as explained by another physics editor:
It is preferred if you don’t have a background in the discipline. That way you bring no prejudices. Otherwise, you would start arguing with authors and reviewers. That is not an editor’s role. You are publishing physics, not doing physics. The editor is a market person. You have to master the lingo and the jargon, but this is not a science game; it’s a marketing game.
Editors do claim that confidence and skill increase with experience, but they define the benefits of experience in uncodifiable terms such as “acquiring a feel for the market,” or “developing a sense of smell.” Although newcomers to the business tend to rely on market research as a guide to decision making, “after a while you gain confidence in your own judgment and intuition. Then you can let go of the surveys and questionnaires, which aren’t that helpful anyway.” The benefits of experience, then, are difficult to articulate and therefore difficult to codify into rules and procedures. Despite their attempts to reduce ambiguity and risk, editors describe themselves as inhabiting a world in which “there are no formulae for finding talent. . . . All we have to work with are probabilities and intuitions.”
Because no one knows the rules for making a particular book successful, editors insist on evaluation based on the overall quality of their lists, rather than on the fate of any single manuscript. Unlike practitioners in well-established professions such as medicine and law, editors cannot deal with ambiguity by resorting to procedural criteria for evaluation (Meyer and Rowan, 1977), which only exist when a codified body of knowledge has been successfully institutionalized.
Fluid participation. The ambiguous and equivocal world of garbage can decisions is made even more volatile by the sometimes unpredictable exits and entrances of participants. Fluid participation is a quintessential feature of the publishing business. Editors are promoted, but it is also very common for editors to be fired. Editors, like managers, are often victims of scapegoating (Pfeffer, 1981), and it is part of the occupational culture of editors that being fired (even more than once) is no indication of incompetence.
Another problematic factor is that textbooks take three to five years to produce; projects are therefore often inherited from former incumbents of the position. This further complicates the inherently tricky attribution of success and failure, because it adds the confounding element of determining whether responsibility (for success or failure) lies with the initiating editor or with the inheritor of the project (Coser, 1979). A particularly insightful editor noted the distorting connections among fluid participation, the time frame to produce a text, and editorial reputation:
Sometimes acquisition editors sign as many manuscripts as possible in order to look good. They are on their way up and out. Acquisition editors move on to marketing and sales management. It is only if you stay long enough that you have to live with failures. Since textbooks take three to five years to produce, in three years other people’s mistakes have come home to roost, not yours. So it’s smart only to stay for three years.
Given the ambiguity of the technology, which complicates attributions of success and failure, and fluid participation, how do certain editors acquire a better reputation than others? A concrete, consensually agreed-upon measure of editorial reputation is the number of successful authors (known as “repeats”) who follow an editor from publisher to publisher: Editors are often recruited on the basis of the contacts they are likely to bring with them. However, this exacerbates the problems caused by fluid participation.
A key factor in developing a reputation in college-textbook publishing (and other garbage can systems) is to get lucky early in the game. If the random error in this probabilistic universe works in an editor’s favor in the early stages of a career, that reputational effect is likely to adhere and be self-fulfilling, resulting in the Matthew Effect (Merton, 1968). Once an editor is perceived to be “good,” the editor-in-chief will give him or her more autonomy, and the editor will have more clout with the promotion and publicity department. Other factors such as the prestige of the press (cf. Lamont, 1987) and intereditor networks (cf. Granovetter, 1973) seem to be less important. Editors may not be unrealistic, therefore, when they attribute a great deal of their fortunes to “the luck of the game.”
Just as the garbage can decision processes of text publishing described above lead to disorderly results for the participants, it seems likely that chaotic decision processes should lead to heterogeneous or random technical-core outputs. However, the belief that introductory college textbooks in the various disciplines are highly homogeneous seems to be widely shared. At an informal level, editors, academics, and students all believe that introductory textbooks are very similar to one another. Researchers working within the production-of-culture perspective (e.g., DiMaggio, 1977; Coser, Kadushin, and Powell, 1982; Powell, 1985) also have assumed, based primarily on impressionistic data, that the contents of textbooks are fundamentally homogeneous. The purposes of the study reported here were to test empirically for this homogeneity, examine the source of it, and explain variations across disciplines.
Noninstitutional sources of homogeneity. Because of the temporally determined, random outcomes that tend to emerge, order is rarely derived from within technical cores based on garbage can decision processes. Therefore, participants must look to the environment to attempt to order the core processes. In textbook publishing, for several reasons, noninstitutionalized aspects of the environment, including training and learning and market surveillance, cannot explain either within-discipline homogeneity or systematic variation between disciplines in levels of homogeneity of texts.
Training and learning. Like most technologies involving tacit knowledge (Stinchcombe, 1959), college-textbook publishing relies on apprenticeships rather than formal professional training to socialize new members. Textbook editors do not have a systematic and institutionalized body of knowledge, nor do they work in the discipline in which they were trained. Different editors may develop similar procedures through learning, but learning does not tend to be very successful. When editors attempt to learn by experience, they confront the problems of the intractability of past experience, the uniqueness of the product, and the potential for superstitious learning (Levitt and March, 1988).
Intereditor networks are weaker in college-textbook publishing than in other segments of the publishing industry, reducing the possibility of pooling experience (Levitt and March, 1988; cf. Granovetter, 1985) and therefore reducing the likelihood that outputs will be homogeneous. The relative secrecy that characterizes intereditor networks is related to the fact that editors in college-text publishing compete for identical market niches, such as the lucrative introductory psychology market. Therefore, training and learning have not been successfully institutionalized, and even noninstitutional forms of training, such as apprenticeship, can have only a very minimal effect on homogenization within garbage can systems because no standard, codifiable rules or procedures are transmitted. Further, training and learning would not explain the variation in the degree of homogeneity that may exist between disciplines.
Market surveillance. Textbook publishers have evolved certain market-surveillance mechanisms to ascertain the demands of university professors and university departments, such as feedback from market research and outside reviewers. Although market research has limitations, it clearly has some small role in homogenizing the outputs of the textbook-publishing industry (Powell, 1985). One key variant of market research is the use of outside reviewers. These reviewers sometimes reduce uncertainty, but they often disagree with one another:
It’s easy to weed out the books you know you don’t want to do, as well as the others you know you are definitely going to publish, no matter what. It’s the ones in the middle that are hard. For example, the manuscript sitting on my desk right now. I wasn’t sure what to do with it, so I got five reviewers. Two said great, two said terrible, one said don’t know. So I’m back where I started.
Editors also employ such surveillance strategies as developing strong networks with academics (Granovetter, 1985), as evidenced by publishing houses’ support for generous expense accounts. However, academic networks frequently supply incorrect information, as faculty members express a desire for innovation and then make conservative textbook adoption decisions:
When you ask faculty questions, the result is not truth with a capital T. Faculty have a double standard. Faculty may tell you what they think you want to hear, or what they believe they should say. Instructors say they would never use a test bank. But, if you don’t have an instructor’s manual, you are dead in the water.
The lack of valid information and reliability among the “totality of actors” (DiMaggio and Powell, 1983) in the environment, compounded by weak intereditor networks, hinders the institutionalization of editorial norms and limits the effectiveness of market research for generating homogeneity. To the extent that market surveillance can provide homogenizing influences, these influences, like training and learning, would not be expected to lead to systematic differences between academic disciplines.
Institutional sources of homogeneity. A given college-textbook publishing house is part of a highly structured organizational field (DiMaggio and Powell, 1983: 148) consisting of other publishing houses and one of the most institutionalized sectors of the environment: the educational system (Meyer and Rowan, 1977; Meyer and Scott, 1983). We propose that different aspects of the institutional environment exert mimetic, coercive, and normative influences (DiMaggio and Powell, 1983), homogenizing garbage can structures and outputs.
Mimetic isomorphism–imitation. Mimetic isomorphism (DiMaggio and Powell, 1983) describes the tendency of organizations faced with environmental uncertainty to imitate other organizations perceived to be successful. In college-textbook publishing, mimetic isomorphism is realized through the imitation of textbooks produced by other publishers.
Editors often note that in the face of the uncertainty generated by garbage can processes, they learn by imitation. Imitation is clearly a widespread phenomenon in the textbook-publishing industry (witness the impressionist paintings on the covers of many introductory sociology texts), especially given the need for minimizing risk in such a high-risk, chaotic industry (Levitt and March, 1988). The “copycat” nature of the business is an important factor in the homogeneity of textbooks (Powell, 1985). Copying the texts of other publishers clarifies the access structure (the mapping of problems onto choices) of college-textbook publishers by constraining some of the choices attached to the problems associated with producing a new text. That is, editors need not start from scratch in evaluating a text but can ensure that it reflects the basic structure and organization of material in other texts.
Imitation is a limited source of order, however. Editors are faced with incomplete information and problematic interpretations in knowing what and how to imitate. Because of the difficulties in identifying success and failure (sales figures of textbooks are not publicly available), it is not always clear whom they should be copying. Editors report that they hedge their bets by looking at a number of successful books rather than a single book to imitate, but there are no standard procedures that dictate which aspect of a book or books to copy.
Imitation also is of minimal strategic utility for the crucial step of differentiating a product. Editors consistently report that the “safest bet is to copy other successful books, but to do one thing different; otherwise, all you’ve got is a me-too book.” Editors must balance the simplicity of imitation against the need for product differentiation, without clear rules for achieving this balance. Therefore, imitation is an important but limited source of homogeneity, and a simple imitation model cannot explain disciplinary differences in the homogeneity of textbooks.
Coercive isomorphism–structure of college-text publishing houses. Coercive isomorphism in the form of pressure from state or state-like bodies is clearly evident in elementary and high school publishing where state adoptions of texts are crucial for success. The state does not exert the same kind of pressure on college-text publishers. However, “[d]irect imposition of standard operating procedures and legitimated rules and structures also occurs outside the governmental arena” (DiMaggio and Powell, 1983: 151), specifically in higher education.
College-text publishing is inextricably interstructured with higher education, and this interdependence leads to higher education exerting coercive pressures on college-text publishers to achieve “structural equivalence,” that is, a match between the hierarchical or functional divisions of the organizations. For example, college-text publishers create positions such as physics and sociology editors to match the structural subdivision of higher education into academic disciplines. Thus, coercive institutional forces lead to the structure of academe being reproduced in the structure of college-text publishers (Meyer and Scott, 1983), thereby shaping the organizational field such that publishing houses come to resemble each other structurally.
This structuring of textbook publishers by academe provides guidelines for mapping choices onto decision makers; that is, it orders the decision structure of the garbage can system’s technical core. For example, it ensures that physics manuscripts will be sent to the physics editor. Despite this homogenization of the structure of publishing houses, there are limits to the coercive homogenization of outputs. Fluid participation, manifested in the notoriously high turnover of editorial personnel, combined with the three to five years required to produce an introductory text, leads to problems in associating choices, decision makers, and outcomes. Further, the structural isomorphism between publishing houses cannot account for the variation between disciplines in the homogeneity of textbooks.
Normative isomorphism–paradigms. Normative institutional pressures are usually derived primarily from professionalization (DiMaggio and Powell, 1983), which is not a major source of institutional isomorphism within the college-textbook publishing industry. To the extent that normative institutional pressures are exerted on the college-textbook publishing industry, they are derived from the professionalization of academics and the normative structure of academic disciplines. College-text editors can import the institutionalized norms of academe by relying on academics as authors and outside reviewers, for example, although this strategy is not foolproof.
Academics are socialized and credentialed via institutionalized programs organized around a legitimated knowledge base called a disciplinary paradigm. The organization of knowledge into paradigms is a socially constructed phenomenon (Kuhn, 1970; Lodahl and Gordon, 1972; Pfeffer, Salancik, and Moore, 1987; Hargens, 1988) that reflects the degree of consensus in a discipline regarding (1) what constitutes the core of the discipline and (2) the important problems to be studied (Kuhn, 1970; Cole, Simon, and Cole, 1988: 152). Paradigms, created and reinforced by academics in each discipline, can organize and constrain the range of possibilities, or menu of options, available to editors confronted with taking the crucial step of differentiating their products from others’. The number of options for differentiation is greater in preparadigmatic disciplines than in disciplines with fully developed paradigms. Thus, disciplinary paradigms can differentially order and organize the access structure of the garbage can system of editorial decision making.
Disciplinary paradigms can constrain the contents of material in introductory textbooks in a number of ways. Introductory texts contain the exemplary theories in substantive areas consensually believed by a community of scientists to reflect the core of the discipline. Fields with well-developed or mature paradigms, like physics, have high levels of consensus on what constitutes the core of the discipline and the appropriate issues and methods to be used in extending the knowledge base. For example, Lodahl and Gordon (1972) found that fields that were more paradigmatically developed showed greater agreement about the requirements and content of graduate programs. Thus, the extent of paradigmatic development or consensual agreement in a field will determine how clearly defined the core of the discipline is and therefore how much variation in the contents of introductory texts is acceptable.
Mature paradigms also reflect consensus on the ordered relationship between areas of knowledge in a field. Mature paradigms are based on cumulative knowledge. Pfeffer, Salancik, and Moore (1987) used the length of course sequences involving prerequisites as an indicator of paradigm development. Their rationale for this measure was that faculty members could not structure course sequences unless (1) knowledge was assumed to be cumulative, and (2) there was agreement concerning the relationship of one area of knowledge to another.
A similar logic applies to the sequencing of topics in introductory texts. Disciplines with mature paradigms, like physics, should constrain the order of presentation of material in introductory texts to a greater extent than those with immature paradigms, like sociology, because cumulative knowledge requires the mastery of material in sequential order. Thus, the organization of physics and sociology into disciplinary paradigms should be a key factor in the overall homogeneity of physics and sociology texts, as well as in the different levels of homogeneity, despite the garbage can decision processes of the text-publishing industry.
If paradigms are partly responsible for determining the contents and ordering of textbooks, then there should be disciplinary variations in the degree of homogeneity of contents and organization of material, depending on the status of the disciplinary paradigm. Disciplines with well-defined, mature paradigms, such as physics, will show greater homogeneity with respect to sequencing and contents of material than will disciplines with less developed, immature paradigms, such as sociology (Hargens, 1988). As discussed above, other sources of homogeneity, such as training, market surveillance, imitation, and the structure of publishing houses, cannot explain such disciplinary variation.
To test statistically our expectations of general homogeneity in textbooks, and of differing levels of homogeneity between disciplines, we developed techniques for measuring the degree of homogeneity in introductory textbooks and applied them to textbooks from physics and sociology.
Two basic ways in which a set of textbooks exhibits heterogeneity are in the order in which topics are presented and in the topics that are included. If we conceive of each textbook as a set of topics in a fixed order, then homogeneity can be tested by comparing sequences in different textbooks within the discipline.
One procedure that has been used in determining the match between sequences is optimal matching (Sankoff and Kruskal, 1983; Abbott and Forrest, 1986). In general, optimal matching provides a quantitative measure of the minimum extent of modifications in one sequence required for it to match the other. In optimal matching, the three basic types of modifications are (1) the substitution of one element for another at the same position in the sequence, (2) the deletion of an element from a sequence, and (3) the insertion of an element into a sequence. Optimal matching cannot be used to measure some other common operations, such as transposition (switching two elements in a sequence) and compression (combining two elements into one element). Each modification is assigned a “cost,” a real number greater than zero that reflects an a priori decision as to the severity of the modification; the less severe, the smaller the cost.
Substitution costs are defined for each pair of potential elements, with greater substitution costs assigned to pairs that are judged to be more dissimilar. The substitution cost of an element for itself is 0, and it is conventional to standardize substitution costs to a maximum of 1 (Bradley and Bradley, 1983). Insertion and deletion costs are set equal to each other and may be any value greater than half the minimum non-zero substitution cost (because an insertion plus a deletion is equivalent to a substitution). As one sequence is transformed until it matches the other, the costs of the modifications are summed.
Because multiple sets of transformations can make the sequences match, there are multiple possible total costs. In computing the optimal matching result, one uses algorithms to compute the least costly way of matching the sequences (Sankoff and Kruskal, 1983). The result of optimal matching is frequently called the “Levenshtein distance” (Levenshtein, 1966) because it has the properties of a distance function; that is, if we have three sequences, A, B, and C, and denote the result of optimal matching by d(X, Y), then d(X, Y) has the following properties:

d(A, B) ≥ 0
d(A, A) = 0
d(A, B) = d(B, A)
d(A, B) + d(B, C) ≥ d(A, C)
Although it has become conventional to divide the total cost by the length of the longer string, for the present analysis the total cost is standardized to a maximum of 1 by dividing it by the maximum total cost that could be obtained from an optimal matching of two sequences of the same lengths as the compared sequences. If the lengths of the strings are l1 and l2, the insertion/deletion cost is i, and the maximum substitution cost is 1, the maximum possible cost is

Cost-max = min(l1, l2) × min(2i, 1) + |l1 - l2| × i.
After standardization, a distance of 0 means that the two sequences are identical; a distance of 1 means that the two sequences are maximally different. The present approach has the advantage of not giving undue influence to the differences in length between the two strings.
An example should be helpful. Assume that we have sequences whose elements are the letters of the alphabet. Let the substitution cost for a given pair of letters equal the difference in their alphabet positions divided by 25. Thus, the substitution cost between “b” and “g” is (7 - 2)/25 = .2; the maximum substitution cost, between “a” and “z,” is (26 - 1)/25 = 1. Let the insertion and deletion cost be .25. To convert the sequence “HOUSE” into “MICE” in the least costly fashion, one would do the following:

Substitute “M” for “H”: (13 - 8)/25 = .20
Substitute “I” for “O”: (15 - 9)/25 = .24
Delete “U”: .25
Delete “S” and insert “C”: .25 + .25 = .50 (this costs less than substituting “C” for “S,” which would cost (19 - 3)/25 = .64)
Leave “E” alone: 0
Total: 1.19
The maximum cost is 4*.5 + 1*.25 = 2.25. Therefore, the distance between “HOUSE” and “MICE” is 1.19/2.25 = .529.
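The worked example can be reproduced with a minimal sketch in Python, assuming the standard dynamic-programming algorithm for weighted Levenshtein distance; the function names (optimal_match, max_cost) are ours, and the costs follow the example above.

```python
def optimal_match(s, t, sub_cost, indel):
    """Minimum total cost of the substitutions, insertions, and
    deletions that convert sequence s into sequence t."""
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel                  # delete the first i elements of s
    for j in range(1, n + 1):
        d[0][j] = j * indel                  # insert the first j elements of t
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + indel,                             # deletion
                d[i][j - 1] + indel,                             # insertion
                d[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]),  # substitution
            )
    return d[m][n]


def max_cost(l1, l2, indel, max_sub=1.0):
    """Largest possible total cost for sequences of lengths l1 and l2."""
    return min(l1, l2) * min(2 * indel, max_sub) + abs(l1 - l2) * indel


# Alphabet-position substitution cost from the example above.
sub = lambda a, b: abs(ord(a) - ord(b)) / 25

raw = optimal_match("HOUSE", "MICE", sub, indel=0.25)    # 1.19
dist = raw / max_cost(5, 4, indel=0.25)                  # 1.19 / 2.25 ≈ .529
```

Running this reproduces the example: a raw cost of 1.19 and a standardized distance of .529.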
To test for homogeneity of contents independent of ordering, while still taking substitution and insertion/deletion costs into account, one can calculate the optimal matching distance between two sequences for all possible orderings of one of them and then choose the minimum. There are numerous methods for rapid approximation of this “content distance” (Nass, 1989), and exact solutions can be found rapidly for certain sets of substitution and deletion costs.
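For short sequences the content distance can be computed exactly by brute force over orderings. The following sketch is ours (the rapid approximation methods cited above are not reproduced here), with a compact weighted edit distance included so the example is self-contained.

```python
from itertools import permutations


def levenshtein(s, t, sub_cost, indel):
    """Weighted edit distance between sequences s and t (row-by-row DP)."""
    prev = [j * indel for j in range(len(t) + 1)]
    for i, a in enumerate(s, 1):
        cur = [i * indel]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + indel,                  # delete a
                           cur[j - 1] + indel,               # insert b
                           prev[j - 1] + sub_cost(a, b)))    # substitute
        prev = cur
    return prev[-1]


def content_distance(s, t, sub_cost, indel):
    """Minimum edit distance over all reorderings of s: sensitive to
    which elements appear but not to their order.  Exhaustive search,
    so feasible only for short sequences."""
    return min(levenshtein(p, t, sub_cost, indel) for p in permutations(s))


# With identical contents in a different order, the content distance is 0
# even though the sequence distance is not.
unit = lambda a, b: 0.0 if a == b else 1.0
d_seq = levenshtein("abc", "cab", unit, indel=1.0)       # 2.0
d_con = content_distance("abc", "cab", unit, indel=1.0)  # 0.0
```

This also illustrates why content distances can never exceed sequence distances: the identity ordering is always among the candidates over which the minimum is taken.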
The Universe of Textbooks
In both physics and sociology, there are a large number of introductory texts available. However, according to the editors interviewed for this study, the top ten best-selling texts account for over 90 percent of all introductory textbook sales. Therefore, we chose to focus our study on the ten best-selling introductory texts in those two disciplines.
Because textbook sales figures are not publicly available, we used a snowball search process to determine the ten best-selling texts. We asked the editor of one of the popular introductory textbooks in each discipline to give his or her opinion of the ten best-selling introductory textbooks in the discipline. We then contacted the editors of the named texts and asked their opinion. We repeated this until we had spoken with the editor of every book that was named by at least one editor (over ten editors for both disciplines). We then generated a list of the ten best-selling introductory texts in each discipline, as perceived by editors, by averaging the rank-orderings within each discipline. Any text already on our list that was not named by a given editor received a rank of one plus the lowest ranking the editor provided.
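The rank-aggregation step can be sketched as follows; the representation (each editor's ranking as a list ordered best first) and all names are ours.

```python
def aggregate_ranks(editor_rankings, candidates):
    """Order candidates by mean rank across editors.  A text an editor
    did not name receives one plus the lowest (numerically largest)
    rank that editor provided."""
    scores = {c: [] for c in candidates}
    for ranking in editor_rankings:          # e.g., ["A", "B"]: A ranked 1st
        worst = len(ranking)                 # lowest rank this editor gave
        for c in candidates:
            scores[c].append(ranking.index(c) + 1 if c in ranking
                             else worst + 1)
    mean = {c: sum(v) / len(v) for c, v in scores.items()}
    return sorted(candidates, key=lambda c: mean[c])


# Toy illustration with two editors and three named texts.
rankings = [["A", "B", "C"], ["B", "A"]]
order = aggregate_ranks(rankings, ["A", "B", "C"])
```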
So that we could use optimal matching for an analysis of the contents and sequencing of the material in these textbooks, two sociologists developed a list of the basic topics (as well as subtopics) that would be found in sociology texts, and two physicists developed the topics and subtopics for the physics texts. Substitution costs between basic topics (e.g., stratification versus ideology) were defined as 1.0. Substitution costs between subtopics of a given topic (e.g., race versus gender as subtopics of stratification) and between a subtopic and the topic itself (e.g., race versus stratification) were defined as 0.5 (optimal matching is relatively robust to small changes in substitution costs). Although insertion/deletion costs were defined as 1.0 for the following analyses, we also tried setting insertion/deletion costs equal to .3, .7, and 1.5 and found no substantive differences in the results.
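The substitution-cost scheme can be expressed as a small function. The (topic, subtopic) pair encoding is a hypothetical representation of ours; the cost values (0, 0.5, 1.0) are those described above.

```python
def topic_sub_cost(a, b):
    """Substitution cost between two coded chapters, each a
    (topic, subtopic) pair, with subtopic None for a basic topic:
    0.0 if identical, 0.5 if they share the basic topic, 1.0 otherwise."""
    if a == b:
        return 0.0
    if a[0] == b[0]:      # e.g., race vs. gender, or race vs. stratification
        return 0.5
    return 1.0            # e.g., stratification vs. ideology


c1 = topic_sub_cost(("stratification", "race"), ("stratification", "gender"))
c2 = topic_sub_cost(("stratification", "race"), ("stratification", None))
c3 = topic_sub_cost(("stratification", None), ("ideology", None))
```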
For each textbook in each discipline, two trained coders classified each chapter title into one of the previously specified topics or subtopics. Intercoder reliability was greater than .85 for both disciplines, and disagreements were resolved by discussion. If two contiguous chapters covered the same topic, they were collapsed for the purposes of the following analyses. Analyses that did not involve collapsing the texts resulted in substantively similar results to those reported below.
Homogeneity of Textbooks
If textbooks in a given discipline are homogeneous, then the sequence and content distances between the actual texts should be significantly smaller than the distances between texts composed of random chapters drawn from the pool of possible topics. No standard statistic establishes significance levels for differences between sequence distances; therefore, we used a stochastic estimate for each discipline. Comparing each of the ten texts in each discipline with every other text (45 unordered pairs), we first computed the 45 optimal matching distances between the ten actual texts. For each discipline, we then created 99 different sets of ten random texts, matched with the ten original texts on number of chapters. The chapters for the random texts were drawn randomly with replacement from the pool of possible topics for the discipline. For each of the 99 random sets in each discipline, we computed the 45 optimal matching distances; we then averaged across the 99 sets. We repeated this procedure for the content distances between texts. If textbooks are homogeneous, then the sequence distances and the content distances for the actual texts should be significantly smaller than the averaged distances between the random texts.
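The randomization baseline can be sketched as follows; the toy distance function stands in for optimal matching, and all names, the toy data, and the toy distance are ours.

```python
import random
from itertools import combinations


def mean_pairwise(texts, dist):
    """Mean distance over all unordered pairs of texts (45 pairs for ten)."""
    pairs = list(combinations(texts, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)


def random_baseline(texts, topic_pool, dist, n_sets=99, seed=0):
    """Average, over n_sets sets of random 'texts' matched to the real
    texts on number of chapters and drawn with replacement from the
    discipline's topic pool, of the mean pairwise distance."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_sets):
        fake = [[rng.choice(topic_pool) for _ in t] for t in texts]
        means.append(mean_pairwise(fake, dist))
    return sum(means) / len(means)


# Toy illustration: ten identical "texts" versus the random baseline.
texts = [["a", "b", "c"] for _ in range(10)]
toy_dist = lambda x, y: sum(p != q for p, q in zip(x, y)) / max(len(x), len(y))
actual = mean_pairwise(texts, toy_dist)                      # 0.0
baseline = random_baseline(texts, list("abcdef"), toy_dist)  # well above 0
```

Homogeneity in this toy case shows up exactly as in the test: the actual mean distance falls below the randomized baseline.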
The sequence comparisons for physics and sociology texts showed the mean of the 45 sequence differences computed from the actual physics texts to be .49 (S.D. = .19). This means that on average, about half of the sequence of topics in a physics text would have to be changed to make it conform in sequence to another text, suggesting a moderate degree of homogeneity among texts. The mean of the averages for the random sets of texts was .87 (S.D. = .02). The sequences of the actual physics texts were significantly more homogeneous than the random sets of texts by a paired-comparison t-test (t = 14.1; d.f. = 44; p < .001).
For the sociology texts, the mean of the sequence differences for the actual texts was .57 (S.D. = .12) versus .91 (S.D. = .04) for the average of the matched random texts, also a highly significant difference (t = 15.1; d.f. = 44; p < .001).
The contents were also compared. Because content distances are based on the minimal distance over orderings of the sequences, the content distances are always less than or equal to the sequence distances. The mean of the differences between the contents of the actual physics texts was .23 (S.D. = .13); on average, a little less than one-quarter of the contents of one text would have to be changed for it to match another text. The mean for the contents of the random physics texts was .51 (S.D. = .06); the difference was highly significant (t = 19.5; d.f. = 44; p < .001). The mean of the differences between the contents of the actual sociology texts was .32 (S.D. = .13); the mean for the contents of the random sociology texts was .63 (S.D. = .02), also a highly significant difference (t = 15.8; d.f. = 44; p < .001).
These results confirm statistically what was suggested impressionistically: Textbooks in each discipline tend to be significantly homogeneous with respect to the ordering of contents and topics. This runs counter to the random or heterogeneous outcomes expected from the disorderly garbage can decision processes of the textbook-publishing industry.
To test the effect of disciplinary paradigms on the homogeneity of college textbooks in different disciplines, we used our variant of optimal matching to compare physics texts to sociology texts. Physics is assumed to represent a mature paradigm, while sociology is assumed to be a preparadigmatic discipline (Hargens, 1988). Table 1 exhibits the comparisons with respect to sequence and contents.
The sequence distance means were significantly different based on an unpaired comparison t-test, with physics, the more mature discipline, exhibiting higher levels of homogeneity, as expected. Physics was also more homogeneous than sociology with respect to contents, demonstrating the greater constraints applied by mature paradigms. Only normative isomorphism explains these between-discipline differences. Therefore, paradigms help to shape the contents and sequences of texts, with well-defined paradigms demanding greater conformity and providing greater direction on what the “essential” contents and sequences of chapters are.
There is also qualitative evidence, based on structured, open-ended interviews with the editors of the texts we analyzed, that paradigms constrain editors, particularly the strategies they use for producing, differentiating, and revising introductory texts. The comments of the editors thus support and elaborate the findings of the content analysis. For example, physics editors show an awareness of the ways in which the well-defined core of physics as a discipline constrains, and in a sense simplifies, their task, by statements such as “In sociology, there’s no core. The hard sciences, like physics, give editors an advantage. The topics are standard and sequentially ordered.” Sociology, on the other hand, is described as being “new and confused” or “In sociology, there is greater tolerance for diversity. The research base is enormous, but there is only a small core of classic material.”
The effects of paradigms also appear in the strategies editors use for differentiating products. One would expect that the more mature the discipline, the more difficult it is to differentiate an introductory text substantively. While the maturity of physics as a discipline reduces uncertainty and the potential for variation with respect to contents, it simultaneously makes it “hard to find a new wrinkle.” Physics editors reported that “There isn’t much you can do. You just have to present your text better. There aren’t five different viewpoints in the hard sciences. This makes it harder to differentiate your product.” Physics editors reported that the most common strategy for differentiating or revising an introductory physics text is to update or improve the accuracy of the problems. However, even in a mature discipline such as physics, the average difference in contents between texts is 23 percent.
A sociology editor described the process of product differentiation in sociology as follows:
Books may vary by chapters [such as] sports, health care, education. In some books, education and religion are together. Some emphasize collective behavior more. Some orders of presentation are more appealing than others. So essentially, books are differentiated in terms of what topics are included and in what order they are presented.
The above results show that coercive, mimetic, and normative institutional pressures tend to homogenize college textbooks. Normative isomorphism, in the form of paradigms, also explains disciplinary variation in the degree of homogeneity of textbooks and shapes the strategies that editors use for differentiating products.
DISCUSSION AND CONCLUSIONS
Our research suggests three key extensions of institutional theory. First, in contrast to Meyer and Rowan (1977) and Thompson (1967), it is frequently efficacious for organizations with garbage can technical cores to open their cores to the institutionalized environment rather than buffering their cores from institutional pressures. The implication of this view is that the logic of Thompson’s model (1967) must be reversed in arenas where the connections between means and ends are unclear. Thompson’s model posits a universe in which managers seal off the technical core by buffering it from the uncertainties and perturbations of the environment. We are suggesting that in the case of the textbook-publishing industry, boundary-spanning roles, such as that of editor, import orderliness from the institutional environment to impose order on processes within the technical core.
Second, we extend DiMaggio and Powell’s (1983) notion of normative isomorphism to professionals outside the industry under consideration. It is the professionalization of academics, rather than editors, that generates homogeneity.
Third, by comparing the disciplines of sociology and physics, we have provided empirical support for a within-industry relationship between the level of institutionalization of the environment and the homogeneity of outputs. If outcomes of garbage can systems are mediated and homogenized by institutional factors, this relationship has far-reaching consequences for organizational and professional activities that involve inherently ambiguous technologies and customized or unique products. These activities include culture production, mental health services, education, public policy, technological innovation, research, and recruitment. For example, controlling for success rate, medical specialists with clear paradigms will exhibit greater homogeneity of treatment than will specialists with less clear paradigms (cf. Nass, 1986). In the case of industry systems in which individual talent is difficult to assess (e.g., Wall Street investment firms), recruitment is simplified by relying on a limited number of external institutional endorsements, such as an M.B.A. from a top business school. This leads to homogeneity of personnel based on training and professional orientation (March and March, 1977; DiMaggio and Powell, 1983).
Our research also has implications for management practices, especially the management of timing. Hirsch (1972) has noted that high levels of ambiguity combined with a unique product make close supervision extremely difficult. Our research suggests that under these conditions, open-ended environmental scanning that anticipates and facilitates serendipity rather than problemistic search (March and Olsen, 1975) should be encouraged.
The technique used to determine the degree of homogeneity of introductory texts–optimal matching–enabled us to demonstrate statistically the absolute levels of homogeneity as well as to capture nonobvious disciplinary variations. Although the comparison of best-selling texts with randomly composed texts is a liberal test of homogeneity, more stringent tests can be developed. Optimal matching should be a valuable tool in the quantification of cultural products that systematically differ by sequence and content, such as chord structures in music (Cerulo, 1988).
Taken together, the quantitative and qualitative evidence shows how coercive, mimetic, and normative sources of institutional isomorphism have effects on a garbage can system, with differing consequences for the decision structure, the access structure, and the homogeneity of the product. The research reported here suggests that garbage can processes can lead to a subjective sense of confusion and chaotic results for participants while permitting the institutional environment to influence the homogenization of outputs.
COPYRIGHT 1989 Cornell University, Johnson Graduate School