Small world and beyond: A rejoinder
In David Lodge’s (1984) Small World, the Persse character asked, “What do you do if everybody agrees with you?” Arthur Kingfisher’s reaction was, “That is a very good question. A very in-ter-est-ing question. I do not remember that question being asked before. You imply, of course, that what matters in the field of critical practice is not truth but difference. If everybody were convinced by your arguments, they would have to do the same as you and then there would be no satisfaction in doing it. To win is to lose the game. Am I right?” “It sounds plausible,” said Persse. “I don’t have an answer myself, just the question.”
I am thankful to the authors of the two commentaries addressing my recent paper (Erkut, 2002). Their comments indicate that, while I may have to ponder a number of different and difficult questions, one question I do not have to worry about is Persse’s question above. Both commentaries are thoughtful and contain much food for thought. The two commentaries have both similarities and differences. As an example of a similarity, both indicate that the omission of scholarly books from an assessment of research output and impact is a serious one. As an example of a difference, while Baba (2002) suggests that my study is “an honest attempt to look at scholarship more comprehensively than most studies do,” Dery and Toulouse (2002) argue that I provide a “small world depiction” that measures “cognitive output rooted in the classical model of science and a nomothetic epistemology, part of a sociocognitive framework that is subject-based, decontextualized, and published in only one language,” which “does not correspond at all to the reality of Canadian research.” In the following sections I respond to the authors’ comments and expand on some of the points they raise.
In his commentary, Baba asks questions about strategic directions for Canadian business schools, and challenges us to “take the next step, to go beyond measurement, and generate knowledge that is empirically sound, theoretically valuable, and practically useful.” I agree with many of the points in his commentary, and most of my comments below are not so much a response to Baba as they are an attempt to further engage in this discussion.
Early in his commentary, Baba makes the point that “in matters of scholarship, output is necessary but not sufficient to pronounce judgement as to its quality.” This view supports my belief that, when it comes to evaluating research, citation counts are a superior measure to publication counts for several reasons. Studies that use publication counts go to great lengths to differentiate between publications in different journals, either by using a limited set of top-tier journals, or by calculating standardized page counts and then multiplying them by a factor that represents the quality or the prestige of the journal. Not only are such corrections open to debate, but paper-count studies also implicitly assume that journals do not make errors in judging the publishability of papers. Yet anyone who has ever dealt with the refereeing process is aware that it is prone to Type I and Type II errors. Despite the efforts of editors to minimize these errors, it is impossible to eliminate them completely. Citation counts take care of the journal prestige issue, as well as the Type I and Type II errors. Unfortunately, citation counts are not a perfect measure of research impact, for the reasons outlined in my study. Nevertheless, I believe they are superior to a paper-count metric. Given how much easier it is to use citation counts today than it was in the recent past, one of the goals of my study has been to encourage more extensive use of this metric by both administrators, in assessing research output, and the popular media, in ranking schools. (It is interesting to note that Kirkpatrick and Locke (1992) “had to look up all articles and all citation data by hand, which along with list compilation and data analysis took a full-time doctoral research assistant plus two summer clerks more than 13 months to accomplish.” While the study reported in Erkut (2002) was very labour-intensive, it did not consume anywhere near that much effort.)
In his commentary, Baba underlines the need for “future research to speculate and debate possible strategic directions at the macro, meso and micro levels.” While I hesitate to make any suggestions about what strategies to adopt at the government level, I have some opinions about specialization at the level of institutions and individuals. It seems to me that expecting the same research output from every business school in the country, from every department at a school, and from every individual in a department is an excellent recipe for suboptimal productivity. Research indicators (publication and citation records, as well as research grants) suggest that a small number of Canadian universities produce most of the research. Hence, it seems that there is already some de facto specialization at the university level. It should be noted, however, that Canadian business schools do not have much freedom in this respect since their research agendas cannot stray too far away from the mission of their host universities.
Regarding research specialization at the school level, Trieschmann, Dennis, Northcraft, and Niemi (2002) point out that most of the top U.S. business schools (including the richest ones) focus their research strength in a few disciplines, rather than spreading efforts evenly across disciplines. The Canadian data show both extremes, with UBC at one end with its relatively even strength in all areas, and Laval at the other with a strong focus in one area. While all schools may need teaching strength in all areas covered by their programs, research specialization may make sense, particularly for smaller schools, given the resource-heavy nature of the long-term research programs needed to produce useful research. Instead of allocating a fixed number of positions to each area and maintaining the same faculty counts over time, perhaps faculties should identify areas of research success and/or potential and shift resources to achieve the critical mass needed for a strong research program. As Ivey’s #1 world ranking in international strategic management research output (Lu, 2003) demonstrates, it is possible for a Canadian business school to be a world leader in a specialization through strategic resource allocation.
The question of specialization at the school level can also be applied in the area of programs. Having every business school offer exactly the same programs is another example of a sub-optimal allocation of resources. Canadian business schools have proven that they can offer innovative niche programs that attract students from home and abroad, such as Queen’s techno-MBA, McGill’s IMPM, UBC’s M.Sc. in Management Science, and Alberta’s Natural Resources and Energy MBA. Hence, it is fair to say that by applying their relatively limited resources in strategic ways, some Canadian business schools have come to be considered world class in the programs category.
The question of specialization can also be applied at the individual scholar level. While there is debate about whether teaching and research are complementary or competing activities, Locke et al. (1994) find that research and teaching outcomes are independent. Locke and Kirkpatrick (1994) suggest that it is possible to emphasize both outcomes by giving the best teachers heavier teaching loads and the best researchers more time to do research. While I do not advocate a two-tier system consisting of pure teachers and pure researchers, I believe that some amount of specialization may increase the quality of the output in both areas. In fact, some schools have formalized such arrangements. For example, my faculty has an “alternate career track,” with increased teaching loads and reduced research expectations, and tenured faculty members can opt into this track after a certain point in their careers. Similarly, research-heavy alternate tracks can be arranged informally through reduced course loads for those individuals with active and productive research programs.
Baba’s commentary raises the question of “whether there should be more research” in Canadian business schools, or whether we should “move toward more professional development work.” Trieschmann et al. (2002) point out that the focus of most business schools has shifted at least twice in the 20th century. While the Carnegie report (Pierson, 1959) resulted in a shift from undergraduate instruction and very practical applied research towards a focus on scientific research, the Porter-McKibbin report (1988), and the popular-press MBA program rankings, “have driven the focus back to MBA instruction at the possible expense of research” (Trieschmann et al., 2002). Their results show that over the past five years, schools with greater financial resources have tended to emphasize MBA program performance over research performance. (This is consistent with the observations in my study about increased graduate program activity in Canada coupled with decreased research output.) The concern, therefore, is that schools may be shifting valuable and scarce resources from research into programs without considering the long-term effects of such a decision.
One area where such shifts in resources may have a deleterious effect is on a school’s reputation. Near the end of his commentary, Baba wonders “whether there is an implicit theory of reputation” at work here, “a theory that suggests certain antecedents that shape reputation and certain outcomes that follow.” While I have no new insights in this regard, some recent research has produced interesting results on this question. Armstrong and Sperry (1994) compare the prestige rankings of business schools (generated by querying academics, recruiters, CEOs, and program applicants) to research impact (as measured by Kirkpatrick & Locke, 1992) and determine that scholarly research is positively (and significantly) correlated to prestige. In contrast to the proposals in the popular media’s reports on business school performance, Armstrong and Sperry conclude that schools should emphasize research instead of teaching if they desire high prestige. If we accept this conclusion, then schools wishing for successful executive education programs, for example, should emphasize research, and operating such programs by sacrificing research may not be a wise long-term strategy. Similar prestige arguments could be applied to full-cost recovery MBA programs as well. While schools have to provide value for the money in relatively expensive MBA programs, shifting resources away from research programs will have a negative impact on their reputations in the long term.
In terms of the methodology used in Erkut (2002), Baba points out that the exclusion of books results in an incomplete assessment of research. While I believe using publication and citation counts captures much of the research output and impact of business schools, I agree that books and book chapters provide another dimension that is worth measuring. Information about books and book chapters published by Canadian authors, as opposed to journal publications, is more difficult to obtain, since books are not identified as a separate category in the ISI database. Hence, one would have to generate a complete list of books and book chapters, which is only possible through the cooperation of all Canadian business schools. If we assume 100% cooperation by administrators, we could further expand the measurement of research output to other products, such as cases, reports, and software. In fact, under this cooperation assumption, we could target a more comprehensive assessment of business school performance, one that goes beyond the measurement of just research output.
I believe a comprehensive and rational assessment of all business school inputs and outputs would be an invaluable resource. Rather than having business schools react to rankings in the popular press, which are heavily weighted towards MBA programs, I believe a balanced measure that considers teaching, research, and service performance would be much more useful. To this end, I propose the use of data envelopment analysis (DEA) to evaluate business school performance, something that would go beyond a one-dimensional ranking of schools.
To implement such an analysis, various inputs and outputs would need to be measured. Erkut (2002) provides an assessment of research output, which can be enriched by the addition of books and book chapters. Another very important research output is the production of highly qualified personnel, such as new Ph.D.s, postdoctoral fellows, and research assistants. The number of students graduating from each program could quantify teaching outputs. Placement rates and salaries of graduates could be considered as well. Service outputs could be measured by considering activities such as presentations made to local organizations, consulting engagements, spin-off companies, journal editorships, papers refereed, conference organizing committee memberships, and memberships on boards. As inputs, one can use the number of faculty members, library holdings, facilities, and budgets.
The advantage of DEA is that it does not produce a simple ranking of units. It creates a subset of efficient units, and identifies the distance between inefficient units and the efficient frontier. This analysis would recognize differences between various schools, and would evaluate schools in accordance with their strengths. As a result, a school that ranks below average on a one-dimensional research output ranking may be identified as an efficient unit if its performance is proportional to its resources. Similarly, a school with low research output may be identified as efficient if it performs well in teaching. The use of DEA for multi-objective assessment of units is as old as the tool itself. There are dozens of applications in the public sector. In fact, McMillan and Datta (1998) have applied it to Canadian universities. Such a study, though, would only measure input-output efficiency; it would not explicitly consider impacts on the various stakeholders of a business school. However, I believe this type of study would provide valuable information to business school administrators (as well as others), and I strongly encourage CFBSD to sponsor such a project.
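To make the DEA proposal concrete, the following is a minimal sketch of the standard input-oriented CCR model, solved with SciPy’s linear-programming routine. The schools, input measures, and output measures are hypothetical illustrations, not data from the study; a real assessment would use the inputs and outputs enumerated above.

```python
# Minimal input-oriented CCR DEA sketch (constant returns to scale).
# The school data below are hypothetical illustrations, not study data.
import numpy as np
from scipy.optimize import linprog

# rows = schools (decision-making units); columns = measures
inputs = np.array([   # [faculty, budget ($M)]
    [50, 10.0],
    [80, 20.0],
    [60, 12.0],
])
outputs = np.array([  # [papers, graduates]
    [100, 400],
    [120, 500],
    [110, 450],
])

def ccr_efficiency(o, X, Y):
    """Efficiency of unit o: minimize theta such that some non-negative
    combination of all units uses at most theta * (inputs of o) while
    producing at least the outputs of o. Efficient units score 1.0."""
    n, m = X.shape           # n units, m inputs
    s = Y.shape[1]           # s outputs
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0               # objective: minimize theta
    # input rows:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # output rows: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(3):
    print(f"school {o}: efficiency = {ccr_efficiency(o, inputs, outputs):.3f}")
```

Rather than one ranking, the output separates the efficient schools (score 1.0) from the inefficient ones, whose scores measure their distance from the frontier formed by their peers.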
Dery and Toulouse (2002)
The focus of the commentary by Dery and Toulouse is quite different from the one by Baba. In it they remark that the study in Erkut (2002) is “interesting insofar as it represents an opportunity to debate what makes up research in business schools”, but that it “offers us a limited view of the research conducted in Canadian business schools.” They feel that papers published in scholarly journals (which are the focus of the study) are just one part of the “cognitive output” of business schools. Their commentary elaborates on their view of the nature of this cognitive output and its diversity. They begin by pointing out the “three-fold mission” of business schools (pedagogical, scientific, and professional), and go on to enumerate the many outputs generated through pedagogical and professional activities (for example, case studies or expert reports written for government bodies).
While I see nothing wrong with engaging in a debate on what should be considered “research” in business schools, that was not the focus of the study. My own opinion is that while Dery and Toulouse feel my study takes too narrow a view of business school research, they suggest too broad a view. Perhaps it is only a question of semantics, but I would not characterize all business school cognitive output as “research.” For example, I would not consider most textbooks written for undergraduate student audiences as research. Likewise, I would assign no research value to a handbook for practitioners containing a synthesis of several existing books, but no new material. Similarly, I believe that many teaching cases (such as those produced by Harvard and Ivey) have small research components. There is no doubt that each of these is an example of a worthy scholarly activity for a faculty member, but it is not refereed research.
It may be worth noting, though, that even if my view of what qualifies as “research” is narrower than theirs, my study did not specifically exclude published papers that had a pedagogical or professional focus. For example, a journal article intended for a professional audience would not be excluded from the study as long as it was published in an indexed journal. As an example of research that is oriented towards practitioners (as opposed to the scientific community), Dery and Toulouse offer the study by Rothkopf (2002). This article measures the contributions of schools to the practice of operations research by counting papers published in Interfaces (a practitioner-oriented journal) and the OR Practice section of Operations Research. All of the papers counted by Rothkopf (2002) are accounted for by the study in Erkut (2002). Likewise, a case study published in an indexed journal (for example, European Journal of Operational Research publishes case studies regularly) would be captured by the study described in Erkut (2002) even if it is intended primarily for student audiences.
What Dery and Toulouse appear to be advocating, however, is not only fuller recognition of all business school cognitive output (and not just scholarly publications), but also recognition of the impacts that these works have on target audiences beyond only other academics. While not as eloquent, structured, and rooted in the literature as Dery and Toulouse (2002), Erkut (2002) does point out this need for a multidimensional evaluation of research (Erkut, 2002, p. 98). However, such an exercise is certainly beyond the scope of the study. Towards this end, Dery and Toulouse suggest an excellent framework for comprehensively evaluating the “cognitive productivity” of a business school (Table 1). They themselves, however, point out the enormous difficulties in trying to assess the impacts of different cognitive products (papers, books, case studies, reports) on the different target audiences (pedagogical, scientific, professional).
I agree with their assertion that business school cognitive output takes many different forms, depending on the topic, area, audience, and goals of the project. By excluding cases, books, reports, and so on from the study, I do not mean to imply that these products have no utility. My interest, though, is with scholarly research, and I believe that the utility of journal articles outweighs the utility of other cognitive products in this area. Given the difficulties associated with collecting information about the other products (and assessing their impact), I am choosing to use only paper and citation counts to measure research output and impact (Pareto’s 80-20 law). While this certainly limits what can be measured, I do not believe I am alone in this position.
There have been a number of studies in the last two decades that have used journal paper (and sometimes citation) counts in their assessment of business school scholarship (Stahl, Leap, & Wei, 1988; Kirkpatrick & Locke, 1992; Trieschmann et al., 2002). Other studies point to the importance attached to scholarly research within business schools themselves. In a recent study comparing publications in four functional areas of business, Swanson (2002) suggests, “Publication in top-tier journals is the primary criterion for promotion at many business schools and a strong influence on salary, teaching load, and research support.” Similarly, Gomez-Mejia and Balkin (1992) find that the highest correlate of annual salaries, raises, and job moves is an individual’s total number of top-tier publications. The correlations to other performance measures, such as books, teaching evaluations, and second-tier publications are far behind. It is certainly the case that the hiring, tenure, and promotion criteria used by top business schools in Canada emphasize refereed journal publications above other forms of cognitive outputs.
Perhaps the clearest example of the differences in importance given to various cognitive outputs by a Canadian business school is the research incentives policy of Dery and Toulouse’s own institution, Hautes Etudes Commerciales Montreal (HEC, 2000). While this policy recognizes the importance of different forms of scholarly activity, it provides incentives only for publications. A point system is used to aggregate different forms of research output, and faculty members receive financial rewards based on the number of points they collect every year. This incentive policy ranks journals into four classes. Class A contains journals that are considered to be in the top 15% of their field, Class B contains journals that are in the top 16% to 50%, and Class C contains most of the rest. Class D is reserved for journals of provincial, national, or international trade associations or government organizations that do not make it into the other three classes (for example, because they are not indexed). Under this incentive system, sole-authored articles in journal Classes A, B, C, and D generate 7, 3, 2, and 1 point respectively. Books are also classified into four categories and produce credits of 8, 4.5, 3, or 1.5 points. For example, a book intended for practitioners or a general audience generates 1.5 points. To match the credit associated with one Class A journal publication, one would have to write five such books. Full articles in refereed conference proceedings receive 0.5 points. Cases receive points only if they are published in a refereed journal. This elaborate incentive policy reflects HEC’s priorities in research and its dissemination, and it supports some of the choices I made in the study.
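The arithmetic of such a point system is easy to encode. The sketch below uses the sole-authored point values quoted above; the publication records it scores, and the function name, are hypothetical illustrations rather than part of HEC’s actual implementation.

```python
# Point values quoted from HEC's incentive policy for sole-authored work;
# the sample records scored below are hypothetical.
JOURNAL_POINTS = {"A": 7.0, "B": 3.0, "C": 2.0, "D": 1.0}
BOOK_POINTS = {1: 8.0, 2: 4.5, 3: 3.0, 4: 1.5}  # four book categories
PROCEEDINGS_POINTS = 0.5  # full article in refereed conference proceedings

def annual_points(record):
    """Aggregate one faculty member's yearly output into incentive points."""
    pts = 0.0
    pts += sum(JOURNAL_POINTS[c] for c in record.get("journal_articles", []))
    pts += sum(BOOK_POINTS[c] for c in record.get("books", []))
    pts += PROCEEDINGS_POINTS * record.get("proceedings", 0)
    return pts

# One Class A article (7 points) outscores four practitioner books
# (6 points); it takes five such books (7.5 points) to match it.
print(annual_points({"journal_articles": ["A"]}))  # 7.0
print(annual_points({"books": [4, 4, 4, 4, 4]}))   # 7.5
```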
Dery and Toulouse (2002) raise many interesting points in the remainder of their commentary; too many, in fact, to respond to here. I would like to touch on a few of them, however. In the section entitled “Scientific models”, they point out the emergence of a new type of cognitive output, such as a new product or software program, that differs “from the previous output in that it is not found in traditional written form but in material form, both numerical and otherwise.” They feel that this “science in action”, as they call it, “cannot be evaluated with the same criteria used to assess yesterday’s science.” This is a very valid observation, and draws attention to one more dimension that perhaps should be added to their evaluation framework (Table 1). I would argue that these new activities could be, at least in part, captured by a study that counts papers and citations if the faculty member makes an effort to publish articles about the product or software. As just one example, I will offer the case of Jean-Marc Rousseau, who is one of the most prominent operations researchers in Canada. He created Giro Inc. (HASTUS) in 1979, a very successful spin-off company that employs 160 people. According to the company web site, it “provides computer software and related consulting services for transportation applications notably in the area of public transit scheduling and routing of pick-up and delivery operations.” Dr. Rousseau has co-authored a total of 35 papers on problems that are closely associated with his company, several of which describe the software developed by the firm.
The impacts of these new types of cognitive outputs are not the only ones that Dery and Toulouse (2002) feel are overlooked in the study found in Erkut (2002). They use books written for management professionals as another such example. Although the omission of books from the study is mentioned in three of the five sections in their commentary, they spend the most energy on this subject in the section entitled “Epistemological projects”. In this section, they make reference to Porter’s books in the field of strategy. While it is true that Porter’s books receive more citations than his papers, he has also published over 85 papers, and his citation credit total based on his papers published between 1990 and 1999 alone would give him first place in the citation study described in Erkut (2002). However, this is a moot point since Porter is not employed by a Canadian business school. The authors list a total of nine authors by name in this section. Although they suggest, “in many fields, research with any real impact is disseminated in books and not papers,” it seems to me that all authors mentioned write general management books. Six of the nine authors are not academics, and would have little incentive to publish papers (and a lot of incentive to publish books). Only one of the nine authors is employed by a Canadian business school (and he is featured rather prominently in the citation study through his papers). While I agree that the omission of books impacts different academic areas differently, I believe Dery and Toulouse overstate their “small world” case by using the most popular authors, most of whom are irrelevant to the study.
In the section entitled “The socio-cognitive structure of fields of research”, the authors point out the existence of three general areas in business schools: basic, functional, and overlapping. The authors argue that it may be necessary to develop different criteria to assess cognitive output in these different areas due to variations in their socio-cognitive structure. While there are differences between areas, all have developed their own research cultures and outlets. Research standards and expectations may differ between areas, but they all publish in refereed academic journals. I think the authors’ argument is valid for the university at large (where English professors are expected to write books, and music professors are expected to perform in concerts), but less so for business schools. I do agree, however, with the authors’ contention that the social structure of research in Canada may not have favoured management research. In fact, I believe this is one of the reasons for the low research output in most areas. In Erkut (2002), I highlight the enormous difference between NSERC funding for management science research and SSHRC funding for the other areas, and provide this as a possible reason for the difference in research outputs. This is evidence that can be used when advocating for increased business research funding.
While there is much that I agree with in this section of the commentary, it is the next section, “The social basis for research”, which I find the most puzzling. The authors suggest that Erkut (2002) “completely excludes all French-language scientific production despite the fact that it is most definitely part of Canadian social reality.” I have to say that this assertion is patently wrong. By simply going through the publications of the HEC faculty members on the citation study web site, one can identify several French-language journals: RAIRO-Recherche operationnelle, Sociologie du travail, Revue d’histoire de l’Amerique francaise. In addition, most Canadian journals (such as INFOR and Relations Industrielles–Industrial Relations) publish articles in French, and many are included in the indices.
A visit to the ISI web site confirms that the Science Citation Index contains 148 journals published in France, and the Social Science Citation Index contains 19. Clearly, the number of English-language journals dwarfs the number of French-language journals, since the indices contain over 8,000 journals in total. Whether this difference is due to the English-language bias of the indices or to the scientific importance of the journals, I cannot say. The journal selection criteria on the Thomson ISI web site state that “ISI is committed to providing comprehensive coverage of the world’s most important and influential journals for its subscribers’ current awareness and retrospective information retrieval needs. But comprehensive does not necessarily mean all-inclusive.”
A better source, perhaps, for the number of French-language business journals and their significance is HEC’s journal classification list (HEC, 2001). This list is used to assess research performance in an elaborate manner, as mentioned previously in the discussion of HEC’s research incentives policy. The HEC list contains a total of 523 journals. Only 35 (i.e., 6.7%) of them have French titles. According to the HEC classification, there are no journals with French titles in Class A (the top 15% of their field). There are only 4 in Class B (top 16% to 50%), and 29 in Class C (the remainder). While we have to be careful about a circular argument here, HEC’s classification (which is not based only on citations) may partly explain why few French journals are included in the citation indices. There is ample evidence that the preponderance of intellectual impact is created by top-tier journals, and that most papers in other journals do not attract much attention. Hence, even if there is a bias against French-language business journals in the indices, it is rather unlikely that this bias would affect the results of a citation study in a substantial way.
Dery and Toulouse also suggest that the socio-economic context of research would have implications on target journal selection as well as the number of citations. While I concur with the general argument, I believe they overstate their case again. They present a monolithic image of the “American journal” that would not be interested in publishing papers based on a socio-economic context other than its own. The reality is, however, that most American journals have very diverse editorial boards, referees, and readerships. Furthermore, the indices are not restricted to American journals and include hundreds of journals published in other countries.
Dery and Toulouse take their socio-economic context argument to the next step and suggest that papers published in journals such as Canadian Journal of Administrative Sciences may not receive many citations because researchers in other countries may not be interested in the Canadian context. I believe there is merit to this argument, although it is only a hypothesis until substantiated by research. Yet my citation study data bring a different problem to light than the one mentioned by Dery and Toulouse. Canadian business professors published 145 papers in Canadian Journal of Administrative Sciences between 1990 and 1999, and these papers received 64 citations during this period. Hence, the average number of citations per CJAS paper is 0.44. The maximum number of citations to these papers is 6, and 107 of the 145 papers have received zero citations. The evidence leaves little room for interpretation; even Canadian authors are not citing papers published in CJAS. Hence the low citation counts of Canadian journals cannot be entirely explained by the socio-economic context.
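The CJAS figures quoted above reduce to simple arithmetic; a quick check, using only the counts reported in the citation study:

```python
# Citation figures for CJAS papers (1990-1999) as reported in the text.
papers = 145
total_citations = 64
zero_cited = 107

avg_citations = total_citations / papers
share_uncited = zero_cited / papers

print(f"average citations per paper: {avg_citations:.2f}")  # 0.44
print(f"share of papers never cited: {share_uncited:.0%}")  # 74%
```

In other words, nearly three quarters of the CJAS papers in the study period were never cited at all, which is the problem the averages conceal.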
To summarize, the main argument in the Dery and Toulouse commentary is that the measurement in Erkut (2002) is so badly flawed that it is of little use. My main argument above is that while any attempt to measure something as complex as business school research is bound to be imperfect, the metrics I used capture much of the essence of business school research. Academics engage in ranking or performance evaluation exercises with partial information all the time. How do we admit students to our programs? How do we decide on a short list of prospective faculty members to interview? How do we evaluate student performance? How do we select teaching award winners? In all of these exercises, we use incomplete information of varying degrees necessitated by the infeasibility and impracticality of collecting complete information.
While Erkut (2002) does not offer a comprehensive assessment of research output, the world it describes is not as small as Dery and Toulouse contend. Consistent with Pareto’s Law, it captures most of the research activity with a reasonable effort. Although I believe paper and citation counts capture much of research output and impact, there is certainly value in providing a more complete picture. If Dery and Toulouse are interested in taking the lead in measuring some of the cognitive outputs that I did not measure, I would be happy to cooperate with them (or with any other academic interested in such a project) to produce a more comprehensive picture of research output in Canadian business schools.
Recruitment and Retention
Although not touched on by either Baba or Dery and Toulouse, I believe one other issue needs to be raised in any discussion of the future directions of Canadian business schools. I am concerned that current and future recruitment and retention problems will have a serious impact on Canadian business schools’ missions and aspirations. In 1999-2000, Canadian business schools were able to fill only 64% of their open tenure-track positions (Feltham, Pearson, & Ford, 2001). Salary surveys by the CFBSD and the AACSB indicate that average 1999-2000 Assistant Professor salaries in Canada and the U.S. were approximately the same in nominal terms (CDN$70,000 vs. US$70,000). However, the unfavourable exchange rate and the relatively high tax rates in most provinces (a gap that recent proposals for further tax cuts in the U.S. would widen) put Canadian business schools at a serious recruiting disadvantage. To make matters worse, the AACSB survey indicates that average new Ph.D. salaries rose 15.4% in the two years between 1997 and 1999, and many U.S. deans believe the market for new doctorates will become even more competitive. (Six-figure offers to top talent in most research areas are already fairly common at large U.S. universities.) With our comparatively smaller budgets, it is difficult to see how Canadian business schools will be able to compete in this market.
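The effect of nominal salary parity under an unfavourable exchange rate can be made concrete. The salaries are the survey figures cited above; the exchange rate used here is an assumed illustrative value, not a figure from the paper:

```python
# Illustrative comparison of nominally equal salaries across the border.
# The 0.67 USD-per-CAD rate is an assumed example value for illustration.
cad_salary = 70_000      # average Canadian Assistant Professor salary (CDN$)
usd_salary = 70_000      # average U.S. Assistant Professor salary (US$)
usd_per_cad = 0.67       # assumed exchange rate (hypothetical)

cad_salary_in_usd = cad_salary * usd_per_cad
shortfall = usd_salary - cad_salary_in_usd

print(f"Canadian salary in US$: {cad_salary_in_usd:,.0f}")   # 46,900
print(f"shortfall vs. a U.S. offer: {shortfall:,.0f}")       # 23,100
```

Under any exchange rate well below parity, a nominally equal Canadian offer is worth substantially less to a candidate comparing it with a U.S. offer, before taxes are even considered.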
In addition to rising salaries, the situation is exacerbated by a decline in the supply of business Ph.D.s. The production of doctoral degrees in business in the U.S. declined gradually from 1,327 in 1995 to 1,165 in 1998 (LeClair, 2000). The 17 doctoral programs in Canadian business schools produced an average of 68 new Ph.D.s per year over the three-year period ending in 2000, but Canadian business schools employed only 47% of these graduates. One-third joined schools abroad, and the rest took up careers in industry or government (Feltham et al., 2001). It is clear that Canadian production of business Ph.D.s is inadequate and that recruiting from the U.S. will become increasingly difficult. This problem has the potential to become even more serious over the next 10 years, given the demographics of our current faculty and the fact that a great many of them will soon retire.
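A back-of-the-envelope calculation shows how few new Ph.D.s those percentages leave for Canadian schools each year. The figures are those cited from Feltham et al. (2001); the code is purely illustrative:

```python
# Annual Canadian business Ph.D. output and the destinations of graduates
# (figures as cited from Feltham et al., 2001).
new_phds_per_year = 68        # average annual output, three years to 2000
share_hired_in_canada = 0.47  # hired by Canadian business schools
share_abroad = 1 / 3          # joined schools abroad

hired_in_canada = new_phds_per_year * share_hired_in_canada
went_abroad = new_phds_per_year * share_abroad

print(f"hired by Canadian schools: {hired_in_canada:.0f} per year")  # 32
print(f"joined schools abroad: {went_abroad:.0f} per year")          # 23
```

Roughly 32 new domestic Ph.D.s per year, spread across dozens of schools, cannot come close to covering normal attrition, let alone a retirement wave.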
Administrators must find ways of attracting top talent to Canadian business schools. One option is to increase the Ph.D. production of Canadian business schools (which currently sits at about half of U.S. production on a per capita basis). Another is to offer Canadians scholarships for Ph.D. study abroad, conditional on employment at the sponsoring institution for a period after graduation (a practice employed with some success by a number of Quebec business schools). Enhancing salary packages for new recruits, for example in the form of junior chairs, may require some creative arrangements with granting agencies or local businesses.
American business schools face the same demographic problem of a large number of faculty members retiring within the next 10 years. In the absence of dramatically increased production of new Ph.D.s, it is conceivable that American business schools will poach faculty members from other countries, and Canadian business schools are an obvious target. Hence, Canadian schools must find ways to retain their stars and rising stars. One obvious option, which I mentioned in my paper, is the establishment of endowed chairs. Schools should aggressively seek federal research chairs (Canada Research Chairs, NSERC Industrial Chairs), as well as major local funding to create endowed chairs, in order to retain our top researchers. In his commentary, Baba exhorts us to “start probing into the roles of scholarship and research in business and how they contribute to the mission of the business school.” This is no doubt an important undertaking, but if no serious efforts are made to deal with the recruitment and retention problems, the labour market will define the Canadian business school for us. As a result, our research output will suffer, and our business schools may be reduced to mere training institutions.
In closing, I would like to invite the readers to engage in this discussion using the conference board linked to the citation study web site (http://www.bus.ualberta.ca/citationstudy2/).
Armstrong, J.S., & Sperry, T. (1994). Business school prestige: Research versus teaching. Interfaces, 24(2), 13-43.
Baba, V. (2002). Beyond measuring Canadian business school research output and impact: A commentary. Canadian Journal of Administrative Sciences, 19, 207-209.
Dery, R., & Toulouse, J.M. (2002). Beyond a “small world” to real life: The multi-faceted phenomenon of research in business schools. Canadian Journal of Administrative Sciences, 19, 209-216.
Erkut, E. (2002). Measuring Canadian business school research output and impact. Canadian Journal of Administrative Sciences, 19, 97-123.
Feltham, T.S., Pearson, V.L., & Ford, D. (2001). Supply and demand for Canadian business Ph.D. graduates: A quest for greater understanding. ASAC 2001 Conference Proceedings. http://www.hec.ca/cfbsd/anglais/page3a.html (accessed March 2002).
Gomez-Mejia, L.R. & Balkin, D. (1992). Determinants of faculty pay: An agency theory perspective. Academy of Management Journal, 35, 921-955.
HEC (2000). Politique d’incitation à la recherche. http://www.hec.ca/recherche/politique-incitation_recherche_html.htm (accessed January 2003).
HEC (2001). Liste de classification des revues. http://www.hec.ca/recherche/liste-revues_html.htm (accessed January 2003).
Kirkpatrick, S.A. & Locke, E.A. (1992). The development of measures of faculty scholarship. Group & Organization Management, 17, 5-23.
LeClair, D. (2000). Business faculty recruitment and retention in the United States. Presentation at the CFBSD conference on Business Research, Faculty Recruitment and Retention, Ottawa. http://www.hec.ca/cfbsd/anglais/page3a.html (accessed March 2002).
Locke, E.A., & Kirkpatrick, S.A. (1994). Pitfalls in the interpretation of Armstrong and Sperry’s data. Commentary on Armstrong, J.S., & Sperry, T. (1994). Business school prestige: Research versus teaching. Interfaces, 24(2), 13-43.
Locke, E.A., Smith K.G., Erez M., Chah, D., & Schaffer, A. (1994). The effects of intra-individual goal conflict on performance. Journal of Management, 20, 67-91.
Lodge, D. (1984). Small world. New York: Macmillan.
Lu, J.W. (2003). The evolving contributions in international strategic management research. Journal of International Management, forthcoming.
McMillan, M.L. & Datta, D. (1998). The relative efficiency of Canadian universities: A DEA perspective. Canadian Public Policy, 24, 485-511.
Pierson, F.C. (1959). The education of American businessmen: A study of university-college programs in business administration. Carnegie series in American education. New York: McGraw-Hill.
Porter, L. & McKibbin, L. (1988). Management education and development: Drift or thrust into the 21st century? New York: McGraw-Hill.
Rothkopf, M.H. (2002). Leveling the field? The fourth Interfaces ranking of universities’ contribution to the practice literature. Interfaces, 32, 23-27.
Stahl, M.J., Leap, T.L., & Wei, Z.Z. (1988). Publications in leading management journals as a measure of institutional research productivity. Academy of Management Journal, 31, 707-720.
Swanson, E.P. (2002). Publishing in the majors: A comparison of accounting, finance, management and marketing. Texas A&M working paper. http://papers.ssrn.com/abstract=345340 (accessed January 2003).
Trieschmann, J.S., Dennis, A.R., Northcraft, G.B., & Niemi, A.W. (2000). Serving multiple constituencies in business schools: M.B.A. program versus research performance. Academy of Management Journal, 43, 1130-1141.
*School of Business, University of Alberta, Edmonton, AB, Canada T6G 2R6. E-mail: email@example.com
Copyright Administrative Sciences Association of Canada Dec 2002