How newspapers use readership research


Beam, Randal A

The use of readership research(1) is becoming increasingly common in newsrooms.(2) As daily newspapers have struggled to arrest a four-decade slide in household penetration and readership, some have turned to this research to learn what readers say they want and need from their newspaper.

Though the use of research to guide product development is common for most businesses, the practice has become controversial for newspapers and other mass media. Critics say it has led papers to pander to readers, contending that a growing emphasis on giving readers information that they say they want, rather than information that journalists believe they need, is eroding the quality of American journalism. And they argue that as the quality of journalism declines, the capacity of newspapers to foster thoughtful citizen participation in government is undermined.(3)

Supporters of readership research say, however, that it is essential to the survival of the U.S. newspaper industry. They argue that readership research helps journalists serve readers better because it provides insights into what readers like and don’t like about their newspapers.(4)

Though much has been written about the purported good and evil of readership research, systematically collected information about how mass media actually use this research is sparse. Most accounts are anecdotal, drawing on the experiences of a handful of organizations.(5)

The picture presented here, based on findings from a survey of editors at 78 daily newspapers, includes information about the types of readership research that U.S. papers conduct; about the extent to which editors say such research has influenced their decisions on editorial content or editorial decision-making; and about characteristics of the newspaper organizations that are associated with the use of readership research. These findings should give a fuller sense of the ways that U.S. dailies are using — and not using — readership research to shape the news and editorial content that is the window on the world for millions of citizens each day.


The findings come from a mail survey conducted during an eight-week period in 1991. Three hundred and sixty middle- and senior-level editors were contacted at 100 U.S. daily newspaper companies. These editors were in supervisory capacities, usually at the department-head level and above.(6) They were selected as potential respondents because they would be in a position to know when content changes being made by their staffs were influenced by readership research. The papers were selected using standard probability sampling techniques.(7)

Of the 360 editors contacted, 167 provided usable responses, for an individual response rate of 46.5 percent. These editors represented 78 newspaper companies, yielding an organizational response rate of 78 percent. The latter is the more critical figure, as the findings reported here describe characteristics of the newspapers rather than of the individuals working at them.(8)
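A quick arithmetic check of the two response rates reported above (minor rounding differences aside) can be sketched as:

```python
# Response-rate arithmetic from the figures reported in the text.
contacted_editors = 360
usable_responses = 167
sampled_companies = 100
responding_companies = 78

# Individual-level rate: usable responses / editors contacted.
individual_rate = usable_responses / contacted_editors * 100

# Organizational-level rate: responding companies / companies sampled.
organizational_rate = responding_companies / sampled_companies * 100

print(f"Individual response rate: {individual_rate:.1f}%")
print(f"Organizational response rate: {organizational_rate:.0f}%")
```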

Types, uses of readership research

Not surprisingly, the survey findings suggest that readership research is common in U.S. daily newspaper newsrooms. Roughly half of the papers responding reported that they had conducted their own research at some point, and about 85 percent reported having contracted for research with an outside consultant. Five papers indicated that the paper’s corporate parent had done readership research for the organization. Only four of the 78 papers responding to the survey indicated that they had not conducted such research to help guide editorial decision-making. Those four were among the smallest newspapers in the survey.(9)

Two relatively inexpensive techniques for gathering information about readers or potential readers — the focus group and the clip-out/mail-back survey — were the most common among papers surveyed. About 81 percent of the papers responding had conducted focus groups and 73 percent had done clip-out surveys. Though affordable, these techniques have the drawback of producing results that cannot be generalized to a newspaper’s readership because they don’t rely on probability sampling. Scientific telephone surveys, however, also had been done by about 70 percent of the responding papers. The least-common techniques were scientific in-person surveys (42 percent), mail surveys (41 percent) and voluntary call-in polls (36 percent).

A series of questions asked editors how much readership research had influenced major decisions on 19 kinds of content published at their newspapers and on other editorial changes that the newspaper might have undertaken during the previous two years. The findings show that some kinds of content or editorial changes are more likely to be influenced by readership research than others. (see Tables 1 and 2.) (Tables 1 and 2 omitted)

Overall, editors said readership research had most influenced decisions on comics and on news about entertainment, business and sports. Least-affected were three kinds of hard news content — news on national government, international affairs and science. The survey found minor differences among small, medium and large newspapers, though the numbers of papers in these sub-groups were small and it would be hazardous to make much of these differences.

That said, editors at small papers appeared the least likely to say readership research had influenced content decisions. It may be that small papers, which presumably have more homogeneous audiences than medium- and large-sized papers, are the least likely to need research to provide insights about readers. It also may be that small organizations tend to adopt innovations — the use of readership research by newsrooms would clearly be an innovation in journalistic practice — more slowly than larger organizations.

The survey also asked editors about five other possible content-related changes that might have been influenced by readership research. These were changes in the paper’s graphic design, its Page 1 content and its beat structure, as well as decisions either to purposely increase or to reduce “coverage of any particular content areas.”

Generally, the survey findings indicated that readership research was most likely to have influenced decisions to increase coverage or to modify the content of Page 1. (see Table 2) It was least likely to have influenced changes in a paper’s beat structure or to have caused the paper to reduce coverage in a particular area. These editors’ responses seem somewhat inconsistent with critics’ assertions that readership research is being used to wean newspapers from certain kinds of reporting or from publishing certain kinds of information. Though the magnitude of the responses (2.23 for reduce coverage and 2.14 for change beat structure) corresponds to some influence in these areas, the findings show that the more likely effect of readership research appears to be adding content, not taking it away. Again, editors at small papers were the least likely to report readership research having influenced such decisions.

Variables associated with readership research

Studies have found that newspaper practices often are associated with various characteristics of the newspaper itself. In this study, five organizational characteristics were examined to see if they might be associated with a newspaper’s uses of readership research: the size of the newspaper company as measured by 1990 daily circulation; whether the newspaper was independently owned or part of a group; the size of the group as measured by the group’s aggregate 1990 daily circulation; whether the newspaper’s corporate parent was a publicly held company; and a measure of the newspaper’s circulation performance.(10)

Because the survey included more than two dozen questions about a newspaper’s use of readership research, five indices were created from these questions to make it easier to see possible relationships between research practices and the organizational characteristics listed above.

* A summed index indicating the number of different kinds of readership research that the paper undertook — focus groups, clip-out/mail-back surveys, scientific telephone surveys, scientific in-person surveys, voluntary call-in polls and scientific mail surveys.(11) (In Table 3, the analyses using this index appear under the Kinds of RR column.) (Table 3 omitted)

* A summed index indicating how often the newspaper conducted readership research.(12) This index was developed from answers to questions about how frequently newspapers conducted the kinds of readership research mentioned above. (In Table 3, the analyses using this index appear under the Freq. of RR column.)

* A summed index indicating the degree to which editors said readership research had influenced major decisions the paper had made about 19 specific kinds of content (e.g., sports, entertainment, local government).(13) (These content areas are shown in Table 1. In Table 3, the analyses using this index appear under the Content Impact column.)

* A summed index developed from questions asking whether a newspaper had made major changes in its graphic design during the previous two years, whether it had changed the kinds of content published on Page 1, whether its beat structure had been changed, whether it had purposely increased coverage in particular areas and whether it had purposely decreased coverage in particular areas.(14) (In Table 3, the analyses using this index appear under the General Change column.)

* A summed index indicating the degree to which editors said readership research had influenced those broader changes that the paper had made (graphic design, Page 1 content, etc.).(15) (In Table 3, the analyses using this index appear under the General Impact column.)
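The summed indices described above can be illustrated in code. This is a sketch of the general technique, not the study's actual instrument; the dictionary keys and the hypothetical responses are invented for illustration.

```python
# Illustrative sketch of two of the summed indices described above.

# Kinds of RR: count of distinct research techniques a paper has used.
KINDS = ["focus_group", "clip_out_survey", "phone_survey",
         "in_person_survey", "call_in_poll", "mail_survey"]

def kinds_of_rr(paper):
    """Number of different readership-research techniques used (0-6)."""
    return sum(1 for kind in KINDS if paper.get(kind, False))

def content_impact(ratings):
    """Sum of 4-point Likert ratings; the study summed across 19 areas."""
    return sum(ratings)

# Hypothetical paper that has done focus groups, clip-out and phone surveys:
paper = {"focus_group": True, "clip_out_survey": True, "phone_survey": True}
print(kinds_of_rr(paper))            # 3
print(content_impact([3, 2, 4, 1]))  # 10 (a real case would cover 19 areas)
```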

The relationships between the readership research characteristics and the organizational characteristics were assessed using Pearson product-moment correlation coefficients. These coefficients measure the strength of a relationship between two characteristics, or variables. The coefficient can vary from a minus-1 to a plus-1, though coefficients toward either end of that continuum would be rare in this kind of study. It would be more common to find coefficients in the minus-.50 to plus-.50 range.

A positive coefficient indicates a positive relationship between two variables, and a negative coefficient indicates a negative relationship between two variables. In Table 3, for example, the coefficient for Paper Size and Kinds of RR (.34) is positive and fairly strong. This means that for the sample of newspaper firms being considered here, size (as measured by daily circulation) is positively related to the index for the kinds of readership research conducted; that is, as papers get larger, they’re more likely to undertake a wider variety of kinds of readership research. (If the relationship were negative, that would mean that as papers got larger, they would be likely to undertake fewer kinds of readership research.(16))
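The Pearson coefficient described above has a standard closed form. The sketch below implements it in plain Python; the toy circulation and research-kinds data are hypothetical, chosen only to show a strong positive relationship like the one in Table 3.

```python
# A minimal Pearson product-moment correlation, the statistic used in Table 3.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: daily circulation (thousands) vs. kinds of research, five papers.
circ = [25, 60, 120, 300, 500]
kinds = [1, 2, 3, 5, 6]
print(round(pearson_r(circ, kinds), 2))
```

As papers in the toy data get larger, they do more kinds of research, so the coefficient comes out close to plus-1.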

Relationships with research practices

The findings showed several organizational characteristics to be related to readership research practices at the 78 newspapers surveyed. The size of the paper and its parent company (if the paper was part of a group) were strongly correlated with Kinds of Readership Research Index (see Table 3). In addition, larger papers tended to conduct such research more frequently than smaller papers, as indicated by the .41 correlation coefficient for Paper Size-Frequency of Readership Research Index.

In part, these relationships could reflect resource differences. Readership research is relatively expensive. Larger newspapers or groups, which have greater resources, may be better able than smaller ones to afford to learn about their readers this way. But these strong correlations also may reflect factors other than greater resources. One may be that bigger papers have more diverse audiences — audiences that may be harder to understand through journalists’ personal experiences alone — than smaller papers. In larger cities, readership research may be more useful for fathoming reader interests than in smaller, more homogeneous communities.

Other analyses done as part of this study show that the racial heterogeneity of the city in which a paper is located is positively associated with both the kinds and frequency of readership research conducted. That suggests that newspapers in more racially diverse cities tended to do more kinds of readership research and do such research more often. This relationship persisted even after statistically eliminating the effect of city size.

The ownership structure of the paper was another organizational characteristic associated with the variety of readership research conducted (Kinds of RR Index). Newspapers of publicly held companies tended to do more kinds of readership research than those of privately held firms, and the data suggest that they may do readership research more frequently, too.

While the .17 correlation coefficient between public-private ownership and the Frequency of Readership Research Index isn’t considered statistically significant here, it’s still a moderately strong correlation for this type of research.

These findings do not appear to simply be a function of the fact that most large newspaper groups in the United States are publicly traded companies. The relationship between public-private companies and the Kinds of Readership Research Index remains statistically significant even after taking into account the size of the group to which a newspaper belongs. The magnitude of the correlation coefficient is smaller, however, at .22. It may be that publicly traded groups are under more shareholder pressure to maintain profitability levels than privately held companies. As daily readership has stagnated and household penetration levels have declined, public companies may have turned more quickly to readership research as a tool to win back readers and maintain profit levels.

One somewhat surprising finding was that the newspaper circulation performance indicator used here was not strongly associated with either the Kinds or Frequency indices, though both correlation coefficients are negative, as might be expected. (A negative coefficient indicates that as performance declines, the variety and frequency of readership research increases.)

Further analysis shows, however, that these correlation coefficients both increase substantially once the size of the organization is taken into account statistically. The coefficient for performance-Kinds of RR increases to -.21, which is statistically significant at the .10 level, and Performance-Frequency of RR rises to -.10. If newspapers are turning to readership research, in part, because of concerns about declining readership, these would be the relationships we’d expect to find. The relatively weak relationship for Frequency, however, suggests that poor performance may not lead newspapers to do readership research significantly more often. Cost would presumably be a limiting factor.
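"Taking size into account statistically," as in the analyses above, amounts to computing a partial correlation. A first-order partial coefficient can be derived from the three pairwise coefficients; the sketch below shows the formula with hypothetical input values (only the .34 size-kinds figure comes from Table 3).

```python
# First-order partial correlation: the relationship between x and y with a
# third variable z (here, organization size) statistically held constant.
import math

def partial_r(r_xy, r_xz, r_yz):
    """Partial correlation of x and y controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical zero-order coefficients (illustrative, not the study's data):
r_perf_kinds = -0.10   # circulation performance vs. kinds of RR
r_perf_size = 0.35     # performance vs. paper size
r_size_kinds = 0.34    # size vs. kinds of RR (reported in Table 3)

print(round(partial_r(r_perf_kinds, r_perf_size, r_size_kinds), 2))
```

Note how controlling for a variable that is positively related to both others can strengthen a weak negative zero-order correlation, which is the pattern the text describes.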

Relationships with content changes

Several organizational characteristics also were associated with the use of readership research to guide content changes. (see Table 3) When these associations occurred, they were more likely to relate to decisions on general content changes (e.g., design or graphics, Page 1 story play, etc.) than to decisions on the way specific kinds of content were handled (e.g., sports, business, international affairs, etc.).

The size of the newspaper was positively related to the influence of readership research on general changes (General Impact Index, e.g., design or graphics, Page 1 content, etc.) that the paper had made. And the size of a newspaper group was strongly associated with both the likelihood that general changes had been made and with the likelihood that readership research had influenced those changes. Papers in larger groups seemed more aggressive in both respects, as were papers that were part of publicly held groups. These findings were not particularly surprising, given that larger papers also tended to have the strongest correlations with the Kinds and Frequency indices. The assumption here would be that when readership research is done by an organization, it is used.

What was surprising was that none of the organizational characteristics was correlated with the Content Impact Index, which assessed the degree to which readership research had influenced changes in 19 specific kinds of newspaper content. To some extent, this finding may be a result of the kind of statistical analysis done here. The Pearson correlation coefficient is intended to assess the strength of a linear relationship between variables — that is, a relationship where the increase (or decrease) in one variable is accompanied by a corresponding increase (or decrease) in another. There is some evidence that the impact of readership research on specific content changes may be most pronounced among medium-sized papers. A curvilinear relationship such as this is more difficult to evaluate using Pearson coefficients.

Another explanation may be that these changes simply vary widely across different sizes and kinds of newspapers, producing no discernible pattern.


The findings reported here confirm what the trade press suggests: that readership research has become common in many U.S. newsrooms. But the findings also point to things that get less attention in the trade press — that newsrooms vary considerably in their specific readership research practices, with larger papers doing more kinds of research, and doing research more often, than smaller papers; that papers that are part of larger, publicly held groups do the widest variety of research; that some kinds of content changes, particularly general decisions such as graphic design or story play, are more likely than others to be influenced by readership research; and that some kinds of changes are more likely than others to be associated with various characteristics of a newspaper.

In short, these findings suggest that the ways in which newsrooms use and don’t use readership research are varied. And, as with so many social phenomena, the broad generalizations about the impact of readership research on journalism don’t do justice to the complexity of the situation. That underscores the need for more rigorous investigations into this journalistic practice.

High on the agenda for researchers should be questions such as these:

* How is readership research being used — if at all — to make decisions about specific kinds of content that newspapers publish, particularly public affairs content? What is the nature of these decisions? Do they pertain to how newsrooms collect information, process information or present information? Is the research being used to make decisions about what newspapers will publish, what they won’t publish, or both?

* How much does the quality of such research vary across organizations? And what is the relative impact of each of the various kinds of readership research (e.g., focus groups, telephone surveys and so forth)? To what degree are newsrooms basing content changes on information collected through non-generalizable strategies, such as focus groups? And to what degree are changes being based on results of scientific surveys, which may be more generalizable?

* How do decisions based on readership research translate into journalistic practice? Are the actual criteria used for judging newsworthiness changing as a result of this research? Does readership research figure into day-to-day decision-making about news, or is it used more for long-term planning?

* How do newspapers that rely most heavily on this kind of research differ from those that rely least heavily on it? What characteristics related to ownership, structure or strategy differentiate these organizations? Is the content of newspapers that say they rely heavily on readership research substantively different from the content of newspapers that don’t?

Clearly, readership research has found a place in newsrooms. It remains to be seen, though, exactly what that place is.


1. Readership research, one kind of market research, is defined as formal efforts undertaken by the newspaper to learn what kinds of content readers or potential readers say they want or need in their newspaper. It includes focus groups, telephone surveys, in-person surveys, mail surveys, reader call-in polls and clip-out/mail-back surveys.

2. John C. Schweitzer, Marketing Research in the Newspaper Business, Readings in Media Management, Stephen Lacy, Ardyth Sohn and Robert Giles, eds. Columbia, S.C.: Media Management and Economics Division of the Association for Education in Journalism and Mass Communication, 1992, pp. 153-180; Doug Underwood, When MBAs Rule the Newsroom. New York: Columbia University Press, 1993, pp. xi-xii, 110, 111-116.

3. John H. McManus, Market Driven Journalism: Let the Citizen Beware? Thousand Oaks, Calif.: Sage Publications, 1994, pp. 1-4; Underwood, op.cit., pp. xi-xix.

4. Philip Meyer, The Newspaper Survival Handbook: An Editor’s Guide to Marketing Research. Bloomington, Ind.: Indiana University Press, 1985, pp. 8-10.

5. McManus, op.cit., p. xii; some recent exceptions are Schweitzer, op.cit.; Doug Underwood and Keith Stamm, Balancing Business with Journalism: Newsroom Policies at 12 West Coast Newspapers. Journalism Quarterly, 1992, pp. 301-317; Keith Stamm and Doug Underwood, The Relationship of Job Satisfaction to Newsroom Policy Changes. Journalism Quarterly, 1993, pp. 528-541.

6. About a third of the respondents were senior editors (editor, managing editor, assistant managing editor, deputy managing editor), about 30 percent were sports, features or business editors and virtually all of the rest were city, state, metro or news editors. Less than 1 percent were editorial page editors.

7. The sample of newspapers was drawn by taking a list of the 1,529 U.S. daily newspaper firms in business in 1990, which was ordered by total daily circulation from largest to smallest. That list was divided into three groups, each accounting for about 21 million of the total U.S. daily circulation of about 62 million. The 40 largest U.S. daily newspaper firms (daily circulation 274,000 and up) comprised the large paper group, the next 188 firms the medium paper group (daily circulation 59,700 to 273,999) and the remaining 1,301 firms the small paper group (daily circulation 59,699 and down). Thirty newspaper companies were selected from each of the first two groups using an interval sampling technique with a random starting point. Forty companies were selected in a similar way from the remaining group. This sampling strategy assured representation of large- and medium-sized newspapers, which are less numerous than small dailies.
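The interval sampling described in this note can be sketched as follows. This is an illustrative implementation, not the study's actual procedure; the populations are stand-in lists sized to match the three strata in the text.

```python
# Interval (systematic) sampling with a random starting point, applied
# separately to each circulation stratum described in note 7.
import random

def systematic_sample(population, n):
    """Select n items at a fixed interval from a random start."""
    interval = len(population) / n
    start = random.uniform(0, interval)
    return [population[int(start + i * interval)] for i in range(n)]

large = list(range(40))      # the 40 largest firms
medium = list(range(188))    # the next 188 firms
small = list(range(1301))    # the remaining 1,301 firms

sample = (systematic_sample(large, 30)
          + systematic_sample(medium, 30)
          + systematic_sample(small, 40))
print(len(sample))  # 100 companies
```

Sampling each stratum separately, rather than the full list at once, is what guarantees the representation of large and medium papers that the note describes.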

8. For variables based on individual perceptions, an organizational-level value for a newspaper company was computed. This was done by averaging the responses of individuals within each company on that variable, a common practice in organizational research.
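The aggregation described in this note — averaging editors' responses within each company — can be sketched with hypothetical data (the paper names and scores below are invented):

```python
# Averaging individual editors' responses into one organizational-level
# value per newspaper company, as described in note 8.
from collections import defaultdict

# Hypothetical (paper, Likert score) responses from individual editors.
responses = [
    ("Paper A", 3), ("Paper A", 4), ("Paper A", 2),   # three editors
    ("Paper B", 1), ("Paper B", 3),                   # two editors
]

by_paper = defaultdict(list)
for paper, score in responses:
    by_paper[paper].append(score)

# One mean value per company becomes the unit of analysis.
org_values = {paper: sum(scores) / len(scores)
              for paper, scores in by_paper.items()}
print(org_values)  # {'Paper A': 3.0, 'Paper B': 2.0}
```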

9. A note of caution about this finding: The sampling strategy used for this survey was weighted toward larger newspapers, which tend to do more readership research than smaller newspapers. See Larry S. Lowe and George deTarnowsky, The Need is High, Participation Low, and Consultants have Opportunities, Journal of Professional Services Marketing, 1991, pp. 143-152.

10. Performance was measured by the difference between the percentage growth (or decline) in total daily circulation from 1980 to 1990 and percentage growth (or decline) in the number of households in the newspaper’s core market during that same period.

11. The mean was 3.44, and the standard deviation was 1.86.

12. This index was developed by recoding open-ended questions that asked how frequently an organization undertook various kinds of readership research. Mean of 8.57, standard deviation of 5.36 and alpha reliability coefficient of .81.

13. The response scale was a four-point Likert-type scale. Responses were summed to create the Content Impact Index, which had a mean of 49.99, a standard deviation of 11.66 and an alpha reliability coefficient of .95.

14. The response scale was yes/no. Responses were summed to create the General Change Index, which had a mean of 6.9 and a standard deviation of 1.15.

Randal A. Beam is assistant professor in the School of Journalism at Indiana University in Bloomington.

Copyright Ohio University Spring 1995
