Banner advertisement pricing, measurement, and pretesting practices: Perspectives from interactive agencies
This study reports findings from a survey of media directors of interactive advertising agencies regarding how they price, evaluate, and pretest banner ads. Results suggest that more than 90% of the responding agencies used cost per thousand frequently to price banner ads, whereas about 33% used click-throughs. In addition, a majority of the agencies used click-throughs and outcomes (e.g., inquiries, purchases) rather than exposures to gauge banner advertising effectiveness. Although few agencies pretested their banner ads on a regular basis, most perceived the lack of measurement standards and independent auditing of Web sites as major problems facing Internet banner advertising. Findings from this study should provide benchmarks for future research on the topic and help facilitate the process of developing viable pricing and measurement standards on the Internet.
Within a short span of six years, the Internet has evolved into an important medium for advertisers and marketers for both branding and direct-selling purposes. As a central part of the fast expanding digital economy, the Internet has attracted an enormous amount of advertising revenues. According to an industry report, Internet advertising revenue reached $4.62 billion in 1999 (Internet Advertising Bureau 1999). Although this is a small portion of overall advertising spending (estimated to be over $200 billion in 1999), total spending on the Internet has now exceeded that of outdoor advertising. Some predict that, by 2003, Internet advertising spending in the United States could grow to more than $13 billion (Internet Advertising Bureau 1999; Krishnamurthy 2000). Driving this rapid growth is the fact that the Internet has been drawing sizable audiences from other media. The number of Americans using the Internet has grown exponentially in the past few years, from fewer than 5 million in 1993 to as many as 110 million in 1999 (Department of Commerce 1999).
Technological innovations also have made the Internet an attractive medium for advertisers. Today, server-based technologies enable advertisers to display banner ads according to user profiles and interests and in ways that were not possible before. As an advertising medium, the Internet offers all the elements of other media and much more. Banner ads can now include not only graphics and texts, but also streaming audio and video. Java and Shockwave technologies can be used to deliver highly dynamic and interactive banner ads. Such interactive and personalization technologies have made the Internet an effective and accountable medium with unlimited creativity.
Despite the Internet’s phenomenal growth, measurement and pricing practices on the Web are far from being standardized. Most advertisers recognize that, for the Internet to become a fully viable advertising medium, there must be uniform measures so that they can make apples-to-apples comparisons with other media in campaign planning and evaluations (Coffey 2001; McFarland 1998). Because the Internet enables advertisers to track responses to on-line ads, some reason that advertisers should pay for their Internet ads on the basis of responses or performances (Ephron 1997; Parsons 1997). Others argue that such pricing and measurement methods would dismiss banner advertising’s brand-building value and force Web publishers to assume accountability for the creativity and effectiveness of messages, because the role of the media has traditionally been to offer access to an audience, not to share in the responsibility for the quality of the advertisement itself (Parsons 1997; Zeff and Aronson 1999).
But how do agencies handle Internet banner advertising pricing and measurement now? And what do they perceive as challenges facing banner advertising measurement? How and to what extent do they pretest banner ads? Although many of the current discussions have focused on the merits and demerits of metrics and measures, there has been no research on what interactive advertising agencies are using to buy, measure, and pretest on-line ads. To fill this void and obtain some preliminary insights into these issues, this study reports findings from a national sample of interactive advertising agencies. These findings may provide advertising professionals with a better understanding of the current practices of interactive agencies and help facilitate the ongoing discussions of measurement standardization.
One of the principal elements that drive advertising in all media is ratings. Each medium, be it print, television, radio, or the Internet, may use different names, but the concept is the same. Ratings point to the percentage of people that have the opportunity to be exposed to the advertising messages. As advertising on the Web has exploded in recent years, the need for accurate information and tracking of site traffic, advertisement delivery, and user response has grown increasingly important for both Web publishers and advertisers (Krishnamurthy 2000). Publishers need a simple way to understand and communicate the results of ad delivery on their sites. Advertisers and media buyers need reliable and standardized reports to plan their buys (Zeff and Aronson 1999).
A set of universal measures will make advertising on the Web easier and more efficient for both advertisers and publishers. Today, Internet measurement, from terminology to technology, is still being developed and perfected. Part of the difficulty of developing universally acceptable standards is that the Web is a highly fragmented medium, with millions of sites and Web pages that accept advertising. Complicating this difficulty is the existence of myriad ways of advertising, from banner ads, text links, and sponsorships to affiliate programs. The lack of standardized measures has prompted on-line publishers and interactive advertising networks to come up with many homegrown measures. Unlike television, for which Nielsen dominates the program ratings, the Internet has many players and proprietary measurement programs. The lack of standards in measurement extends from terminology to pricing models (Novak and Hoffman 1996; Zeff and Aronson 1999).
In an effort to provide standardized guides, leading Web publishers formed the Internet Advertising Bureau (IAB) in 1994. Two years later, the IAB issued guidelines on banner ad sizes. Today, virtually all the banner ads on the Internet use one of the eight sizes recommended in the IAB guidelines. However, Internet advertising measurement and pricing standardization has not been as successful. In 1997, the Coalition for Advertising Supported Information and Entertainment (CASIE) and Future of Advertising Stakeholders (FAST) both published voluntary guidelines for the measurement of on-line advertising, including standard definitions of metrics (see CASIE 1997; FAST 2000). Their hope was that agencies and publishers would gradually adopt these measurement guidelines as the medium matures.
Internet Measurement and Pricing Models
In the past, researchers have identified several Web pricing and measurement models (see Ephron 1997; Hoffman and Novak 2000). This study considers three banner ad pricing and measurement models on the basis of how the user interacts with ads: (1) the exposure-based model assumes that advertisers pay for impressions or opportunities to see, much like they pay for ads on television and in other media; (2) the interaction-based model assumes that advertisers pay each time a user interacts with or clicks the ad; and (3) the outcome-based model assumes that advertisers pay for performances, such as inquiries and purchases.
The exposure-based pricing method includes flat fee and cost per thousand (CPM). Flat-fee pricing, the earliest form of Web pricing, consists of a fixed price for a given period of time. Alternatively, CPM is popular because it is a traditional media term that is readily comparable to other media. Several studies have reported, for example, that the average CPM of banner ads ranged from $20 to $40, substantially higher than that of television and magazine ads (Hoffman and Novak 2000; Meeker 1997; Zeff and Aronson 1999). Although it enables advertisers to make easier cross-media comparisons, CPM is nevertheless considered less accurate and less accountable than other measures (Ephron 1997).
The interaction-based model includes clicks or clickthroughs, which are often considered more accountable ways of charging for Web advertising. A pricing model based on clicks guarantees not only exposure, but also interaction with the banner. The major problem is that only a small percentage of on-line visitors actually click on banner ads (Sweeney 2000; Zeff and Aronson 1999).
The ultimate goal of advertising is the outcome. For most banner ads, the outcome could be lead generation, on-line inquiry, registration, order, or purchase. Payment based on outcomes is considered more accurate than mere exposures or banner clicks, but not all banner ads are designed to generate immediate behavioral changes. Using outcomes alone disregards the branding objective of advertising and, at the same time, forces Web publishers to rely on the quality of the advertisers’ creatives to generate revenues. Traditionally, publishers and broadcasters have been loath to take on such risk-sharing pricing practices (Parsons 1997). Therefore, it is unclear how agencies and on-line publishers will accept performance-based models for compensation purposes. Part of the objective of this paper is to find out how frequently interactive advertising agencies use any of the three models to price and measure banner ads.
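The arithmetic behind the three models can be made concrete with a short sketch. All of the rates and prices below are hypothetical assumptions chosen only to show how billed cost diverges under each model for the same campaign; they are not figures from the survey:

```python
# Hypothetical campaign: compare billed cost under the three pricing models.
# Every rate and price here is an illustrative assumption, not survey data.

impressions = 1_000_000    # ad views delivered
click_rate = 0.005         # assume 0.5% of impressions produce a click
purchase_rate = 0.02       # assume 2% of clicks produce a purchase

cpm = 30.0                 # exposure-based: $30 per thousand impressions
cost_per_click = 0.75      # interaction-based: $0.75 per click
cost_per_action = 25.0     # outcome-based: $25 per purchase

clicks = impressions * click_rate          # 5,000 clicks
purchases = clicks * purchase_rate         # 100 purchases

exposure_cost = impressions / 1000 * cpm   # what the advertiser pays under CPM
interaction_cost = clicks * cost_per_click
outcome_cost = purchases * cost_per_action

print(f"exposure-based (CPM):  ${exposure_cost:,.2f}")    # $30,000.00
print(f"interaction-based:     ${interaction_cost:,.2f}")  # $3,750.00
print(f"outcome-based:         ${outcome_cost:,.2f}")      # $2,500.00
```

Under these assumed response rates, the same million impressions bill at very different totals, which illustrates why publishers prefer exposure-based pricing while advertisers seeking accountability gravitate toward interaction- and outcome-based models.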
As a new medium, the Internet has its share of problems in measurement. One of the current problems is the lack of standardization, as has been recognized by several authors (see Dreze and Zufryden 1998; Ephron 1997; Krishnamurthy 2000; Novak and Hoffman 1996; Zeff and Aronson 1999). Despite the voluntary guidelines developed by industry groups, disagreements abound. For example, just the term “impression” can be measured and interpreted in different ways. Some count an impression as each time a page is loaded, whereas others count it as each time an ad is loaded. Compounding this problem is the lack of third-party auditing of Web site traffic. Traditionally, media such as newspapers and magazines can be audited to verify publications’ circulation and readership (Ames, Lindberg, and Razaki 1999; Krishnamurthy 2000).
Caching and proxy servers are two other problems that researchers have identified as major hurdles for Internet measurement (Krishnamurthy 2000; Zeff and Aronson 1999). Caching, when used in relation to Web pages and ads, refers to the process of storing Web pages on a hard disk or server to speed downloading. Although it eliminates the need to download the same pages each time they are requested, caching can prevent publishers from receiving an accurate count of the Web traffic or ad impressions. Although companies such as MatchLogic have come up with software solutions to the problems of caching, for most advertisers, it remains an issue. Proxy servers act as gateways from inside firewalls to the outside world. When proxy servers download Web pages, a publisher’s log file may identify only one user with one IP address (or the proxy server) when hundreds of computers within a company or organization could have requested information from the site.
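The undercounting caused by a proxy server can be illustrated with a toy log analysis. The log entries below are invented for illustration; the point is only that counting distinct IP addresses, as a publisher's log file does, collapses every user behind a shared proxy into one:

```python
# Toy server log of (ip_address, user) request pairs. Entries are invented.
# Three employees browse from behind one corporate proxy (all appear as
# 10.0.0.1), plus two visitors on distinct connections.
log = [
    ("10.0.0.1", "alice"),
    ("10.0.0.1", "bob"),
    ("10.0.0.1", "carol"),
    ("172.16.4.2", "dave"),
    ("192.168.9.7", "erin"),
]

actual_users = len({user for _, user in log})  # 5 real visitors
unique_ips = len({ip for ip, _ in log})        # the log sees only 3 addresses

print(f"actual users: {actual_users}, unique IPs logged: {unique_ips}")
```

A real publisher's log contains only the IP address, not the user identity, so the three proxied visitors are indistinguishable from one; this is the measurement gap the article describes.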
Pretesting Banner Advertisements
Pretesting advertisements, or copy testing, is a common practice in advertising. Although the Internet has been considered an ideal and efficient medium to pretest advertisements, there is little information about how interactive agencies conduct pretesting. The history of pretesting can be traced back to the turn of the twentieth century, when recall and memory were measured to test the effectiveness of print advertisements. Today, television commercials are often pretested. For example, King, Pehrson, and Reid (1993) find that more than 80% of surveyed agencies pretested television commercials. However, no parallel research exists for Internet advertising. This study therefore seeks to determine the extent to which agencies pretest banner ads using Internet measures, such as click-throughs, outcomes/actions, banner exposures, and ad viewing duration, as well as conventional measures, such as brand attitude, ad memory, brand awareness, and purchase intention. These conventional measures have been used in the past by agencies to pretest ads for traditional media such as print and television (see Boyd and Ray 1971; King, Pehrson, and Reid 1993).
As previously stated, the overall purpose of this study is to investigate what interactive agencies are currently using to price, measure, and pretest banner ads. There is little available research to support the framing of any hypotheses. Therefore, this study was designed to address the following research questions: (1) What methods and measures are used most frequently by interactive agencies to price banner advertisements? (2) What methods and measures are used most frequently by interactive agencies to gauge the effectiveness of banner advertisements? (3) What do agencies perceive as major problems facing banner advertisement measurement? (4) How often do agencies pretest banner ads? What measures do they use to pretest banner ads?
The questionnaire was organized around four major sections that correspond to the study’s research questions. A majority of the questions were derived and developed from previous research and relevant literature (Hoffman and Novak 2000; Jobber and Kilbride 1986; King, Pehrson, and Reid 1993; Uyenco and Katz 1996; Zeff and Aronson 1999). The first section contained questions regarding the frequency with which agencies used five different pricing methods (Hoffman and Novak 2000; Zeff and Aronson 1999). An open-ended question was added to allow respondents to describe any additional pricing methods they may have been using.
The next section of the questionnaire included nine questions about how frequently agencies used different measures to gauge the effectiveness of banner ads (Jobber and Kilbride 1986; Zeff and Aronson 1999). The third section asked respondents to indicate what they saw as the major problems facing banner ad measurement (Krishnamurthy 2000; Zeff and Aronson 1999). In the final section, respondents were asked how often they pretested banner ads and what measures they used in pretesting (Jobber and Kilbride 1986; King, Pehrson, and Reid 1993). The questionnaire also included a series of questions about the nature and size of the agencies and respondents’ positions at the agencies.
Sample and Procedure
The mail survey was sent to the top 164 interactive agencies in the United States. The agency sample was drawn from lists of top interactive agencies compiled by two leading industry publications, Adweek and Advertising Age. In addition, names and addresses of media directors of interactive advertising agencies were gathered from Adweek’s on-line agency directory. After overlapping addresses and titles were merged, a total of 164 agencies were obtained for the survey sample. When the names and titles were not available, the correspondence was addressed to the “Media Director.”
Each questionnaire was accompanied by a cover letter and a postage-paid return envelope. Four weeks after the initial mailing of the survey, a second questionnaire was mailed to the nonrespondents. All questionnaires analyzed for this study were received before the end of February 2000, four weeks after mailing the final questionnaires. A total of 51 completed and usable questionnaires were returned. This resulted in an overall response rate of 31.1%.
Of the agencies that responded to the survey, 30 were full-service interactive agencies, 6 were ad networks, 14 were interactive divisions of traditional full-service advertising agencies, and 1 was an interactive creative shop (see Table 1). Although the sample size and response rate were relatively small, it appears that respondents represented a diverse sample of agencies involved in interactive advertising. Approximately half the agencies had more than 100 employees, a relatively large size for an interactive advertising agency. Approximately 55% of the respondents were media directors, and the remainder of the respondents held positions of various responsibilities in their respective agencies.
The first section of the questionnaire asked agencies how often they had been using different banner pricing methods. Results showed that more than 90% of the respondents always or frequently used CPM in pricing (see Table 2). The other frequently used pricing method was click-throughs. Approximately 33.4% always or frequently used click-throughs. As Table 2 further indicates, flat fees, unique visitor information, and cost per action were less frequently used. A ranking of the pricing methods based on respondents’ mean scores shows that CPM (Mean=3.94) was the favorite pricing method used (see Table 2). Click-through (Mean=3.06) was the second most frequently used measure, whereas cost per action and unique visitor were the least frequently used methods.
To summarize, the survey reveals that CPM is the favorite pricing method for banner advertisement buying and selling. Click-through remains a distant second pricing method despite the Internet’s unique ability to track reactions to banner ads.
The second section of the questionnaire asked agencies how often they used various measures to gauge the effectiveness of banner advertising. It is clear from Table 3 that an overwhelming majority of the agencies (86.3%) indicated that they always or frequently used click-throughs. Approximately 10% used it sometimes, and only a few agencies (3.9%) seldom or never used it. The next most often used measure was outcomes such as inquiries or purchases of products. More than 72% of the respondents indicated they used this method frequently. Banner ad exposure is the third most often used method, with 53% using it frequently. It is worth noting that approximately 27% of the agencies seldom or never used exposure to measure banner ad effectiveness.
The other measures (brand awareness, ad awareness, brand attitude, memory, and purchase intention) were less popular, used by only 35% or fewer of the agencies surveyed. The least used measure was banner ad viewing duration, with a mere 15.6% using it frequently. Table 3 also presents the ranking of these measures. It shows that click-through (Mean=4.17) was the most popular measure, and outcomes/action (Mean=3.82) was the second most popular measure.
It therefore appears that an overwhelming majority of agencies considered click-throughs and actions as the best benchmarks of banner ad effectiveness. This makes intuitive sense, because the Internet offers the unique ability to track the behaviors of visitors. However, this conflicts with the results from the previous section, which showed that click-throughs and actions were used less often than CPM in pricing. Clearly, a disparity exists in what agencies pay for and what they consider good measures of advertising effectiveness on the Internet.
The next section of the questionnaire asked respondents, “In your opinion, what are the major problems currently facing banner ad measurement?” Respondents were given five potential major problems or issues that many in the industry have discussed (Ott 1999; Uyenco and Katz 1996; Zeff and Aronson 1999). Approximately 60% of the agencies revealed that lack of standards in measuring banner ads was a major problem (see Table 4). This finding is consistent with industry reports from both agencies and advertisers (Mottl 1999; Krishnamurthy 2000). Approximately 45% of the agencies considered the lack of independent auditing as the next major concern.
Forty-three percent of the respondents indicated cache/caching and 41% indicated proxy servers as other measurement problems. However, only 11% indicated that cookies were a problem. This is because, in most cases, a cookie serves as a unique identification for a computer user, and many see it as a useful tool for targeting and profiling rather than as a potential problem. In response to the open-ended question, one agency indicated that the cost of measurement services and the time for analysis were also measurement problems.
In short, despite efforts by various industry groups to provide voluntary measurement and pricing guidelines, many still see the lack of standards as the major problem. Caching, proxy servers, and a lack of site traffic auditing were the next three major problems according to the respondents. The challenge for the industry in the near future is to continue to explore the development of viable measurement and auditing standards. At the same time, technology-based solutions to proxy servers and caching will have an impact and ease some concerns in the future.
Banner ad pretesting was not widely used among the surveyed agencies. Although more than half of the agencies indicated that they pretested banner ads at least occasionally, only about one-third (37%) always or frequently pretested ads (see Table 5). Nearly 45% of the agencies never or seldom pretested banner ads. Overall, compared with television advertising, the frequency of pretesting banner ads is relatively low. Prior research, for example, found that 82% of the surveyed advertising agencies pretested television commercials (King, Pehrson, and Reid 1993). This difference can be attributed to the fact that the Internet is still a relatively young medium, and advertising spending on the Internet is still a fraction of that for television. As the Web matures and grows, it will be interesting to see whether pretesting advertisements on the Web becomes more common.
Those that pretested banner ads at least occasionally were asked to indicate what measures they had used in pretesting. Results indicated that approximately 64% used click-throughs and 45% used outcomes/actions in pretesting (see Table 6). Approximately 21% of the respondents used exposure and memory to pretest banner ads. Less than 18% of the agencies used awareness, purchase intention, attitudes, or ad duration as pretest measures. Traditionally, behavioral measures such as product choices and purchases have been considered highly valuable (Boyd and Ray 1971), but they were rarely used in pretesting (King, Pehrson, and Reid 1993). The Internet offers the technical ability for advertisers to track immediate responses to their ads, which could be the reason interaction and performance-based measures were more popular in pretesting banner ads than were traditional communication measures.
Summary and Conclusions
The results of this survey have provided some initial insights on how agencies price, measure, and pretest Internet banner ads, as well as what they perceive as the problems in measurement. Although the findings are preliminary, they nevertheless raise several major points that deserve the attention of both professionals and academics.
* An overwhelming number of the agencies surveyed frequently use CPM to price banner advertisements. Approximately one-third of the agencies use click-throughs. Cost per outcome/action is the least used pricing method. This shows that, despite the technical ability of the Internet to track more precise user responses and actions, CPM remains a favorite pricing method.
* When using measures to gauge advertising effectiveness, most agencies indicate using click-throughs and outcomes/actions rather than exposures or impressions. This indicates that, when it is feasible, interaction and performance-based benchmarks are favored as measures of advertising effectiveness.
* Despite industry efforts to provide voluntary measurement metrics and guidelines, many agencies still consider the lack of standardization and auditing as major problems facing Internet banner advertising.
* Generally speaking, pretesting banner ads is not common practice among interactive agencies. For those using pretests, the measures commonly used are click-throughs and outcomes. Traditional measures such as awareness, attitude, and memory are less popular.
The most intriguing finding of the study is that impression- or exposure-based pricing models are more widely used in pricing banner advertising than are interaction- and performance-based models. However, advertisers often pay high CPMs for banner advertising on the premise that it offers a higher degree of accountability than does traditional advertising (Hoffman and Novak 2000; Zeff and Aronson 1999). Several reasons could explain this contradiction. First, a lack of uniform measurement and auditing standards could hamper agencies from using performance measures. Second, publishers might resist the use of performance measures because exposures are more readily comparable to television and print media’s buying and selling practices. Moreover, the responsibility of publishers has traditionally been to deliver the opportunity to see. Charging by exposures may be more in alignment with traditional media buying practices (Zeff and Aronson 1999).
The findings indicate that interaction and performance metrics are favored by agencies in measuring and pretesting advertising. This is most likely due to the metrics’ ability to assess the impact of advertising spending in a more precise and efficient manner. The need for accountability is especially important for clients and agencies that pay relatively higher CPMs to employ banner ads in promotional and direct response campaigns. However, for ads that are designed for branding purposes, CPM will likely remain the primary unit for media buying and planning. Compared with traditional media, such as television, the Internet is unique in that it enables advertisers to use banner ads to pursue different marketing objectives within the same medium. It also has the technology to track consumers’ various responses. To maximize the economics for both advertisers and Web publishers, the advertising industry clearly needs a multi-tiered pricing structure that incorporates interaction, performance, and exposure measures. A system that ties multiple pricing mechanisms to marketing objectives could propel the Internet to realize its full potential as a truly unique and more attractive advertising medium.
It should be noted that this study describes what agencies have been using to price, measure, and pretest banner ads rather than nonbanner promotional techniques, such as affiliate programs, referrals, or classifieds. Therefore, the reported findings do not necessarily apply to many nonbanner techniques that are more likely to use performance-based pricing measures. Industry reports indicate that the use of nonbanner advertising has been gaining increasing popularity in recent years (Hyland 2001; Sweeney 2000). In light of that, future research should study the changes in pricing and measurement practices regarding both banner and nonbanner advertising. For example, as the use of nonbanner advertising grows, it will be important to determine whether performance-based measures, such as cost per action or per lead, will overtake exposure-based impressions and become the predominant pricing model in the future. In addition, this study does not directly address why certain measures are favored by agencies. More research is needed to identify factors influencing such practices. Finally, researchers should explore the extent to which the views and perceptions of the interactive agencies are shared by advertisers and on-line publishers. Future research efforts in these directions can be key steps toward narrowing the differences among the major players and helping develop optimal Internet advertising pricing and measurement mechanisms.
Ames, Gary Adna, Deborah L. Lindberg, and Khalid A. Razaki (1999), “Web Advertising Exposures,” The Internal Auditor, 56 (5), 51-54.
Boyd, Harper W. and Michael L. Ray (1971), “What Big Agency Men in Europe Think of Copy Testing Methods,” Journal of Marketing Research, 8, 218-133.
CASIE (1997), Glossary of Internet Advertising Terms and Interactive Media Measurement Guidelines, New York: Association of National Advertisers.
Coffey, Steve (2001), “Internet Audience Measurement: A Practitioner’s View,” Journal of Interactive Advertising, 1 (2), [http://www.jiad.org/].
Department of Commerce (1999), The Emerging Digital Economy, Washington, DC: Department of Commerce.
Dreze, Xavier and Fred Zufryden (1998), “Is Internet Advertising Ready for Prime Time,” Journal of Advertising Research, 38 (3), 7-18.
Ephron, Erwin (1997), “Or Is It an Elephant? Stretching our Minds for a New Web Pricing Model,” Journal of Advertising Research, 37 (2), 96-98.
FAST (2000), “Principles of Online Media Audience Measurement,” [http://www.fastinfo.org/measurement].
Hoffman, Donna L. and Thomas P. Novak (2000), “Advertising Pricing Models for the World Wide Web,” in Internet Publishing and Beyond, D. Hurley, B. Kahin, and H. Varian, eds. Cambridge: MIT Press, 45-61.
Hyland, Tom (2001), “Web Advertising a Year of Growth,” [http:// www.iab.net/].
Internet Advertising Bureau (1999), “Metrics and Methodology,” [http://www.iab.net/].
Jobber, David and Anthony Kilbride (1986), “How Major Agencies Evaluate TV Advertising in Britain,” International Journal of Advertising, 5, 187-195.
King, Karen W., John D. Pehrson, and Leonard N. Reid (1993), “Pretesting TV Commercials: Methods, Measures and Changing Agency Roles,” Journal of Advertising, 22 (3), 86-98.
Krishnamurthy, Sandeep (2000), “Deciphering the Internet Advertising Puzzle,” Marketing Management, 9 (3), 34-40.
McFarland, Doug (1998), “Web Measurement Needs Standards, But Whose?” Advertising Age, (May 11), 48.
Meeker, Mary (1997), The Internet Advertising Report, New York: HarperCollins.
Mottl, Judith (1999), “The Trouble With Online Ads,” Information Week, (October 11), 90-93.
Novak, Thomas P. and Donna L. Hoffman (1996), “New Metrics for New Media: Toward the Development of Web Measurement Standards,” [http://www2000.ogsm.vanderbild.edu/].
Ott, Karalynn (1999), “Seeking Ad Measurement Standards: Internet Ad Community Trying to Agree on Terms for Comparison,” Advertising Age’s Business Marketing, 84 (September), 24-25.
Parsons, Andrew J. (1997), “The Impact of Internet Advertising: New Medium or New Marketing Paradigm?” in Understanding the Medium and Its Measurement: ARF Interactive Media Research Summit III, New York: Advertising Research Foundation, 5-16.
Sweeney, Terry (2000), “Web Advertising: Money to Burn – Clickthrough Rates on Online Ads Are Declining, But True Believers Say There’s No Place Like the Web to Build the Brand,” Internet Week, (October), 57-58.
Uyenco, Beth and Helen Katz (1996), “Mastering the Web: What We Have Learned so Far?” in Bringing Clarity to New Media: ARF Interactive Media Research Summit II, New York: Advertising Research Foundation, 138-169.
Zeff, Robbin and Brad Aronson (1999), Advertising on the Internet, New York: John Wiley & Sons.
Fuyuan Shen (Ph.D., University of North Carolina-Chapel Hill) is an assistant professor, College of Communications, Pennsylvania State University.
The author thanks the reviewers for their helpful comments.
Copyright American Academy of Advertising Fall 2002