Clemenger Media Sales specialises in content marketing.

Clemenger Media Sales (CMS) is the niche-media specialist. Who is your target market? Clemenger’s advertising, media and marketing sales team can help you own it: if you want front covers and lead stories, you need to own your target market.

Clemenger and CMS can help you own your customers, but you have to invest in content, and in building and managing relationships.

Clemenger can help you add value.

Clemenger Media Sales are the niche-media, content-marketing and media-partnership experts (we have dozens of media and publishing clients). We hope the following article helps.


Using recognition-based tracking to compare the ROI of print, radio and TV

Ad campaigns often include a variety of media, which often leads to questions about effectiveness. This article is a case history from a continuing recognition-based tracking study by BlueCross BlueShield of Minnesota that shows a way to answer all the questions that arise regarding which medium is best.

Authors: Don Bruzzone, Lizabeth Reyer

Editor’s note: Donald E. Bruzzone is president of Bruzzone Research Company, Alameda, Calif. Lizabeth L. Reyer is market research manager at BlueCross BlueShield of Minnesota, Eagan, Minn. This article is based on a presentation made before the Advertising Research Foundation Conference in New York on October 26, 1998. The full article was published in eXperts Report on Media Research – Information or Currency?: Print; TV; Interactive and Accountability. October 1998. Copyright 1998 Advertising Research Foundation.

It is very common to have campaigns that include advertising in a variety of media: print, radio, TV, etc. It is not as common to have a good answer to the question “Which is best? Which reaches and affects people at the lowest cost?” In short, which provides the most “bang per buck” for my product, in my markets, today? Can you even make valid comparisons of the impact of ads and commercials?

This is a case history from a continuing recognition-based tracking study by BlueCross BlueShield of Minnesota that shows a way to answer all of these questions.

When health insurance became highly competitive with the introduction of HMOs and other new forms of coverage, BlueCross BlueShield of Minnesota got serious about marketing and the role of advertising. They weren’t sure they wanted to go along with the conventional wisdom about the need for advertising and the types of advertising that worked best. They wanted solid evidence.

A team from BlueCross BlueShield of Minnesota looked at the more traditional telephone tracking surveys where you ask if people recall seeing or hearing any advertising for health care plans, and if so, which ones. They had two major concerns about that approach.

First, it is not very accurate or precise. When somebody says they recall your advertising you don’t know if they recall your present advertising or your previous advertising. Or even if it is your advertising. They could be remembering your competitors’ advertising. When a person looks at an ad and says “Yes, I recognize that as an ad I have seen before,” you have a much more accurate and discriminating measure of the advertising they were actually exposed to.

Secondly, ad recall does not do a very complete job in identifying those actually reached by the advertising. The team was impressed with the evidence showing the number that can recognize advertising they have seen before is two to three times greater than the number that can recall that same advertising from memory. When you are trying to see if the advertising affected the people who noticed it, recognition gives you a much more complete picture of the number who actually noticed it. 1

The team was also impressed with the evidence on the limitations of recall-based research from two major industry-wide studies in the early ’90s. First was the ARF’s Copy Research Validity Project. It showed the standard day-after recall test, which had been the standard of the industry for over a quarter of a century, didn’t perform much better than flipping a coin when you were trying to predict if Commercial A was going to be better than Commercial B. 2

That created enough of a furor that a second industry-wide study, based on a larger collection of the same kind of expensive but highly reliable split-cable tests, addressed the issue: IRI’s “How Advertising Works.” The evidence was the same, but the conclusion was stated even more strongly: ad recall was not related to sales. 3

BlueCross BlueShield of Minnesota didn’t want to spend any more than was necessary, so they also looked at a variation of recall-based telephone tracking surveys. To try to avoid the problems of recall and get closer to recognition, the interviewer reads a description of the ad or commercial to the respondent. Results from this approach published by Erik du Plessis convinced them this was only a halfway solution: the number that qualified as having seen the ads increased, but only half as much as it does in a true recognition test. 4

Another consideration was the variety of advertising they wanted to test, and that got back to the basic objectives of the study. They wanted to find out where the firm stood in terms of all the advertising about health plans being conducted in the market during that period. And, the campaign was a brand-building effort so they wanted to see if any of it was having an effect on the image of health plans among the population as a whole.

The advertising to be evaluated included seven print ads, six TV commercials and two radio commercials. Only about half were for BlueCross BlueShield of Minnesota. The rest were for competitors.


How do you get a recognition-based test of media as dissimilar as that? One way is to approach people at random in malls and invite them into an interviewing facility where all the ads are shown and all the commercials are played. That is a perfectly valid approach, one that BRC has used thousands of times. But one of the requirements here was to keep the cost down. So we wanted to avoid those relatively expensive personal interviews.

How? We started by calling a random cross section of households throughout their marketing area. The objective was to see if the advertising was increasing people’s awareness of the firm and making them think more favorably of it. So the first two things we asked were how familiar they were with the health care plans in the area, and which were best in a number of attributes. Then we played a radio commercial to them over the phone. The name of the advertiser had been bleeped out, so in addition to asking if they recognized it, we could find out if it communicated the most important piece of information any advertisement has to get across. We asked if they remembered who it was for.

The phone interview was kept short and stripped to the essentials for good reason. At the end we said we would like to show them some pictures and ask questions that would be in a questionnaire we would mail to them; 650 said they would fill it out and return it. After a single follow-up, 62 percent did. That gave us 405 respondents who completed both the telephone and the mail surveys. This is the sample that the remainder of the results are based on. Each page of the mail questionnaire contained BRC’s standard battery of diagnostic questions. Pages for the print advertising included a copy of the ad with the name blocked out (Fig. 1).

To see if they recognize the TV commercials we show a photo board and script (Fig. 2). Again, all references to the advertiser are blocked out, and the same set of diagnostics is included.

BRC has been testing the recognition of commercials using both personal interviews, where people see the actual commercial on a monitor, and this type of mail questionnaire for more than 20 years. We have scores of directly comparable parallel tests that show there is a correlation of .88 between the percent that recognize commercials in these mail surveys and the percent that recognize them when they see the actual commercial. That means the mail surveys give you 77 percent, or most of what you get from personal interviews.
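One common reading of that 77 percent figure is as shared variance, the square of the .88 correlation. The article does not spell out the computation, so treat this sketch as an interpretation rather than BRC’s own arithmetic:

```python
# Correlation reported between mail-survey recognition scores and
# recognition measured in personal interviews with the actual commercial.
r = 0.88

# Shared variance (r squared): how much of the personal-interview
# result the cheaper mail survey accounts for.
shared_variance = r ** 2
print(round(shared_variance, 2))  # 0.77, i.e. the "77 percent" in the text
```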

But the cost of doing it this way is about half of what it costs with shopping center intercepts and personal interviews. So you are trading off a known and relatively modest decrease in accuracy for a substantial decrease in cost. With budgets for print advertising often smaller than those for TV, this can make recognition-based tracking feasible for a lot more print campaigns. Further, this is a service targeted at virtually everybody, so this approach also helps overcome the upscale skew inherent in mall intercept interviewing. What did we find in this case?


Figure 3 shows the percent that recognized each of the 15 ads and commercials in the test. It varied widely, from a high of 53 percent for one TV commercial to a low of 2 percent for one of the print ads. The two radio commercials came in second and third.

Figure 3 shows why those favoring print might be reluctant to get into a head-to-head comparison between media: usually you will find fewer people noticing print ads. But you don’t usually spend as much on a print campaign, and that has to be factored into a study like this. Figure 4 shows what was spent to run each of the ads and commercials in this study.

Expenditures that covered the year before the test varied widely. But the variations don’t match the differences in recognition – not one bit! The odds that we are going to turn up meaningful differences in advertising efficiency have just increased.

The recognition scores also show something else: the percent of the public being reached by each medium, and by each combination of media. The pie chart in Figure 5 shows print reached a total of 23 percent. TV reached 41 percent. That was a relatively small difference, considering the average expenditure for airing the TV commercials was more than double the amount spent to run the average ad.

Why were people more likely to notice some of these ads or commercials and ignore others? The battery of diagnostic questions in Figures 1 and 2 produces the advertising response model, or ARM, shown in Figure 6, and that tells you why. This ARM is for the commercial shown in Figure 2.

This model was described earlier in a 1996 ARF talk 5, so we won’t go into detail about it here. It is enough to note the white areas. They show people noticed this commercial because its warmth and appeal generated a greater than average amount of empathy. Further, the high level of relevance people found in the message also contributed to the greater than average score for purchasing interest. That’s unusual. Today most commercials capture attention through their entertainment value.

Two of the below-average performance scores were for a lack of humor. The third gives reason for concern. The number of recognizers who knew who it was for was below average. A commercial can’t help you if people don’t realize it is about your product or service. This wasn’t a fatal flaw because all of the advertising it was competing against was also below average in getting the name across. And although the number that knew who it was for was below average, the number was still substantial: 37 percent.

Capturing attention and getting the name across are two essentials that are still overlooked all too frequently in advertising research. If you don’t capture attention and get the name across, the magic of advertising doesn’t even have a chance to start working. But now that we have measured both, we are ready for the next key question.

Did these ads and commercials have any effect on the people they did reach? We measure that with attitude shifts. We asked which health care plan they felt was best on 10 different attributes. We asked that in the first part of the telephone interview so the advertising we showed them later couldn’t affect their answers. Then we looked to see if those who recognized an ad were more likely to name the advertiser than those who didn’t see the ad. The results are shown in Figure 7.

If the percent naming the advertisers was significantly higher among those who noticed the advertising, that generated one of the bars on this chart. You’ll find that seeing some ads and commercials is associated with significant lifts on almost all attributes. Others only had shifts on a few attributes, and some showed no significant effects. You see some differences between the media. Earlier we had seen that print ads tended to be noticed by a smaller segment of the population. Here we see those who noticed print ads tended to show the biggest improvements in attitudes. That was true for some, but not all of the ads.
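A lift only counts in Figure 7 if it is statistically significant. The article does not name the test used; a standard two-proportion z-test is one plausible way to make that call, sketched here with illustrative numbers, not the study’s data:

```python
from math import sqrt

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """z statistic for the difference between two proportions,
    using the pooled standard error."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 45 of 150 recognizers vs. 38 of 255 non-recognizers
# named the advertiser as best on a given attribute.
z = two_proportion_z(45, 150, 38, 255)
significant = z > 1.96  # clears the two-sided 5 percent critical value
```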


This type of information can be used to answer many key questions about the performance of advertising campaigns. But this is a session where we are comparing media, so Figure 8 shows how this type of tracking data can be used to compare media and see if there is any synergy to be gained from using a combination of media.

Starting at the left, the white bar shows when we averaged the scores for all the attributes, we found that among those who didn’t recognize any of the advertising, 9 percent named the advertiser as best. The next three show the results for those that recognized advertising from only one of the three media. It shows that in this case print was better than TV and about the same as radio, but the differences were relatively small. The last three bars show larger increases are associated with exposure to several media. Those reached by print and radio show the two work better in concert than separately. Those reached by print and TV show no such synergy.


To get to our final measures of return on investment we combine all of this into a measure of overall impact for each ad and commercial. We take the percent that recognized an ad or commercial and multiply it by the average shift among recognizers – the increase in the percent saying the advertiser is best. For the population as a whole, that gives us the lift that was related to noticing that ad or commercial. In short, the percent reached and affected. We projected that to the population, and divided the total amount spent to run the ad or commercial by the number reached and affected. That produced Figure 9 showing the cost per person reached and affected by each of those ads and commercials.
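The arithmetic of that final measure can be sketched as follows. All numbers here are made up for illustration; they are not the study’s figures:

```python
def cost_per_person_reached_and_affected(
    pct_recognized: float,  # share of the market recognizing the ad
    attitude_shift: float,  # lift in "advertiser is best" among recognizers
    population: int,        # adults in the market
    spend: float,           # total spent to run the ad or commercial
) -> float:
    """Divide media spend by the number of people reached and affected."""
    reached_and_affected = population * pct_recognized * attitude_shift
    if reached_and_affected == 0:
        # Nobody reached and affected: cost per person is unbounded,
        # which is why some ads plot off the chart.
        return float("inf")
    return spend / reached_and_affected

# Hypothetical: 40% recognition, a 5-point lift, 3 million adults,
# and $500,000 in media spend.
cost = cost_per_person_reached_and_affected(0.40, 0.05, 3_000_000, 500_000)
# 3,000,000 x 0.40 x 0.05 = 60,000 people reached and affected
```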

Now we have some results that are clear-cut and easy to read. Some of these ads and commercials are reaching and affecting people at a very low cost per person. Others are off the chart. The mathematically sophisticated may recognize this as an asymptotic function. As the number of people reached and affected by an ad or commercial approaches zero the cost per person approaches infinity. Hence the off-the-chart scores.

These final results are specific to this type of product, in one market, where one specific set of ads and commercials had been running prior to the test. Our intent is to show this type of study can be done for any product in any market. We don’t mean to imply the differences we found among media in this test are typical. But with those important qualifications out of the way, let’s look at what we found. Print ads were certainly competitive with commercials. Each medium had at least one ad or commercial that was unusually cost efficient.

Both print and TV had ads and commercials that were so inefficient that they were off the scale. We didn’t have enough radio commercials to show how much variation we might find in that medium.

The most important thing this study showed was that there are enormous differences in the cost per person reached and affected by ads and commercials. A company that is running the efficient ads and commercials is getting a lot more for its advertising dollars. These differences are related to the creative side of the advertising, not to the medium. The best of the print ads reached and affected people at a lower cost than the worst of the TV commercials. And the best of the TV commercials were more efficient than the worst of the print ads.

Did we meet our objectives? We wanted to get a fair comparison among media, so we used recognition to see how many actually noticed each ad and commercial. Recognition isn’t anything new in testing print advertising. Daniel Starch started using it in the 1920s. Starch tests and similar ad readership studies are still being done. But there is a key difference: They check recognition of ads among people who have read a specific issue of a magazine. That can certainly be useful, but it doesn’t give you a basis for comparing ads and commercials. We met that need by doing recognition-based tracking among a cross section of the entire market. We didn’t incur the higher cost of using personal interviews to show things to people and control the order in which they are exposed. We did it all with a combination phone and mail survey.

Valid comparison

This, then, is what we worked out as an answer to that opening question. We feel it shows you can make valid comparisons between ads and commercials, and when you do, you can find situations like this where advertising in print can prove every bit as effective as advertising on TV.


1 Documentation of these points and additional references are found in several sources: Schaefer, Wolfgang: “Recognition Reconsidered,” Marketing and Research Today, ESOMAR, May 1995; Singh/Rothschild/Churchill: “Recognition vs. Recall as Measures of TV Commercial Forgetting,” Journal of Marketing Research, 2/88; Krugman, Herbert E.: “Low Recall and High Recognition of Advertising,” Journal of Advertising Research, Feb/Mar 1986; BRC Technical Memo #58, BRC, 1983.

2 Haley & Baldinger: “The ARF Copy Research Validity Project,” Journal of Advertising Research, April/May 1991.

3 Lodish, et al: “How Advertising Works,” Journal of Marketing Research, May 1995.

4 du Plessis, Erik: “Recognition vs. Recall,” Journal of Advertising Research, May/June 1994.

5 Bruzzone, Donald E. and Deborah J. Tallyn, “Linking Tracking to Pretesting with an ‘ARM’,” Journal of Advertising Research, May/June 1997.