A study with Chicago listeners back in the early days of PPM had participants keep track of their listening using a diary while at the same time carrying a PPM meter.
The study is important because it was one of the few opportunities we had to see directly how the meter compared to recording listening in a diary.
When Arbitron compared the self-reported diaries to what the meters reported, it found that people wrote down more listening than the meters recorded.
So which was correct?
Did listeners write down listening that didn’t take place, or did the meter fail to identify listening that people were exposed to?
Broadcasters were told at the time that the meter was more reliable, that people didn’t accurately record their listening, but today we have to wonder.
We now know that the meter does not capture 100% of exposed listening in a typical setting.
If it’s true that the meter misses some listening, isn’t it possible that the participants in Chicago were right and the meters were wrong?
In light of what we know now, it’s a reasonable question, but during the early days of PPM the questioner would have been dismissed if not burned at the stake.
It was a time when Arbitron took every opportunity to present upbeat PowerPoint presentations about how buyers were excited about PPM, that they were anxious to get more timely and detailed ratings data, and that PPM would bring radio into the 21st century.
But did Arbitron sell radio a bill of goods?
Arbitron funded a PPM Economic Impact Study that predicted gains of $700 million in annual radio revenue.
At the time, Gary Fries, head of the RAB, declared:
Advertisers and agencies are eager to embrace electronic measurement. Moreover, it is apparent that there is a risk of loss of advertiser dollars for media that do not advance to a more reliable – and better – measurement platform.
In a 2006 stockholders' letter (PDF), Steve Morris, then President and CEO of Arbitron, declared:
I believe the time is right for the radio industry to upgrade its measurement system to one that more reliably measures listening habits, does so with 21st century technology, and puts our radio customers in the position of being leaders in answering the call of major advertisers for more accountability for their media expenditures.
Is it just coincidence that both Fries and Morris use the word "reliable" in touting PPM?
In retrospect, the debate over the value of PPM was never really a debate. Arbitron simply stated as fact that PPM was more reliable, more accurate, and a vast improvement over the diary.
Industry leaders supported it, the Arbitron Advisory Board rolled over and bought Arbitron's upbeat storyline, and PPM moved forward.
Yet despite all the assurances that PPM was more reliable, despite upbeat predictions about the positive financial impact of PPM, there were tell-tale signs that all was not right with PPM.
We heard a lot about reach.
According to PPM, people listened to more radio stations than diaries showed they did. PPM reach (cume) exceeded diary cume.
What we didn’t hear about were the metrics that mattered, the metrics radio sold.
We didn’t hear about quarter-hour ratings (AQH).
Presentations rarely addressed PPM’s lower time spent listening (TSL). Reviewing numerous client presentations over this period we couldn’t find a single AQH rating graph comparing diaries to PPM.
Pre-currency books were run in each new market, with Arbitron producing both a diary book and a PPM book so we could compare diary and PPM ratings.
Every new PPM market saw a decline in AQH ratings. The size of the drop varied slightly from market to market, but PPM was saying that people weren't listening as much as the diary said they were.
The sales implications were clear.
A station working with lower PPM AQH ratings had two choices: It could either cut rates to keep cost-per-points the same, or it could pitch higher cost-per-points and hope that buyers would go along.
Despite the promise of greater credibility with agencies and larger buys if radio switched to PPM, stations were looking at the prospect of lower rates, lower revenue, and as a bonus, a 30% higher Arbitron bill.
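The two choices a station faced can be sketched with simple cost-per-point arithmetic. The ratings and spot rate below are hypothetical round numbers chosen for illustration, not figures from any actual market:

```python
# Sketch of the rate math a station faced when its PPM AQH came in below
# its diary AQH. All numbers here are hypothetical, for illustration only.

diary_aqh = 1.0      # AQH rating under the diary
ppm_aqh = 0.7        # the same station's AQH rating under PPM
spot_rate = 500.0    # current price of one spot, in dollars

# Cost-per-point (CPP) = spot rate / AQH rating.
diary_cpp = spot_rate / diary_aqh          # 500.0
ppm_cpp_same_rate = spot_rate / ppm_aqh    # ~714.29

# Option 1: cut the rate so CPP stays at the diary level.
rate_to_hold_cpp = diary_cpp * ppm_aqh     # 350.0, a 30% rate cut

# Option 2: hold the rate and ask buyers to accept the higher CPP.
cpp_increase = ppm_cpp_same_rate / diary_cpp - 1   # ~0.43, i.e. ~43% more

print(f"Diary CPP: ${diary_cpp:.2f}")
print(f"PPM CPP at the same rate: ${ppm_cpp_same_rate:.2f} (+{cpp_increase:.0%})")
print(f"Rate needed to hold CPP: ${rate_to_hold_cpp:.2f}")
```

With a PPM rating 30% below the diary, holding the spot rate pushes cost-per-point up about 43%, which is exactly the kind of increase buyers would be asked to swallow.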
To Arbitron, it was no big deal. The company had its increased profits baked into PPM contracts.
However, Arbitron knew it had an awkward PR problem on its hands. Lower revenue was not part of the PPM narrative, so Arbitron needed a solution.
Arbitron's solution was to tell radio to simply tell buyers that spots were going to cost more.
Just tell them!
One hundred rating points using PPM numbers were going to cost more than 100 points using diary numbers: by Arbitron's estimate, about 43% more!
What could go wrong?
Rather than pitch it that bluntly, Arbitron got clever and came up with the slogan "70 is the new 100" for PPM rating points.
Arbitron put together fancy four-color print brochures (PDF) with smiling panelists on the cover.
To make it look scientific, the company calculated a new set of AQH rating numbers for each market.
For example, in New York 80 PPM points equaled 100 diary points, in Los Angeles 74 equaled 100, and in Chicago 85 PPM rating points were just as good as 100 diary points.
Arbitron justified their new math by pointing out that 0 degrees Celsius is the same as 32 degrees Fahrenheit. Really, they did.
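The "X is the new 100" figures translate directly into per-point cost multipliers. A minimal sketch of that arithmetic, using the market equivalences quoted above:

```python
# The "X is the new 100" equivalences cited for three markets.
equivalents = {"New York": 80, "Los Angeles": 74, "Chicago": 85}

for market, ppm_points in equivalents.items():
    # If N PPM points buy what 100 diary points bought, each PPM point
    # has to cost 100/N times as much for total spend to stay flat.
    multiplier = 100 / ppm_points
    print(f"{market}: {ppm_points} PPM points = 100 diary points "
          f"-> each point costs {multiplier - 1:.0%} more")
```

Run the loop and the implied per-point increases come out to roughly 25% in New York, 35% in Los Angeles, and 18% in Chicago; the 43% figure cited earlier corresponds to the generic "70 is the new 100" slogan, since 100/70 is about 1.43.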
You can imagine how well the 70=100 program was received.
At the time we called it Arbitron’s fuzzy math, because fuzzy it was.
Arbitron spent two years trying to sell the idea to radio and media buyers, but then quietly buried the campaign.
While the campaign is gone, the implications are still with us.
Arbitron declared that PPM was right and the diary was wrong. And radio bought it.
But given what we now know about PPM’s flaws, isn’t it more likely that the diary got it right (or at least closer to the truth) than PPM?
Isn’t it time to reopen the matter?