This is the most confusing issue we will address in these questions. You may have to read what follows several times to understand the implications, but it will be worth your while.
One of Arbitron’s most effective negotiating tricks is to focus attention on a problem that can be fixed at minimal cost so as to distract from broader problems that would cost much more to fix. One tool in this effort is DDI, Designated Delivery Index.
Arbitron sets quotas for age, sex, ethnicity, and other criteria in an effort to mirror the population of each market. It then recruits families matching the desired criteria. Unfortunately, not all panelists actively participate by carrying their meters, so each month's in-tab (the panelists who actually participated in that month's survey) may not mirror the population.
DDI is a device to obfuscate how far off the in-tab is. Arbitron sets a modest goal for how close it needs to get to the quotas (currently 75-80% in the most difficult cells), and then indexes the actual in-tab to this goal. The effect of setting a goal well below Arbitron's own quota and then indexing to this goal is to appear to be reaching desired in-tabs while falling woefully short.
If you are confused by all these arithmetic gymnastics, you are supposed to be. That’s the point. If Arbitron is supposed to have 100 people in a cell and they end up with 80, it is clear that they only achieved 80% of their quota. But if they arbitrarily decide that while their goal is 100, they will settle for 80, then hitting 80 equals a DDI of 100. So they can declare victory, all the while never discussing the actual numbers.
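The arithmetic above can be sketched in a few lines of Python. The numbers are the illustrative ones from the example, not actual Arbitron figures:

```python
# Hypothetical illustration of how indexing against a lowered goal inflates DDI.
quota = 100   # panelists the sample design actually calls for
goal = 80     # the reduced target the index is calculated against
in_tab = 80   # panelists who actually participated

pct_of_quota = 100 * in_tab / quota   # the honest number: 80% of quota
ddi = 100 * in_tab / goal             # the reported number: DDI of 100

print(f"Percent of quota achieved: {pct_of_quota:.0f}")
print(f"DDI: {ddi:.0f}")
```

Same in-tab, two very different scores: the only difference is which denominator you divide by.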
The broader goal of obfuscation is to focus on cells and direct the discussion away from the total in-tabs. As long as Arbitron discusses in-tabs as percentages of goals, they never have to address the drastically lower PPM in-tabs.
Houston’s last diary book had approximately 4,000 diaries in the metro. Last month Houston had 1,435 active panelists. Houston has seen almost a two-thirds drop in the in-tabs going from diaries to PPM.
New York has dropped from 13,000 diaries to about 3,500 active panelists. Philadelphia has dropped from about 4,500 diaries to about 1,400 active panelists, 12+. And so on.
PPM markets have about one-quarter to one-third of the in-tabs they had before.
As PPM rolled out, Arbitron assured stations that even with fewer participants, the ratings would be more accurate than with the diary. As with so many aspects of PPM, no one bothered to question Arbitron's assertion. And when it comes to accuracy, PPM has delivered less than promised.
The formula shown at the top of the page is the calculation for margin of error. (For the sake of brevity, we're going to skip a great deal of the background on sampling error. Go here if you want to understand more about margin of error.)
All you need to understand about error is that ratings are estimates. Estimates based on surveying a sample of people are subject to error (wobbles). The smaller the sample, the greater the error, the greater the wobbles.
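As a rough illustration, here is the textbook margin-of-error formula for a simple random sample (z * sqrt(p*(1-p)/n) at 95% confidence; this is the traditional formula, not necessarily the one Arbitron uses), applied to the Houston in-tab figures cited below:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n people."""
    return z * math.sqrt(p * (1 - p) / n)

# A station with a 5% share of listening, under diary-era vs PPM-era sample sizes:
for n in (4000, 1435):
    moe = margin_of_error(0.05, n)
    print(f"n={n}: a 5% share is really 5% +/- {100 * moe:.2f} points")
```

Cutting the sample from 4,000 to 1,435 widens the wobble by roughly two-thirds: the error grows with the square root of the shrinkage.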
The standard error formula applies to surveys like Presidential elections, but when it comes to radio listening estimates, things get a little more complicated. Using statistical analyses, Arbitron claims that their rating estimates are more accurate than the traditional error formula would indicate. They use a different formula and something they call ESB, Effective Sample Base.
The ESB is larger than the actual in-tab. For example, New York’s 12+ in-tab in May was 3,822, but Arbitron claims an Effective Sample Base of nearly 32,000. In other words, Arbitron claims that their overall AQH estimates are as accurate as if they surveyed 32,000 people.
That seems like a lot of people until we look back at New York diary surveys. New York's diary ESB was over 37,000 people. In other words, by Arbitron's calculations, the Effective Sample Base has declined: roughly 5,000 persons of effective sample were lost in the switch-over to PPM.
Because of the nature of ESBs, they drop very quickly when looking at demos and dayparts. Looking at W25-49 in morning drive, ESB in New York has dropped from over 8,000 to less than 6,000, a drop of 25%.
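The declines cited above work out as follows (figures are the approximate ones quoted in this piece):

```python
# Approximate ESB figures cited above: (diary ESB, PPM ESB)
comparisons = {
    "New York 12+ overall": (37000, 32000),
    "New York W25-49 morning drive": (8000, 6000),
}

for cell, (diary_esb, ppm_esb) in comparisons.items():
    drop = 100 * (diary_esb - ppm_esb) / diary_esb
    print(f"{cell}: ESB down {drop:.0f}%")
```

Note how the overall decline (about 14%) roughly doubles once you narrow to a standard sales demo and daypart.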
We won’t debate whether Arbitron’s ratings are as accurate as they claim. Let’s assume they are. According to their own estimates, cutting in-tabs by two-thirds has significantly reduced the accuracy of radio listening estimates, particularly in standard sales demos and dayparts. Arbitron might be justified in reducing in-tabs with PPM, but not to the extent they did.
You can find out what kind of hit your market has taken by going to your Arbitron ebook, turning to the Methodology tab and selecting Table B. Select a demo and daypart, find the number, square it, and that is your ESB. Then pull out your last diary book, go to the back few pages and find Table B there. Square that. Compare the PPM and diary numbers. That’s the impact of cutting in-tabs by two thirds. If you have any trouble, give us a call.
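The Table B comparison above amounts to squaring two numbers. A minimal sketch, with made-up Table B readings standing in for whatever your own market's ebook and diary book show:

```python
# Per the procedure above, the value in Table B squared gives the ESB.
# These two readings are hypothetical placeholders for your own market's numbers.
ppm_table_b = 77.5     # from the PPM ebook, Methodology tab, Table B
diary_table_b = 90.0   # from Table B in the back pages of the last diary book

ppm_esb = ppm_table_b ** 2
diary_esb = diary_table_b ** 2
change = 100 * (ppm_esb - diary_esb) / diary_esb

print(f"ESB went from {diary_esb:,.0f} (diary) to {ppm_esb:,.0f} (PPM), {change:+.0f}%")
```

Whatever the exact readings, the direction of the comparison is the point: the squared PPM value is what Arbitron says your estimates are now "worth" in sample terms.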
So here’s our question to Arbitron: