Too close to call. The race is within the margin of error. When political poll results are reported, we often hear these phrases without giving them much thought.
But we should.
That’s because audience measurement is like political polling. Polls have margins of error, and so does every audience measurement estimate. The difference is that while political polls regularly tell you that a race is too close to call, we never hear that a ratings race is too close to call.
But we should.
That’s because ratings like Nielsen Audio’s PPM numbers have margins of error too, larger than you might think.
Take a look at the graph shown here. (Click on the graph to enlarge it.) It’s a Nielsen Audio ranker, but displayed as a graph, a format you probably haven’t seen before. The red hash marks represent the official 6+ share estimates for the top eleven stations.
The vertical gray lines extending above and below the red marks tell us how high or low the ratings could have been and still be a reasonable estimate of the size of a station’s audience.
The vertical lines represent what’s called the confidence interval, essentially the range of possible shares you might expect with a certain confidence, in this case 95%.
Ratings are estimates. And by definition, estimates can (and often do) differ from the number of listeners you actually have. The amount of variation is determined (among other things) by the number of PPM meters in a market. The more meters in a market, the less the estimates vary from the real audience numbers.
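To make that relationship concrete, here is a minimal sketch that treats a share as a simple random-sample proportion. That is a rough assumption (PPM panels are weighted, not simple random samples), and the meter count used below is purely illustrative, but it shows how interval width falls as meters rise:

```python
import math

def share_confidence_interval(share_pct, n_meters, z=1.96):
    """95% confidence interval for a share estimate, treating the
    share as a proportion from a simple random sample (a rough
    approximation of how PPM panels actually work)."""
    p = share_pct / 100.0
    se = math.sqrt(p * (1 - p) / n_meters)  # standard error of a proportion
    margin = z * se * 100                   # convert back to share points
    return share_pct - margin, share_pct + margin

# Illustrative: a 6.8 share from a hypothetical 600-meter panel
low, high = share_confidence_interval(6.8, n_meters=600)
```

Under these assumptions, a 6.8 share from 600 meters carries a margin of roughly two share points either way. Because the margin shrinks with the square root of the sample, doubling the meters trims the interval by only about 30 percent.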
So while we talk about a station having (say) a 6.8 share, it would be more accurate to say the station has an estimated share of 6.8, though it could be as high as a seven, or as low as a five.
It might surprise you that the published share estimate of a station might be off by as much as a share or two either way, but that’s the nature of estimates.
When we take into account the range of uncertainty for each station’s ratings, some interesting things become clear.
The upper numbers show the highest estimate and the lower numbers show the lowest estimate for each station. For example, for the month shown WAXQ is somewhere between a 3.1 and a 5.3, WFAN is somewhere between a 2.6 and a 4.5, and so on.
Take a look at the horizontal dotted line running from WLTW to WKTU. The line shows that the ratings of all six top ranked stations in New York overlap when we take into account each station’s confidence interval.
In other words, any of the top six stations could theoretically be ranked number one or number six. Statistically speaking, WKTU has the same chance of showing up number one as WLTW. And WLTW has the same chance of being sixth as WKTU.
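The overlap test behind that claim is simple arithmetic. In this sketch, the WAXQ and WFAN bounds come from the ranker described above; the WLTW and WKTU bounds are made up for illustration:

```python
from itertools import combinations

# 95% confidence bounds (low, high) in share points.
# WAXQ and WFAN are from the ranker; WLTW and WKTU are illustrative.
intervals = {
    "WLTW": (4.1, 6.3),  # illustrative
    "WKTU": (2.9, 5.0),  # illustrative
    "WAXQ": (3.1, 5.3),
    "WFAN": (2.6, 4.5),
}

def overlaps(a, b):
    """Two intervals overlap unless one sits entirely above the other."""
    return a[0] <= b[1] and b[0] <= a[1]

# Every pair whose intervals overlap is statistically too close to call.
too_close = [(s, t) for (s, a), (t, b) in combinations(intervals.items(), 2)
             if overlaps(a, b)]
```

With these four stations, every pairing lands in `too_close`: no station in the group can be confidently ranked above any other.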
So what can we be certain of when we look at rankers? When two stations’ confidence intervals don’t overlap, we can confidently say that one station has a larger audience than the other.
For example, based on this ranker, we can say with 95% certainty that WCBS-AM has fewer listeners than WLTW. It often takes a gap of about six places in a ranker before you can be relatively certain that one station has more listeners than another.
Fortunately, we don’t see huge swings in the 6+ numbers from month to month, so the theoretical confidence intervals are probably a bit overstated. But we do know that confidence intervals widen as sample sizes shrink.
The full week 6+ numbers are the most reliable estimates Nielsen produces because they include all the meters. But what about day-parts or demographics? What happens then?
Nielsen Audio tightly controls the release of ratings, and we are not allowed to show anything but full week 6+ shares, but anyone who tracks age cells or day-parts knows what happens.
The monthly swings can be severe with stations continually moving up and down rankers from one month to the next. It is easy to find five and six rank swings in some demos and day-parts. You’ve probably seen it in your market.
Think about what confidence intervals mean about monthly trends. If a top ranked station can theoretically have its share estimate move up or down a full share without gaining or losing listeners, then what about the monthly moves we see?
It means that most monthly changes in share or rank are just wobbles within the expected range of estimate uncertainty. In fact, when we take confidence intervals into account, virtually no station moves enough from one month to the next to indicate real change.
Which means all the analysis that you see explaining why a station went up or down is mostly guessing.
You can determine the confidence intervals for stations in your market by going to the eBook, clicking on the Methodology tab, and then the Audience Estimate Reliability tab.
It takes a little arithmetic to get the results, but it’s a good exercise to prepare for those inevitable bad months. You really didn’t have a bad month. You’re still within the margin of error. You may even be surprised to find that you’re tied with the top station...within the margin of error.
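That arithmetic amounts to turning a looked-up standard error into a margin and checking whether two stations’ ranges overlap. A sketch, assuming you have already pulled each station’s standard error (in share points) from the reliability table; the shares and standard errors below are illustrative, not from any actual market:

```python
def tied_within_margin(share_a, se_a, share_b, se_b, z=1.96):
    """True when the two stations' 95% confidence intervals overlap,
    i.e., the ratings race between them is too close to call.
    Standard errors are expressed in share points."""
    margin_a, margin_b = z * se_a, z * se_b
    return (share_a - margin_a) <= (share_b + margin_b) and \
           (share_b - margin_b) <= (share_a + margin_a)

# Illustrative: your 4.2 share (SE 0.6) vs. the top station's 5.9 (SE 0.7).
# 4.2 + 1.96 * 0.6 = 5.38 and 5.9 - 1.96 * 0.7 = 4.53: the intervals
# overlap, so the two stations are statistically tied.
tied = tied_within_margin(4.2, 0.6, 5.9, 0.7)
```

A station more than a few rungs up the ranker, with intervals that clear each other entirely, is the only lead worth explaining.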