Your ratings are just like Presidential polls: Both are estimates, approximations of something that can’t be precisely measured.
Presidential pollsters like to present their poll numbers as accurate and precise, but there are limits to both.
You may have noticed that every Presidential poll mentions a margin of error or confidence interval.
Margins of error are statistical calculations that indicate how much variance we should expect from one poll to the next.
The larger the margin of error, the more the numbers are going to jump around.
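For the curious, here is roughly where a poll’s margin of error comes from. This is a minimal sketch of the textbook simple-random-sample formula; real pollsters (and Nielsen) use far more elaborate weighting, so treat it as illustration only.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of n people."""
    return z * math.sqrt(p * (1 - p) / n)

# A candidate polling at 50% in a 1,000-person survey:
print(f"+/- {margin_of_error(0.50, 1_000):.1%}")  # about +/- 3.1 points
```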
Nielsen, too, likes to present its ratings as precise, an accurate measure of how well each station is doing.
However, like Presidential polls, Nielsen ratings are also estimates that have a certain margin of error. The "real" numbers might be higher than reported by Nielsen. They might be lower.
Unfortunately, in analyzing Nielsen numbers we rarely acknowledge the uncertainty associated with polling.
We talk about "wobbles" as aberrations, but the truth is that every Nielsen number is a wobble.
Radio’s problem is that while Presidential polls always clearly state their margins of error, the variance of radio ratings is rarely reported.
You have to dig pretty deeply into the book to find it, and even then it isn’t obvious how much the numbers can vary and still be "within the margin of error."
Here’s one example that illustrates what you would find.
The first graph is a cume ranker of seven stations in one medium-sized PPM market. The ratings and stations are real, but we’ve labeled them generically to avoid running afoul of Nielsen’s "fair use" rules.
They are ranked by Nielsen’s published estimates, but rather than show a single number for each station, we’ve shown the range of ratings that would be accepted as "normal," that is, within Nielsen’s confidence interval.
For example, Station 1 has a cume audience somewhere between 280,000 and 370,000. Meanwhile, Station 2 has an audience that could be as high as 370,000 or as low as 280,000.
This means that while Nielsen ranked the first station over the second, we really can’t say whether Station 1 or Station 2 has the larger audience, any more than a Presidential pollster can say which candidate is really leading when the lead is within the margin of error.
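If you want to express that logic in code, a back-of-the-envelope version looks something like this. The endpoints are the cume ranges quoted above; the rule is simply "no overlap, no ranking."

```python
def can_rank(lo_a: float, hi_a: float, lo_b: float, hi_b: float) -> bool:
    """True only when the two confidence intervals don't overlap,
    i.e., when one station genuinely leads the other."""
    return hi_a < lo_b or hi_b < lo_a

# Station 1 and Station 2 cume ranges, in thousands of persons:
print(can_rank(280, 370, 280, 370))  # False -- a statistical tie
```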
But it gets a lot more complicated than that.
Draw a horizontal line across the graph at any point and see how many stations it touches.
Draw the line at 350,000 persons and it will run through three stations. That means that the top three stations are "within the margin of error," essentially tied.
Draw a line at 300,000 persons and the line runs through six stations.
It means that any one of the top six stations could be first...or sixth! We just don’t know.
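Here is the same horizontal-line test expressed in code. Only Stations 1 and 2 use the ranges quoted above; the other five are hypothetical numbers we made up to match the counts in the graph.

```python
# Cume ranges in thousands of persons. Stations 3-7 are
# hypothetical stand-ins, not Nielsen's published intervals.
ranges = {
    "Station 1": (280, 370),
    "Station 2": (280, 370),
    "Station 3": (260, 355),
    "Station 4": (240, 330),
    "Station 5": (225, 315),
    "Station 6": (210, 305),
    "Station 7": (150, 230),
}

def tied_at(level: int) -> list[str]:
    """Stations whose confidence interval contains 'level' --
    the horizontal-line test described above."""
    return [s for s, (lo, hi) in ranges.items() if lo <= level <= hi]

print(len(tied_at(350)))  # 3 -- the top three are statistically tied
print(len(tied_at(300)))  # 6 -- any of the top six could be first
```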
You’re looking at 6+ full-week numbers, the numbers with the lowest variance, and we still can’t definitively say which of the top six stations is number one.
Imagine what the variance is when we get into smaller demos or specific dayparts.
The second graph is kind of messy but even more important. Take some time to study it.
The graph shows one station’s four-month trend as you’ve never seen trends displayed. Again, the numbers are real, but we can’t tell you the market or station. Here we’re looking at share.
The solid green line in the middle is the published share for each of four consecutive months. The blue line is the lower confidence limit according to Nielsen. The red line is the upper confidence limit.
The gray area is the range that falls within Nielsen’s total-persons estimate variance. In other words, the station’s share could be reported anywhere within the gray area and still fall within Nielsen’s confidence interval.
What this means is that the station’s share is very likely somewhere between the blue and red lines, but we can’t say for sure exactly where within that range it falls.
So officially, month 2 was a good month for the station, with a gain of nearly a full share. The station then lost ground two months in a row.
But is that what really happened?
The margin of error for this station is about three shares. That means the station has to gain or lose three shares before we can confidently say something real happened.
(I’m simplifying here since Nielsen margins of error are asymmetrical, but you get the idea.)
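Put as a rule of thumb, and keeping that symmetric simplification, the test looks like this. The three-share margin is the figure just mentioned; the sample shares are invented for illustration.

```python
def looks_real(old_share: float, new_share: float, moe: float = 3.0) -> bool:
    """Rough test: treat a month-to-month move as meaningful only
    if it exceeds the margin of error (symmetric simplification)."""
    return abs(new_share - old_share) > moe

# "Gaining nearly a full share" from month 1 to month 2:
print(looks_real(4.8, 5.7))  # False -- well inside the noise
```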
The dashes show three very different trend scenarios, all of them consistent with Nielsen’s confidence intervals.
First, the yellow dashed line shows the station flat over the four months. No change, solid as a rock.
Next look at the turquoise dashed line. It shows a steady uptrend for the station, from a mid-four to nearly a six share in three months. Good job!
Now look at the purple line. From a mid-six share, the station has fallen below a four share!
Imagine how you would feel losing a third of your audience in a few months.
A flat trend, strong growth, or a free fall. All equally possible.
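You can verify the point with a few lines of code. The monthly bands and the three dashed paths below are hypothetical numbers chosen to mirror the graph, but the check is the same one your eye performs: every scenario stays inside every month’s band.

```python
# Hypothetical (low, high) confidence bands for the four months,
# plus the three dashed scenarios. None of these are Nielsen's
# actual figures; they simply mirror the shape of the graph.
bands = [(3.5, 6.5), (4.3, 7.3), (4.0, 7.0), (3.6, 6.6)]
scenarios = {
    "flat":      [5.0, 5.0, 5.0, 5.0],
    "uptrend":   [4.5, 5.0, 5.5, 5.9],
    "free fall": [6.4, 5.6, 4.8, 3.9],
}

for name, path in scenarios.items():
    inside = all(lo <= x <= hi for x, (lo, hi) in zip(path, bands))
    print(name, inside)  # all three scenarios print True
```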
Two of the three plausible scenarios are wrong, but which two? And what if you react to one of the two wrong scenarios by making changes to the station?
And keep in mind that our illustration uses total-week 6+ shares. You are probably studying demos, and typical programming demos have a variance two or three times that of the 6+ numbers.
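The arithmetic behind that is the familiar square-root law: under simple random sampling, the margin of error grows with one over the square root of the sample size, so a demo that captures a quarter of the panel roughly doubles the margin, and a ninth of the panel triples it. (PPM panels aren’t simple random samples, so take this as directional, not exact.)

```python
import math

def demo_moe(full_panel_moe: float, demo_fraction: float) -> float:
    """Scale a margin of error for a demo that keeps only a
    fraction of the full panel (the 1/sqrt(n) rule of thumb)."""
    return full_panel_moe / math.sqrt(demo_fraction)

print(demo_moe(3.0, 1/4))  # 6.0 -- double the 6+ margin
print(demo_moe(3.0, 1/9))  # ~9.0 -- triple
```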
This is why we always caution stations to resist the impulse to react to monthly changes in the numbers.
Wobbles are not aberrations. Wobbles are baked into ratings.
Next month, when your new numbers roll out, you might want to reread this post. And if it’s a bad month, you might want to make copies and pass them out.