Minute-by-minute meter counts give the illusion of granularity, the impression that you are watching the virtually real-time behavior of your listeners.
But it is only an illusion.
PPM measures and reports aggregated audience behavior. It estimates how many listeners you have and how long they listen, but that's about all.
If you want to improve your radio station and grow your audience, audience behavior data won’t help you.
You need to understand listener behavior, what we call respondent-level data.
You may think that Nielsen's minute-by-minute is respondent-level data, but it's not.
It is aggregated audience numbers, no more useful than PPM ratings.
Here’s an illustration to show you why minute-by-minute can be misleading.
Let’s say Nielsen’s minute-by-minute shows that at :00 you had five panelists (listeners), at :01 you had two, and then at :02 you had five again.
What have you learned about listener behavior? Not much.
Meter counts can’t help you fine-tune your product because they are just net counts.
Yes, you can see the so-called “switch-in” and “switch-out” numbers and even identify stations, but you can’t track actual listener behavior over time.
By the end of the three minutes, are the same panelists in the count, or have they been replaced by all-new panelists?
You don’t know.
Take a look at the two scenarios shown above, which produce identical net listening reports.
In the first case the station begins with five meters and ends the period with the same people listening. We lost Jack and Jane for a minute, but they returned.
(We’re ignoring editing artifacts like lead-in minutes in these examples to keep things simple. We’ll address editing in a follow-up post.)
If the pattern continued we could conclude the station had good quarter-hour maintenance.
Now look at the second scenario. We have the same pattern of five meters dropping to two meters in the second minute, and returning to five meters the next minute.
Notice the big difference, however.
We lost all five original meters in the first minute, replaced two of them in the second minute, and replaced them all again in the third.
In the first scenario we had five panelists. In the second we had twelve. Yet our switch-in/switch-out report hides the critical difference.
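The two scenarios can be sketched as per-minute panelist sets. (The names and the make-up of each minute are illustrative, not actual Nielsen data; the net counts match the 5-2-5 pattern in the example above.)

```python
# Each minute's entry is the set of panelists whose meters register the station.

# Scenario 1: the same five panelists throughout; three step away for one minute.
scenario_1 = [
    {"Jack", "Jane", "Ann", "Bob", "Carl"},  # :00
    {"Bob", "Carl"},                         # :01 -- three panelists step away
    {"Jack", "Jane", "Ann", "Bob", "Carl"},  # :02 -- the same five are back
]

# Scenario 2: complete churn; every minute is a different crowd.
scenario_2 = [
    {"Jack", "Jane", "Ann", "Bob", "Carl"},  # :00
    {"Dave", "Erin"},                        # :01 -- all five originals are gone
    {"Fay", "Gus", "Hal", "Ivy", "Kim"},     # :02 -- five brand-new panelists
]

for name, minutes in [("Scenario 1", scenario_1), ("Scenario 2", scenario_2)]:
    net_counts = [len(m) for m in minutes]   # what the minute-by-minute reports
    distinct = len(set().union(*minutes))    # what the minute-by-minute hides
    print(f"{name}: net counts {net_counts}, distinct panelists {distinct}")
```

Both scenarios print net counts of `[5, 2, 5]`; only the distinct-panelist total (5 versus 12) reveals the churn.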
That's just in three minutes' time. Imagine the possibilities over an entire daypart, let alone a full day.
Minute-by-minute panelist counts are not respondent-level data because from one minute to the next you don't know who's who.
Remember that study Arbitron produced that claimed that people sit through long commercial breaks?
It was based on comparing meter counts at the beginning of a break to meter counts at the end of the break.
It’s the same problem. Net meter counts do not track panelists. And it is what people do over time that matters most.
Here’s a variation of the above example that illustrates how misleading minute-by-minute can be over longer periods of time.
Let’s say you’re looking at your minute-by-minute over a quarter-hour and it is rock solid. Most minutes you’re right around five meters, maybe one additional meter from time to time, maybe one less meter from time to time.
Nothing to worry about there. Clearly you’ve been able to engage those five panelists over an entire quarter-hour.
But how do you know that?
What if in truth no panelist stuck around for more than a minute, maybe two? What if the counts totaled five each minute but they weren’t the same panelists?
Your seemingly solid minute-by-minute is actually hiding the fact that you suck. None of your panelists liked what they heard, and they all left soon after they arrived.
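This worst case is easy to simulate. The sketch below (with made-up panelist IDs) builds a quarter-hour where every minute has a fresh set of five panelists, so the net count never wavers even though nobody listens for more than a single minute:

```python
# A 15-minute span where every minute is five brand-new panelists.
# The IDs are synthetic; the point is the count pattern, not the people.
minutes = [{f"panelist_{m}_{i}" for i in range(5)} for m in range(15)]

net_counts = [len(s) for s in minutes]        # looks rock solid: 5, 5, 5, ...
everyone = set().union(*minutes)
distinct = len(everyone)                      # 75 different panelists
longest_stay = max(
    sum(1 for s in minutes if p in s) for p in everyone
)                                             # no one stays past one minute

print(net_counts)
print(f"{distinct} distinct panelists, longest stay: {longest_stay} minute(s)")
```

A steady five meters per minute is consistent with five loyal listeners or with 75 one-minute samplers; the net count alone cannot distinguish them.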
This may seem far-fetched, but it happens all the time. Take spot-breaks, for example.
If you trust minute-by-minute to make programming decisions, then you have to believe what it tells you about spot-breaks.
Look at a day's worth of spot-breaks and you'll find that around half your breaks have meter counts higher at the end than at the beginning of the break.
Maybe your breaks are compelling and a great way to keep listeners glued to the station, but a more likely explanation is that you are churning through panelists.
The panelists at the end of the break are not the same panelists that were listening (exposed) at the beginning of the break.
The only way to tell what’s really going on is to know which panelists were there in the beginning of the break, and whether those same panelists were there at the end.
Nielsen won’t tell you that. They won’t even sell you that information.
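If you did have respondent-level data, the check would be a simple set intersection. Here is a minimal sketch with hypothetical panelist IDs and an invented break where the net count actually grows while most of the original audience leaves:

```python
# Hypothetical respondent-level look at one spot break.
# break_start / break_end: panelist IDs metered to the station in the minute
# before and the minute after the break (illustrative, not real data).
break_start = {"p01", "p02", "p03", "p04", "p05"}
break_end = {"p04", "p05", "p06", "p07", "p08", "p09"}

net_change = len(break_end) - len(break_start)     # +1: "the break grew!"
retained = break_start & break_end                 # who actually stayed
retention_rate = len(retained) / len(break_start)  # only 40% sat through it

print(f"net change: {net_change:+d}")
print(f"retained: {sorted(retained)}, retention rate: {retention_rate:.0%}")
```

The net count is up by one, yet only two of the five original panelists survived the break. That distinction is exactly what aggregated meter counts erase.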
Aggregation into net meter counts is just one problem with trusting what minute-by-minute tells you. We'll discuss other problems in future posts.