But they don’t.
The meters miss listening. And worse, the amount of listening the meters miss depends on formats, programming, and the environment in which a person listens.
PPM’s 1980s technology simply can’t capture 100% of listening 100% of the time. So PPM meters end up with gaps, logged periods of unidentifiable listening.
Arbitron knew this even before PPM launched, so in an effort to fill in the unidentifiable listening and patch the problem, the company created editing rules.
Computers review meter logs searching for gaps in listening. Algorithms then use rules to credit the gaps to radio stations.
The problem is that an algorithm can’t know for sure what a person was actually listening to. It can only guess based on the information it has, and given PPM’s flaws, that information can be very misleading.
Because PPM favors some formats over others, the algorithms can end up just making things worse.
The computers give some formats even more quarter-hours than they deserve while adding insult to injury by punishing other formats even more.
Let’s say that within a quarter-hour a panelist listens to Station A for four minutes, switches to Station B for seven minutes, and then switches to Station C for four minutes. (See the first illustration below.)
Because it takes at least five minutes of station listening to qualify for a quarter-hour, only Station B should get credit for the quarter-hour.
That’s if the meter accurately logs the listening to all three stations.
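The five-minute rule applied to this scenario can be sketched in a few lines. This is a minimal illustration, not Nielsen’s actual crediting code; the listening log and the `QUALIFYING_MINUTES` threshold are taken from the example above.

```python
# Minimal sketch of quarter-hour crediting under the five-minute rule:
# a station earns the quarter-hour only if it logs at least five minutes.

QUALIFYING_MINUTES = 5

def credited_stations(listening_log):
    """Return the stations whose logged minutes qualify for quarter-hour credit."""
    return [station for station, minutes in listening_log.items()
            if minutes >= QUALIFYING_MINUTES]

# The scenario above: 4 + 7 + 4 minutes within one quarter-hour.
log = {"Station A": 4, "Station B": 7, "Station C": 4}
print(credited_stations(log))  # only Station B qualifies
```

With an accurate log, Station B is the only station to clear the threshold, exactly as the scenario describes.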
But what if Stations A and C are music stations the meter can easily identify, while Station B is difficult for a meter to identify, say a morning show or a News/Talk station?
The editing rules that Nielsen applies in this situation could end up awarding all the minutes to Stations A and C, and Station B ends up with nothing!
There are two kinds of gaps.
There are gaps where the meter kind of knows which station it might be but isn’t sure, and there are gaps where the meter only knows that some radio station is playing.
The first is called a one-off incomplete edit while the second is called a medium edit.
If your head isn’t already spinning and you really want to know even more about editing, the Nielsen people have put together a lovely nine-page glossy booklet explaining all this, entitled From Codes to Credit: Data editing in our radio service.
It is just as interesting as the title suggests. Be sure to ask your Nielsen rep for a copy today (because we can't link to it).
But if you just want to know how Station B got screwed out of a quarter-hour, read on.
The computer looks on either side of the gap to see what the panelist was listening to before and after the gap. In this case it was Station A before, and Station C after the gap.
Since it never identifies Station B in our scenario, the computer splits the gap between Stations A and C.
The additional minutes qualify both stations for a quarter-hour they don’t deserve, and the most-listened-to station that quarter-hour gets nothing. (See the second illustration.)
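The gap-splitting edit just described can be sketched as follows. This is a simplified model for illustration only: the even split, the handling of the odd minute, and the five-minute threshold are assumptions, and Nielsen’s actual editing rules are more involved.

```python
# Simplified sketch of the gap-splitting edit: unidentified minutes are
# divided between the stations heard before and after the gap.

QUALIFYING_MINUTES = 5

def edit_gap(before, after, identified_minutes, gap_minutes):
    """Credit an unidentified gap to the stations bracketing it."""
    minutes = dict(identified_minutes)
    half, remainder = divmod(gap_minutes, 2)
    minutes[before] = minutes.get(before, 0) + half + remainder  # odd minute assumed to go to "before"
    minutes[after] = minutes.get(after, 0) + half
    return minutes

# Station B's seven minutes were never identified, so the meter logs a gap.
edited = edit_gap("Station A", "Station C",
                  {"Station A": 4, "Station C": 4}, gap_minutes=7)
qualified = [s for s, m in edited.items() if m >= QUALIFYING_MINUTES]
print(qualified)  # Stations A and C both qualify; Station B gets nothing
```

After the edit, Stations A and C each clear the five-minute threshold on the strength of minutes that actually belonged to Station B.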
The editing rules look reasonable and fair in the overly simplified brochure Nielsen has produced, but in the real world the algorithm can arbitrarily divvy up quarter-hours.
If anything, the rules exacerbate the rating differences between formats that encode well and those that don’t.
Stations that encode well will be given a higher proportion of the gaps because they are more likely to be accurately identified within the editing windows.
The stations that don’t encode well could represent a higher proportion of the unidentified listening precisely because they don’t encode well, yet they will be given a smaller proportion of the credit.
There’s just no way for editing to patch the gaps and make sure every station gets all the credit it deserves. Only 99% decoding accuracy can do that.