Pretty much everything you think you know about PPM is wrong.
Do you think that PPM gives you an accurate picture of how people listen to the radio?
Wrong.
Do you believe what you were told at convention sessions about how to compete in a PPM world?
Wrong.
Do you believe the dozens of Arbitron presentations on how people listen?
Wrong.
Pretty much everything radio stations do in the hope of boosting ratings is based on myths.
The reason is that PPM is less accurate and less precise than we were led to believe.
Audio watermarking, the technology behind PPM, works well under the right conditions. However, PPM’s use of watermarking is flawed and leads to gaps in the decoding process and unreported listening.
Worse yet, the amount of listening that is lost depends on a station’s format, the content broadcast, and the environment in which panelists listen.
Two equally popular formats might have different ratings just because PPM can easily identify one station but not the other.
Remember all those discussions about how listeners don’t like jock talk? Think that PPM proved it? Wrong.
A lot of jocks lost their jobs, and the ones who remained were told to shut up.
Now we learn that meters don’t decode talk as well as music.
The drop in minute-by-minute meter counts when jocks come on isn't because people are tuning away. It's because encoding shuts down.
So it turns out that it isn’t listeners who don’t like talk. It’s the meters.
While we think of encoding as a continuous process, for most formats it is actually a start-stop-start-stop kind of process, depending on what you’re playing.
The encoding process requires content to “hide behind” so the identifying encoding tones are not audible.
That’s easy with typical music content. All you have to do is play something with energy around 1-3 kHz, a high C for you musicians.
No content around a high C? No encoding. No credit.
How about silence or long pauses? No encoding. No credit.
So every time we look at PPM ratings, we aren’t seeing all the listening that panelists were exposed to.
We’re seeing little snippets when the type of content enables the encoder to encode.
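To make the “content to hide behind” idea concrete, here is a rough sketch in Python. It is purely illustrative, not Nielsen’s actual encoder: it simply asks whether an audio frame is loud enough, and has enough energy in an assumed 1-3 kHz masking band, to hide a tone behind. The frame size, band edges, and thresholds are all assumptions made up for the example.

```python
# Illustrative sketch only -- NOT Nielsen's encoder. It asks one question:
# "does this frame have enough 1-3 kHz content to hide a watermark tone behind?"
import numpy as np

SAMPLE_RATE = 44100           # samples per second (CD-quality audio)
FRAME_SIZE = 4096             # analysis window, roughly 93 ms at 44.1 kHz
MASK_BAND = (1000.0, 3000.0)  # assumed band the watermark tones would hide in (Hz)

def band_fraction(frame, lo, hi):
    """Fraction of the frame's spectral energy that falls between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    total = spectrum.sum() + 1e-12                 # avoid divide-by-zero on silence
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum() / total

def frame_can_hide_tone(frame):
    """True if the frame is loud enough and has enough 1-3 kHz content to mask a tone."""
    not_silent = np.mean(frame ** 2) > 1e-4            # arbitrary silence threshold
    masked = band_fraction(frame, *MASK_BAND) > 0.2    # arbitrary masking threshold
    return not_silent and masked

# Example: a "high C" tone (C6, ~1046.5 Hz) versus silence or a long pause.
t = np.arange(FRAME_SIZE) / SAMPLE_RATE
music_like = 0.3 * np.sin(2 * np.pi * 1046.5 * t)
silence = np.zeros(FRAME_SIZE)

print(frame_can_hide_tone(music_like))  # True  -> encoding possible, credit possible
print(frame_can_hide_tone(silence))     # False -> no encoding, no credit
```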
If all formats and stations were equally likely to lose credit, at least we'd have a level playing field. But that's not the case.
Some formats encode better than others, so even if some stations get 99% of the credit they deserve, other stations in other formats might get considerably less.
PPM creates winners and losers not just based on popularity, but also based on format and content.
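To see how that plays out, here's a toy calculation with hypothetical decode rates. The numbers are assumptions for illustration, not measured figures.

```python
# Two stations with IDENTICAL real listening, but different (assumed) decode rates.
actual_quarter_hours = 1000              # true panelist quarter-hours for each station
assumed_decode_rate = {
    "Music station": 0.99,               # format assumed to encode well
    "Talk station": 0.80,                # format assumed to encode poorly
}

for station, rate in assumed_decode_rate.items():
    credited = actual_quarter_hours * rate
    print(f"{station}: {credited:.0f} credited quarter-hours, "
          f"{(1 - rate) * 100:.0f}% of real listening lost")
```

Same audience, very different ratings, purely because of how well the content encodes.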
What about minute by minute? We were told that the granularity of PPM would enable us to see listener reaction to individual elements like songs.
However, if PPM reacts differently depending on content, then we can’t be sure whether panelists tuned away or encoding stopped.
Is a drop in meter count due to bad programming or just inadequately encoded programming?
When meter counts go up are we broadcasting a better product or just a product that encodes better?
All the PPM truisms, all the tricks and ways we thought we were gaming PPM turn out to be an illusion.
We weren’t looking at a real picture of how people listen to the radio, so we weren’t really learning how to better serve them.
Until Nielsen fixes PPM so that every quarter-hour of listening is accurately logged, we really can’t be sure of anything PPM tells us.
Still trying to get up to speed on the new PPM revelations? Here are links to posts that help you learn what it's all about:
Is Your Competitor a Super-Achiever?
The Myth of Minute by Minute and PPM Granularity
Did Arbitron Sell Radio a Bill of Goods with PPM?
The PPM Presentation Video You Need to Watch
What’s All the Fuss Over Voltair? About $2 Billion
Did Arbitron Mislead the RAB About PPM’s Missing Listening?
The Nielsen Press Release that Kills Voltair
Nielsen’s PPM, Ethics, and Voltair
The PPM Talk that Nielsen Should Have Given
MRC to Dictate Audio Processing?
At the NAB: Processing for PPM