Some in the industry really want to believe that PPM is as good as Arbitron claimed when it pitched the service to radio.
If Voltair really does increase ratings, it challenges the accuracy of PPM, and some people just can’t accept that.
It has led to dubious claims based on shaky statistics, misguided assertions, and baseless conjecture.
The latest example is a series of anti-Voltair screeds penned by consultant Randy Kabrich.
Randy makes his motives clear at the very beginning:
I am doing this because Voltair has caused tremendous turmoil in the industry that is now spilling over to questions about the currency.
I want this controversy ended as soon as possible and the least amount of damage done.
Damage to whom, you might ask. Apparently he is speaking about damage to Nielsen.
Were it not for Voltair we would not be discussing PPM flaws.
Were it not for Voltair Nielsen might not see any need to update their encoders or tweak their algorithms to fix the flaws.
The flaws of PPM have destroyed formats, driven talented personalities out of the business, distorted what songs we play, and punished radio stations for no reason other than the format they broadcast.
PPM has done irreparable harm to local radio, yet Randy only fears that Voltair may sully Nielsen’s reputation.
And he seemingly is willing to mock, mislead, and distort the facts to protect PPM and Nielsen’s reputation.
Most difficult to understand is how to reconcile this rabid defense of PPM with Nielsen’s own admissions.
In their presentation Nielsen admitted that Voltair boosts ratings for “outlier” formats. That means some formats encode better than others.
Nielsen has agreed that encoders need updating and they are rolling out a fix this year.
Why accuse those who claim Voltair works as delusional when Nielsen itself admits that some formats benefit from using Voltair?
At first we were going to ignore his transparent attempt to help Nielsen save face, but his assertions have gained some traction within the industry and clients have asked us to comment on his claims.
Rather than post a lengthy critique here, we’ve put the rest of this post at the link below. If you choose to drop down into the rabbit hole, be our guest.
Randy has posted a prodigious amount of data ostensibly proving that Voltair hasn’t impacted radio listening as measured by Nielsen. He claims that his voluminous data trumps the anecdotal data presented by others, including this writer.
The problem is that volume is no substitute for relevancy.
His numbers are accurate, but selectively chosen. On top of that, they prove nothing.
And most ironically, he attempts to bolster his assertions with weak anecdotal observations, the very sin he accuses others of committing.
We’re not going to step through every logical flaw and every mistake he commits, but we’ll illustrate the speciousness of his argument with a few examples.
The defense of PPM begins with the claim that it can’t be flawed since it has international acceptance.
The truth is that PPM is used in only eight countries, among them Denmark, Singapore, and Iceland.
Maybe for some PPM’s acceptance in countries like Iceland is proof enough that PPM is accurate, but we suspect Icelandic broadcasters know as little about PPM’s inner workings as US broadcasters.
PPM’s use internationally reflects only the influence that US radio has on other countries. It is totally irrelevant for this discussion.
Then the author turns to Media Monitor’s minute-by-minute data to prove that meters capture enough listening for stations to get proper quarter-hour credit.
He produces a single hour of WMMR as proof.
Rock is one of the better encoding formats and consequently performs pretty well in PPM. It is one reason there are so many rock stations today: they encode well, so they get good numbers.
Why doesn’t he show any minute-by-minute graphs for other formats, like Talk?
Perhaps because Spoken Word formats (along with talk-oriented morning shows) are some of the most difficult programming to encode and he would have had to show graphs with huge gaps and zero credited listening.
Without Voltair the gaps can run two or three quarter-hours in length, resulting in no reported listening.
Even Nielsen admits that the “encodability” of some formats is boosted by Voltair.
Randy goes into great detail on why PPM does a great job identifying stations because the encoder sends data every five seconds.
Surely with 180 codes broadcast every quarter-hour the meter can figure things out. Even if it can’t, editing rules are designed to fill in the listening that the meter misses.
If that were true, then why has Nielsen promised to increase the density of the codes, essentially increasing the number of codes per minute?
Correction: My choice of words was inexact. The density of codes will increase by lengthening each code; the number of codes will remain the same. That's even worse, because longer codes are more likely to be truncated and audible. It is also an admission that Voltair works.
Nielsen has also admitted that they are reviewing the editing rules that Randy claims fix what the meters miss.
The latest posts feature a tortured analysis of AQH ratings.
We question the purpose of looking at year-over-year changes for a single month. One month does not make a trend.
As we have repeatedly pointed out, most rating moves are random. They are statistical noise.
Even at the market level we have to look at year-over-year trends over multiple months to see if changes are a result of random noise or something real.
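To see why a single month of year-over-year comparisons proves nothing, consider a quick simulation. This is an illustrative sketch with made-up numbers, not Nielsen data: fifty hypothetical stations whose true AQH audience is completely flat year over year, with a modest amount of sampling noise added to each month's reported figure.

```python
import random

random.seed(7)

# Synthetic example only -- station counts, rating levels, and the noise
# scale are assumptions for illustration, not actual Nielsen figures.
true_aqh = [round(random.uniform(1.0, 6.0), 1) for _ in range(50)]

def measured(aqh):
    """One month's reported AQH: the true value plus sampling noise."""
    return max(0.0, aqh + random.gauss(0, 0.3))

this_year = [measured(a) for a in true_aqh]
last_year = [measured(a) for a in true_aqh]

up = sum(1 for t, l in zip(this_year, last_year) if t > l)
print(f"Stations 'up' year over year: {up} of 50")
# Roughly half the stations show a year-over-year "gain" even though
# nothing about their true audience changed. A single-month comparison
# is dominated by noise; only a multi-month trend can separate a real
# shift from random bounce.
```

Run it with different seeds and the set of "winners" reshuffles every time, which is exactly why we insist on trends over multiple months before crediting (or blaming) anything, Voltair included.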
Let’s look at July, and then August. That’s when we’ll start to have a good sense of whether Voltair is growing radio listening.
As an aside, we find the analysis ironic, because when we tried to use AQH ratings to show radio's strength, Arbitron immediately threatened us, demanding that we take down the post. See what's left of the post at the link.
We were told that under no circumstances would we be allowed to share 6+ AQH ratings let alone 25-54.
Yet the post shows 25-54 AQH ratings for groups and markets. We guess different rules apply when you’re defending PPM.
At several points along the line the author offers conjectures that we know are off base. For example, he writes:
Why are the majority of up markets (and biggest gains in the markets) in markets 26-52 and not 1-25? If you were a Major Group and could not afford to put a $15k Voltair in all markets, would you not first put a Voltair in NYC, Chicago or Dallas - instead of San Antonio, Cleveland or West Palm Beach?
Maybe that’s what the groups should have done, but that’s not what happened.
We know of several groups that opted to place Voltairs first in their smaller markets. In one early instance a large group asked for a volunteer to test the box and the only station willing to try it was in a small market.
Another group allowed managers to make the call and it was the smaller market managers that were more willing to roll the dice and spend the money.
In fact, most groups installed Voltairs in their largest markets last.
So the analysis is based on a guess that turns out to be wrong.
Instead of proving that Voltair can’t be behind the gains in smaller PPM markets, his data actually support the relationship between Voltair installs and ratings boosts.
As we wrote in 2009:
Radio has not been served by blind acceptance of Arbitron's PPM. Arbitron's rush to monetize PPM combined with radio's unquestioning naivete regarding PPM has stifled the debate on this and many other PPM issues. It is never too late to start asking the important questions.
Pressing Nielsen to fix PPM flaws that Voltair revealed may be damaging to Nielsen’s reputation, but radio deserves the best audience measurement possible.
Voltair shone a light on PPM’s dirty little secret. Nielsen’s reputation will ultimately rest on what they do now.