The host is a well-known national host who has been rumored to “encode poorly.” We wanted to see whether installing Voltair on this one show resulted in better ratings.
Once again you may be frustrated by the lack of detail here.
We’ll be discussing the results at only one of the host’s affiliates. We’d prefer to look across stations, but only one station was willing to provide data.
We won’t identify the station, the host, or the specific time period we are looking at.
Until Nielsen issues its own communiqué, you can expect stations to remain extremely cautious about releasing ratings information or even admitting they’ve installed Voltair.
We’re working with the few cooperative stations that have come forward with data.
To determine the impact of Voltair we compared listenership for two weeks in 2014 and the same two weeks in 2015.
The days were chosen at random. We had not seen the data before choosing the days.
We focused on one quarter-hour, fifteen minutes of data, chosen at random. We didn’t cherry-pick this particular quarter-hour.
We did, however, look at other quarter-hours after the fact to see how similar they were. They varied somewhat, but this quarter-hour appears representative of the entire show.
That said, the graph tells the story.
Listenership increased significantly after the station installed Voltair, not by a little but by a lot. Each average minute in the chosen quarter-hour rose.
Much of the growth was fueled by fewer zero listener minutes.
Yes, non-zero minute totals increased too, but the noticeable decline in zero minutes had a bigger impact on the averaged numbers.
The 2014 shows had large gaps with zero listeners that might last for three or more minutes at a time.
The 2015 shows with Voltair still had occasional zeros, but they were fewer in number as well as shorter in duration.
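The arithmetic behind this is simple: a quarter-hour average is just the mean of fifteen per-minute listener counts, so runs of zero minutes pull the average down hard. A minimal sketch with entirely made-up minute counts (not the station’s actual data) illustrates the effect:

```python
# Hypothetical minute-by-minute listener counts for one quarter-hour.
# These numbers are illustrative only, not the station's actual data.

def average_minute(counts):
    """Average listeners per minute across a 15-minute quarter-hour."""
    assert len(counts) == 15
    return sum(counts) / len(counts)

# 2014-style pattern: long runs of zero-listener minutes,
# consistent with decoders missing weakly encoded signal.
before = [4, 3, 0, 0, 0, 5, 4, 0, 0, 0, 0, 3, 4, 0, 0]

# 2015-style pattern: similar non-zero levels, but the zeros
# are fewer and the gaps shorter.
after = [4, 3, 5, 0, 4, 5, 4, 3, 0, 4, 3, 3, 4, 5, 4]

print(average_minute(before))  # about 1.53
print(average_minute(after))   # 3.4
```

Note that the non-zero minutes in the two patterns are at roughly the same level; shrinking the zero gaps alone more than doubles the average, which matches what the station’s data shows.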
As we’ve written, the purpose of Voltair is to increase the likelihood that codes will be strong enough to be consistently counted by decoders.
This should manifest itself as fewer drop-outs (fewer zeros) that indicate the possibility that decoders are missing listeners.
We would then expect higher, more consistent listening numbers, which is what we see here.
So how did it work out for the station?
Show TSL rose 50% and cume rose 60%. This led to a nearly 90% increase in share. The show almost doubled its numbers.
We expected an increase in TSL. However, like the morning show we looked at earlier, Voltair also increased cume.
We have some theories about why increasing code reliability may be impacting cume as well as TSL, but we’ll save that for another post.
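For readers less familiar with these metrics: roughly speaking, cume counts the distinct listeners credited with any listening during a period, while TSL averages listening time per cume listener. A simplified sketch with made-up panel data (these are loose versions of the standard audience definitions, not Nielsen’s exact methodology):

```python
# Toy panel data: panelist id -> minutes credited with listening
# during some period. Entirely made-up numbers for illustration.
listening_minutes = {
    "p1": 30,
    "p2": 10,
    "p3": 0,   # metered but never credited: not part of cume
    "p4": 20,
}

# Cume: distinct listeners credited with any listening at all.
cume = sum(1 for m in listening_minutes.values() if m > 0)

# TSL (time spent listening): average credited minutes per cume listener.
tsl = sum(listening_minutes.values()) / cume

print(cume)  # 3
print(tsl)   # 20.0
```

This also shows why fewer missed codes can move both numbers at once: a panelist whose listening was entirely dropped returns to the cume count, while recovered minutes for already-counted panelists raise TSL.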
We would like to emphasize again that this is one station and one show, one that already has a reputation for poor encoding. We do not represent this to be any more than a single experience with Voltair. Your mileage may vary.
It is Nielsen’s job to prove that gains with Voltair are not real, that the box is creating listeners rather than helping decoders capture listeners that meters would have otherwise missed.
Unless it can prove that PPM today is capturing all listening and Voltair is artificially inflating the numbers, Nielsen has no right to tell radio stations that they cannot use Voltair.
Nielsen should put accuracy ahead of saving face.