Arbitron claims that highly successful radio stations share similar characteristics that distinguish them from average stations.
Analyzing top stations in 33 PPM markets, Arbitron concluded that time spent listening (TSL) didn’t differ between top performers and average stations, but that top performers had greater numbers of listening sessions (occasions) than average stations.
The analysis and conclusions were widely reported, and the notion that the key to impacting PPM is increased occasions entered PPM folklore.
Unfortunately, the analysis suffered from several problems that raise doubts about its conclusions. A closer look at the relationships between share and the other rating variables suggests that occasions are not as important as Arbitron claims.
We believe that several technical mistakes led to the misguided conclusions. For example, Arbitron simply averaged share, cume, TSL, and occasions. Averaging can be misleading because averages are distorted by outliers, the few stations at the extreme top and bottom of any ranker.
What is the average share for stations in this hypothetical four station market?
Station A 9.7
Station B 2.9
Station C 2.5
Station D 2.0
The arithmetic answer is 4.3, but does a four share average really capture the essence of the market?
The “average” station is in the twos. Yes, there’s a ten share station, but this one “top performer” station has pulled up the market average by nearly two shares.
Understanding how one of the two share stations could overtake the 2.9 station is far more valuable than understanding the ten share station.
Because many markets have the equivalent of our hypothetical ten share station, rating analyses across markets should use what’s called the median, the station in the middle, rather than the average. In our example, the median would be 2.7, a more realistic view of the “average” station.
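The difference is easy to verify with the hypothetical four-station market above. A minimal sketch using Python’s standard library:

```python
# Mean vs. median for the hypothetical four-station market above.
from statistics import mean, median

shares = [9.7, 2.9, 2.5, 2.0]  # Stations A through D

avg = mean(shares)    # pulled up by the 9.7 outlier
mid = median(shares)  # midpoint of the two middle stations

print(f"mean:   {avg:.3f}")  # roughly 4.3
print(f"median: {mid:.1f}")  # 2.7, a truer picture of the typical station
```

With an even number of stations, the median is the average of the two middle values (2.9 and 2.5), which is why it lands at 2.7.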
A second problem with the analysis is the number of markets and stations involved. If one is going to aggregate ratings data for disparately sized markets, the data need to be “pretreated” to equalize the impact of each market.
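One common pretreatment (a sketch of the idea, not necessarily the method Arbitron should have used) is to standardize each measure within its market, so a big market’s raw numbers can’t swamp a small market’s when the data are pooled. The market figures below are hypothetical:

```python
# Within-market standardization (z-scores): each market contributes on the
# same scale regardless of its size. A sketch; the article does not specify
# which normalization is appropriate.
from statistics import mean, stdev

def zscores(values):
    """Standardize a list of within-market values to mean 0, stdev 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical weekly cume for a large market and a market one-tenth its size.
big_market_cume = [850, 620, 410, 300]
small_market_cume = [85, 62, 41, 30]

# After standardizing, the two markets are directly comparable.
print(zscores(big_market_cume))
print(zscores(small_market_cume))
```

Because the small market here is an exact one-tenth scale copy of the large one, both lists standardize to identical z-scores, which is the point: pretreatment removes market size from the comparison.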
To better understand the role of occasions in impacting PPM, and illustrate the flaws in the Arbitron analysis, Harker Research examined a large Midwest market. We began by graphing the relationships between share and cume, TSL, and occasions for each of 30 stations with a 1+ share.
Share is the vertical axis, and the horizontal axis is one of the other three measures. Each station is shown as a marker.
The second graph is share charted against weekly TSL. Note that except for a couple of outlying stations, the station cloud is more or less round. This indicates a very weak relationship between share and TSL.
The circular pattern indicates that TSL is far less predictive of ratings success than cume. There are stations with four shares that have the same TSL as stations with two shares.
The third graph is share charted against weekly occasions. Note that the pattern is very similar to the TSL graph. There is no directional pattern to the cloud. Stations with a wide range of share have the same number of occasions.
So neither TSL nor occasions is very predictive of PPM success.
These graphs show what’s called correlation, how share changes as the other measures change. Squaring the correlation yields a percentage, an estimate of how much of the variation in share is accounted for by cume, TSL, or occasions.
Cume explains about 68% of the variation in share, compared to 25% for TSL and 21% for occasions.
In other words, occasions are the least predictive measure. They have an impact on share, of course, but increasing occasions has the smallest relative impact on share.
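For readers who want to reproduce this kind of figure, the percentages above are squared Pearson correlations (r²). A minimal sketch; the station figures below are made-up illustrative values, not the market data behind the graphs:

```python
# Squared correlation (r^2): the fraction of variance in share "explained"
# by another measure. Station data here are hypothetical, for illustration.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cume  = [950, 720, 600, 430, 310, 250]   # hypothetical weekly cume
share = [6.1, 4.8, 4.2, 2.9, 2.2, 1.9]   # hypothetical weekly share

r = pearson_r(cume, share)
print(f"r^2 = {r * r:.0%}")  # percent of share variance tracked by cume
```

A round, directionless cloud like the TSL and occasions graphs produces an r near zero and therefore a small r²; a tight upward-sloping cloud like share-versus-cume produces an r² closer to 100%.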
On top of that, there’s no practical difference between TSL and occasions when it comes to impacting share.
What this says is that both total TSL and occasions move in the same direction at the same rate. With PPM they are inseparably intertwined.
Arbitron’s notion that occasions, not TSL, are the key is demonstrably false.
At the weekly level (which is what matters most in the ratings), there’s no distinction between TSL and occasions. Both impact ratings equally, and neither has anywhere near the impact of cume.
Arbitron used daily estimates to reach its conclusions and looked at a narrow demo. The graphs and conclusions presented here are based on weekly all person estimates. This is another reason we believe our analysis is more generally applicable than Arbitron's.
We believe that analyzing daily estimates for a narrow demo involves too few panelists to be conclusive. The difference between the average station and top performers was a single occasion.
The bottom line is that cume is by far the most predictive measure of ratings success. If you increase your cume, you have a very good chance of increasing your share. Not so much with TSL or occasions.
Isn’t it ironic that the single most important tool to grow an audience, external marketing to grow cume, was the first thing cut when radio got in financial trouble?