In an effort to reduce research costs, some radio stations are doing home-brew on-line research with their "loyal listener" databases. If your station is one of them, and you are making programming and marketing decisions based on the results, a growing body of evidence questioning the reliability of on-line research should give you pause.
Procter & Gamble, the world's biggest buyer of survey research, spends $200 million each year on research. The company fielded two identical on-line studies one week apart. According to Kim Dedeker, vice president of consumer and market knowledge, the two studies delivered diametrically opposed results. And this is not an isolated case. There's growing evidence that on-line surveys do not replicate well. In other words, the results from one study to the next can vary so much as to make the research worthless.
ComScore Networks, a company that offers on-line surveys using panels of non-professional respondents, did a study of on-line research. It found that just 0.25% of the on-line population accounts for 32% of all on-line survey responses, and fewer than 5% account for more than half. This may be part of the explanation for the wildly differing results of on-line studies. When the participants in a study are a small, unrepresentative sample of a much larger universe, the results become very unstable: two on-line studies fielded identically can produce entirely different results.
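The arithmetic behind this instability can be sketched with a small simulation. The population size, survey length, and opinion scores below are invented for illustration; only the 0.25%/32% split comes from the ComScore figures. The sketch assumes the heavy responders hold more extreme opinions than everyone else, and compares the spread of repeated "skewed" surveys against repeated simple random samples of the same size.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100_000   # hypothetical on-line population (assumed for illustration)
heavy = 250   # 0.25% of it: the "heavy responders" ComScore identified

# Response propensity needed for 250 people to supply 32% of all responses:
#   heavy * w / (heavy * w + (N - heavy)) = 0.32
w = 0.32 * (N - heavy) / (0.68 * heavy)   # each heavy responder answers ~188x as often

# Hypothetical opinions on a 10-point scale: typical listeners rate the
# station 6, heavy responders are activists who rate it 9 (pure assumption).
opinion = np.full(N, 6.0)
opinion[:heavy] = 9.0

# Selection probabilities proportional to response propensity.
prop = np.ones(N)
prop[:heavy] = w
prop /= prop.sum()

def field_survey(n=400, weighted=True):
    """Draw n responses; weighted=True mimics the heavy-responder skew."""
    p = prop if weighted else None
    idx = rng.choice(N, size=n, replace=True, p=p)
    return opinion[idx].mean()

skewed = [field_survey(weighted=True) for _ in range(200)]
random = [field_survey(weighted=False) for _ in range(200)]

print(f"propensity multiplier ~ {w:.0f}x")
print(f"skewed surveys: mean {np.mean(skewed):.2f}, spread (sd) {np.std(skewed):.3f}")
print(f"random surveys: mean {np.mean(random):.2f}, spread (sd) {np.std(random):.3f}")
```

Under these assumptions the skewed surveys are both biased (their average drifts toward the activists' opinion) and far noisier from one fielding to the next than a true random sample, which is the replication failure described above.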
Harker Research has studied members of radio station loyal listener clubs and found results that parallel ComScore's findings. Members of a loyal listener club are not representative of a broad cross-section of a station's listeners. They are part of the station's long tail: a relatively small group of opinionated activists whose views are more extreme than those of the majority of the station's listeners. On-line surveys of a station's database therefore compound two biases, the unrepresentativeness of on-line survey participants and the unrepresentativeness of database members, and produce entirely unpredictable results. Making programming and marketing decisions based on a survey of this atypical slice of an audience can have a destructive impact on ratings.
The Advertising Research Foundation has announced that it is forming a council to draft new standards aimed at "stemming erosion of client credibility" regarding on-line research. According to the ARF, "Reports of the failure of on-line studies to replicate when repeated are becoming more common. The influence on results of the 'heavy on-line survey responder' is worrisome, but has not yet been studied in a disciplined and objective manner." The growing evidence of the unreliability of on-line surveys has become obvious to the P&Gs of the world, and it's hurting the on-line research business.