You’ve probably made the pilgrimage to Columbia, or before that to Beltsville to review the diaries. You sit in a small room with your call letters on the door as an Arbitron staff member brings in tray after tray of diaries. It is a ritual as old as Arbitron.
Reviewing the diaries is a great learning experience, because we can essentially watch the process of filling out a diary unfold as we turn the pages. We can see the sloppy handwriting Arbitron editors have to deal with. We can see how loyalty and favoritism impact the numbers as listeners draw lines down the page, rewarding some stations and punishing others.
A diary review also confirms what we’ve always known: The process is not flawless. Virtually every review uncovers miscredited listening, unidentified listening, and other problems that cost us a few quarter-hours. Rarely are these problems serious, but now and then a diary review forces Arbitron to reissue a book.
If you’re old enough, you may also remember Arbitron’s Mechanical. Long before PD Advantage and Maximi$er, we could see a printout of every keypunched diary. The Mechanical showed us every diary by age, sex, county, and ZIP code, along with the time of tune-ins and tune-outs for each listening episode, the total number of quarter-hours the person listened, the quarter-hours that went to us, and even the person’s PPDV (persons per diary value).
Programmers who have never seen a Mechanical probably don’t appreciate how much information it provided. It was like bringing those trays of diaries back to the station, but better. We pored (and sometimes anguished) over each listener, one by one.
The Mechanical is an example of respondent-level data. We could see how a single individual impacted the process. It provided a sniff test for each book, a gut check, if you will. It allowed us to see the raw data that went into each book. Sometimes it raised doubts, sometimes it gave us greater confidence, but in either case it was priceless.
While the Mechanical has been replaced with software that gives us a less visceral feel for listener behavior, at least stations in diary markets can still take a look at the diaries. Stations in PPM markets have lost all contact with listeners.
Our PPM clients have asked Arbitron for respondent-level data, and we have requested PPM respondent-level data directly, but all requests have been denied. The only entities with whom Arbitron has shared this information are research companies that ultimately provide Arbitron-favorable "research."
We’ve asked Arbitron why they refuse to release respondent-level data, and we’ve gotten a number of different answers. One argument is that when reviewing a book we were looking at past participants; now that panelists are ongoing participants, their confidentiality must be maintained.
Arbitron can cite MRC rules requiring that participant anonymity be maintained, but the MRC also states that raw data must be available for inspection. The Mechanical maintained complete respondent anonymity and yet provided the raw data the MRC requires. A PPM Mechanical could do the same thing.
One wonders why so basic a principle as allowing Arbitron clients to review individual participants has been eliminated in the transition to PPM. It would be simple to resurrect the Mechanical so that we can see the same things in PPM markets that we used to see in diary markets.
How can PPM-measured stations trust data they are not allowed to see? How can a station accustomed to the openness of the diary process be expected to have the same confidence in a completely closed process?
There is nothing more essential in creating confidence in PPM than pulling back the curtain and allowing broadcasters to see what is really happening with PPM. We need to see the raw participant data. PPM stations need the process opened in the same way the diary process is open. Without that, we’ll always wonder what Arbitron is hiding with PPM.
So here’s the most important question we need Arbitron to answer: