Think that the technology behind PPM is simple, flawless, and without potential pitfalls?
You should have been in Las Vegas to hear Dr. Barry Blesser, Director of Research for The Telos Alliance, describe the science and technology behind watermarking systems like PPM: the process of embedding and then retrieving information concealed within audio.
It was the second NAB engineering talk on PPM at this year’s convention, this one delivered by the technical expert the US government called on when the Feds decided to look into complaints about PPM.
The issues and challenges Dr. Blesser discussed impact every radio station in PPM markets.
He delivered the sort of information Arbitron should have delivered in 2007 instead of the happy talk that filled NAB convention PPM sessions back then.
We’ve asked Dr. Blesser for permission to reproduce his PowerPoint presentation, and we intend to publish it here at Radio Insights. In the meantime, here are some key issues to ponder.
The ability of PPM to determine which station a person is listening to depends primarily on the content the station is broadcasting. The greater the proportion of content between 1 kHz and 3 kHz, the “louder” the embedded information, and the more likely you’ll get credit for the listening.
That’s because PPM codes ride beneath the content in this range, so if there is no content, there is no code. You might as well be off the air.
The amount of information in this range varies by format, but it can also vary within a format, so the belief that some formats simply do better than others in PPM is only partially correct.
Our take: Start monitoring the spectral content of your programming. At least download a spectrum analyzer iPad app and pay close attention to how much content falls between 1 and 3 kHz.
Talk stations and music stations with morning shows that talk a lot may be surprised how seldom they broadcast 1-3 kHz content.
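For those who want to go beyond an iPad app, a few lines of code can put a number on it. Here is a minimal sketch, our own illustration rather than anything from Dr. Blesser’s talk, assuming a mono 16-bit WAV aircheck (the file name is a placeholder) and NumPy:

```python
# Minimal sketch (our illustration): estimate what share of a recording's
# energy falls in the 1-3 kHz band that PPM codes depend on.
# Assumes a mono 16-bit WAV file; needs only NumPy and the standard library.
import wave

import numpy as np

def band_energy_fraction(path, lo_hz=1000.0, hi_hz=3000.0):
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        pcm = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    samples = pcm.astype(np.float64)
    power = np.abs(np.fft.rfft(samples)) ** 2            # power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)  # bin centers in Hz
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return power[band].sum() / power.sum()               # fraction of total energy

# 'aircheck.wav' is a placeholder file name.
print(f"1-3 kHz share: {band_energy_fraction('aircheck.wav'):.1%}")
```

Run it against airchecks of talk segments and music sweeps and you should quickly see how much the 1-3 kHz share swings from one piece of content to the next.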
A second issue is ambient noise in the listening environment.
Ambient noise can compound the problem of getting credit. The PPM encoder adjusts the level of the codes based on the level of the content, maintaining a code level that is a fraction of the content level.
The problem is that the encoder has no idea what environment a listener is in, or what the ambient noise level is.
If your 1-3 kHz content is low, the code level is low too. But what if it’s drive-time and a large proportion of your panelists are listening while driving? There’s a good chance you’re not going to get credit for a portion of that listening.
Ironically, this point was driven home during a separate presentation by John Kean, Senior Technologist at NPR. He presented results of an NPR study showing how people listen to the radio in noisy environments.
People do not turn up the radio to fully compensate for greater noise. They just listen at what effectively is a lower level.
Worse yet, a great deal of ambient noise, road noise in particular, falls into the 1-3 kHz range, further burying the PPM codes.
Combine a low level code with a noisy environment, and you are probably going to lose credit for some quarter-hours.
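To see how the two effects stack, consider a deliberately crude toy model, our own illustration rather than anything from either presentation; the 20 dB code margin is an assumed placeholder, not Nielsen’s actual figure:

```python
# Toy model (our illustration, not from the presentation): assume the code
# rides a fixed margin below the station's 1-3 kHz content level and is
# detectable only while it stays above the in-band ambient noise.
# The 20 dB margin is a made-up placeholder, not Nielsen's actual figure.
def code_detectable(content_db, noise_db, code_margin_db=20.0):
    code_level_db = content_db - code_margin_db  # code hides below the content
    return code_level_db > noise_db              # crude audibility test

print(code_detectable(70, 40))  # quiet living room -> True, code clears the noise
print(code_detectable(70, 65))  # highway road noise in band -> False, code buried
```

The toy model only captures the direction of the effect: hold the content level fixed, raise the in-band noise, and the code is the first thing to disappear.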
We recently wrote that PPM can kill formats, but it turns out that the problems are much broader than we thought. Every format has some content that encodes poorly.
Radio stations using Voltair are finding that some songs don’t encode well. They are either pulling those songs or tagging them with sound codes so the music scheduler won’t play them near one another. Some stations also play them only late in the quarter-hour.
It turns out that compressed audio like MP3s doesn’t encode well. MP3s tend to lack a dense mid-range, so there’s less energy for the PPM codes to hide behind.
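You can check this on your own material. A hedged sketch, assuming ffmpeg is on your PATH and reusing the band_energy_fraction function from the earlier example (the 128k bitrate and file names are placeholders):

```python
# Hedged check (assumes ffmpeg is installed and reuses band_energy_fraction
# from the earlier sketch): round-trip a WAV through MP3 and compare the
# 1-3 kHz energy share before and after. File names and bitrate are placeholders.
import subprocess

subprocess.run(["ffmpeg", "-y", "-i", "aircheck.wav", "-b:a", "128k", "tmp.mp3"],
               check=True)
subprocess.run(["ffmpeg", "-y", "-i", "tmp.mp3", "roundtrip.wav"], check=True)

print("original :", band_energy_fraction("aircheck.wav"))
print("after MP3:", band_energy_fraction("roundtrip.wav"))
```

Any drop in the 1-3 kHz share after the round trip is energy the codes no longer have to hide behind; depending on the bitrate and the material, the difference may be small.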
After the presentation we tuned around the Las Vegas radio dial with a spectrum analyzer. The first graph at the left shows a well-known talk show host speaking. The energy (think volume) of the content is on the vertical axis; frequency is on the horizontal axis.
Notice the absence of content in the 1-3 kHz range. This is an extreme example where the host was speaking quietly for effect. At other times there were opportunities for the encoder to inject information, but for much of the time the encoder was probably silent.
The second example is a music station close by on the dial.
The song shown here is a pop-alternative hit, and while the graph looks much different, it too shows low content in the critical range relative to the rest of the spectrum. There’s more content for the PPM codes to hide behind, but not as much as in other songs the station played.
Listening to Dr. Blesser speak, we were reminded of the many panels at NAB Radio Shows where program directors talked about PPM and how they were shutting up their jocks.
The conventional wisdom was that PPM panelists don’t like talk; they just want lots of music. Little did we know that it was the meter, not the panelist, that didn’t like talk.
Much of what we’ve “learned” about PPM is just plain wrong.
I would like to share some real-world observations about the detectability of PPM encoding, and its effect on the selection of program content.
Citing the phenomenon of inadequate 1-3 kHz audio content in some spoken word programming: While programming a N/T station in Phoenix, I employed a host whose performances (and I mean EVERY performance) would inevitably and predictably trigger PPM off-air alerts.
The 1-3K fundamentals in their voice were weak, and therefore the PPM encoding during their shift was frequently undetectable. You would never guess that from listening to the programs. But the PPM monitor knew. We altered mic processing, swapped mics, changed the PPM encoder.
I think I made it onto (then) Arbitron's "Ten Most Despised" list by being a very loud and frequent voice about PPM's inability to detect certain program material which was neither loud nor frequent enough for the technology.
How about music? PPM dramatically changed the tempo balance of all broadcast formats. Ballads that traditionally tested high in active listener evaluations were generating low PPM detectability (less present in the 1-3K range, less peaky).
Lackluster M-Score? The song is banished from airplay. Even when considering the effects of such forces as natural demographic evolution, cultural trends and technology, compare playlists of pre-PPM and post-PPM Adult Contemporary stations for tempo balance. It's hardly the same format.
In a win-at-all-costs world, you alter your content to generate the highest score. So, we've reshaped program content (down to the selection of hosts and songs) factoring in a design flaw with the measurement technique. What Frank Foti appears to be doing is unmasking the flaw and providing a path to better evaluate what people like and listen to. Bravo!
You still need cume/engagement to show up in the numbers. That requires attracting more ears, and having programming with immense listener appeal does that. So, Nielsen: admit it. PPM has significant merits and all has not been lost; continuous improvement is a virtue, not a sin. Get on with making things better. I (somewhat) understand the financial implications of that...but continuing to insist that 'all is OK' is a larger tragedy. The cat is out of the bag.
I am not presently active in the industry, but remain an ardent fan of the medium.
Posted by: Smokey Rivers | April 19, 2015 at 10:32 AM
Talk content is very high in the 1-3 kHz range, where voice lives. The problem is the pauses. This is where something like Voltair "may" help, but it won't cover up the horrific programming we hear on news/talk stations nowadays. It still comes down to the programming.
Posted by: Bill Simmons | April 18, 2015 at 07:21 AM
I'd really pursue this if I weren't a radio ghost!
And what about the data compression in NexGen?
Posted by: Bob Wood | April 16, 2015 at 11:35 AM