Did you think Hillary would win the election? Her campaign certainly did.

Why wouldn’t they? Nearly all polls showed her leading right up to the election.

The final Real Clear Politics poll of polls had Hillary ahead in 12 of 13 polls it tracked.

FiveThirtyEight said her chance of winning was 71.4%. The New York Times gave her an 85% chance. British odds-makers put the chance at 80%.

Fox News, of all sources, flatly declared, “Trump is headed for the worst (Republican) defeat since 1984.”

Yet Trump won. Donald J. Trump will become America’s 45th President on January 20th.

Remember all those wrong polls next time you’re trying to make sense of the latest Nielsen ratings.

Nielsen ratings are just like the Presidential polls.

And Nielsen ratings can be just as wrong as all those polls that said Clinton would win the election.

Nielsen ratings are estimates based on polling listeners. And like every other poll, Nielsen ratings have a margin of error.

The problem is that the margin of error, the mathematical uncertainty inherent in any poll, doesn't capture the full extent of the unreliability of polls, including Nielsen's ratings.
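For the curious, here's the textbook margin-of-error calculation in a few lines of Python. This is a sketch of the standard statistical formula, not Nielsen's or any pollster's actual code, and the function name is my own:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p from a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents showing a candidate at 48%:
moe = margin_of_error(0.48, 1000)
print(round(moe * 100, 1))  # roughly +/- 3.1 points
```

Notice what this formula measures: random sampling noise, and nothing else. It says nothing about whether the sample was weighted correctly in the first place.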

Improper weighting of responses can have a bigger impact on a poll's accuracy than sample size does. And critics point to misapplied weighting as what led pollsters astray.

Each month the political pollsters weighted their responses. They “adjusted” their numbers so that the make-up of each month's sample reflected their expectation of who would vote.

The problem is that small variations between the make-up of each monthly sample can be amplified by the weights pollsters apply to the numbers.

Misapplied weighting has the potential to exacerbate the month-to-month swings of a poll, as well as the differences between one poll and another.
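A toy example makes the point. In the sketch below (entirely invented numbers, not any real poll's data), the same raw responses produce different toplines depending on which turnout assumption the pollster weights by:

```python
# Hypothetical sketch: identical raw responses under two turnout models.
# Group A splits 60/40 for the candidate; group B splits 40/60.
support = {"A": 0.60, "B": 0.40}

def weighted_topline(weights):
    """Weighted support share, given assumed population weights per group."""
    total = sum(weights.values())
    return sum(support[g] * w for g, w in weights.items()) / total

# Pollster 1 expects the electorate to be 55% group A; pollster 2 expects 45%.
print(weighted_topline({"A": 55, "B": 45}))  # 0.51
print(weighted_topline({"A": 45, "B": 55}))  # 0.49
```

Nobody's answers changed; only the weights did. Yet one pollster shows the candidate winning and the other shows her losing.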

That's why, even on the eve of the election, the gap between polls was six points, well beyond the theoretical margin of error.

Now think about Nielsen ratings.

Nielsen ratings have their own theoretical margins of error. But do those calculations really reflect the entire uncertainty of the numbers?

As with political polls, Nielsen's listening data have to be weighted each book to better reflect the population demographics of each market.

In a diary market, each returned diary is assigned a person-per-diary value (PPDV) based on the diarykeeper's sex, age, and ethnicity.
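A simplified illustration of the idea, with my own toy numbers rather than Nielsen's actual procedure: roughly speaking, a diary's value reflects the population of its demographic cell divided by the usable diaries returned in that cell.

```python
# Simplified illustration (invented numbers, not Nielsen's actual math):
# each returned diary in a demographic cell stands in for its share of
# that cell's population in the market.
def ppdv(cell_population, usable_diaries):
    """Toy person-per-diary value for one demographic cell."""
    return cell_population / usable_diaries

# If the market has 24,000 women 25-54 and 12 usable diaries came back
# in that cell, each diary "speaks for" 2,000 people:
print(ppdv(24000, 12))  # 2000.0
```

The fewer diaries that come back in a cell, the more people each one represents, and the more a single diarykeeper's habits move the ratings.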

In PPM markets the weighting that Nielsen applies is much more complicated.

Each day each active panelist is re-weighted based on the day’s usable meters to match Nielsen’s best guess about the make-up of the market.

One day a given panelist might represent, say, 2,000 women; the next day she might represent 1,000. The day after that, it might be 3,000.

Small changes in a meter's weighting are amplified across meters, resulting in swings potentially well beyond the theoretical accuracy of the numbers.
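Here's a toy illustration of why that matters (invented numbers, not Nielsen's actual methodology): the panelist's listening never changes, but what she contributes to the estimate swings with her daily weight.

```python
# Hypothetical sketch: one panelist, identical listening every day,
# but a different daily weight assigned to her meter.
minutes_listened = 60                  # same 60 minutes of listening each day
daily_weight = [2000, 1000, 3000]      # persons this meter represents each day

# Her contribution to the market's estimated person-minutes of listening:
person_minutes = [minutes_listened * w for w in daily_weight]
print(person_minutes)  # [120000, 60000, 180000]
```

Same behavior, a threefold swing in the estimate, driven entirely by the weighting.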

Challenge Nielsen about their ratings estimates and they will confidently tell you that the numbers are accurate and that any shift from one month to the next is real.

How is this different from the pundits who confidently used the polls to claim that Trump had virtually no chance to win?

Next time you look at the Nielsen ratings think about poor Hillary Clinton and her supporters. They all trusted the numbers.

Don’t make the same mistake.