
How predictive modeling and forecasting failed to pick election winner

Nearly all predictive modeling algorithms were way off in picking the winner of the presidential election. What went wrong can strike any predictive analytics project if data scientists and other analysts aren't careful.

Prior to the 2016 presidential election, nearly everyone -- from data science guru Nate Silver's FiveThirtyEight website to The New York Times -- was predicting a huge likelihood of a comfortable victory for Hillary Clinton. And then their models broke.

What went wrong for the forecasters was hardly a unique set of problems, and it can strike any predictive modeling and forecasting project if analytics teams go down the wrong path. It involved a mix of overconfidence, poor data quality and mistaking a statistical likelihood for an ordained certainty.

"Unfortunately, [forecasters] give these numbers to one decimal place, and it sounds like it's a scientific formula, but it's not," said Pradeep Mutalik, an associate research scientist at the Yale Center for Medical Informatics, who blogs about elections for Quanta Magazine. "It's the overselling of certainty, and they ended up with egg on their faces."

Predicting the unpredictable

The day prior to the election, The New York Times Upshot election forecast gave Clinton an 85% chance of victory. The Huffington Post's model gave Clinton a 98% chance of winning. The FiveThirtyEight forecast was among the most modest, giving Clinton a 71.4% edge.

These forecasts weren't wrong, per se. The FiveThirtyEight model essentially said Donald Trump won three out of every 10 of its simulations. Even The Huffington Post's model, bullish as it was about a Clinton win, didn't completely discount the possibility of a Trump victory.
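To see what a figure like 71.4% actually means, consider a minimal Monte Carlo sketch in the spirit of such models -- not FiveThirtyEight's actual methodology. The forecast probability is simply the share of simulated elections a candidate wins; the states, electoral votes and win probabilities below are invented for illustration.

```python
import random

# Toy illustration, not FiveThirtyEight's actual model: a forecast probability
# is just the share of simulated elections a candidate wins. The states,
# electoral votes and win probabilities below are made up for this example.
STATES = {
    "A": (29, 0.55),   # (electoral votes, assumed probability candidate wins)
    "B": (20, 0.65),
    "C": (16, 0.48),
    "D": (38, 0.30),
    "E": (55, 0.90),
}
VOTES_TO_WIN = 80      # majority of the 158 toy electoral votes above

def simulate_once():
    """Draw one simulated election and report whether the candidate wins."""
    votes = sum(ev for ev, p in STATES.values() if random.random() < p)
    return votes >= VOTES_TO_WIN

def win_probability(n_sims=10_000):
    """Fraction of simulations the candidate wins -- the 'forecast' number."""
    return sum(simulate_once() for _ in range(n_sims)) / n_sims

print(f"Candidate wins {win_probability():.1%} of simulations")
```

A 71% forecast, in other words, is a statement about how often one outcome appeared across thousands of such runs, not a guarantee about the single election that actually takes place.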

And to be fair, Nate Silver tweeted just after 6 p.m. EST on Nov. 8, "This doesn't seem like an election in which one candidate had a 99% chance of winning," and frequently talked about the uncertainty surrounding polls and forecasts in the weeks ahead of the vote.

But that wasn't how a lot of forecasts were promoted by the prognosticators or interpreted by the public. By providing such fine-grained detail in their forecasts, modelers gave the public an impression of certainty.

People don't understand probabilities

"The problem with that is that it's a probability, and people don't understand probabilities," Mutalik said. "I think it was a problem of data presentation. It's very irresponsible to present data like this to a lay public. I think that probability should not have been used to score the race."

Mutalik added that forecasts like the Cook Political Report, which rated states on a qualitative scale of which way they were leaning rather than trying to quantify likely votes, did a better job of describing the uncertainty of the race.

One of the reasons forecasts missed the mark was an overreliance on poll data. Today's forecasters develop their models by aggregating as many polls as they can get their hands on. Every poll has a margin of error, but forecasters assume that aggregating polls from different sources cancels out those errors. The presumption is that each poll has its own sources of error -- oversampling one demographic group, for example. As long as the polls don't all share the same source of error, the strength of the aggregate compensates for the weaknesses of the individual polls, as the sketch below illustrates.
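The statistical intuition behind aggregation can be shown with a small, hypothetical simulation: when each poll's error is independent, averaging many polls shrinks the typical miss. The support level and error spread below are assumptions chosen for illustration, not real polling data.

```python
import random
import statistics

# Hypothetical numbers for illustration only: the "true" support level and the
# size of each poll's independent error are assumptions, not real data.
TRUE_SUPPORT = 0.48    # actual share of voters supporting the candidate
POLL_ERROR_SD = 0.03   # standard deviation of each poll's independent error

def run_poll():
    """One poll: the truth plus that poll's own independent error."""
    return TRUE_SUPPORT + random.gauss(0, POLL_ERROR_SD)

# Compare the typical miss of a single poll with that of a 20-poll average.
single_polls = [run_poll() for _ in range(1000)]
aggregates = [statistics.mean(run_poll() for _ in range(20)) for _ in range(1000)]

print(f"typical error, single poll : {statistics.stdev(single_polls):.3f}")
print(f"typical error, 20-poll avg : {statistics.stdev(aggregates):.3f}")
# Averaging 20 independent polls cuts the typical error by roughly sqrt(20).
```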

But in this election, there may have been more error in the polls than was recognized at the time. There's been a lot of talk about shy Trump voters who found it socially unacceptable to admit, even to pollsters, whom they supported, and this common cause of polling error could have pulled the aggregations wide of the mark.
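A companion sketch, again with made-up numbers, shows why a shared source of error -- such as a systematic undercount of one candidate's supporters -- survives aggregation: the independent noise averages away, but the common bias does not.

```python
import random
import statistics

# Hypothetical illustration: if every poll shares the same bias -- say, a
# systematic undercount of one candidate's supporters -- averaging more polls
# does not remove it. All numbers below are assumptions for this example.
TRUE_SUPPORT = 0.48    # actual share of voters supporting the candidate
SHARED_BIAS = -0.02    # every poll undercounts that support by 2 points
POLL_ERROR_SD = 0.03   # each poll's own independent noise

def run_poll():
    """One poll: truth, plus the bias shared by all polls, plus its own noise."""
    return TRUE_SUPPORT + SHARED_BIAS + random.gauss(0, POLL_ERROR_SD)

aggregate = statistics.mean(run_poll() for _ in range(50))
print(f"true support    : {TRUE_SUPPORT:.3f}")
print(f"50-poll average : {aggregate:.3f}")
# The independent noise averages away; the shared two-point bias remains.
```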

Forecasters discount significant events

There's also the issue of enthusiasm. Michael Cohen, an adjunct professor in George Washington University's Graduate School of Political Management and CEO of Cohen Research Group, a public opinion and market research firm in Washington, D.C., said forecasters discounted the large crowds at Trump's rallies and the strong engagement the candidate garnered on Twitter.

These factors are harder to work into predictive modeling and forecasting than poll data, but, ultimately, they pointed to voters who were more willing to show up at the polls on Election Day than nominal Clinton supporters were.

"When you're trying to understand what's going on in the country, or in your company, you don't just look at one piece of data," Cohen said. "The bottom line for me is that polling can't be the only data you look at."

Ultimately, the industry that's built up around predictive modeling and forecasting for elections may be due for a reckoning. James Taylor, CEO of consultancy Decision Management Solutions, said an election between two specific candidates is a one-time event that will generate its own circumstances. As a rule, one-time events can't be predicted well using historical data. "Basic statistics mean that one-off events can't be analyzed for accuracy," he said.

The notion of assigning a single-number probability to a particular outcome can be more challenging than we've come to believe, and it may not fit the way average voters think. "It's human nature," Mutalik said. "Even when polls give the margin of error, people just take the expected outcome."

Next Steps

Data visualization plays key role in developing predictive modeling algorithms

How PayPal uses predictive modeling to stop fraudsters

How predictive analytics can answer complicated business questions
