Stanford experts discuss polling challenges during the 2016 presidential election cycle
The polls leading up to the Nov. 8 presidential election showed Clinton with a clear lead, but Trump won the election. The reasons for that discrepancy range from who participates in polls to statistical errors.
As most of the votes rolled in on the night of Nov. 8 in the 2016 U.S. presidential election, many people across the country were surprised by the numbers they saw.
Polls leading up to Election Day showed Hillary Clinton leading Donald Trump on average by 3.2 percentage points, 46.8 to 43.6 percent, according to Real Clear Politics, a nonpartisan website that aggregates national and state polls.
As of Nov. 15, with some states still counting ballots, Clinton had 61,318,162 votes, or 47.8 percent, and Trump had garnered 60,541,308 votes, or 47.2 percent. Although Clinton appeared to have won the popular vote by about 776,854 votes, the distribution of Trump’s votes across the country gave him a majority in the Electoral College, making him the winner of the race.
In the days following the election, many people expressed surprise and frustration that the polls were so inaccurate.
“Ultimately, pollsters are not Nostradamus,” said Bill Whalen, a research fellow at Stanford’s Hoover Institution. “When it comes to polls, at best, you’re showing the current state of the race but you have no idea who will show on the actual Election Day.”
Whalen and other experts at Stanford provided a few reasons for why the 2016 presidential race proved more challenging for pollsters.
A close race
A big challenge during the 2016 election cycle was that the race between Clinton and Trump was incredibly close. The closer the race, the harder it is for any poll to capture an accurate snapshot of people’s sentiments toward the candidates.
“The basic factor in this election is just that it was a tight race,” said David Brady, a professor of political economy at Stanford Graduate School of Business and a senior fellow at the Hoover Institution.
“These were two very complicated candidates for people to process,” Whalen said. “And this causes problems in the polls.”
Whalen also argued that lately there has been a change in how people perceive pollsters. During the 2012 election cycle, several pollsters, including journalist and statistician Nate Silver and former Stanford professor Simon Jackman, accurately predicted the number of electoral votes President Barack Obama would receive, and those correct forecasts may have made people expect the same accuracy in 2016.
“The polling industry is now becoming a little too celebrity,” Whalen said. “It’s one thing to present a poll. But it’s another thing to be able to say what’s going to happen.”
Despite national random sample polls being “strikingly accurate in recent years,” the polling process always presents two major challenges: Some respondents don’t know who they are going to vote for and some people don’t want to tell the pollsters who they’re going to vote for, said Jon Krosnick, a professor of political science and of communication at Stanford.
People who fell into those two categories accounted for several percentage points of respondents in the final days before the election. For example, Bloomberg’s poll showed that 6 percent of respondents either weren’t sure of or didn’t want to reveal their choice. Pollsters use different models to predict what that group of people is likely to do.
“This time the pollsters may have guessed wrong for people who said ‘I don’t know,’” Krosnick said. “In this case, they may have caused a couple of percentage points of underestimation for Donald Trump.”
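The effect Krosnick describes can be sketched with made-up numbers (not from any cited poll): the split a pollster assumes for undecided or non-disclosing respondents can move the final estimate by a couple of points.

```python
# Illustrative sketch with invented numbers: how the assumed split of
# undecided/refused respondents shifts a poll's final estimate.

def allocate_undecided(clinton, trump, undecided, trump_share_of_undecided):
    """Split the undecided share (in percentage points) between the candidates."""
    trump_final = trump + undecided * trump_share_of_undecided
    clinton_final = clinton + undecided * (1 - trump_share_of_undecided)
    return clinton_final, trump_final

# Suppose a poll shows 46% Clinton, 43% Trump, 6% undecided or refused.
even_split = allocate_undecided(46.0, 43.0, 6.0, 0.5)    # keeps Clinton's 3-point lead
trump_leaning = allocate_undecided(46.0, 43.0, 6.0, 0.8) # flips the lead to Trump
```

An even split preserves the published margin, while a split that sends most of the undecided share to one candidate, as may have happened with late-deciding Trump voters, changes who appears to be ahead.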
Usually only about half of the country votes, so figuring out which survey respondents will actually turn out on Election Day also presents a challenge.
“It’s more of an art than a pure science,” Krosnick said. “Different survey organizations deal with that challenge differently.”
Trump may have succeeded in bringing out rural, low-education voters who usually don’t make it to the polls and whom pollsters usually don’t predict will vote, Krosnick said.
“This causes the pollsters to trip – not a huge trip, but a trip,” Krosnick said. “But you can’t blame the pollsters. They have to deal somehow with those analytical challenges.”
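A minimal sketch of a likely-voter screen, with invented respondents and an invented turnout cutoff, shows the problem Krosnick points to: a screen that excludes infrequent voters can understate a candidate whose supporters rarely turn out.

```python
# Hypothetical sketch: a simple likely-voter screen. Real survey organizations
# use more elaborate models; the scores, cutoffs, and counts here are invented.

def likely_voter_margin(respondents, turnout_cutoff):
    """Keep respondents whose self-reported turnout likelihood (0-10) meets
    the cutoff, then return the Trump-minus-Clinton margin among them."""
    screened = [r for r in respondents if r["turnout_score"] >= turnout_cutoff]
    trump = sum(1 for r in screened if r["choice"] == "Trump")
    clinton = sum(1 for r in screened if r["choice"] == "Clinton")
    return 100.0 * (trump - clinton) / len(screened)

respondents = (
    [{"choice": "Clinton", "turnout_score": 9} for _ in range(48)]
    + [{"choice": "Trump", "turnout_score": 9} for _ in range(42)]
    + [{"choice": "Trump", "turnout_score": 5} for _ in range(10)]  # infrequent voters
)

strict = likely_voter_margin(respondents, 8)  # drops the low-score Trump group
loose = likely_voter_margin(respondents, 4)   # keeps them
```

With these invented numbers, the strict screen shows Clinton ahead while the loose screen shows Trump ahead, which is why turnout modeling is, as Krosnick puts it, more art than science.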
Faulty state polls
When it came to some states in the upper Midwest, especially Wisconsin, Michigan and Ohio, the results of the election differed even more significantly from the polls.
“State-level polling is more difficult than national polling,” said Douglas Rivers, a senior fellow at the Hoover Institution and a professor of political science at Stanford. “State polls traditionally have much more variability. The data is iffy.”
It’s not clear yet why the polls in the upper Midwest in particular were further off, but experts have proposed several theories.
One possible reason is that those state polls didn’t properly control for the urban-rural difference in the electorate. Another theory is the prevalence of “shy” voters in those states, people who did not disclose whom they were going to vote for, Rivers said.
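One standard correction for the sample imbalance Rivers describes is post-stratification weighting. The sketch below uses made-up urban and rural shares to show how reweighting a skewed sample shifts an estimate.

```python
# Illustrative sketch (all numbers invented): post-stratification weighting,
# which reweights per-group results to match known population shares.

def weighted_support(sample_shares, support_by_group, population_shares):
    """Return (unweighted, weighted) support for a candidate across groups."""
    unweighted = sum(sample_shares[g] * support_by_group[g] for g in sample_shares)
    weighted = sum(population_shares[g] * support_by_group[g] for g in population_shares)
    return unweighted, weighted

# Suppose 70% of a state poll's respondents are urban but the electorate is
# 55% urban, and Trump support is 35% among urban and 60% among rural voters.
unweighted, weighted = weighted_support(
    sample_shares={"urban": 0.70, "rural": 0.30},
    support_by_group={"urban": 0.35, "rural": 0.60},
    population_shares={"urban": 0.55, "rural": 0.45},
)
```

With these invented shares, the raw sample understates Trump by nearly 4 points; a poll that skipped this step, or used the wrong urban-rural shares, would miss in exactly the way some Midwestern polls did.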
Krosnick pointed out that the vast majority of state polls are not done with scientific, random samples of residents and are instead surveys of people who volunteer to do regular surveys for money.
“Unfortunately, the world’s appetite for polls has escalated, but available budgets have not, especially at state-level, so we should not expect the non-scientific statistics to be accurate, and they were not,” Krosnick said.
Rivers said figuring out what happened with those Midwestern state polls will be a goal of the task force he is part of at the American Association for Public Opinion Research. A report is scheduled to be released in the spring of 2017.
“I suspect there is a common cause and effect, but I don’t know what it is,” Rivers said. “My guess is there is some variable that was skewed in the sample in the upper Midwest and in a few of the battleground states.”