Could The Polls Be Wrong? It’s Possible.

Over this final weekend before the election, Nate Silver of FiveThirtyEight has taken fire for being too cautious in forecasting the outcome of tomorrow's election. Some have charged that his probabilities for a Clinton win are too low. Frankly, I don't blame him. I would be cautious too.

Silver and crew use a results-validated model to make election predictions. Like any model, its results are only as accurate as its inputs. Garbage in, garbage out.

Silver's models draw on a large number of public phone-based polls, which typically include some combination of landline and cell phone interviews. This would make me nervous. Very nervous.

The accuracy of phone polling has been declining for years for a host of reasons. It's one reason I've shifted everything to online polling. Also, media-sponsored public polls are generally less reliable than private campaign polls.

Polling by phone is an increasingly expensive and difficult endeavor. Today, phone polling should include some degree of cell phone interviews to augment standard landline interviewing. Adding cell phone interviews to the stew helps, but it is not a panacea for reaching the 40% to 50% of the population without a landline phone.

Missing from most cell phone surveys are interviews with voters who are not on a cell phone contract. Pre-paid and pay-as-you-go cell phones are two growth areas in the cell phone industry and at least a quarter of the cell phone population is probably missing from most polls. In the UK and Europe, pre-paid is the bulk of the cell phone market.

Also, including cell phone interviews is very expensive. Legally, cell phone numbers must be hand-dialed, which increases the labor expense of phone polling and reduces the quantity of available public polls. This is especially true at the state level, where local media lack the funds to conduct a series of strong phone polls.

However, simply including cell phone interviews doesn’t cure all the issues faced in polling by phone. Getting a representative sample of respondents willing to complete a survey is difficult. On a good day, we may get 11% of a sample to fully respond to a poll by phone. This is tragically and historically low.

Response rates to surveys also vary by season. Rates are low during the holidays (e.g., December), especially among consumer audiences. It's also difficult to get a representative sample during the summer, when people are on vacation, traveling, watching youth soccer and baseball, etc.

October is an equally problematic time for polling. Historically, response rates for October phone-based polls can decline by up to 25%. October is the first month of the last quarter of the year and is typically a huge month for business travel. You also have Halloween, a federal holiday, benchmark tests for school students under "No Child Left Behind", fall breaks, youth sports, etc. October isn't as bad as December for polling, but it's not far off. It's entirely possible that respondents to phone polls in October are not representative of who will show up on election day.

This non-response issue is the likely reason we've seen such crazy numbers in the October polls. Take a look at the NBC News/Wall Street Journal polls from October and November. Clinton's swing from +6 to +9 to +11 and back to +4 is more likely the result of changes in survey response rates than of a genuine change in the voting intentions of the electorate.

[Chart: NBC News/Wall Street Journal poll results, October and November]
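To see how powerful this effect can be, here's a minimal simulation sketch. Every number in it is made up for illustration; these are not the NBC data. The electorate's true preferences never change, yet a small dip in one camp's willingness to answer the phone moves the published margin by several points:

```python
import random

random.seed(42)

# Illustrative numbers only: true support is fixed at Clinton 48%,
# Trump 44%, other 8% -- a +4 Clinton race that never changes.
TRUE_SUPPORT = [("Clinton", 0.48), ("Trump", 0.44), ("Other", 0.08)]

def simulate_poll(n_dialed, response_rate):
    """Dial n_dialed voters; each answers with a probability that depends
    on which candidate they support. Return the observed Clinton-minus-Trump
    margin (in points) among those who actually responded."""
    responses = []
    for _ in range(n_dialed):
        r, cum = random.random(), 0.0
        for candidate, share in TRUE_SUPPORT:
            cum += share
            if r < cum:
                break
        if random.random() < response_rate[candidate]:
            responses.append(candidate)
    clinton = responses.count("Clinton") / len(responses)
    trump = responses.count("Trump") / len(responses)
    return (clinton - trump) * 100

# Wave 1: every group responds at 11%.
# Wave 2: Trump supporters respond at 10% -- a small, plausible dip.
even = {"Clinton": 0.11, "Trump": 0.11, "Other": 0.11}
skewed = {"Clinton": 0.11, "Trump": 0.10, "Other": 0.11}

print(f"Even response rates:   Clinton {simulate_poll(200_000, even):+.1f}")
print(f"Skewed response rates: Clinton {simulate_poll(200_000, skewed):+.1f}")
```

The even wave prints a margin close to the true +4, while the skewed wave prints something around +8, even though not a single simulated voter changed their mind. The point isn't the exact numbers; it's that you can reproduce an NBC-sized "swing" purely from who picks up the phone.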

It's wild swings like the ones in the NBC poll, plus a lot of prior research on phone polling, that make me question most of the current phone-based polls. As I mentioned earlier, I've shifted to online polling, where I typically see more stable numbers without the wild fluctuations we have seen in the phone polls.

Sadly, there are not that many online polls this cycle. Outside of YouGov and newer entries from Google Surveys and SurveyMonkey, we just don’t have a lot to work with. We also don’t have a lot of information on the source of the online sample used by these groups.

Most online surveys use an online panel of pre-recruited individuals or households who have agreed to take part in online market research surveys. Most are compensated in some way, and the quality of these samples varies. The quality of the respondents matters a great deal, so it's hard to gauge the accuracy of these polls without knowing who their respondents are and how they were recruited.

Two polls widely viewed as outliers this cycle are the IBD/TIPP Tracking poll (Phone) and the LA Times/USC Tracking poll (Online). Both polls have been remarkably stable over the final months of this election and both indicate a much closer race than the majority of other public polls. Also, both polls have given Trump leads over the past few months.

One possible reason for the outlier status of these two polls is the use of weighting.

Weighting is a technique in survey research where results are re-balanced to more accurately reflect the population being polled. A demographic profile (based on known data such as a census age distribution) is often used to rebalance the raw results so they better represent the real-world population.
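Here's a minimal sketch of the simplest version, cell weighting on a single variable (age). The population shares and the toy sample below are invented for illustration, not census figures or anyone's actual poll; real polls weight across many more variables, often simultaneously via raking:

```python
from collections import Counter

# Made-up population age shares (stand-ins for census targets).
POPULATION_SHARE = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical raw sample of 100: landline-heavy polls typically skew older.
respondents = (
    [{"age": "18-34", "vote": "Clinton"}] * 10 +
    [{"age": "18-34", "vote": "Trump"}] * 5 +
    [{"age": "35-54", "vote": "Clinton"}] * 18 +
    [{"age": "35-54", "vote": "Trump"}] * 17 +
    [{"age": "55+", "vote": "Clinton"}] * 22 +
    [{"age": "55+", "vote": "Trump"}] * 28
)

n = len(respondents)
sample_share = Counter(r["age"] for r in respondents)

# Each respondent's weight = population share / sample share for their cell,
# so under-represented groups count more and over-represented groups less.
weight = {age: POPULATION_SHARE[age] / (sample_share[age] / n)
          for age in POPULATION_SHARE}

raw = Counter(r["vote"] for r in respondents)
weighted = Counter()
for r in respondents:
    weighted[r["vote"]] += weight[r["age"]]

total_w = sum(weighted.values())
for cand in ("Clinton", "Trump"):
    print(f"{cand}: raw {raw[cand] / n:.1%}, weighted {weighted[cand] / total_w:.1%}")
```

In this toy sample the race is tied 50/50 before weighting; up-weighting the under-sampled young respondents turns it into roughly a 53/47 race. Whether that correction helps or hurts depends entirely on whether the weighting targets reflect who actually shows up to vote.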

From what I can tell, both the IBD/TIPP Tracking poll and the LA Times/USC Tracking poll heavily weight their results, going beyond standard measures such as gender, age, party affiliation, and ethnicity. If their use of weighting cures some of the deficiencies in non-response (i.e. households that don't answer or complete a phone poll) and coverage (i.e. prepaid cell phone households missing from cell phone samples), they might not be outliers after all.

This is especially true for the LA Times/USC Tracking poll, which has consistently given Trump a better chance of winning than pretty much any other polling organization. This poll is conducted online, uses a different set of questions than most "traditional" polls, and weights the results across a broad range of measures. It is conducted among a sample of respondents recruited specifically for this type of research. Their methodology could be an antidote to the problems with traditional polling that I mentioned earlier, or it could be the reason they end up very wrong about this election. They are basically in a go-big-or-go-home situation, and I totally support that. Stick to your guns, and if you are wrong, tell us why.

There is some recent precedent for properly weighted online polls outperforming phone polls: namely, Brexit.

Online polls outperformed phone polls in the UK's Brexit vote earlier this year. Phone polls tended to show a win for the "Remain" camp, but online polls by TNS UK and Opinium Research accurately predicted a "Leave" win, and both were viewed as outliers by most pundits. Both also used extensive weighting to ensure their results properly reflected the views of potential voters.

It's entirely possible that the phone polls of this election could be wrong, and if they are, a large component of Silver's data-driven election model will also be wrong. Nate Silver and crew are right to be cautious. It's not out of the realm of possibility that we are shocked by the election results tomorrow night.

Based on the public and private polling I’ve tracked and conducted, I can easily envision three very different scenarios occurring tomorrow night.

Scenario 1: Clinton cruises to an easy three- or four-point win in the popular vote and handily wins the electoral vote.

Scenario 2: The election is exceptionally close late into the evening, and we are up waiting for results from Colorado, Arizona, and Nevada to find out the winner.

Scenario 3: Trump wins the popular vote but loses the electoral vote.

Scenario 3 is my nightmare situation, leaving everyone unhappy and providing zero closure to an overly stressed electorate.

Conventional wisdom has been wrong all year, and I’m totally expecting some type of surprise tomorrow night. It could be a decisive win by Clinton, a close win by Trump, or a deadlocked election.

So forgive me if I don't jump on the Nate Silver bashing bandwagon. This has been a crazy year, and I expect tomorrow to be the same. I just hope this whole episode teaches us something about our fellow citizens and how to properly capture their actual thoughts, concerns, and beliefs for future elections.