Category Archives: Opinion polls

More about horse race polling post mortem

As pollsters continue to explore theories of what went wrong, an interesting picture is emerging. So far, little statistical support has been found for the “shy Trump voters” theory (see this analysis from 538). Opinions seem to converge on polls underestimating the unity of the republican base and the eleventh-hour surge in enthusiasm that polls were slow to capture. Strategies aimed at fragmenting the republican base didn’t work as expected. This squares with the conclusion we reached in the previous post: in the case of republican women, party identification was a much stronger factor in their opinions and voting behavior than gender identification. Similarly, post-election analysis shows that the attempt to sway larger numbers of Latino voters – another target group – wasn’t very productive either, despite earlier expectations that the GOP, and its base, would splinter.

An additional issue, also noted in the previous post, is that horse race polls impose a potentially misleading identity framework, offer little support for micro-targeting, and are slow to reflect shifts in the electorate. The broad characteristics of the electorate such polls measure (gender, party affiliation, ethnicity) cannot compete with the detailed user profiles accumulated by social networks and search engines, which allow much more precise targeting. Nor can traditional polling compete with the speed of measuring feedback in social networks – in particular the effects of micro-targeted fake news, which appear to have been a factor in boosting or depressing enthusiasm. Analysis of larger surveys, such as ANES, can help with micro-targeting – but it certainly won’t help with the need to measure responses at Internet speed. The three types of analysis – large infrequent surveys, horse race polls, and social network analysis – should be done in combination.

Election polling post mortem

While polling and forecasting of the 2016 presidential election may not have been too far off (within error margins), the outcome was the opposite of the widely perceived Clinton advantage. So many people were surprised – the pundits and pollsters no less so than the public! Many explanations of this shocking miss have been offered: respondents too shy to say they were planning to vote for Trump; the media’s focus on scandals at the expense of analysis, and the false sense of parity when discussing opposing positions; the FBI’s intervention 10 days before Nov 8th – too late for polling to register; the demotivating effect of polls anointing Clinton too early; etc. Many more explanations are yet to be offered. But one undisputed issue is that horse race polls are not designed to uncover and explain fundamental patterns and shifts in the opinions of the electorate. Sufficient depth is provided only by larger surveys, with many questions. In stock market terms, technical analysis should be supported by analysis of the fundamentals. Not only do we need better data, we also need better ways to query, visualize and understand it. Let’s try to find some explanations in the 2016 American National Election Survey (ANES) Pilot. With all its drawbacks (it is an opt-in online panel, and we are looking at the initial release, from January 2016, which may contain data errors), it offers a wealth of information on political preferences and stereotypes.

One of the key issues of this electoral season, on the Clinton side, was engaging women. However, republican women preferred Trump by a wide margin. Was trying to encourage them to vote for the first female presidential candidate, based on issues that should be important to them as women, the right strategy? Reasonable questions to ask are: do the opinions of republican women align more with republicans or with women? Should they be treated as a separate target group in a campaign? That would be natural based on horse race polls, where questions about gender and party affiliation are routinely asked – but is that right, or would it skew campaign planning? What are the attitudes toward women-related issues among republican women?

The 2016 ANES Pilot provides some interesting answers. Open this survey in SuAVE and explore along.

  1. Let’s switch to bucket view (second icon at top-right), and zero in on republican women. Type “Gender” in the “Search variables…” search box, expand it, and check “Female”. Then search for “repub” by typing it in the same search box, expand “PartyID Republican First”, and check “Republican”. Now you see icons that represent republican women, and can explore how they responded to survey questions. Just click on any icon to see answers of that respondent, or select a variable from the drop-down list at top-right, to examine distributions. For example, here is how this group voted in 2012: 87% definitely participated in the elections – so this is a solid group to follow and try to influence.

     87 percent of republican women voted in 2012

  2. Let’s now see where this group stands with respect to common controversies/stereotypes. Type “Obama” in the search box, and then select “Barack Obama is/is not a Muslim” from the drop-down. See it in SuAVE: 68% of republican women think he is a Muslim. Click “more info” next to the red box with the count and percentage of respondents. This will give you interesting numerical measures of the pattern you observe: both factors (being a republican and being a female) make positive contributions to the response “Obama is a Muslim.” If we remove gender from this equation, the statement “Obama is a Muslim” becomes less prevalent (by 7 percentage points), because the percentage of republicans who support this stereotype is 61% (or 75 out of 123 republicans in the survey). See this result in SuAVE. In other words, republican women are to the right of republican men on this question: only 54% of republican men think that Obama is a Muslim.

     61 percent of republicans believed that Obama is a Muslim; 68 percent among republican women

     “Women’s rights most important for choosing a political candidate” was only ranked fourth by one person among republican women

  3. Now let’s see how this political-demographic group responds to women’s issues. Type “women” in the search variables box and select “Women’s rights most important for choosing” [a political candidate] from the drop-down box (you can see how all questions and responses were formulated in the survey codebook – click About Survey at the top right). There will be just one respondent who included this issue in her preferences (and gave it the 4th rank). See it in SuAVE. If you remove “republicans” as one of the filters, to see how women of any political persuasion define their preferences with respect to women’s issues, you’ll get a very different picture: 92 women rank women’s issues among their top four preferences. You can type “most important” in the search box to explore the importance of other issues for these groups, from gun control and terrorism to health, national debt, environment, inequality, poverty and morality.

     Overall, 92 women ranked women’s rights among their first four priorities
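The bookkeeping behind the percentages in step 2 can be sketched in a few lines of pandas. The table below is a toy stand-in for the ANES data: the column names and respondent counts are assumptions, chosen only so the subgroup shares match the figures quoted above (61% of republicans overall, 68% of republican women, 54% of republican men), not the real codebook labels or sample sizes.

```python
import pandas as pd

# Toy respondent table of republicans only; counts are illustrative,
# picked to reproduce the percentages discussed above.
rows = (
    [("Female", True)] * 34 + [("Female", False)] * 16   # 68% of women agree
    + [("Male", True)] * 27 + [("Male", False)] * 23     # 54% of men agree
)
republicans = pd.DataFrame(rows, columns=["gender", "obama_muslim"])

def stereotype_rate(df):
    """Share of respondents in df agreeing with the stereotype, in percent."""
    return round(100 * df["obama_muslim"].mean())

all_rate = stereotype_rate(republicans)                                    # 61
women_rate = stereotype_rate(republicans[republicans.gender == "Female"])  # 68
men_rate = stereotype_rate(republicans[republicans.gender == "Male"])      # 54

# The "contribution of gender" is the gap between the subgroup rate
# and the overall rate among republicans:
print(women_rate - all_rate)  # 7
```

The 7-point difference printed at the end is the same quantity described in step 2 as the amount by which the statement becomes less prevalent once gender is removed from the filter.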


What do we have in the end? Republican women as a group are relatively less concerned about women’s rights, and may be to the right of republican men on some political questions. Another way to put it is that there are few gender differences in terms of how policy-related questions are answered by respondents who identify as republicans, and therefore treating republican women as a separate group of voters makes little sense.

As a side note: it reminds me of the first large public opinion survey during Soviet times, led by Boris Grushin in the city of Taganrog – that was during a brief period when sociology was acknowledged as a science. One of the interesting outcomes was that there were no discernible gender differences in how respondents answered questions.

Note: the above is a software demonstration of what one can do with a large survey dataset in SuAVE, and it focuses on only one issue where a more in-depth analysis might have resulted in a better strategy and forecasting. It is not an analysis of the reasons for the election outcome; for the latter, see a list of 24 theories from CNN, and the more detailed studies that will eventually follow. A discussion among pollsters about what went wrong is another excellent source.

– Ilya Zaslavsky