Conventional Polling And Ballot Proposals – A Bad Combination

Reprinted from MichiganLiberal.com
June 10, 2009

Candidate polls work.  Let’s start with candidate polling.  Political polls are conducted all the time to assess which candidate is ahead in a race, and to predict who will win the election.  Each published poll has a “margin of error”, based on statistical theory, that states how far off its prediction is likely to be.  If you look back after an election, you almost always find that the polls were reasonably accurate in predicting which candidate would win, and by how much.

Ballot proposal polls don’t work.  It’s completely different with ballot proposals.  After an election, if you look at the final predictions in the polls, it’s common to find them missing by 10%, 20%, or even 30%.  A poll with a “margin of error” of 5% (which requires 400 interviews) should NEVER be off by 10%, not even one time in a thousand.  But missing by only 10% would be counted a success by most pollsters.
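To see just how impossible a 10% miss should be, here is a quick back-of-the-envelope check: a minimal Python sketch assuming simple random sampling and the standard 95%-confidence formula.  The 400-interview figure comes from the paragraph above; everything else is textbook.

```python
import math

# Margin of error for a proportion at 95% confidence,
# under simple random sampling (worst case, p = 0.5).
n = 400                                  # interviews, as cited above
se = math.sqrt(0.5 * 0.5 / n)            # standard error = 0.025
moe = 1.96 * se                          # ~0.049, the familiar "5% margin of error"

# Probability that a legitimate poll misses by 10 points or more:
z = 0.10 / se                            # a 10-point miss is 4 standard errors
p_miss = math.erfc(z / math.sqrt(2))     # two-sided normal tail

print(f"Margin of error: {moe:.1%}")             # -> 4.9%
print(f"P(off by 10+ points): {p_miss:.1e}")     # -> ~6.3e-05, far under 1 in 1,000
```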

Let’s take a specific example:  the proposal in 2002 to repeal single-action straight-ticket voting in Michigan.  Various polls were published, mostly showing the proposal was strongly supported by the public.  The final poll of the campaign, published six days before the election by EPIC-MRA, showed the proposal with 77% support, compared to 21% opposed, and only 2% undecided.  The margin of error was approximately 5%, but on election day the voters rejected the proposal by a 60% to 40% margin – the poll had missed by 37 points.  Of course, after the fact, we heard the usual gibberish about “late-deciding voters” and so on, but the truth was that the poll was completely worthless for predicting the outcome of the election.

When ballot proposal polls turn out to be completely wrong, a number of explanations are heard, none of which are reliable:

  • “When the voters don’t know how to vote, they choose NO.”   (What about when a proposal that’s supposed to lose instead wins overwhelmingly, like 2006-4?)
  • “It’s important to read the exact language from the ballot.”  (Firms that follow that practice have results that are just as bad.)
  • “It’s hard to poll emotional issues – public opinion is apt to be volatile.”  (The biggest errors seem to occur on issues that have very little public awareness at all.)

Why are ballot proposal polls different?  There’s no theory that predicts that candidate and ballot question polls should be different, but since they are, we need to create a theory to explain it.  Over the past fifteen years, I’ve gradually worked out an explanation, made public predictions based on it, and been backed up by the election results.  This theory has nothing to do with statistics, and very little to do with conventional political science.  It’s simply based on what I’ve observed about polls that were either accurate or not, and the factors that seem to predict their accuracy.

First, we ought to ask why candidate polls are (reasonably) accurate.  The answer seems to be that when voters talk to a pollster, they make roughly the same choices they would if they were marking a ballot.  So if you ask a representative group of voters whether they plan to vote for McCain or Obama, you get more or less the “right” answers, and the laws of statistics determine how closely you can predict the actual election results.  That may seem obvious, but given all the practical problems of conducting polls (people who can’t be reached, who refuse to answer, who lie, who don’t end up voting, et cetera), it’s almost amazing that candidate polling works as well as it does.
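For readers who want to see those “laws of statistics” at work, here is a minimal simulation sketch (the 52/48 race and the 10,000 trials are arbitrary choices of mine, not figures from this article).  Random samples of 400 voters land outside the 5% margin of error only about one time in twenty:

```python
import random

# Simulate repeatedly polling a two-candidate race with a true 52/48 split.
true_share = 0.52
n = 400            # interviews per poll
trials = 10_000
misses = 0
for _ in range(trials):
    yes_votes = sum(random.random() < true_share for _ in range(n))
    if abs(yes_votes / n - true_share) > 0.049:   # outside the stated MOE
        misses += 1
print(f"Polls outside their margin of error: {misses / trials:.1%}")  # ~5%
```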

A well-conducted candidate poll can furnish useful information about what will happen, whether spending additional money on a campaign would be a good investment, and even which issues are affecting voters’ decisions.  That’s what we mean when we say that candidate polling “works”.

In contrast, polling ballot proposals doesn’t work.  That is, even a well-conducted telephone survey of randomly chosen voters doesn’t provide accurate information about which side will win, or whether it’s worthwhile to invest more money, or what is really driving voters’ decisions.  As I said at Bill Ballenger’s “Pundit Summit” after the 2006 election, it would be much cheaper to buy three goats, have them ritually slaughtered, and then have the entrails professionally read – and the margin of error would probably be better.

The problem appears to be that when voters actually cast their ballots, their decision is mainly based on the actual language printed on the ballot, which they read under some time pressure and in secret.  The decisions that come out of that process simply aren’t the same as the ones they make when a survey taker asks them about the same issue over the phone.  It’s not clear exactly what causes the responses to be different – maybe it’s different for different ballot proposals – but the differences are often very large, and there doesn’t seem to be any way to “adjust” for them statistically.

A theory of ballot proposal testing.  Pollsters assume that voters have opinions about everything, and that all you need to do to find out what they are is ask.  Ask them: “Would you support increasing fuel taxes by five cents per gallon in order to repair Michigan roads?” and they’ll tell you.  If their responses aren’t what you want, you can buy advertising and wage a public campaign to change their opinions.  On election day, the ballots are just an official method of recording and counting those opinions.

But on most issues, voters don’t have a ready, pre-formed opinion.  Voters are full of “attitudes” which can be brought to mind by a trigger, but those attitudes don’t precisely answer a question like “Would you support increasing fuel taxes by….”.  Depending on the circumstances, that kind of question could bring a complaint about taxes in general, about the bad condition of the roads, about the dysfunctional state government, about a particular official who recently made some public statement, or a host of other responses.  Being telephoned to answer a survey will cause a representative sample of voters to awaken and summarize their latent attitudes, which will result in a hard-and-fast percentage who say they would vote “YES” on such a proposal.  Then the survey firm will perform the necessary calculations to project the election result from those answers.  And those answers will have virtually no connection to what would really happen on election day.

Instead, it appears that the election day result depends on which of the voters’ latent attitudes are triggered at the moment each voter, having finished voting on all the questions higher on the ballot, finally reads the headline and 100-word description on the ballot.  There’s nobody to answer questions or even read the language.  Nobody can be impressed by how public-spirited the voter is, or how self-centered.  What images flash through the voter’s mind:  Potholes?  Lazy government workers?  A gas-pump’s electronic price display?  Whatever each voter thinks about at the instant before they mark their ballot is what determines how the votes are cast.

Why would the decisions a voter makes in the voting booth be different from the decisions made in response to a survey question, when there’s no such problem with candidates?  Maybe reading a name on the ballot is so similar to hearing it over the phone that the same choices are made, while the more complicated language of a proposal produces differences.  No theory predicts that candidate and proposal surveys will be different, but the evidence is overwhelming that they ARE different.  And the differences between polls and election results aren’t consistent enough to adjust them away by applying a simple rule or multiplying by a factor.

How can we accurately poll proposals?  The only solution we’ve found is to mimic the voting situation as closely as we can.  After a lot of trial-and-error, we’ve created a survey method that we’ve tested on a dozen different proposals, and we’ve always gotten acceptable results.

First, we create a “sample ballot” which includes much of the actual ballot the voter will face.  (We don’t include every single race, simply because the ballot varies in every precinct of the state, and perfect imitation doesn’t turn out to be necessary.)  Our ballot might have the candidates for Governor, Attorney General, Congress, State Senate, MSU Trustee, Supreme Court, Court of Appeals, and a series of ballot proposals.  (In other words, it would omit Secretary of State, UM Regent, Probate Court, et cetera.)

We send paid canvassers to randomly selected neighborhoods throughout the state, where they knock on the doors of registered voters and ask if they’d be willing to complete our survey.  We explain that it’s anonymous and that they can mail it back in a provided envelope when they finish it.  Almost everyone approached agrees, and they are then pleasantly surprised when the canvasser hands them a shiny new dollar coin, “for your time”.  About 55% to 60% of the ballots are typically returned.
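As a rough sketch of the arithmetic on the back end, the returned ballots can be tabulated into a projection with a conventional margin of error, if you are willing to assume the returns behave like a simple random sample.  The function and the counts below are hypothetical illustrations, not PPC’s actual worksheets:

```python
import math

def tabulate_straw_ballots(yes: int, no: int) -> None:
    """Project a result from returned straw ballots (hypothetical example).

    Assumes returned ballots approximate a simple random sample, which
    door-to-door canvassing with a 55-60% return rate only roughly does.
    """
    n = yes + no
    p = yes / n                               # projected YES share
    moe = 1.96 * math.sqrt(p * (1 - p) / n)   # 95% margin of error
    print(f"n = {n}, YES = {p:.1%} +/- {moe:.1%}")

# e.g. 700 ballots handed out, ~57% returned -> about 400 usable ballots
tabulate_straw_ballots(yes=242, no=158)       # -> YES = 60.5% +/- 4.8%
```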

The results.  Our “Straw Ballot” technique isn’t perfect;  it seems to be slightly less accurate than statistical theory predicts, possibly because of all the compromises that are necessary to put it into operation.  Instead of missing by 4% or 5%, we typically miss by 6% or even 8%.  But never by 20% or 40%.
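One way to quantify that degradation (an interpretive aside of mine, not a calculation from the original article) is the survey statistician’s “design effect”, the ratio of actual to theoretical sampling variance:

```python
# Design effect implied by the error figures above (interpretive sketch).
theoretical_miss = 4.5   # percent: midpoint of the 4-5% theory predicts
observed_miss = 7.0      # percent: midpoint of the observed 6-8% misses

deff = (observed_miss / theoretical_miss) ** 2   # ~2.4
print(f"Implied design effect: {deff:.1f}")
# Equivalent to cutting the effective sample size by more than half --
# degraded, but nothing like the 20-40 point misses of phone polls.
```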

In addition to conducting polls immediately before elections to predict their outcome, we have used “straw ballots” to test language before an issue is placed on the ballot – to determine whether a campaign is feasible, or to help fine-tune the language by circulating multiple versions that vary in some interesting way.  Those tests were conducted for paying clients, and so are not available for disclosure, but their accuracy has been similar to our pre-election surveys.

Interestingly, our results for partisan candidates have not been particularly accurate – it appears ordinary telephone surveying is superior for that purpose.

There were five proposals on Michigan’s statewide ballot in 2006.  We’ve compiled all the polls cited in the Detroit Free Press and Detroit News during the final 60 days of the campaign, comparing them to PPC’s straw ballot results and the actual vote totals.

Proposal                Polling Firm    Published   Yes     No

1. Trust Fund           PPC             –           84.4    15.6
                        ACTUAL RESULT               81.0    19.0

2. Affirmative Action   Selzer          Nov 5       39      49
                        EPIC-MRA        Oct 27      40      44
                        EPIC-MRA        Nov 4       45      40
                        EPIC-MRA        Nov 7       41      46
                        PPC             –           64.7    35.3
                        ACTUAL RESULT               57.9    42.1

3. Dove Hunting         EPIC-MRA        Oct 27      25      66
                        PPC             –           30.4    69.6
                        ACTUAL RESULT               31.0    69.0

4. Eminent Domain       Selzer          Sept 4      43      44
                        Selzer          Nov 5       44      46
                        PPC             –           87.2    12.8
                        ACTUAL RESULT               80.1    19.9

5. K-16 Funds           Selzer          Nov 5       43      45
                        EPIC-MRA        Oct 27      37      36
                        EPIC-MRA        Nov 7       39      45
                        EPIC-MRA        Nov 4       38      43
                        PPC             –           40.5    59.5
                        ACTUAL RESULT               37.7    62.3
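To make the comparison concrete, here is a short script that recomputes a few of these errors from the table above.  One assumption is mine: undecided respondents are dropped and the published numbers renormalized to a two-way split, so that phone polls (which report undecideds) and straw ballots (which don’t) are on the same footing:

```python
# Rows copied from the table above: (proposal, poll, yes, no).
polls = [
    ("2", "Selzer Nov 5",   39,   49),
    ("2", "EPIC-MRA Nov 7", 41,   46),
    ("2", "PPC straw",      64.7, 35.3),
    ("5", "EPIC-MRA Nov 7", 39,   45),
    ("5", "PPC straw",      40.5, 59.5),
]
actual_yes = {"2": 57.9, "5": 37.7}           # actual YES share, from the table

for prop, name, yes, no in polls:
    two_way = 100 * yes / (yes + no)          # drop undecideds, renormalize
    err = two_way - actual_yes[prop]
    print(f"Prop {prop} {name:14s} YES {two_way:5.1f}%  error {err:+5.1f} pts")
# The phone polls miss Proposal 2 by 11-14 points and Proposal 5 by ~9;
# the straw ballots land within about 7 and 3 points respectively.
```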

The Author

Mathematically inclined voter list jockey. The last practicing hippie politician in America. Was elected forty years ago, at age 23, to the Ingham County Board of Commissioners, representing the Michigan State University campus - and I'm still there, now representing some of the grandchildren of my original constituents. A sometime attorney, whose practice is closer to a hobby than a profession.
