Public Opinion Quarterly, Vol. 75, No. 5, 2011, pp. 962–981
THE EVOLUTION OF ELECTION POLLING IN THE
UNITED STATES
D. SUNSHINE HILLYGUS*
Abstract Public opinion polls have long played an important role in the
study and conduct of elections. In this essay, I outline the evolution of poll-
ing as used for three different functions in U.S. presidential elections: fore-
casting election outcomes, understanding voter behavior, and planning
campaign strategy. Since the introduction of scientific polling in the
1936 election, technology has altered the way polls are used by the media,
public, candidates, and scholars. Today, polls and surveys remain vital to
electoral behavior and our understanding of it, but they are being increas-
ingly supplemented or replaced by alternate measures and methods.
Public opinion polls are now conducted on every topic under the sun, everything from presidential approval to celebrity outfits and sports predictions, but they remain especially fundamental to the conduct and study of elections. Elections
and polling are so intertwined that it is hard to imagine one without the other. Poll
numbers provide fodder for media coverage and election predictions, they shape
candidate and voter behavior, and they are the basis of interpreting the meaning of
election outcomes. Public Opinion Quarterly was founded in January 1937 on
the heels of the advent of modern scientific polling in U.S. presidential elections.
The first issue included an essay, "Straw Polls in 1936," explaining how George Gallup's quota-controlled survey of a few thousand triumphed over the Literary Digest's straw poll of millions in correctly predicting the election outcome (Crossley 1937).
Election polling has evolved considerably since that inaugural issue. Perhaps
most notably, there has been an explosion in the number of election polls in the
United States. Traugott (2005) estimated a 900-percent increase in trial heat
polls between 1984 and 2000. The number has continued to grow since then,
due largely to the rise in interactive-voice-response (IVR) and Internet polls
since the 2000 election. In the 2008 election, there were an estimated 975
D. SUNSHINE HILLYGUS is Associate Professor of Political Science and Director of the Initiative on Survey Methodology at Duke University, Durham, NC, USA. *Address correspondence to D. Sunshine Hillygus, Department of Political Science, Box 90204, Duke University, Durham, NC 27708, USA; e-mail: [email protected].
doi: 10.1093/poq/nfr054
© The Author 2011. Published by Oxford University Press on behalf of the American Association for Public Opinion Research.
All rights reserved. For permissions, please e-mail: [email protected]
presidential trial heat questions, and well over a million interviews, conducted
between Labor Day and Election Day (Panagopoulos 2009). It is telling that
polling for the next presidential election now begins the day after the previous
one. On November 5, 2008, Gallup reported that Sarah Palin led as a potential
Republican candidate for the 2012 presidential election.[1]
There has also been a significant evolution in the nature of election polling.
For decades, polls were typically conducted by telephone, using live inter-
viewers, on behalf of media organizations or political candidates. Today, In-
ternet surveys and IVR polls are increasingly common, and polls are often
initiated by entrepreneurial pollsters conducting them not for a client, but
for self-promotion (Blumenthal 2005). The dissemination of poll numbers
has also changed, with many polls now being reported directly on blogs
and polling aggregation websites rather than by the traditional media. Journal-
ists are no longer the formal gatekeepers determining if a given poll is of suf-
ficient quality and interest to warrant the public's attention.
It also seems that we have seen a rise and fall in the credibility of polling since
POQ's inaugural issue. Reflecting on the Literary Digest prediction disaster in the 1936 election, Crossley's essay asked, "Is it possible to sample public opinion sufficiently accurately to forecast an election, particularly a close one?" (Crossley 1937, p. 24). Crossley argued that it was, provided a representative sample was drawn. Not everyone immediately shared his view, however. It was not until the 1960s and 1970s that surveys became a fixture of political campaigns (J. Converse 1987). Early skepticism that a sample of respondents could say anything about the opinions of millions gave way to a belief in the scientific basis of probability samples. Today, however, nonprobability samples, typically opt-in Internet surveys, are increasingly common, and probability samples are experiencing significant methodological challenges, such as increasing nonresponse and cell-phone-only households. We
now hear near-constant questioning of the motivation and methods of pollsters, often
instigated by partisan bloggers and pundits dissatisfied with the results of a poll.
There is, once again, a haze of skepticism surrounding the entire industry.
The role of polling in elections has been the subject of numerous books and
articles and has been covered with far more detail, richness, and insight than I
can provide here.[2] In this essay, I will briefly outline the evolution of polling as
used for three different functions in U.S. presidential elections: forecasting elec-
tion outcomes, understanding voter behavior, and planning campaign strategy.[3]
1. Gallup Poll, November 2008. Retrieved May 19, 2011, from the iPOLL Databank, Roper Center
for Public Opinion Research, University of Connecticut.
2. For a comprehensive history of polling, see Herbst (1993). Jacobs and Shapiro (2005) edited
a terrific special issue of Public Opinion Quarterly that covers election polling in the new infor-
mation and media environment.
3. Although not addressed in this essay, there is a growing body of research examining the nature
and role of polling outside the United States (e.g., Durand, Blais, and Larochelle 2004) and in non-
presidential races (e.g., Bafumi, Erikson, and Wlezien 2010).
The common thread throughout is that technology has altered the way polls are
used by the media, public, candidates, and scholars. And while polls and surveys
remain vital to electoral behavior and our understanding of it, they are being in-
creasingly supplemented or replaced by alternate measures and methods.
Forecasting Elections
As long as there have been elections, people have tried to predict the outcomes.
Before polls, knowledgeable observers, political insiders, and bellwether states
were the most commonly used election forecasts (Kernell 2000). Although
Gallup's quota-selected polls in 1936 marked the beginning of scientific elec-
tion polling, unscientific straw polls date to at least the 1824 presidential elec-
tion, when informal trial heat tallies were taken in scattered taverns, militia
offices, and public meetings (Smith 1990). In that first U.S. presidential election decided by popular vote, it was inevitable that people would try to gauge public opinion, however imperfectly. Fast-forward to today, and each new elec-
tion cycle brings a wave of horserace polling numbers feeding the insatiable
appetite of media, bloggers, and political junkies trying to predict the election
outcomes.
Unlike most survey research topics, pre-election polls have a truth benchmark: the election results.[4] So, after each new election, there is a postmortem
assessment of the accuracy of pre-election polls to see how closely the polling
industry and individual pollsters matched the official election returns. The
reputation of survey firms rests in no small part on these accuracy assessments.
The death of the Literary Digest has been attributed to its failed prediction of
the 1936 election despite successful predictions from 1916 to 1932 (Squire
1988). More recently, John Zogby, labeled the "prince of pollsters" after
nailing the 1996 election prediction, saw his reputation tarnished by poor
predictions in subsequent years, with NY Times election blogger Nate Silver
more recently calling him "The Worst Pollster in the World" (Silver 2009).
Of course, the very task of assessing accuracy raises questions about how
best to measure it. Should a forecast be called "accurate" if it correctly predicts the winner, the winner's vote share, or the margin of victory? In 1996, CBS
News correctly predicted Clinton as the winner, but they forecast an 18-percent
margin of victory over Dole, rather than the 8-point margin that Clinton actually received.
In contrast, Gallup was off by just 2 percentage points in predicting the margin
of victory in 2000, but they predicted the wrong winner of the popular vote
because they overestimated support for minor-party candidates. In 2004,
Fox News nailed Kerry's vote share of 48 percent, but they incorrectly predicted
him to be the winner. As these examples make clear, conclusions about accuracy
4. Of course, the butterfly-ballot mess of 2000 highlights the potential for errors in the actual elec-
tion results as well (Wand et al. 2001).
vary based on the particular yardstick used, and they can be affected by factors
like sample size, treatment of undecided and minor-party voters, and field dates
(Blumenthal 2008). In the aftermath of the 1948 polling debacle, a group of
social scientists, led by Frederick Mosteller, outlined eight different metrics
for assessing polling accuracy (Mosteller et al. 1949); more recently, Martin,
Traugott, and Kennedy (2005) have added a ninth. Most commonly used are
"Mosteller Measure 3," the average absolute error on all major candidates between the prediction and the actual results, and "Mosteller Measure 5," the absolute value of the difference between the margin separating the two leading candidates in the poll and their margin in the actual vote. Increasingly, scholars are also using Martin et al.'s predictive accuracy
measure, which is based on the natural logarithm of the odds ratio of the out-
come in the poll and the outcome in the election. By all measures, 2008 was
considered a banner year for the polling industry and, by some metrics, could be
labeled the most accurate since 1956 (Panagopoulos 2009). As a whole, the
polling industry has a strong track record (Traugott 2005), but there have been
some embarrassing failures throughout history. Most famously, the polls
predicted that Republican Thomas Dewey would beat incumbent Democratic
president Harry Truman in the 1948 election. More recently, the polls got it
wrong in the 2008 election when they predicted that Barack Obama would
defeat Hillary Clinton in the New Hampshire Democratic primary. Predicting
election outcomes is a difficult and high-stakes business, so it is important to
understand why some polls get it right and some get it wrong.
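To make these accuracy metrics concrete, the sketch below implements the verbal definitions given above; the candidate shares are hypothetical, and the code is an illustration rather than any pollster's or author's official implementation.

```python
import math

def mosteller_3(poll, result):
    """Average absolute error across major candidates (Mosteller Measure 3)."""
    return sum(abs(poll[c] - result[c]) for c in poll) / len(poll)

def mosteller_5(poll, result, a, b):
    """Absolute difference between the poll margin and the actual margin
    for the two leading candidates (Mosteller Measure 5)."""
    return abs((poll[a] - poll[b]) - (result[a] - result[b]))

def predictive_accuracy(poll, result, a, b):
    """Log of the odds ratio of candidate a vs. b in the poll relative to the
    election, in the spirit of Martin, Traugott, and Kennedy (0 = no bias)."""
    return math.log((poll[a] / poll[b]) / (result[a] / result[b]))

# Hypothetical two-candidate example (shares in percent)
poll = {"A": 51.0, "B": 46.0}
result = {"A": 49.0, "B": 48.5}
print(mosteller_3(poll, result))                              # 2.25
print(mosteller_5(poll, result, "A", "B"))                    # 4.5
print(round(predictive_accuracy(poll, result, "A", "B"), 3))  # positive: the poll overstated A
```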
Like any survey, the quality of predictions can be affected by sampling error
and nonsampling errors, including coverage error, nonresponse error, measure-
ment error, processing error, and adjustment error (Groves et al. 2009). It is
widely recognized that random sampling error can produce fluctuations in polling
estimates based on chance alone, simply because a poll includes a sample of
respondents rather than the full population. Such error is expressed with the
margin of error that is typically reported alongside polling estimates, and
the simple (but costly) solution is to increase the sample size. Of greater concern
are the systematic errors introduced by the pollsters (or analysts) and respondents
that can bias the election forecasts.
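As a quick illustration of the random sampling error just described, the standard 95-percent margin-of-error calculation for a proportion (a textbook formula, not any particular pollster's adjustment) shows why larger samples shrink the error only slowly:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A candidate at 50 percent support
print(round(100 * margin_of_error(0.5, 1000), 1))  # about 3.1 points with 1,000 respondents
print(round(100 * margin_of_error(0.5, 4000), 1))  # about 1.5 points; quadrupling n only halves the error
```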
Pollsters must make a variety of design decisions (about the mode, timing, sampling method, question formulation, weighting, etc.), and each of these
methodological decisions can potentially bias the results. Research has found,
for instance, that the number and type (weekend vs. weekday) of days in the
field were associated with predictive accuracy, reflecting nonresponse bias (Lau
1994). Mokrzycki, Keeter, and Kennedy (2009) found that telephone polls
excluding cell-phone-only households had a slight bias against the Democratic
candidates, an illustration of coverage bias. Highlighting the importance of
measurement error, Crespi and Morris (1984) demonstrated that question order
produced different estimates of candidate support. As other essays in this
issue discuss in more detail, there are a wide variety of other methodological
decisions that can directly affect data quality; for pre-election polling, the def-
inition of likely voters and the treatment of undecided voters are of particular
concern.
Election forecasts can go astray simply because they must predict future be-
havior. In his 1937 essay, Crossley wrote that "The greatest difficulty of all is the fact that the election itself is not a census, but an application of the sampling principle. Every poll is therefore a sample of a sample" (p. 25). In other words, pollsters are trying to generalize to an unknown population because
we do not know who will show up on Election Day. Different elections can have
different cross-sections of voters, a point highlighted in 2008 when an unusu-
ally large proportion of minorities and young people turned out to vote for
Barack Obama. As such, one of the most consequential methodological
decisions made by the pollster or analyst is the selection of likely voters.
Forecasts can vary wildly based on the particular method used to define the
population of expected voters (Crespi 1988). Erikson, Panagopoulos, and
Wlezien (2004) found, for instance, that a 19-point swing in support from Gore
to Bush in the 2000 presidential campaign was an artifact of Gallup's likely-
voter screen. Every survey firm has its own (often proprietary) method for
defining likely voters, typically relying on self-reported measures of voter regis-
tration or vote history, but rarely do those models engage the most up-to-date
scholarly research on political participation. For instance, pollsters typically
use a single likely-voter model for the entire country, but political science research
has shown that state-level factors such as registration requirements, early voting
rules, and competitiveness can affect an individual's likelihood of voting.
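Actual likely-voter screens are proprietary and differ across firms, but the general cutoff logic they share can be sketched as follows; the index items, weights, and threshold here are hypothetical and purely illustrative:

```python
def likely_voter_score(resp):
    """Hypothetical index built from common self-report items; not any firm's actual model."""
    score = 0
    score += 2 if resp.get("registered") else 0
    score += 2 if resp.get("voted_last_election") else 0
    score += 1 if resp.get("intends_to_vote") else 0
    score += 1 if resp.get("follows_campaign_closely") else 0
    return score

def likely_voters(sample, cutoff=4):
    """Keep only respondents at or above an (illustrative) cutoff score."""
    return [r for r in sample if likely_voter_score(r) >= cutoff]

# Hypothetical respondents
sample = [
    {"registered": True, "voted_last_election": True, "intends_to_vote": True,
     "follows_campaign_closely": False, "vote_choice": "A"},
    {"registered": True, "voted_last_election": False, "intends_to_vote": True,
     "follows_campaign_closely": False, "vote_choice": "B"},
]
print(len(likely_voters(sample)))  # 1: only the first respondent clears the screen
```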
Once an assumption is made about likely voters, the task of the pre-election
poll is to predict how voters will cast their ballots. The standard polling question
asks, "If the election were being held today, for whom would you vote?" In
making a forecast, pollsters or analysts must make a decision about how to deal
with the respondents who say they are undecided, a pool of voters that varies
based on the timing of the poll and the methodology used (Fenwick et al. 1982).
Hoek and Gendall (1997) found that reducing the proportion of undecided vot-
ers through various assignment mechanisms did not necessarily improve the
accuracy of estimates. Thus, while it is widely recognized that undecided
respondents contribute to polling error, there is still no consensus about what
to do with them.
Respondents are another source of error in pre-election polls. An accurate
election prediction relies on respondents providing honest answers to the turn-
out and vote intention questions. Extensive research has documented overre-
porting of turnout (and turnout intention), primarily the result of social
desirability bias (e.g., Belli et al. 1999). In 2008, the presence of an African
American on the ticket increased concern that respondents would lie to pollsters
about their vote preference. Research on previous elections had found evidence of a "Bradley effect," in which pre-election polls overestimate support for a black candidate
because white voters tell pollsters they are undecided or will support the black
candidate when they do not intend to do so. In the end, research found no ev-
idence that polls systematically overestimated Obama support (Hopkins 2009);
in fact, polls were more likely to underestimate support for Obama, likely
reflecting higher turnout among groups often not considered likely voters
(Silver 2008). Future research should consider the variety of other reasons that
respondents might give incomplete or untruthful answers to the vote choice
questions, such as privacy concerns or respondent competence.
Polling predictions can also be jeopardized by individuals changing their
minds about their turnout and vote intention between the time of the survey
interview and Election Day. Although scholarly research often emphasizes
the stability of vote intention, panel data have shown that more than 40 percent
of respondents change their vote intention at least once during the campaign
(Hillygus and Shields 2008). There remains debate, however, about the source
of these individual-level dynamics. Gelman and King (1993) argued that move-
ments in poll numbers reflect predictable movement toward the fundamentals,
but others have shown that specific campaign events produce movements in the
polls (Johnston et al. 1992). In an analysis of the dynamics of pre-election poll-
ing, Wlezien and Erikson (2002) attributed as much as 50 percent of the var-
iability in poll numbers simply to sampling error, but they also found that
campaign shocks produced real movements: early in the campaign the effects
dissipated quickly, but there were smaller, persistent shocks late in the cam-
paign. There remains much to be learned about who in the electorate is most
likely to change their minds, when they are most likely to do so, and in response
to what stimuli, and such findings will have clear implications for election fore-
casting. Voter instability is considered the primary explanation for the polling
debacle of 1948. Pollsters called the election for Dewey weeks before the elec-
tion, but a sizeable chunk of voters changed their vote and turnout intention in
the final weeks, and they overwhelmingly supported Truman (Crespi 1988). To
minimize sources of error, pollsters now continue to do election polling as late
as the night before the election, and it is this final poll that is used in the post-
election assessments of polling accuracy.[5]
Unfortunately, it is often difficult to attribute differences in predictions across
pollsters to any one factor because of the sheer number of design decisions that
are made by each survey firm. One of the greatest obstacles to a better under-
standing of variation in polling predictions is the lack of methodological trans-
parency. An American Association for Public Opinion Research (AAPOR)
committee examining the performance of polls in the 2008 primaries ultimately
concluded that they lacked sufficient information to fully assess what went
wrong in New Hampshire (Traugott et al. 2009). The experience of the
5. Since 1997, the National Council on Public Polls (NCPP) has conducted post-election analyses
of published pre-election polls, available at http://www.ncpp.org.
committee helped invigorate a new transparency initiative that urges disclosure
of methodological decisions. AAPOR president Peter Miller explained:
Despite decades of work, transparency in public opinion and survey re-
search remains an elusive goal. Often it remains too difficult to get in-
formation about how surveys are conducted. Too many researchers do
not know how to document their work, or are reluctant to disclose their
methods for fear of criticism by non-transparent competitors. Too many
significant questions about survey practice remain unaddressed because
getting information about how studies are done is onerous or impossible.
Too many members of the public have become cynical about survey re-
search because they do not understand how different methods underlie
conflicting findings.[6]
In a world in which there is more variation in polling methodologies, it is
more important than ever for survey quality to be evaluated. And this is only
possible with methodological transparency. The importance of transparency
was recently highlighted by two separate cases in which polling firms were
found to have made up or manipulated released survey results during the
2008 election.[7]
Polling Aggregation
In recognition that individual poll results are subject to random sampling error and
any potential biases introduced by a firm's particular polling methodology, it has
become popular to aggregate across many different polls. The widespread avail-
ability of poll numbers online has made it easy to do polling aggregation to fore-
cast election outcomes.[8]
Online poll aggregators include FiveThirtyEight.com,
Pollster.com, the Princeton Election Consortium, and RealClearPolitics.com. Nate
Silver's FiveThirtyEight.com, in particular, was a popular sensation in the 2008
campaign; his website reported 3.63 million unique visitors, 20.57 million site
visits, and 32.18 million page views in October alone.
6. http://www.aapor.org/AM/Template.cfm?Section=Transparency_Initiative&Template=/CM/ContentDisplay.cfm&ContentID=3862.
7. The liberal blog DailyKos discovered that weekly polling results they had paid for and
featured from the organization Research 2000 (R2K) were "largely bunk." For more complete dis-
cussion of the controversy and evidence, see http://www.dailykos.com/storyonly/2010/6/29/880185/
-More-on-Research-2000. In another case, blogger Nate Silver of fivethirtyeight.com concluded that
pollster Strategic Vision LLC was "disreputable and fraudulent"; see http://www.fivethirtyeight.com/
search/label/strategic%20vision.
8. Prior to free online resources, Polling Report, a compilation of raw poll numbers, was distributed for a fee.
Aggregating polls helps reduce volatility in polling predictions. Although
there is the tendency for news organizations to focus great attention on every
movement up or down in the polls, as Jackman (2005) noted, "media-
commissioned polls employ sample sizes that are too small to reliably detect
the relatively small day-to-day or week-to-week movements in voter sentiment
we would expect to occur over an election campaign" (p. 500). Pooling polls improves the precision of polling estimates: a larger sample size has a smaller
margin of error. But there are multiple methods for aggregating polls, and it is
not yet clear which one is best.
Some aggregations simply take the average of all available poll numbers.
Yet, naïvely pooling across polls ignores house effects, that is, all of the methodological decisions made by a particular survey firm. Unfortunately, this approach
might not improve accuracy. Simple averaging assumes that the various sources
of bias in the individual poll numbers will cancel each other out, but if biases
work in the same direction, aggregation will not reduce bias. For example, if all
telephone polls systematically miss cell-phone-only households, shown to be
more Democratic-leaning, the aggregation of these polls might produce an es-
timate that is worse than an individual poll.
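A minimal sketch of the pooling logic, assuming simple random samples and ignoring house effects entirely (exactly the simplification warned about above):

```python
import math

def pool_polls(polls):
    """Combine polls by weighting each estimate by its sample size.

    polls: list of (share_for_candidate, sample_size) tuples; assumes simple
    random samples with no house effects, an assumption that rarely holds.
    """
    total_n = sum(n for _, n in polls)
    pooled_p = sum(p * n for p, n in polls) / total_n
    moe = 1.96 * math.sqrt(pooled_p * (1 - pooled_p) / total_n)
    return pooled_p, moe

# Three hypothetical polls of the same race
polls = [(0.52, 800), (0.49, 1000), (0.51, 600)]
share, moe = pool_polls(polls)
print(round(share, 3), round(moe, 3))  # a pooled estimate with a tighter margin of error
```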
Other polling aggregations employ various analytic models that attempt to
account for house effects. Some aggregators will exclude certain polls or weight
them according to some criteria, for example excluding partisan polls or those
deemed to be lower quality. Fivethirtyeight.com weights each poll based on the
"pollster's historical track record, sample size, and recentness of the poll."[9] Still
others have used algorithms that incorporate historical trends or other sources of
information (e.g., Linzer 2011). Unfortunately, since we cannot assess house
effects until after the election, many studies end up making the strong assump-
tion that the average of all polls is a reasonable gauge for house effects (e.g.,
Jackman 2005). Popular websites have dominated polling aggregation in recent
years, but it seems likely that public opinion scholars will increasingly weigh in
to help identify the most reasonable methods for combining polls.
STATE VS. NATIONAL POLLING
Also reflecting improvements in polling availability and accessibility, many
polling predictions have recently shifted from national-level to state-level anal-
yses. Polling forecasts have historically focused on national-level estimates of
the two-party popular vote even though presidential elections are decided by the
Electoral College.[10]
The 2000 election offered a stark reminder of this fact; it
became irrelevant that many pollsters had correctly predicted Gore to be the
9. "FAQ and Statement of Methodology: FiveThirtyEight.com," FiveThirtyEight.com, June 9,
2008.
10. Limited data availability meant that the occasional state-level election forecasts had especially
noisy estimates (e.g., Holbrook and DeSart 1999).
winner of the popular vote because Bush won the Electoral College. With
changes in polling technology and costs, the ease of Internet accessibility,
and improvements in statistical and computing power, state-level forecasts have
since become routine.[11]
According to NCPP analyses, there were 743 state-
level polls in the last two weeks of the 2008 election, compared to 254 during
the same time span in 2004 (the first year they evaluated the accuracy of state-
level polls).[12]
Not every state is polled consistently, especially in less compet-
itive states and earlier in the campaign, but forecasts based on state-level polls
were updated on an almost daily basis on Pollster.com and 538.com during the
2008 campaign. It seems clear that the future of poll-based election forecasts lies in
aggregations of state-level polls to make Electoral College predictions, but there
remains much to be learned about the best methods for aggregating and the best
metrics for assessing their accuracy.
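One generic way to turn state-level estimates into an Electoral College forecast is to simulate outcomes from per-state win probabilities; the probabilities and state groupings below are hypothetical, and the independence assumption is a strong simplification rather than the method of any particular aggregator:

```python
import random

def simulate_electoral_college(states, n_sims=10000, needed=270):
    """Monte Carlo Electoral College forecast.

    states: dict mapping state -> (probability candidate wins, electoral votes).
    Returns the share of simulations in which the candidate reaches `needed` votes.
    Assumes state outcomes are independent, a strong simplification.
    """
    wins = 0
    for _ in range(n_sims):
        ev = sum(votes for p, votes in states.values() if random.random() < p)
        if ev >= needed:
            wins += 1
    return wins / n_sims

# Hypothetical probabilities and electoral votes for three battleground states,
# with the rest of the map assumed fixed at 250 votes for the candidate.
battlegrounds = {"StateA": (0.6, 20), "StateB": (0.5, 15), "StateC": (0.4, 10)}
prob = simulate_electoral_college(
    {**battlegrounds, "SafeStates": (1.0, 250)}, n_sims=20000
)
print(round(prob, 2))  # roughly 0.68 under these made-up inputs
```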
On the one hand, state-level forecasts offer a much higher bar for assessing
the accuracy of individual pollsters since there are 51 predictions to be made,
rather than one. Certainly, we see much greater variability in state-level polls,
which do not converge toward the end of the campaign to the same extent as
national polls. On the other hand, many pollsters do state-level polling only in
a handful of states, and some states are more difficult to predict than others,
making it difficult to know the best way to evaluate the accuracy of a given
pollster. IVR and Internet polling methods also make up a greater proportion
of state-level polls (just 53.3 percent were telephone polls with live interviewers in 2008), raising additional questions about the data quality underlying
state-based predictions. As state-based forecasts become more common in the
coming years, researchers should consider whether the traditional metrics used
to evaluate national pre-election polling accuracy remain applicable and infor-
mative.
For example, should accuracy assessments be made based on the final, Elec-
tion Eve polls? It is well established that the accuracy of poll-based predictions
improves closer to Election Day, as the number of undecided voters declines and
vote preferences stabilize. For example, early polling in the 1992 election
showed George H. W. Bush with a healthy lead, but polls converged in pre-
dicting Clinton the winner by the end of the campaign. But it is perhaps less
interesting to get the forecast correct just before the actual outcome is known.
11. Realclearpolitics.com began tracking state polls in the 2004 election, making aggregation sites
like pollster.com and 538.com possible; before that, Polling Report compiled many different po-
litical polls and made hard copies available for a subscription fee.
12. The increase in state-level polls has also led to increased efforts to predict lower-level races and
primary elections (e.g., Bafumi et al. 2010), which are often more difficult because of the lower
turnout and higher number of undecided voters. These two factors were ultimately considered
the key reasons for mistaken predictions in the New Hampshire primary; the number of candidates,
the difference in eligibility across states, and the absence of a party identification cue for voting all
affected turnout in the primaries, which in turn affected polling accuracy (Traugott et al. 2009).
Moreover, the number of early voters, estimated at one-third of the electorate in 2008, further weakens the utility of an Election Eve "prediction." As an
alternative, especially early in the campaign, some have turned to macroeco-
nomic statistical models or prediction markets to make election forecasts.
BEYOND POLLS: MACROECONOMIC MODELS
Beginning in the 1970s, academics developed macroeconomic statistical mod-
els using non-polling aggregate data to predict election outcomes (e.g., Fair
1978), and since the 1990s, competing model-based forecasts of the two-party
presidential vote have been routinely published by Labor Day before an election
(Lewis-Beck 2005). Most statistical models include measures of government
and economic performance, though there is debate as to the specific economic
indicator to be used, whether GDP growth (Abramowitz 2004), job growth (Lewis-Beck and Tien 2004), inflation rate (Lewis-Beck 2005), or perceptions of personal finances (Holbrook 2004), and about the inclusion of other variables, like polling numbers and war support, in the statistical model.[13] All,
however, are based on the basic theory that voters reelect incumbents in good
times and they kick the bums out of office in bad times (Fiorina 1981).
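The core structure of such a model can be sketched as a bivariate regression of the incumbent party's two-party vote share on an election-year economic indicator; the numbers below are placeholders for illustration, not actual historical data or any published model's specification:

```python
import numpy as np

def fit_fundamentals_model(growth, vote_share):
    """Least-squares fit of incumbent-party vote share on election-year economic growth."""
    X = np.column_stack([np.ones(len(growth)), growth])
    coefs, *_ = np.linalg.lstsq(X, np.array(vote_share), rcond=None)
    return coefs  # [intercept, slope]

def predict(coefs, growth_this_year):
    return coefs[0] + coefs[1] * growth_this_year

# Placeholder values (growth in percent, incumbent two-party vote share in percent)
growth = [3.0, -1.0, 2.0, 4.0, 0.5]
vote_share = [53.0, 46.0, 50.5, 55.0, 48.0]
coefs = fit_fundamentals_model(growth, vote_share)
print(round(predict(coefs, 2.5), 1))  # forecast for a hypothetical 2.5% growth year
```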
Election forecasters argue that statistical predictions should outperform other
election predictions because they are rooted in a theory about voter behavior.
Lewis-Beck (2005) argued that other forecasting approaches, such as poll- and
market-based predictions, "are not based on any theory of the vote. Instead,
they are merely providing point estimates on a dependent variable. ... It is
my belief that, in the long run, the statistical modeling approach, because it
draws on voting theory, will yield a better performance" (p. 148).
One criticism of these models is that, like the national-poll-based forecasts,
they typically predict the two-party popular vote rather than the Electoral Col-
lege outcome. Given the two-party system of the United States, the popular vote
typically falls within a rather narrow range of values. A naïve prediction based
on a coin toss would predict a 50-percent vote share, which gets pretty close to
the right answer in many election years. Another criticism is that once we ac-
count for the confidence intervals around the point estimate, it becomes evident
that most models predict a wide range of possible outcomes, including victory
by the opposing candidate (Lewis-Beck 2005). There are only a handful of pres-
idential elections for which the necessary aggregate data are available to esti-
mate the statistical models, so predictions are inherently imprecise. Moreover,
according to Greene (1993), the tendency for models to be fitted to previous
outcomes (that is, selecting the model specification based on past elections) means
that the models underestimate the true level of uncertainty. Vavreck (2009)
argued that economic models have sometimes failed because they have not taken
13. See Lewis-Beck (2005) for a more thorough discussion of the debates within the literature.
into account the content of the campaign, especially the candidates' messages
about the economy and their attention to other issues.
Reflecting these issues, the track records for individual models are highly
variable; for example, Ray Fair's model had one of the worst predictions in
1992 despite a previous history of successful forecasts (Greene 1993). Nonethe-
less, it is clear that there are regularities in presidential races that help in pre-
dicting the outcome long before the polling numbers converge on the likely
winner. And there is clearly value in being able to make an early prediction,
provided that the appropriate amount of uncertainty in that estimate is reported.
With the increasing availability of state-level measures and more complex sta-
tistical techniques, the field is poised to make further improvements in the ac-
curacy and reliability of model-based forecasts of Electoral College outcomes
(e.g., Rigdon et al. 2009).
BEYOND POLLS: PREDICTION MARKETS
Another election-forecasting alternative is prediction markets, such as the Iowa
Electronic Market. With such betting markets, people buy and sell candidate
futures based on who they think will win the election; for example, on September
1, 2008, Obama futures were selling for $0.60, indicating that traders expected
Obama to win 60 percent of the vote in the November election. There was an
active and public (although often illegal) election-betting market through much
of U.S. history, but it was only recently that prediction markets have been used
with any regularity by academic forecasters.[14]
Relative to polls, the markets rely on very different mechanisms for making
a prediction. The key to an accurate poll-based prediction is a representative
sample of likely voters and truthful responses to the vote choice question.
In contrast, prediction markets aggregate the informed expectations about
the winner from those willing to put money behind their judgments. Traders
are not representative of likely voters (they tend to be young, male, well educated, and high income), and they do not even need to be eligible to vote (Berg
et al. 2001). Traders are just assumed to be making informed judgments.
There is considerable debate in the literature about the benefit of market-
based predictions compared to poll-based predictions. Berg et al. (2001) argued
that markets are more accurate than polls, but Erikson and Wlezien (2008)
showed that the advantage goes away when polls are combined and corrected
for a systematic bias. Rothschild (2009) found that when both polls and market
predictions were corrected for inherent biases, prediction market forecasts out-
performed aggregated polls earlier in the campaign and in more competitive
14. Rhode and Strumpf (2004) offer a fascinating account of historical betting markets, detailing that betting volume far exceeded the amount wagered in the Iowa Electronic Market today
and often involved individuals wagering large six-figure sums.
races. Not surprisingly, the accuracy of both improves the closer you get to
Election Day.
A related approach is to ask a sample of citizens their expectations about who
will win (Lewis-Beck and Tien 1999). Rothschild and Wolfers (2010) argued
that it is possible to forecast the winner by asking an expectations question ("Who do you think will win?") of the general public, even without a representative
sample survey. Although more research is necessary to understand the condi-
tions under which such an approach might or might not be accurate, it suggests
that the addition of an expectations question to pre-election polls could yield
a significant advance in the predictive power of polls.
Although prediction markets and statistical models could provide more ac-
curate election forecasts than polls in some circumstances, it is worth noting that
polls play an indirect role even here. Many statistical models incorporate pres-
idential approval numbers. And investors no doubt look to poll numbers in
making betting decisions.
Even if we found the perfect election forecast, whether from polls, markets, or models, it does little to help us understand the meaning of the election, what
issues mattered to the voters, or the dynamics of opinions. Although criticism is
most often directed toward tracking polls, election forecasts of all types con-
tribute to the horserace frenzy. As Rosenstiel (2005) argued, media coverage of
the horserace comes at the expense of reporting a candidate's record, policies, or
leadership qualities. That is not to say that election forecasting feeds curiosity
alone: the anticipated (and actual) electoral outcomes can affect economic
decision-making; election results have been shown to be associated with stock
prices and exchange rates (Herron et al. 1999). But we generally are interested
not only in predicting the election outcomes but also in explaining why
a particular candidate has prevailed. And for that task, forecasting models
and prediction markets offer little insight; many polls and surveys, however,
have real value.
Understanding Voter Behavior
Survey research is the primary tool for answering questions about electoral be-
havior, including political participation, voter decision-making, public opinion,
and campaign effects. Here again, there has been an evolution in our under-
standing of electoral behavior, and Public Opinion Quarterly has played a fun-
damental role. Some of the most impactful works in POQ have been not about
election prediction, but about explanations of electoral behavior, including the
role and influence of party identification (Belknap and Campbell 1951), voter
responsiveness to campaign information (Converse 1962), and the impact of the
media on voter decision-making (McCombs and Shaw 1972), to name a few.
In tracing the development of the voting behavior literature, from the sociologically grounded analysis of the Columbia School studies to the psychological
models of the Michigan School and the rational choice perspective (see Dalton
and Wattenberg 1993 for a nice overview), one of the most striking changes in
the field is the extent to which our answers to theoretical questions have been
shaped by methodological issues.
For one, it is now widely recognized that our ability to answer substantive
questions about an election is directly affected by data-quality issues. As just
one example, Burden (2000) found that turnout overreporting in the American
National Election Studies has gotten worse in recent decades because of declin-
ing response rates among individuals least likely to participate. And turnout
overreporting is itself a classic example of the dangers of measurement error.
In recent years, many other self-report measures have come under considerable
scrutiny as well. In a comparison of Nielsen estimates and self-reported news
exposure, Prior (2009) found that respondents exaggerate news exposure by
a factor of 3 on average, with young people overreporting by a factor of 8. Other
research has shown that respondents find it socially desirable to say they are
politically informed and engaged, in addition to independent, moderate, and
tolerant (Holbrook, Green, and Krosnick 2003). Importantly, these data-quality
issues matter. Bernstein, Chadha, and Montjoy (2001) found that overreporting
has exaggerated the impact of education, partisanship, and religiosity on turn-
out. Burden (2008) found that the gender gap in party identification shrank
when the question wording for party identification emphasized feelings rather
than thoughts. Other essays in this issue discuss in more detail the way that
sampling, mode, and questionnaire design directly impact our substantive con-
clusions. As more and more scholars are collecting original survey data to an-
swer questions about electoral politics, it becomes ever more important for us to
have a firm grasp on the way that data-quality issues can affect knowledge
claims.
A second methodological evolution in voting behavior research is the rec-
ognition that causal relationships are exceptionally difficult to establish using
surveys, especially cross-sectional surveys. The most prominent example is the
enduring relationship between party identification, issue preferences, and vote
choice. There has long been recognition of a strong relationship (Campbell et al.
1960), but the direction of the causal arrow remains unclear. Some research has
concluded that party identification is driven by issue positions (Fiorina 1981),
while other studies have argued that issue positions stem from party attachments
(Jackson 1975) or vote preferences (Page and Brody 1972). There is a chicken-
and-egg quality to the literature that has yet to be resolved, but political behavior
scholars are no longer naïve to the conundrum. In a recent essay, Bartels (2010)
observed the following about the development of voting behavior research:
The search for causal order in voting behavior seemed to have reached an
unhappy dead end ... few have been content to hope that new theories
and statistical wizardry might untangle the causal complexities that
emerged in the 1960s and 1970s. Instead, the most common impulse
among electoral analysts of the past quarter-century has been to change
the subject. Rather than building ever more complex and comprehensive
models of individual voting behavior, they have focused on more trac-
table questions. As a result, contemporary voting research has become
increasingly eclectic and opportunistic. (pp. 28–29)
One consequence of these methodological developments is that scholars are
increasingly being led away from surveys, especially cross-sectional surveys, in
answering substantive questions about electoral behavior. In an effort to make
causal claims, some scholars have turned to panel surveys that track the same
individual over time. For example, Hillygus and Shields (2008) found that, over
the course of a campaign, politically sophisticated individuals changed their
vote choice to be in line with their issue positions, even at the expense of party
attachments. While such panel analyses can shed light on vote dynamics, they
do not typically catch people before their initial policy and party attitudes are
formed, so they cannot fully unravel the causal relationships.
Other scholars have turned to experimental designs, where causal relation-
ships can be more cleanly established (e.g., Iyengar and Kinder 1987).[15] Field
experiments, in particular, have grown in popularity as a tool for gauging the
influence of campaign efforts on turnout or vote choice. Gerber and Green
(2001), for example, concluded that nonpartisan mobilization appeals by phone
did not increase turnout, in contrast to the conclusions of observational research
on party contact. Survey experiments, in which experimental designs are em-
bedded in opinion surveys by randomly assigning respondents alternative ver-
sions of questionnaire items, are increasingly popular because they combine the
generalizability of surveys with the causal leverage of an experiment. Survey
experiments have long been used in questionnaire design experiments, but they
are now increasingly the basis for testing substantive questions. For example,
Krysan (1998) used a survey experiment to show that whites' racial attitudes
vary based on the privacy of their expressed opinions.
Even those using a survey often supplement it with geospatial or adminis-
trative data. There is now a large literature on campaign effects that leverages
geographic variation in campaign intensity as a proxy for campaign exposure.
For example, Huber and Arceneaux (2007) found compelling evidence of the
persuasive effects of advertising by linking advertising data to individual-level
survey data, and then taking advantage of the fact that some media markets
overlap battleground and non-battleground states, exposing some voters to
higher levels of advertising than the candidate intended.
Other observational research relies on data about individuals collected
not as a result of an interview, but from supplemental data sources, such as
15. For a more detailed history of the experimental research in political science, see Druckman et al.
(2006).
administrative records or other electronic databases. We live in an information
environment in which data are compiled when we surf the Web, subscribe to
a magazine, swipe our credit card, register to vote, and so on, and this infor-
mation can be used to validate or substitute for survey measures. For example,
Meredith (2009) relied on vote history in the voter registration files to examine
the habitual nature of voting behavior. Mathiowetz (1992) used employment
records to validate occupational self-reports in the Current Population Survey.
King (2009) offered the following prediction about the future of voting behav-
ior research:
Instead of trying to extract information from a few thousand activists'
opinions about politics every two years, in the necessarily artificial con-
versation initiated by a survey interview, we can use new methods to mine
the tens of millions of political opinions expressed daily in published
blogs. Instead of studying the effects of context and interactions among
people by asking respondents to recall their frequency and nature of social
contacts, we now have the ability to obtain a continuous record of all
phone calls, e-mails, text messages, and in-person contacts among a much
larger group. (p. 93)
In sum, public opinion scholars are increasingly cognizant of the important link
between substantive questions of interest and the strengths and weaknesses of the
methodologies available for answering them. And it is this recognition that has
led many scholars to alternative measures of voter attitudes and behaviors.
Candidates and Polling
We see a similar trend in looking at the role of political polling by candidates,
parties, and interest groups. Whereas polling was once the primary way to
gauge the preferences of the public, today campaigns are increasingly relying
on consumer and political databases instead of polling alone (Jacobs and
Shapiro 2005).
Before public opinion polling took hold in presidential campaigns (and even-
tually campaigns for offices at almost all levels), candidates relied on the local
party structure to assess the wants and needs of the electorate (Kernell 2000). By
the 1960s, public opinion polls were central to campaign strategy, used to
determine which issues to emphasize, to test messages, and to identify persuad-
able voters. Polls became an integral part of a president's campaign (and term in
office), offering an independent read of public opinion and thus autonomy from
Congress, the media, and political parties (Eisinger 2003). Jacobs and Shapiro
(1995) traced how decisions within the Nixon administration led to the institutional
development of a White House polling apparatus, and Druckman and Jacobs
(2006) showed how those polls shaped strategic policy decisions.
Given changes in the information environment and computing power, can-
didates today no longer have to generalize from a sample survey to the broader
population: they now have information about the entire population. The
political parties have built enormous databases that contain information about
every registered voter in the United States. Statewide, electronic voter registra-
tion files, mandated by the 2002 Help America Vote Act, are the cornerstone
of these databases. These files typically include a person's name, home address,
turnout history, party registration, phone number, and other information, and
are available to parties and candidates (and, in most states, anyone else who
wants them). Consumer, census, political, and polling data are then merged into
these files to better predict who is going to turn out, what their beliefs and
attitudes are and, ultimately, how they are going to vote.
This development has influenced not only how candidates communicate with
citizens, but also whom they are contacting and what they are willing to say.
Candidates are able to more efficiently target their resources to particular sub-
sets of the electorate. In doing so, they are particularly likely to ignore those
individuals not registered to vote, exacerbating inequalities in political infor-
mation and political engagement. Hillygus and Shields (2008) showed, for in-
stance, that direct mail, phone calls, and personal visits were directed toward
registered voters with an active vote history. Candidates also narrow-cast dif-
ferent messages to different segments of the electorate (Jacobs and Shapiro
2005). In these targeted communications, candidates are taking positions on
more issues and more divisive issues than in broadcast messages. In 2004,
for instance, the presidential candidates took positions on more than 75 different
policy issues in their direct mail, including wedge issues like abortion, gay mar-
riage, and stem cell research that were not mentioned in television advertising
(Hillygus and Shields 2008). Polls are still used for assessing a candidate's
standing in the race, but they are just one piece of campaign strategy planning,
rather than the foundation.
Conclusion
In the 50th-anniversary issue of Public Opinion Quarterly, Philip Converse
observed that "From the very outset in the 1930s, public opinion polling has been closely wedded to the study of popular democratic politics" (1987,
p. S12). Election polls are used to predict election outcomes and interpret
the meaning of the results. They are the basis for campaign strategy by candi-
dates, parties, and interest groups. They are the primary tool that academics and
journalists use to understand voting behavior. At the same time, however, there
has been a noticeable decline in the prominence of polls in election politics and
scholarship. In forecasting the election, statistical models and prediction mar-
kets appear to be viable alternatives to polling predictions, especially early in
the campaign. In understanding voting behavior, surveys are increasingly
replaced by experimental designs or alternative measures of attitudes and
behaviors. In campaign strategy, polls increasingly play second fiddle to massive databases built from voter files and consumer records, changing the campaign
messages that we see. On the one hand, it seems surprising that the impact
of polls might well be declining at the same time their numbers are sharply
increasing. On the other hand, that proliferation may well be the reason for the decline. With the pro-
liferation in polls, we have also seen greater variability in the methodologies
used and the quality of the data. The lack of transparency about those meth-
odologies has contributed to skepticism about the industry. Coupled with
changes in technology and the information environment, it is perhaps no won-
der that polls have lost some of their luster.
References
Abramowitz, Alan. 2004. "When Good Forecasts Go Bad: The Time-for-Change Model and the 2004 Presidential Election." PS: Political Science and Politics 37(4):745–46.
Bafumi, Joseph, Robert S. Erikson, and Christopher Wlezien. 2010. "Ideological Balancing, Generic Polls, and Midterm Congressional Elections." Journal of Politics 72:705–19.
Bartels, Larry. 2010. "The Study of Electoral Behavior." In The Oxford Handbook of American
Elections and Political Behavior, edited by Jan E. Leighley. Oxford, UK: Oxford University
Press.
Belknap, George, and Angus Campbell. 1951. "Party Identification and Attitudes Toward Foreign Policy." Public Opinion Quarterly 15:601–23.
Belli, Robert F., Michael W. Traugott, Margaret Young, and Katherine A. McGonagle. 1999. "Reducing Vote Overreporting in Surveys." Public Opinion Quarterly 63:90–108.
Berg, Joyce, Robert Forsythe, Forrest Nelson, and Thomas Rietz. 2001. "Results from a Dozen Years of Election Futures Market Research." In Handbook of Experimental Economic Results,
edited by Charles Plott and Vernon Smith, 742–51. Amsterdam: Elsevier.
Bernstein, Robert, Anita Chadha, and Robert Montjoy. 2001. "Overreporting Voting: Why It Happens and Why It Matters." Public Opinion Quarterly 65:22–44.
Blumenthal, Mark. 2005. "Toward an Open-Source Methodology: What We Can Learn from the Blogosphere." Public Opinion Quarterly 69(5):655–69.
———. 2008. "NCPP's Report on Pollster Performance." Pollster, http://www.pollster.com/blogs/ncpps_report_on_pollster_perfo.php?nr=1.
Burden, Barry. 2000. "Voter Turnout and the National Election Studies." Political Analysis 8:389–98.
———. 2008. "The Social Roots of the Partisan Gender Gap." Public Opinion Quarterly 72:55–75.
Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald E. Stokes. 1960. The American
Voter. New York: John Wiley & Sons.
Converse, Jean. 1987. Survey Research in the United States: Roots and Emergence 1890–1960.
Berkeley and Los Angeles: University of California Press.
Converse, Philip. 1962. "Information Flow and Stability of Partisan Attitudes." Public Opinion
Quarterly 26(4):578–99.
———. 1987. "Changing Conceptions of Public Opinion in the Political Process." Public Opinion
Quarterly, Vol. 51, Part 2: Supplement: 50th-Anniversary Issue: S12–24.
Crespi, Irving. 1988. Pre-Election Polling: Sources of Accuracy and Error. New York:
Russell Sage Foundation.
Crespi, Irving, and Dwight Morris. 1984. "Question Order Effect and the Measurement of Candidate Preference in the 1982 Connecticut Elections." Public Opinion Quarterly 48:578–91.
Crossley, Archibald M. 1937. "Straw Polls in 1936." Public Opinion Quarterly 1(January):24–35.
Dalton, Russell, and Martin Wattenberg. 1993. "The Not So Simple Act of Voting." In The State of the Discipline, edited by Ada Finifter. Washington, DC: American Political Science Association.
Durand, Claire, Andre Blais, and Mylene Larochelle. 2004. "The Polls in the 2002 French Presidential Election." Public Opinion Quarterly 68:602–22.
Druckman, James, Donald Green, James Kuklinski, and Arthur Lupia. 2006. "The Growth and Development of Experimental Research in Political Science." American Political Science Review
100:627–35.
Druckman, James, and Larry R. Jacobs. 2006. "Lumpers and Splitters: The Public Opinion Information That Politicians Collect and Use." Public Opinion Quarterly 70(4):453–76.
Eisinger, Robert M. 2003. The Evolution of Presidential Polling. Cambridge, UK: Cambridge
University Press.
Erikson, Robert S., Costas Panagopoulos, and Christopher Wlezien. 2004. "Likely (and Unlikely) Voters and the Assessment of Campaign Dynamics." Public Opinion Quarterly 68(4):588–601.
Erikson, Robert S., and Christopher Wlezien. 2008. "Are Political Markets Really Superior to Polls as Election Predictions?" Public Opinion Quarterly 72:190–215.
Fair, Ray C. 1978. "The Effect of Economic Events on Votes for President." Review of Economics
and Statistics 60(April):159–73.
Fenwick, Ian, Frederick Wiseman, John Becker, and James Heiman. 1982. "Classifying Undecided Voters in Pre-Election Polls." Public Opinion Quarterly 46:383–91.
Fiorina, Morris. 1981. Retrospective Voting in American Elections. New Haven, CT: Yale
University Press.
Gelman, Andrew, and Gary King. 1993. "Why Are American Presidential Election Campaign Polls So Variable When Votes Are So Predictable?" British Journal of Political Science
23(October):409–51.
Gerber, Alan S., and Donald P. Green. 2001. "Do Phone Calls Increase Voter Turnout? A Field Experiment." Public Opinion Quarterly 65:75–85.
Greene, Jay P. 1993. "Forewarned Before Forecast: Presidential Election Forecasting Models and the 1992 Election." Political Science and Politics 26:17–21.
Groves, Robert M., Floyd J. Fowler, Mick P. Couper, James M. Lepkowski, and Eleanor Singer.
2009. Survey Methodology. 2nd ed. Hoboken, NJ: John Wiley and Sons.
Herbst, Susan. 1993. Numbered Voices: How Opinion Polling Has Shaped American Politics.
Chicago: University of Chicago Press.
Herron, Michael C., James Lavin, Donald Cram, and Jay Silver. 1999. "Measurement of Political Effects in the United States Economy: A Study of the 1992 Presidential Election." Economics and
Politics 11:51–81.
Hillygus, D. Sunshine, and Todd Shields. 2008. The Persuadable Voter. Princeton, NJ: Princeton
University Press.
Hoek, Janet, and Philip Gendall. 1997. "Factors Affecting Poll Accuracy: An Analysis of Undecided Respondents." Marketing Bulletin 8:1–14.
Holbrook, Allyson, Melanie C. Green, and Jon A. Krosnick. 2003. "Telephone versus Face-to-Face Interviewing of National Probability Samples with Long Questionnaires: Comparisons of Respondent Satisficing and Social Desirability Response Bias." Public Opinion Quarterly 67:79–125.
Holbrook, Thomas, and Jay DeSart. 1999. "Using State Polls to Forecast Presidential Election Outcomes in American States." International Journal of Forecasting 15:137–42.
Holbrook, Thomas M. 2004. "Good News for Bush? Economic News, Personal Finances, and the 2004 Presidential Election." Political Science and Politics 37(4):759–61.
Hopkins, Daniel J. 2009. "No More Wilder Effect, Never a Whitman Effect: Why and When Polls Mislead About Black and Female Candidates." Journal of Politics 71(3):769–81.
Huber, Greg, and Kevin Arceneaux. 2007. "Identifying the Persuasive Effects of Presidential Advertising." American Journal of Political Science 51(4):957–77.
Iyengar, Shanto, and Donald R. Kinder. 1987. News That Matters: Television and American
Opinion. Chicago: University of Chicago Press.
Jackman, Simon. 2005. "Pooling the Polls over an Election Campaign." Australian Journal of Political Science 40(4):499–517.
Jackson, John E. 1975. "Issues, Party Choices, and Presidential Votes." American Journal of Political Science 19:161–85.
Jacobs, Lawrence R., and Robert Y. Shapiro. 1995. "The Rise of Presidential Polling: The Nixon White House in Historical Perspective." Public Opinion Quarterly 59:163–95.
———. 2005. "Polling Politics, Media, and Election Campaigns." Public Opinion Quarterly 69(Special Issue No. 5):635–41.
Johnston, Richard, André Blais, Henry E. Brady, and Jean Crête. 1992. Letting the People Decide: Dynamics of a Canadian Election. Stanford, CA: Stanford University Press.
Kernell, Samuel. 2000. "Life Before Polls: Ohio Politicians Predict the Presidential Vote." Political Science and Politics 33:569–74.
King, Gary. 2009. "The Changing Evidence Base of Social Science Research." In The Future of Political Science: 100 Perspectives, edited by Gary King, Kay Schlozman, and Norman Nie. New York: Routledge.
Krysan, Maria. 1998. "Privacy and the Expression of White Racial Attitudes: A Comparison Across Three Contexts." Public Opinion Quarterly 62:506–44.
Lau, Richard R. 1994. "An Analysis of the Accuracy of 'Trial Heat' Polls in the 1992 Presidential Election." Public Opinion Quarterly 58(Spring):2–20.
Lewis-Beck, Michael. 2005. "Election Forecasting: Principles and Practice." British Journal of Politics and International Relations 7(2):145–64.
Lewis-Beck, Michael, and Charles Tien. 1999. "Voters as Forecasters: A Micromodel of Election Prediction." International Journal of Forecasting 15:175–84.
———. 2004. "Jobs and the Job of President: A Forecast for 2004." Political Science and Politics 37(4):753–58.
Linzer, Drew. 2011. "Dynamic Bayesian Forecasting of Presidential Elections in the States." Working Paper, Emory University.
Martin, Elizabeth A., Michael W. Traugott, and Courtney Kennedy. 2005. "A Review and Proposal for a New Measure of Poll Accuracy." Public Opinion Quarterly 69(Autumn):342–69.
Mathiowetz, Nancy. 1992. "Errors in Reports of Occupations." Public Opinion Quarterly 56:352–55.
McCombs, Maxwell, and Donald Shaw. 1972. "The Agenda-Setting Function of Mass Media." Public Opinion Quarterly 36:176–87.
Meredith, Marc. 2009. "Persistence in Political Participation." Quarterly Journal of Political Science 4(3):186–208.
Mokrzycki, Michael, Scott Keeter, and Courtney Kennedy. 2009. "Cell-Phone-Only Voters in the 2008 Exit Polls and Implications for Future Noncoverage Bias." Public Opinion Quarterly 73(5):845–65.
Mosteller, Frederick, H. Hyman, P. McCarthy, E. Marks, and D. Truman. 1949. The Pre-Election
Polls of 1948: Report to the Committee on Analysis of Pre-Election Polls and Forecasts.
New York: Social Science Research Council.
Page, Benjamin I., and Richard A. Brody. 1972. "Policy Voting and the Electoral Process: The Vietnam Issue." American Political Science Review 66:979–95.
Panagopoulos, Costas. 2009. "Polls and Elections: Pre-Election Accuracy in the 2008 General Elections." Presidential Studies Quarterly 39(December):896–907.
Prior, Markus. 2009. "The Immensely Inflated News Audience: Assessing Bias in Self-Reported News Exposure." Public Opinion Quarterly 73(1):130–43.
Rhode, Paul W., and Koleman S. Strumpf. 2004. "Historic Presidential Betting Markets." Journal of Economic Perspectives 18(Spring):127–42.
Rigdon, S. E., S. H. Jacobson, W. K. T. Cho, E. C. Sewell, and C. J. Rigdon. 2009. "A Bayesian Prediction Model for the U.S. Presidential Election." American Politics Research 37:700–24.
Rosenstiel, Tom. 2005. "Political Polling and the New Media Culture: A Case of More Being Less." Public Opinion Quarterly 69(5):698–715.
Rothschild, David. 2009. "Forecasting Elections: Comparing Prediction Markets, Polls, and Their Biases." Public Opinion Quarterly 73(5):895–916.
Rothschild, David, and Justin Wolfers. 2010. "Forecasting Elections: Voter Intentions versus Expectations." Working Paper, University of Pennsylvania, http://assets.wharton.upenn.edu/~rothscdm/RothschildExpectations.pdf.
Silver, Nate. 2008. "Debunking the Bradley Effect." Newsweek, October 28.
———. 2009. "The Worst Pollster in the World Strikes Again." FiveThirtyEight.com, March 24. http://www.fivethirtyeight.com/2009/03/worst-pollster-in-world-strikes-again.html.
Smith, Tom W. 1990. "The First Straw? A Study of the Origins of Election Polls." Public Opinion Quarterly 54(1):21–36.
Squire, Peverill. 1988. "Why the 1936 Literary Digest Poll Failed." Public Opinion Quarterly 52:125–33.
Traugott, Michael. 2005. "The Accuracy of the National Pre-Election Polls in the 2004 Presidential Election." Public Opinion Quarterly 69(5):642–54.
Traugott, Michael, Glenn Bolger, Darren W. Davis, Charles Franklin, Robert M. Groves, Paul J. Lavrakas, Mark S. Mellman, Philip Meyer, Kristen Olson, J. Ann Selzer, and Chris Wlezien. 2009. "An Evaluation of the Methodology of the 2008 Pre-Election Primary Polls." AAPOR Ad Hoc Committee on the 2008 Presidential Polling.
Vavreck, Lynn. 2009. The Message Matters: The Economy and Presidential Campaigns. Princeton,
NJ: Princeton University Press.
Wand, Jonathan, Kenneth Shotts, Jasjeet S. Sekhon, Walter R. Mebane, Jr., Michael Herron, and Henry E. Brady. 2001. "The Butterfly Did It: The Aberrant Vote for Buchanan in Palm Beach County, Florida." American Political Science Review 95:793–810.
Wlezien, Christopher, and Robert S. Erikson. 2002. "The Timeline of Presidential Election Campaigns." Journal of Politics 64(4):969–93.