Citizen surveys have been described as the bottom line in local government—delivering outcome metrics local leaders are in the business to achieve. But these kinds of surveys should not be mistaken for the election polls that left many around the world expecting a Clinton victory that never came. The wrong calls grew from results that were largely within the margins of uncertainty, both nationally and in the swing states, yet pollsters are remorseful about the miss even as the public wonders about their reliability.
That wonder or surprise could morph into distrust of citizen surveys. So before local government stakeholders start to worry about their own citizen surveys, it’s useful to take a moment to understand how fundamentally different political polls are from local government surveys.
While surveys and polls are in the same class—like mammals—they are by no means the same species—think dolphins vs. foxes. Citizen surveys collect evaluations of local government services and community quality of life; political polls predict voter turnout for incumbents or challengers.
Seven ways that citizen surveys are more trustworthy than political polls
- The most substantive difference between political polls and citizen surveys resides in the different purposes of the two, which result in fundamentally different methods. Citizen surveys deliver policy guidance, performance tracking, and planning insights based on current resident sentiment. Polls use surveys to prophesy a future outcome. While the base information is the same, polls apply statistical models to “guess” which demographic groups will vote and in what numbers. To emphasize the difference between survey results and poll conclusions, The New York Times gave the same survey results to four different pollsters and received four different predictions of the presidential victor.
- Political questions typically are burdened by strong emotional sway that influences respondents to give interviewers what respondents believe to be the “socially acceptable” response. In this election, many pundits on both sides felt that there were “shy” Trump supporters fearful of admitting that support in polls. This, it was speculated, is why the now President-elect did worse in telephone interview polls but better when responses could be given with no interviewer involvement (e.g., in robocalls or on the Web). In citizen surveys, the stakes are lower, with no pressure to provide an “acceptable” response. And if conducted using a self-administered questionnaire (mail or Web), citizen surveys avoid altogether the pressure for participants to inflate evaluations of community quality.
- Political polls influence votes and must account for voter gamesmanship, but there are no such forces at play for citizen surveys seeking evaluations of city services and community quality of life. As elections draw near, those favoring third-party candidates may change positions depending on the published poll results. For example, a supporter of the Libertarian or Green Party candidate may decide at the last minute to vote for a main party candidate because polls show the two-party race has tightened. Even voters of the major parties may switch candidates after the last poll is conducted. Citizen survey results often come just once every year or two, so there are no prior results to shift a respondent’s choices and no winners to choose.
- In political polls, some types of voters just won’t respond. Some analysts believe that, in the recent election, enmity toward the establishment (government and media, including the polls) kept many Trump voters from participating in election surveys. When the most passionate group favoring one candidate doesn’t respond to election polls, the polls underestimate support for that candidate. In citizen surveys, those who don’t respond tend to be less involved in the community. That’s not to say they have strongly different opinions about the community than those more involved. They simply have other priorities than taking a survey.
- Political poll responses are driven by values that tend to be polarized in the U.S. Citizen surveys are about observed community quality, so residents are not motivated by doctrinaire perspectives that whipsaw aggregate results depending on who participates. Those who participate in citizen surveys generally have similar perspectives to those who do not participate. So response rates, even if as low as those of polls, do not undermine the credibility of the citizen survey.
- Response rates for most telephone polls are much lower than response rates for citizen surveys conducted by mail. Typical phone response rates are about 9 percent these days, while well-conducted citizen surveys achieve response rates of 20 to 30 percent.
- Political polls must pick winners and losers, and those declarations occur within a generally modest margin of uncertainty. To stir excitement, talking heads usually ignore error ranges to name a winner who may as likely be a loser because the race is so close. Citizen surveys point public sector decision-makers toward larger differences, ones that are relevant to policy decisions. For example, whether 15 percent or 25 percent of residents give good ratings to street repair, government action may be required. The same is true for differences in survey sentiments over time or compared to other places. Properly interpreted, citizen survey results assist government leaders by steering them clear of small differences, whereas the lifeblood of media polls is to make Alps out of anthills.
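The first point above—that pollsters layer turnout models on top of the same raw survey data—can be sketched in a few lines. All of the numbers, group names, and turnout weights below are hypothetical illustrations, not figures from any real poll:

```python
# Minimal sketch: the same raw survey responses yield different
# "predictions" under different likely-voter turnout assumptions.
# All numbers here are hypothetical, for illustration only.

# Raw support for Candidate A within each (hypothetical) demographic group.
support_for_a = {"group_x": 0.60, "group_y": 0.40}

def predicted_share(turnout_weights):
    """Weight each group's support by its assumed share of the electorate."""
    total = sum(turnout_weights.values())
    return sum(support_for_a[g] * w for g, w in turnout_weights.items()) / total

# Two pollsters, two turnout models, two different calls from identical data.
model_1 = {"group_x": 0.5, "group_y": 0.5}   # assumes even turnout
model_2 = {"group_x": 0.4, "group_y": 0.6}   # assumes group_y turns out more

print(round(predicted_share(model_1), 3))  # 0.5  -> a toss-up
print(round(predicted_share(model_2), 3))  # 0.48 -> Candidate A trails
```

This is the mechanism behind the New York Times exercise mentioned above: identical responses, different turnout assumptions, different predicted victors.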
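The final point above—that a “winner” is often declared inside the margin of uncertainty—can be made concrete with the standard formula for the margin of error of a sample proportion. The sample size and percentages below are hypothetical:

```python
import math

# Minimal sketch: approximate 95% margin of error for a proportion from
# a simple random sample, and why a small lead can sit inside that margin.
# Sample size and percentages are hypothetical.

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 800                            # respondents
share_a, share_b = 0.48, 0.45      # 48% vs. 45%: a 3-point "lead"

moe = margin_of_error(share_a, n)
print(f"margin of error: +/- {moe:.1%}")   # roughly +/- 3.5%

# The 3-point gap is smaller than the margin, so naming a "winner"
# overstates what the data can support.
print(abs(share_a - share_b) < moe)        # True
```

The same arithmetic explains why citizen surveys focus on larger gaps, such as the difference between 15 percent and 25 percent giving good ratings, which stand well clear of sampling noise at typical sample sizes.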