The Numbers Can Lie: Abusing Statistics in Political Debate
Few people would disagree that political debate has reached a new low in the United States in recent years. Ad hominem attacks and libelous allegations hit a fever pitch in the tension of the 2016 election, as political opponents tried to debase each other with attacks on character rather than on the details of proposed policies. Many concerned observers have called for an end to this kind of divisive back-and-forth, pushing instead for policy debates centered on verifiable facts, studies, and statistics. Unfortunately, over the last year there have been countless examples of political pundits and policymakers using real statistics in ways that manipulate or outright deceive their audiences into believing a reality the evidence does not support. On a variety of topics, from immigration to the minimum wage and tax policy, unworthy positions on both sides of the political aisle have been defended with such tactics. For the level of discourse in this country to truly improve, policy advocates and news consumers alike must become aware of the many ways in which “hard numbers” can be misleadingly framed. Those doing the problematic framing must be held accountable as well, since misleading statistics do just as much damage to the public discourse as false news.
One of the most commonly cited statistics in the world of policymaking is the poll number. Measuring the proportion of people, whether within a certain demographic or nationwide, who support a policy proposal or political figure is really more of an art than a science. Even in elections with highly predictive patterns and trends, like presidential elections, polls can gauge public opinion only within a wide margin of error. In primaries and special elections the accuracy is even worse, with errors verging on ten points. Unfortunately, this hasn’t stopped polls from being contorted in all manner of directions to push ideas that they do not prove. One of the most obvious examples can be seen in President Trump’s approval rating, a polling number that is supposed to show the percentage of the American people who approve of the job Donald Trump has done as POTUS. The President has contended that his average approval numbers, which show him polling at around 41%, are being kept artificially low by the media. To prove this, he has frequently pointed to the Rasmussen Reports Daily Tracking Poll, which typically reports an approval rating of about 46-50%, significantly higher than other highly regarded polls. In one tweet, the President posted an image of the poll, insisting that “#FakeNews likes to say we’re in the 30’s. They are wrong. Some people think numbers could be in the 50's.”
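To put the margin-of-error point in concrete terms, here is a minimal sketch of the standard 95% margin-of-error formula for a simple random sample, z·√(p(1−p)/n). The sample size and approval share below are hypothetical, and real polls carry additional design error on top of this theoretical floor:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 41% approval measured from 1,000 respondents.
p, n = 0.41, 1000
moe = margin_of_error(p, n)
print(f"Approval: {p:.0%} +/- {moe:.1%}")  # roughly +/- 3 points
```

Even under these idealized assumptions, a 1,000-person poll cannot distinguish 41% approval from 44%, which is one reason single-poll snapshots deserve caution.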
In this particular case, however, those who point to Rasmussen’s anomalous numbers are making a basic error. Of the 16-plus polls regularly aggregated by the most commonly cited sites, FiveThirtyEight and RealClearPolitics, Rasmussen’s is the only one that surveys “likely voters.” According to its website, this means that only those with a verified history of habitual voting, identified using “a series of screening questions… used to determine likely voters,” may respond to the poll. Only once their responses about “voting history, interest in the current campaign, and likely voting intentions” are cleared may they be counted in the daily poll. While this means the results are useful for gauging the President’s reelection chances, the restrictions compromise the poll’s use as a gauge of approval. National politicians are tasked with serving all Americans, not just those with a history of voting. This is why every other reputable approval pollster extends its poll to all adult citizens, as Marist and Ipsos do, or to all registered voters, as Fox News and Quinnipiac do. Understanding the basic tenets of polling is an important skill when trying to discern public opinion.
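A minimal sketch shows why the sampled population matters so much. All figures below are hypothetical, chosen purely to illustrate the mechanism: if habitual voters approve of a president at a higher rate than other adults, a likely-voter screen mechanically produces a higher topline number even though no individual opinion differs between the two polls:

```python
# Hypothetical subgroup approval rates and population shares.
approval = {"habitual_voters": 0.48, "other_adults": 0.35}
share_of_adults = {"habitual_voters": 0.55, "other_adults": 0.45}

# An all-adults poll weights each group by its share of the population.
all_adults = sum(approval[g] * share_of_adults[g] for g in approval)

# A likely-voter screen keeps only the habitual voters.
likely_voters = approval["habitual_voters"]

print(f"All adults:    {all_adults:.1%}")    # about 42%
print(f"Likely voters: {likely_voters:.1%}") # 48.0%
```

Neither number is “wrong”; they simply answer different questions, which is exactly why comparing them as if they measured the same thing is misleading.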
Misrepresenting the purpose of a specific poll is not the only mistake politicians make when using poll numbers as evidence. It is also important to ensure that the questions a poll asks match the purported conclusion. In February, the White House released a statement in support of an immigration reform proposal. The statement cited a poll that it claimed showed 84% support for its proposal to “end chain migration.” The poll, by Harvard-Harris, asked respondents whether they believed “immigration priority for those coming to the United States should be based on a person’s ability to contribute to America as measured by their education and skills… [instead of being] based on a person having relatives in the U.S.”
The virtues and vices of merit-based immigration aside, the White House’s proposal did not call for the policy the poll asked about. All the numbers showed was that Americans support rebalancing the immigration visa system in favor of skilled foreigners rather than relatives of those already in the U.S. The White House proposal went in a different direction, calling for family-based migration to be limited to “spouses and minor children,” even though the vast majority of family visas already go to these groups. The proposal also called for the small number of visas currently allotted to the diversity lottery system to be repurposed to alleviate the “backlog” of family and merit visas.
Nowhere did the proposal call for more merit-based immigration or for a rebalancing of visas. Clearly, the White House proposal went far beyond the support it claimed the polls showed. Its numbers could honestly have been used to support a system that repurposed half of family-based visas to a merit-based system, but the Trump administration did not call for this. Instead, it used a vaguely worded polling question to support a sweeping, radical change to the U.S. immigration system, even though the poll had little to do with the proposed changes. This instance of polling deception is just one example of dubiously relevant numbers being used to lend credence to a controversial political position.
Other mistakes occur when pundits cite studies to support their claims without verifying a study’s accuracy. For example, a December editorial in the Cavalier Daily arguing in favor of a higher minimum wage cited a “study” by the National Employment Law Project. When using academic studies to judge the merits of a policy proposal, it is typically wise to research the affiliation of the researching institution to uncover possible bias. In this case, the National Employment Law Project has been described as “a workers' rights group that researches policy for low-wage workers,” which colors its credibility on issues relating to wages. However, the organization’s labor affiliation alone does not mean its research cannot be trusted, so it is important to examine its findings on their merits.
The cited report found no correlation over the past seventy years between increases in the minimum wage and increases in joblessness, but admitted that it reached these results without conducting “an academic study that seeks to measure causal effects using techniques such as regression analysis,” the standard in empirical economic research. Rather, the study “assesses opponents’ claims about raising the minimum wage on their own terms by examining simple indicators and job trends.” This method was so simplistic that critics had little trouble finding significant holes in its findings. Using reputable studies with sound methodologies is an important facet of informed policy debate, and knowing what to look for when vetting those studies is another crucial skill that Americans need to learn.
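To make the methodological gap concrete, here is a minimal sketch, using entirely synthetic data, of why regression is the standard: when a confounder (here, local economic growth) drives both minimum-wage increases and hiring, the “simple indicator” of a raw trend or correlation can show the wrong sign, while a regression that controls for the confounder recovers the underlying effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic state-year panel with a confounder: economic growth drives
# both minimum-wage increases and employment gains.
growth = rng.normal(2.0, 1.0, n)                  # % GDP growth
wage_hike = 0.8 * growth + rng.normal(0, 0.5, n)  # min-wage change
employment = growth - 0.3 * wage_hike + rng.normal(0, 0.5, n)

# "Simple indicator": the raw correlation ignores the confounder and
# comes out positive, the opposite sign of the true built-in effect.
print("raw correlation:", np.corrcoef(wage_hike, employment)[0, 1])

# Regression controlling for growth recovers a coefficient near -0.3.
X = np.column_stack([np.ones(n), wage_hike, growth])
coefs, *_ = np.linalg.lstsq(X, employment, rcond=None)
print("wage-hike effect, holding growth fixed:", coefs[1])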
Furthermore, it’s important to recognize that in certain debates, high-quality studies and academic papers can genuinely disagree. In the minimum wage argument alone, an overview of recent peer-reviewed studies shows that economists can’t agree on whether minimum wage increases have a significant effect on employment. On subjects like these, such as automation and employment or Federal Reserve policy, it’s important to recognize that the academic literature is not settled, and policy debates will often have to move forward with inconclusive information. When it comes to more settled matters, like the consensus on tariffs or immigration, the study is a much more powerful tool of persuasion. As we move forward and face challenges that have not yet been confronted, policymakers will not always be able to lean on established literature to feel confident about a specific course of action.
Numbers, statistics, and research play a phenomenally important part in political debate. They are features of the debate atmosphere that have become woefully scarce in the modern day, and restoring their prominence should be a high priority. However, in the era of fake news and easy internet searching, separating the bad numbers and studies from the good is getting ever harder. Identifying different types of polls, recognizing when polling questions are poorly worded, and determining when a study lacks a rigorous method are just a few of the areas where even an honest debater can trip up. Systematically solving this problem will require concerned citizens to call out those responsible, which will only be possible if we spread awareness of this truly widespread issue.