How to Spot a Bullshit Poll in the News

When you read the news, it’s reassuring to see a journalist back up a claim with poll results. But not all polls are equally reliable. Fortunately, there are telltale signs that separate unreliable polls from the ones you can mostly trust. Yes, it’s a spectrum.

We spoke to John Cohen, one of the most qualified survey experts in the country. For the past four years, he has worked with emerging survey organizations as SurveyMonkey’s Principal Investigator. Before that, he was vice president of research at the Pew Research Center and, for eight years, director of polling at the Washington Post. Changes in technology, he said, have actually made surveys harder to conduct.

Problem

“Today we are in a much tougher position when evaluating polls than even ten years ago,” says Cohen. Calls to cell phones can cost twice as much as calls to landlines, so modern pollsters often have to turn to new methods. Many news outlets can’t afford to run their own polls, so they rely on third-party surveys. And there are other ways for a sloppy, inaccurate poll to make it into the media.

Meanwhile, experts in the field often disagree on polling methods, and some of the industry’s best-known standards are outdated. The National Council on Public Polls (NCPP) offers a list of “twenty questions a journalist should ask about poll results.” But as Cohen points out, the list hasn’t been updated in over a decade. Since then, the number of adults without Internet access has halved and the use of cell phones in polls has tripled, both of which dramatically change the landscape for pollsters.

Surveys are now conducted using all sorts of cheaper methods, including robocalls and various forms of online polls, which were considered suspect even back when calls from live interviewers were the gold standard. SurveyMonkey, which works with news organizations to conduct online surveys, randomly selects respondents from the 3 million daily users who take various surveys on its site. (They explain their methodology here.)

But, as Cohen admits, not all survey experts agree with SurveyMonkey’s methods. FiveThirtyEight’s pollster ratings give SurveyMonkey a C-. Cohen, meanwhile, thinks FiveThirtyEight rates some pollsters too highly, sometimes rewarding them for matching election results even when they simply got lucky. And that disagreement is between two organizations deeply invested in polling.

Survey experts also disagree on some basic principles, such as whether to interview a huge number of people representing the general population (and then filter the results), or to start with the specific population you want, such as registered voters or property owners. So the definition of a good poll can depend on which expert you ask. Still, experts tend to agree on several important factors, and you can use them to judge the survey results you see in the news.

Solution

As old as the twenty NCPP questions are, they are still a good starting point. They are aimed at journalists writing about polls, but now that more and more bad polls are making their way into media reports, the questions are useful to readers too. They include “Who paid for the poll?” and “What is the sampling error?”, along with questions about who was actually interviewed. (The NCPP explains what each answer means.)

Cohen offers several new questions: were the respondents chosen at random, or did they opt in to, say, a poll embedded in a CNN article? The latter greatly distorts the results, and if those respondents also skipped some of the demographic questions, the results are hard to correct. (Cohen says SurveyMonkey respondents tend to be more educated, which SurveyMonkey corrects for, but otherwise their demographics tend to match national census data.) So with any survey, and especially any online survey, check how the respondents were selected.

How many people were interviewed? While a well-conducted survey with a small sample can still be valuable, Cohen said, you won’t get reliable results with fewer than 100 respondents, and ideally you want far more.

So if you want to break down your results across multiple subgroups, you need to start by asking 1,000 people, or better yet 10,000. If you survey 500 people but only 60 of them are black, you cannot take meaningful results from those 60 respondents and pretend they represent the national average: if a single person changed their mind, your numbers would move by more than a percentage point. (Cohen says that for rare populations, pollsters can settle for as few as 75 respondents.)
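To make that arithmetic concrete, here is a minimal sketch, assuming a simple random sample and the usual normal approximation (real pollsters adjust further for weighting and survey design), of how the 95% margin of error shrinks with sample size:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (60, 136, 500, 1000, 10000):
    print(f"n = {n:>5}: +/- {100 * margin_of_error(n):.1f} points")
```

At 60 respondents the margin of error is roughly plus or minus 13 points, and one changed answer moves the estimate by 1/60, about 1.7 points; only around 10,000 respondents does it approach a single point. That is why subgroup results from small samples are so shaky.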

What questions were actually asked? According to Cohen, a reputable pollster will disclose the exact questions and the multiple-choice answers offered. Knowing them can completely change the meaning of the results.

During the Iraq War, Cohen said, one poll found that three-quarters of Americans wanted US troops withdrawn from Iraq within a year. But the poll, funded by an anti-war donor, was designed to produce that answer. Respondents had four options: should the troops leave within three months, within six months, within a year, or “stay as long as necessary”? There was no way to say, for example, “within two or three years.” While the results weren’t useless (they did show that a lot of people wanted to withdraw troops from Iraq), they didn’t mean what the headlines claimed. Which leads to the next question:

How are results reported? If so few details are provided about a survey that you can’t answer the other questions here, be suspicious. The best polling organizations, like Pew, the Post, and Quinnipiac University, are open about their methods, their data, and how they adjust that data. At the Post, Cohen’s team ran telephone surveys in parallel with online surveys to test how the survey method influenced the results; then, when they conducted a survey with just one method, they knew how to calibrate it.

There are also clear signs of unreliable polling. “If a survey is presented with decimal points, I reject it very quickly,” says Cohen. “No survey, no matter how it was conducted, can measure how people think to a tenth of a percentage point.” Check whether the polling organization is squeezing results out of tiny subsets of respondents, and whether it is exaggerating the significance of its findings.

During the 2007 Democratic primary campaign, Cohen’s team at the Washington Post was the first to report that many black voters were shifting from Clinton to Obama. They saw this in a sample of just 136 black respondents, but because they trusted their methodology, and because the shift was so pronounced, they were confident it wasn’t statistical noise. Had the shift been half as large, they wouldn’t have trusted the data enough to report it.
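The Post’s actual numbers aren’t given here, so the shares below are hypothetical, but the logic of that judgment call can be sketched with a rough two-proportion z-test: a large shift among 136 respondents clears 95% sampling noise, while half that shift does not.

```python
import math

def shift_is_signal(p1, p2, n, z=1.96):
    """Rough check: is a move from share p1 to share p2, each measured
    on n respondents, larger than 95% sampling noise? A simplified
    two-proportion z-test, not the full analysis a polling team runs."""
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return abs(p2 - p1) > z * se

# Hypothetical shares for illustration only:
print(shift_is_signal(0.35, 0.55, 136))  # True: a 20-point shift exceeds the noise
print(shift_is_signal(0.45, 0.55, 136))  # False: half that shift is lost in the noise
```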

What do other polls say? Cohen believes an imperfect poll can still be useful, especially when you compare it against other polls. A well-run telephone poll doesn’t necessarily have to be dismissed even if it was commissioned by a committed advocate.

The Republican and Democratic parties conduct opinion polls that tend to lean about three or four points toward their own side. But, Cohen says, that doesn’t necessarily mean the individual polls are skewed; rather, the parties simply don’t publish results that undercut their views. Taken together, these partisan polls can still provide useful information.

It would be nice if objective third parties conducted and published more polls. But because of the cost, there simply aren’t enough of them. Over the past few decades it has become much harder to persuade people to take a survey, and the average poll now has a response rate under 10%. So we are left to make what we can of skewed, selectively published, or otherwise flawed polls. FiveThirtyEight weights its averages toward the good pollsters, but it still counts results from the bad ones after trying to correct for their known errors and biases. (Of the hundreds of pollsters the site rates, only five are banned outright.) It is better to read a flawed poll with the right amount of skepticism than to ignore it entirely.
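As a toy illustration of that approach, here is a sketch of de-biasing and weighting polls before averaging them. The polls, house effects, and weights below are all invented for the example; FiveThirtyEight’s real model is far more elaborate.

```python
# (candidate's share in %, known house effect in points, pollster weight)
polls = [
    (52.0, +3.5, 0.5),  # released by the candidate's own party
    (47.0, -3.5, 0.5),  # released by the opposing party
    (49.0,  0.0, 1.0),  # nonpartisan pollster with a strong track record
]

# Subtract each pollster's house effect, then take a weighted average
# that favors the higher-rated pollster.
adjusted = [(share - bias, weight) for share, bias, weight in polls]
average = sum(s * w for s, w in adjusted) / sum(w for _, w in adjusted)

# Report a whole number: per Cohen, decimal places would be false precision.
print(f"weighted, bias-adjusted average: {average:.0f}%")
```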

Even the US Census, which produces the demographic benchmarks against which survey data are weighted, keeps changing its standards: by refining its questions and answer options on race and ethnicity, the census has made it difficult to compare statistics directly from decade to decade. Public opinion, after all, is not a physical constant we can measure precisely. It is a set of nuances that are hard to measure without influencing, and that are constantly changing. So don’t treat any poll as infallible, timeless truth.
