Conclusions are as good as how study was done

By: Jennifer LaFleur

November 28, 2007, The Dallas Morning News

Blondes make men dumber.  Brits overfeed their bunnies.  Memphis is the fattest city.  These are among the findings of recently reported research.  But how can the average person know whether to believe a particular study?

“Citizens in a democracy have a duty to try to be informed, and, with the Internet, that duty is less painful than ever,” said David Banks, professor of practical statistics at Duke University.  “In fact, people who have not learned to sort wheat from chaff are setting themselves up to be taken advantage of by others.”

To identify whether research is reliable, first consider whether it makes sense, Dr. Banks said.

“If it doesn’t look reasonable, then you should probe it a lot more carefully.”

Second, he said, consider where the study first appeared.  Research in scientific journals, such as the Journal of the American Statistical Association or the Journal of the American Medical Association, is rigorously reviewed before it is published. 

You can find out about the publication pretty easily, Dr. Banks said.  “It doesn’t take much surfing on the Web these days; you can see that the people publishing there are from named universities and they’re doing real science.”

Other experts also say that answering some key questions about a study can help you evaluate it. 

Who conducted the study?

You usually can be more confident in studies by organizations with research experience.  For example, a study about obesity from the Centers for Disease Control and Prevention might carry more weight than one from a diet pill company.

How was the study conducted?

Web polls and man-on-the-street interviews are not considered scientific studies.  Internet polls can be fun for Web site visitors, but the results are not necessarily reliable.  Many of those surveys allow multiple responses from the same person and don’t randomly select participants. 

Randomness is not a notion I just pulled out of a hat.  It’s important.  Scientific research draws samples – groups of folks to interview – so that every individual in the “universe” has the same chance to be picked.  That universe might be registered voters, or it might be kayakers.
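For the curious, here is a minimal sketch in Python of what a simple random sample looks like (the “universe” of 10,000 people below is invented for illustration):

```python
import random

# A made-up universe of 10,000 people we want to learn about.
universe = [f"person_{i}" for i in range(10_000)]

# random.sample draws without replacement, giving every individual
# the same chance of being picked -- the defining property of a
# simple random sample.
sample = random.sample(universe, k=400)
print(len(sample), sample[:3])
```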

Ever wonder why telephone survey takers ask for the person with the most recent birthday?  That’s a technique used to randomize which household member gets interviewed, rather than simply surveying whoever happens to answer the phone.

How many people were interviewed?

If a study is based on what 25 people said, chances are that results could vary greatly if another 25 people were asked the same set of questions.  That variation decreases as the number of people goes up.  Social scientists usually recommend samples of at least 400 people, which produce results with a margin of error just less than 5 percent, 95 percent of the time. 
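That “just less than 5 percent” comes from the standard margin-of-error formula.  Here is a back-of-the-envelope check in Python, assuming a simple random sample and the worst-case 50-50 split:

```python
import math

# Margin of error at 95% confidence for a simple random sample:
# z * sqrt(p * (1 - p) / n), with z = 1.96 and worst-case p = 0.5.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (25, 100, 400, 1000):
    print(f"n = {n:4d}: +/- {100 * margin_of_error(n):.1f} percent")

# n = 400 gives about +/- 4.9 percent -- just less than 5 percent --
# while n = 25 swings a whopping +/- 19.6 percent.
```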

How were the questions asked?

Questions can be asked in ways that are called “leading,” meaning that they are trying to get a certain response.  Consider a make-believe poll about a proposed new highway.  One question – “Do you want to spend millions of taxpayer dollars on another source of pollution?” – might yield different results than another – “Do you support construction of a new highway that will reduce commute times?”  Neither is appropriate, because neither question is neutral.

What’s the gap?

With political season heating up, beware of polls that say someone is ahead or behind when the numbers appear to be very close.  Although more complicated statistical information could show that one person is more likely to be ahead than another, polls should take into account the margin of error and the people who didn’t respond or said they don’t know.  That’s why you’ll see news reports that say that Candidate A appears to be slightly ahead of Candidate B.
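To see why a close race can be a statistical tie, run the numbers on a hypothetical poll (the figures below are invented):

```python
import math

# Hypothetical poll: 520 of 1,000 respondents back Candidate A.
n = 1000
p_a = 520 / n

moe = 1.96 * math.sqrt(p_a * (1 - p_a) / n)  # about 3.1 points
low, high = p_a - moe, p_a + moe
print(f"Candidate A: {p_a:.0%} +/- {moe:.1%}, i.e. {low:.1%} to {high:.1%}")

# The interval dips below 50 percent, so A only *appears* to be
# slightly ahead; the poll cannot rule out a tie, or even a B lead.
```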

Of course some research doesn’t involve asking questions.  It might involve the behavior of mice, frogs or chubby British bunnies.  But many of the same questions should be asked of such studies.

Watch for spurious correlations, warns Steve Doig, a journalism professor at Arizona State University who teaches statistical techniques for journalists.  Studies of cause and effect may ignore underlying causes.  For example, you might find that the number of ministers and the number of liquor stores correlate in cities.  But that probably is because both increase as the population increases – not because ministers drink too much.  You also might see a correlation between shoe size and reading ability, but that’s because babies don’t read well.
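A quick simulation makes the point.  In the made-up data below, ministers and liquor stores are each driven only by city population, yet the two counts end up almost perfectly correlated:

```python
import random

random.seed(0)

# 200 made-up cities: both counts scale with population plus noise;
# there is no direct link between ministers and liquor stores.
populations = [random.randint(10_000, 1_000_000) for _ in range(200)]
ministers = [p / 2000 + random.gauss(0, 20) for p in populations]
liquor_stores = [p / 1500 + random.gauss(0, 30) for p in populations]

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(corr(ministers, liquor_stores), 2))  # prints roughly 0.99
```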

Does the research report provide information about the methodology?

Most researchers will provide questionnaires and information such as techniques used and sample size.  If they don’t, it might mean that there is a problem with the research.

If the research appears to be pushing a certain point of view, that should be a red flag, Dr. Banks said.  “Any thoughtful reader ought to think about what the other side is.”

Unless he’s a man who hangs around blondes, that is.  Research shows he might be too dumb to be thoughtful.
