18 January 2013

Taking science reporting with a grain of salt


The chart above displays a history of studies of the health effects of omega-3 (fish oil) supplements, from early enthusiasm about a health benefit to later evidence of no such effect, explained as follows:
For a small study (such as Sacks’ and Leng’s early work in the top two rows of the table) to get published, it needs to show a big effect — no one is interested in a small study that found nothing. It is likely that many other small studies of fish oil pills were conducted at the same time as Sacks’ and Leng’s, found no benefit, and were therefore not published. But by the play of chance, it was only a matter of time before a small study found what looked like a big enough effect to warrant publication in a journal editor’s eyes.

At that point in the scientific discovery process, people start to believe the finding, and null effects thus become publishable because they overturn "what we know". And the new studies are larger, because now the area seems promising and big research grants become attainable for researchers. Much of the time, these larger and hence more reliable studies cut the "miracle cure" down to size. 
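
The "play of chance" part is easy to demonstrate. Below is a minimal Monte Carlo sketch (Python; the sample size, the zero true benefit, and the publish-only-big-effects rule are all invented for illustration) in which a supplement that does nothing still produces a steady trickle of publishable "large effects":

```python
import random
import statistics

# Monte Carlo sketch of publication bias: many small trials of a
# supplement with ZERO true benefit, where only a trial showing a "big"
# positive effect (about two standard errors) gets published. The sample
# size and the publication rule are invented for illustration.

random.seed(0)
N = 20          # patients per arm in each small trial
TRIALS = 1000   # small trials run worldwide

published = []
for _ in range(TRIALS):
    treated = [random.gauss(0.0, 1.0) for _ in range(N)]  # no true effect
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    effect = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / N
          + statistics.variance(control) / N) ** 0.5
    if effect > 2 * se:  # only a "big" positive result gets published
        published.append(effect)

print(f"'published': {len(published)} of {TRIALS} trials")
print(f"true effect: 0.00, mean published effect: "
      f"{statistics.mean(published):.2f}")
```

Only a few percent of the trials clear the publication bar, but every one of them reports a sizable benefit, even though the true effect is exactly zero.
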
The take-home point here is not about fish oils, but about any report that product XYZ is good for (or bad for) your health or well-being.  The mainstream media (and the 'net) are full of such stuff.  It's also important to ascertain who funded a given study.  If the National Association of WidgetMakers funds seven studies, six may be inconclusive or negative, and the seventh may show a benefit, and only the last will get publicized.  (At the conventional p < 0.05 threshold, seven independent studies of a worthless product have roughly a 1 − 0.95^7 ≈ 30% chance of turning up at least one spuriously "significant" result by chance alone.)

Graph and text via The Dish.

9 comments:

  1. This is not specific just to the "soft" sciences. It happens in the "hard" sciences too, where the results of an experiment are biased by the previous measurement.

    Here are a bunch of physical constants, versus time. Of course, the numbers are constant, but the experiments change.

    http://pdg.lbl.gov/2004/reviews/historyrpp.pdf

  2. The thing to note is that the historical values look like a chain of measurements; they are not "jumpy". If they had been 100% independent measurements, you would expect the lower-precision values to jump above and below the high-precision values. But they don't. The value of measurement 'n' is tied to measurement 'n-1', because that is human nature. If the value is in agreement, you stop your experiment and publish. If it is not, then you try to find your error(s). You fix errors until it agrees. And you don't fix the errors you haven't found yet.
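
    A minimal simulation sketch (Python; the noise levels, the shrinking sigmas, and the keep-the-closest-attempt rule are all invented) shows how "fix errors until it agrees" turns independent scatter into exactly this kind of chain:

    ```python
    import random

    # Toy model of the "chained" historical values: every experiment is
    # unbiased around the true constant, but the team keeps the attempt
    # that best agrees with the previously published value ("fix errors
    # until it agrees"). Noise levels and attempt counts are invented.

    random.seed(1)
    TRUE_VALUE = 100.0
    SIGMAS = [4.0, 3.0, 2.0, 1.5, 1.0, 0.7, 0.5]  # precision improves

    def anchored(previous, sigma, tries=5):
        # Of several unbiased attempts, "publish" the one closest to
        # the previously published value.
        attempts = [random.gauss(TRUE_VALUE, sigma) for _ in range(tries)]
        return min(attempts, key=lambda v: abs(v - previous))

    independent = [random.gauss(TRUE_VALUE, s) for s in SIGMAS]

    chained = [random.gauss(TRUE_VALUE, SIGMAS[0])]
    for s in SIGMAS[1:]:
        chained.append(anchored(chained[-1], s))

    print("sigma   independent   chained")
    for s, i, c in zip(SIGMAS, independent, chained):
        print(f"{s:5.1f}   {i:11.2f}   {c:7.2f}")
    ```

    With the same noise, the chained column tends to drift smoothly from one value to the next, while the independent column jumps around the true value.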

  3. Another point that deserves mention is that follow-up studies are harder to publish if they contradict previous (semi-established or fashionable) findings. Plus: the authors assume that the previous studies were right, and this gives them a certain "bias" toward those results, sometimes massaging data until it fits the picture.

    This leads to a gradual decrease of the effect, and it is especially funny when it later turns out that the initial effect was invented by the crook who wrote the first study.

    Superb article explaining the mechanics: "Why Most Published Research Findings Are False", http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124 .

  4. Especially regarding human health and diet... it always makes me wonder what variables are actually being uncovered. The idea that red wine is necessarily a "good thing" always makes me chuckle - how many poor people (in the US, anyway) are drinking red wine three times a week? The correlation between better general health and the ability to afford three wine-drinking meals a week is potentially stronger than any innate ability of red wine to make the heart healthier.

    The studies that try to account for socio-economic status make more sense, but even then, the normalization algorithm can be faulty without being obviously so.
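
    To make the confounding concrete, here is a toy simulation (Python; every relationship and coefficient is invented): income drives both wine drinking and health, wine itself does nothing, and the raw correlation still looks like a wine benefit until income is held fixed:

    ```python
    import random
    import statistics

    # Toy confounding example: income drives both wine drinking and
    # health; wine itself does nothing. Every relationship and
    # coefficient here is invented.

    random.seed(7)

    def corr(xs, ys):
        # Pearson correlation, written out to keep the sketch
        # dependency-free.
        mx, my = statistics.mean(xs), statistics.mean(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs)
               * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den

    income = [random.gauss(0, 1) for _ in range(5000)]
    wine = [i + random.gauss(0, 1) for i in income]    # richer -> more wine
    health = [i + random.gauss(0, 1) for i in income]  # richer -> healthier
    # Note: health does not depend on wine at all.

    print(f"corr(wine, health), raw: {corr(wine, health):.2f}")

    # Crude "adjustment": look only within a narrow income band.
    band = [(w, h) for i, w, h in zip(income, wine, health) if abs(i) < 0.1]
    ws, hs = zip(*band)
    print(f"corr(wine, health), income held fixed: {corr(ws, hs):.2f}")
    ```

    The raw correlation comes out around 0.5; within a narrow income band it collapses toward zero.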

  5. It's called: progress of insight. Or: learning.

    The beauty of science is that it constantly questions itself, and changes its view based on new knowledge, or the discovery of errors in old knowledge.

    Instead of belittling the changing insights, ask yourself where humanity and the US would be without new insights.

    You may choose: Life with the certainty of an unwavering and never-changing belief or faith, or the progress of science.

    And before you answer, ask yourself: Could you read and answer my question on the internet without the progress of science?

  6. No, it's not progress of insight, except insofar as "no bucks, no Buck Rogers" is concerned. It's a slow news day, most of the time. Sure, do the studies, small at first, then larger to see if the variables can be accounted for; but in so many cases the researchers choose to (or have to, depending on their sources of funding) go to the press to get anyone to notice them, and the press is notorious for over- or under-stating things when it comes to science.

    As for the comment about the Internet, all I have to say is that before the internet, you had to be good to get published. Now, heck, even *I* have a blog... ;o)

  7. "It's also important to ascertain who funded a given study. If the National Association of WidgetMakers funds seven studies, six may be inconclusive or negative, and the seventh may show a benefit, and only the last will get publicized."

    I definitely agree that knowing the funder or moving party behind a study is vital to an accurate assessment of the usual media reports which proliferate when the findings of a so-called scientific study are released. But I take issue with your conclusion about the source of bias; I think it incomplete. While I agree that corporate studies are inherently suspect, I would also urge caution about any study which comes out of most high-profile NGOs, with the Center for Science in the Public Interest (CSPI) being among the most biased and unreliable.

    Indeed, we are far more likely to encounter badly flawed left-leaning "public-interest" studies than those which puff for corporations. The main reason for this is the news media's own partisanship and political preference. The traditional media has gotten very good at sniffing out corporate shilling, and it has become harder and harder to find bogus studies about products or substances financed by those with a financial interest in them. But the members of the news media are overwhelmingly in favor of the nanny-statism of which CSPI is so enamored. Any study it releases is given the royal treatment by the news-media royalty, regardless of blatant and obvious shortcomings in the studies or the conclusions drawn from them.

    Replies
    1. I find it interesting that you view the news media as being left-leaning or having liberal tendencies. May I ask what country you live in?
