Friday 31 March 2017

Reader beware: Science covered in the news is pretty likely to be overturned
By Ivan Oransky (@ivanoransky) and Adam Marcus (@armarcus), March 2, 2017


https://www.statnews.com/2017/03/02/science-media-news/
Coffee’s good for you. Or maybe it’s bad for you. Ditto dark chocolate. And red wine.
It often feels as though today’s health headlines are some scientific version of Mad Libs. And now there’s a study that provides evidence for that hunch.
It’s not news — so to speak — that credulous reporters too often produce nuance-free articles about research that deserves not just caveats but outright skepticism, nor that so much coverage of science, and biomedicine in particular, suffers from “shiny object syndrome” — the uncontrollable impulse to chase after the latest thing to catch the eye, as long as it’s pretty and uncomplicated.
Now, however, researchers at the University of Bordeaux, France, have connected the dots with a study that shows the extent of the problem.
Their analysis of media coverage indicates that studies written about in newspapers are highly likely to be later overturned.
“This is partly due to the fact that newspapers preferentially cover ‘positive’ initial studies rather than subsequent observations, in particular those reporting null findings,” the researchers note in their study, which appears in the journal PLOS ONE.
Using a database of thousands of published studies in six areas — psychiatry, neurology, breast cancer, rheumatoid arthritis, glaucoma, and psoriasis — the researchers identified papers reporting initial results, as well as subsequent meta-analyses (which combine the results of numerous studies for a more robust sense of what’s going on). The team then looked at subsequent newspaper coverage of those studies.
The number of stories per study depended in part on the prestige of the journal in which it appeared. That’s hardly surprising: newsrooms pay more attention to marquee publications, which also tend to have the slickest PR machines pumping out press releases to the herd. As the authors note, “publishing a study in a prestigious journal considerably increases its chance of being covered by newspapers irrespective of the type of association investigated or its newness.”
Also unsurprising (but by no means encouraging): news outlets were far more likely to report on initial studies than on follow-up research, covering roughly 13 percent of the former but only 2 percent of the latter.
And who says readers love bad news? Not in science, apparently: All 53 initial studies that generated news coverage reported positive findings. As for the 174 studies with null, or negative, results, the number of resulting stories was zero. Journalists did a slightly better job covering negative findings in follow-up studies, but only slightly.
Hence phenomena like the 2003 study in Science linking stress and genetics to depression, which garnered 50 newspaper stories, plus another nine articles when two subsequent studies appeared to confirm the finding. But “newspapers never covered the eleven subsequent studies that failed to replicate this genetic association,” according to the authors.
We hardly need to prove the argument that the public can seize on news stories about flawed or unreproduced science, especially biomedicine, with unfortunate consequences. Safe to say, the dangers are clear and present. Paging Andrew Wakefield …
So the French researchers offer a bit of advice to science journalists: “When preparing a report on a scientific study, journalists should always ask scientists whether it is an initial finding and, if so, they should inform the public that this discovery is still tentative and must be validated by subsequent studies.” Of course, their findings hint that few will follow said advice: “Our study also suggests that most journalists from the general press do not know or prefer not to deal with the high degree of uncertainty inherent in early biomedical studies.”
To be fair, the authors note that although reporters bear the brunt of the blame here, scientists are at fault, too. As any science journalist can attest — and as those on deadline know even more acutely — finding objective sources to put a new study into the appropriate context is challenging at best.
We agree. If scientists aren’t willing to talk to journalists about studies, they lose credibility when they complain that the “press always gets it wrong.” Perhaps if those scientists realized that journalists are working under incentives that are as warped as the ones that govern science, they’d return more of those phone calls.
Now, let’s see if this new study can be reproduced. If not, we promise we’ll write about it anyway.