Spin Doctors

The media, the science and the hyping of health studies

Published on February 24, 2013 by Elaine Meyer

Last September, a group of Stanford University scientists published a controversial study reporting that organic foods are no more nutritious than conventional foods.

The media jumped on it.

“Organic Food No Healthier than Non-Organic: Study,” Reuters reported. Business Week’s headline: “Organic Food Adds No Vitamins for Extra Cost.” Other reputable outlets gave the results similar coverage.

But just as that news cycle ended, the backlash began.

It soon became clear that the researchers, who are part of the Stanford Center for Health Policy and the Center for Primary Care and Outcomes Research, had been misleading. They had defined nutrition based only on vitamin content, downplaying other advantages of organic foods, like reduced exposure to pesticides and antibiotic-resistant bacteria.

Outlets that had at first reported the study credulously backtracked. “Parsing of Data Led to Mixed Messages on Organic Food’s Value,” read a New York Times headline. “Lots of Chatter, Anger Over Stanford Organic Food Study,” said the Los Angeles Times.

The scientists had acknowledged the pesticide and bacteria advantages; they just wrote about them in their paper as if those issues fell outside what it means for food to be nutritious, saying “the published literature lacks strong evidence that organic foods are significantly more nutritious than conventional foods. Consumption of organic foods may reduce exposure to pesticide residues and antibiotic-resistant bacteria.”

How did this study get legs so fast? That question has recently been on the minds of both scientists and journalists, touching on important issues related to how science is communicated in an era of 24/7 news.

Recent research into medical spin suggests that the Stanford organics study is hardly an isolated incident. When faculty at Université Paris Descartes reviewed news coverage of 70 randomized controlled trials published during 2009 and 2010, they found that half of the news reports spun the scientific results. Interestingly, a majority of the spin originated in the conclusion section of the scientific abstract; that is, with the scientists themselves.

“If their trial is negative they try to find at least a positive message and emphasize this positive message,” said Dr. Philippe Ravaud, the senior author of the study who is a professor at Descartes and an adjunct professor at Columbia University’s Mailman School of Public Health.

Some reporters seized on the PLoS study as vindication against scientists who have accused the press of sloppy and sensationalistic reporting.

Their excitement obscured a perhaps more troubling revelation of the PLoS study: how complicit the media was, at least initially, in reporting research without questioning it, as with the Stanford organics study.

Although scientists and journalists often view theirs as an adversarial relationship, one in which the scientist’s quest for accuracy is at odds with the journalist’s quest for a story that people will read and find relevant, both groups have an interest in hyping positive results and downplaying negative ones.

“Everyone involved can be tempted to benefit from exaggeration. The news person has what looks like an exciting story and the investigator has visibility, which is increasingly valued by medical school PR offices and promotion committees,” says Dr. David Ransohoff, a professor of medicine and epidemiology at the University of North Carolina-Chapel Hill and an associate editor of the Journal of the National Cancer Institute.

Not only does media coverage of a study influence a scientist’s prominence, but it can also direct funding toward certain medical procedures and diagnostic tests, shape individuals’ health choices, and change grant-funding priorities.

There have been many examples in recent years of the symbiotic relationship between medical researchers and journalists.

One cited by the PLoS study took place in 2009, when researchers at the Henry Ford Hospital in Detroit reported that acupuncture is as effective as drug therapy for treating hot flashes in breast cancer patients. The media was quick to cover these impressive results.

However, the study included only 25 women in each trial group at its conclusion, too few for the results to be statistically significant, say the authors of the PLoS paper.

Dr. Eleanor M. Walker, the lead author of the study and director of breast services in the department of radiation oncology at Henry Ford, defended her research to the Chronicle of Higher Education, saying “everything that was mentioned is in fact in the paper and supported with data.” She did not respond to a request for comment for this article.

Because of the incentives that both scientists and journalists have to cherry-pick results, some experts say the medical journals that publish their research need to do a better job of catching spin.

“Researchers often have an interest in overstating their findings, sometimes through financial interests, but often, also, because of an honest and passionate belief in their favorite pet theory,” Dr. Ben Goldacre, a frequent critic of science journalism and writer of the website Bad Science, said over email. “But for these overstatements and distortions to make it into print, academic journals themselves have to fail.”

And in recent years they have failed. Retractions have risen, drawing attention to the flaws of the peer-review system.

Moreover, some of the most prestigious and best-known journals like Science and Nature, those whose studies make it into the news most often, have had more retractions than specialized journals.

The big journals have their eye out for studies that are more likely to generate media coverage, says Dr. Richard Ransohoff, the director of the Cleveland Clinic’s Neuroinflammation Center and cousin of Dr. David Ransohoff. The two of them published an article in 2001 called “Sensationalism in the Media: When Scientists and Journalists May Be Complicit Collaborators,” which anticipated many of these issues.

Unlike the specialty journals, which are often published by professional societies and run by practicing scientists, high-impact journals are typically headed by professional editors, says Dr. Richard Ransohoff. While these editors usually have science doctorates, they have often been out of the field for a while and lack expertise in most of the areas their journal publishes in.

“The people running those journals always have one eye on the quality of the science and its importance to the scientific community, but there are other eyes on getting publicity because that in some ways helps their journal,” says Dr. Ransohoff.

This seems to have been what played out in 2006, when the New England Journal of Medicine published a study by scientists at Weill Medical College of Cornell University finding that screening with computed tomography, or CT, could prevent 80 percent of lung cancer deaths by catching the disease at an early stage – a dramatic result, especially since the scans were not a part of routine medical screening for lung cancer and little was known about the success rate of cancer screening. The media jumped on the findings.

Emboldened by the coverage, the Cornell scientists and screening advocates were able to pressure Congress into investigating whether to halt a long-running national trial that was comparing the CT scan to the chest X-ray, claiming it was unethical to perform the latter test on people in light of the roaring success of CT scans. They also helped get state legislatures across the country to consider bills to direct tobacco company settlement funds to CT screening programs.

But the Cornell group came under fire when it was discovered that their group, the International Early Lung Cancer Action Program, had financial interests in the results, including millions in grants from the parent company of a cigarette-maker and patents pending related to CT screening and follow up.

Even before those issues emerged, critics had taken issue with the study’s methods. Their criticism was usually reported in the media but initially did not lead the coverage.

While no party was blame-free in the debacle, the publisher bears major responsibility for publishing the study, says Dr. David Ransohoff, who at the time was a prominent critic.

“It will likely go down as one of the bigger publishing goofs the New England Journal has ever made,” he says, adding that it was a rare but serious slip-up on the part of the journal.

What keeps the media from being more skeptical of journals? According to David H. Freedman, a journalist and the author of Wrong: Why Experts Keep Failing Us—And How to Know When Not to Trust Them, scientists, unlike other oft-reported-on figures like politicians and businesspeople, can occupy a rarified plane in journalists’ minds.

“[I]n health journalism (and in science journalism in general), scientists are treated as trustworthy heroes,” Freedman writes in a recent article in the Columbia Journalism Review. “Scientists are human beings who, like all of us, crave success, status, and funding, and who make mistakes; and journals are businesses that need readers and impact to thrive.”

That perspective is shared by Dr. Kausik Datta, an immunology researcher at Johns Hopkins University who has written on topics related to media coverage of science for the Nature.com-associated blogging network SciLogs. Journalists sometimes will have “too much awe for the scientist/institution associated with the study, including personal/emotional investment,” he says.

Some scientists complain that they are operating in a climate that demands sexy results at the expense of accuracy.

“On the one hand, scientists are expected to present their data dispassionately and objectively; at the same time, they are also expected to make their research sound ‘sexy,’ or at least relevant and orderly,” said cancer and stem cell biologist Dr. Ada Ao on Nature’s Scitable blog.

Scientists are usually quick to say that there are many journalists who labor to get the story right.

And some journalists who get it wrong are under the gun, pushed by the deadline pressures of publications that demand several stories or blog entries a day, making it difficult to do background research and seek out various points of view. While editors at some publications may give their reporters time to get it right, many others prioritize more news at the expense of better-reported news, especially if competing outlets are also going to report the story.

Some scientists are critical of this defense, but they acknowledge that their profession could do a better job of communicating their research and making their findings more accessible.

“As a working scientist, I feel that our first duty is to science. But that doesn’t mean that we should confine ourselves to the proverbial ivory tower,” says Dr. Datta. “We need to actively engage with the general public at large, as well as science journalists, and spend some amount of time on a regular basis to skim through how our research work is being portrayed in the media, as well as engage in dialogs if necessary.”

Edited by Jordan Lite and Dana March. Additional research by Arti Virkud.