Oseltamivir is the generic name for a drug in the class of neuraminidase inhibitors (more commonly referred to as antiviral drugs). It is marketed most familiarly under the commercial name Tamiflu. Tamiflu has an interesting history, detailed by Helen Epstein in a New York Review of Books article, Flu Warning: Beware the Drug Companies. She traces the path taken by this medication from its initial evaluation by an FDA medical committee, which judged it not worthy of approval for treating flu sufferers, to its current position, in which billions of dollars' worth of the drug are deemed necessary stockpiles against any coming flu pandemic. It seems a drug company that controls the research data and has vast resources at its disposal can be quite effective at influencing people in positions of power.
Tamiflu’s use for the treatment of flu is controversial, especially where children are involved. A summary is provided by Wikipedia.
"BMJ [British Medical Journal] editor Dr. Fiona Godlee, said ‘claims that oseltamivir reduces complications have been a key justification for promoting the drug's widespread use. Governments around the world have spent billions of pounds on a drug that the scientific community has found itself unable to judge’."
And why is Tamiflu so difficult to evaluate? In addition to the expected problems of evaluating complex medical responses in humans, independent researchers have been hindered by the manufacturer's refusal to make all of the data it has compiled available.
"A subsequent Cochrane review, in 2012, maintains that significant parts of the clinical trials still remains unavailable for public scrutiny, and that the available evidence is not sufficient to conclude that oseltamivir decreases hospitalizations from influenza-like illnesses. As of October 2012, 60% of Roche's clinical data concerning oseltamivir remains unpublished."
It should also be noted that Japan, the earliest and heaviest user of Tamiflu, decided in 2007 to recommend against prescribing the medication to teenagers because of dangerous side effects.
"In March 2007, Japan's Health Ministry warned that oseltamivir [Tamiflu] should not be given to those aged 10 to 19."
Given this background, a recent New York Times article by Catherine Saint Louis was read with interest: Lifesaving Flu Drugs Fall in Use in Children. Note that the article's original title was Antiviral Drugs, Found to Curb Flu Deaths in Children, Fall in Use. It seems a Times editor believed it necessary to pump up the title to attract more readers. The media's role as a conveyor of public knowledge and a molder of public opinion will be returned to shortly.
Saint Louis refers to a study that appeared recently in the journal Pediatrics: Neuraminidase Inhibitors for Critically Ill Children With Influenza. She provides this opinion on the significance of the report:
"’Antivirals matter and they decrease mortality, and the sooner you give them the more effectively they do that,’ said Dr. Peggy Weintrub, the chief of pediatric infectious diseases at the University of California, San Francisco, who was not involved in the research. ‘We didn’t have nice proof on a large scale until this study’."
But can this really be called a "large study," and can it really be considered "nice" proof?
The study retrospectively examined the cases of 784 children under the age of eighteen who had been admitted to intensive care units in California suffering from severe flu symptoms (2009-2012). Of these, 653 were administered neuraminidase inhibitors such as Tamiflu, while 113 were not. Six percent of those given the drug subsequently died, as did eight percent of those who did not receive it. Is this proof that two percent of the 113 untreated patients would have survived if they had been properly medicated? That seems to be the point Saint Louis is making.
Anyone familiar with the statistics of small numbers will be wary of drawing conclusions from nine events (eight percent of 113). Applying the most straightforward statistical analysis to the data presented in the study, one finds that, with 95 percent confidence, the probability of death for those administered the drugs falls within the range 4.2 to 7.8 percent, and between 3.3 and 12.7 percent for those who did not receive them. All that can be said is that the outcomes of the two classes of patients fall within overlapping statistical ranges. Never trust any account that does not attempt to assess the uncertainty in its results.
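Anyone can check this arithmetic. Here is a minimal sketch in Python of the normal-approximation (Wald) confidence interval for a proportion, assuming death counts of roughly 39 of 653 treated and 9 of 113 untreated, as implied by the quoted percentages; the slightly different endpoints quoted for the untreated group suggest a somewhat different interval method was used there, but the conclusion is the same either way:

```python
import math

def wald_ci(deaths, n, z=1.96):
    """95% normal-approximation (Wald) confidence interval for a proportion."""
    p = deaths / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Treated group: ~6% of 653 died, i.e., about 39 deaths (assumed count).
treated_lo, treated_hi = wald_ci(39, 653)

# Untreated group: ~8% of 113 died, i.e., about 9 deaths (assumed count).
untreated_lo, untreated_hi = wald_ci(9, 113)

print(f"treated:   {treated_lo:.1%} to {treated_hi:.1%}")
print(f"untreated: {untreated_lo:.1%} to {untreated_hi:.1%}")

# The untreated interval is far wider (small n) and overlaps the treated one,
# so the data alone cannot distinguish the two groups.
print("overlap:", max(treated_lo, untreated_lo) < min(treated_hi, untreated_hi))
```

The treated interval comes out at roughly 4.2 to 7.8 percent, matching the range above, and the untreated interval swallows it whole, which is the entire point: nine deaths in a group of 113 simply cannot support a definite claim.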
The authors of the study upon which Saint Louis based her article could not and did not make any definite claims from their data, and they did provide estimates of the uncertainties. They offered this rather more equivocal summary of their results:
"Prompt treatment with NAIs [neuraminidase inhibitors] may improve survival of children critically ill with influenza."
What has happened here is that a research result that was merely "suggestive" in a research journal became definite as interpreted by the media. A cynic might suspect that a drug-industry public-relations firm was lurking in the background, providing Saint Louis with a "summary" of the research findings and suggesting people to contact who would be willing to provide publishable quotes in support of an "eye-catching" title. An even deeper cynicism would lead one to question the validity of the entire medical study. Saint Louis provides this quote from one of the study's authors, Janice K. Louie:
"One of the goals of the study was to increase awareness and remind clinicians that antiviral use is important in this population."
This seems to be an admission that the goal of the study was not to determine whether drugs like Tamiflu were effective, but to prove that they were.
The deepest level of cynicism would suggest that what is at work here is yet another clever marketing campaign by the drug companies.
An uncertain research study was translated in the pages of the New York Times into the notion that drugs like Tamiflu definitely save lives, and that a child sick with the flu should be medicated with them. The inevitable next step: if it is good for children, why not provide it to everyone?
Such reviews of medical research "findings" often appear in the press. The sad fact is that most of these compelling articles rest on research subsequently shown to be false or inconclusive. The even sadder fact is that if the conclusion presented by Saint Louis is eventually proven to be nonsense, the public will likely never hear about it.
An article in The Economist, Journalistic deficit disorder, addressed the public presentation of medical research by the popular media:
"IF ALL the stories in the newspapers claiming that a cure for cancer is just around the corner were true, the dread disease would have been history long ago. Sadly, it isn’t. But though some publications do have a well-deserved reputation for exaggeration in this area, many of these reports are at least based on respectable research published in peer-reviewed journals. So what is going on?"
"Research on research—particularly on medical research, where sample sizes are often small—shows that lots of conclusions do not stand the test of time. The refutation of plausible hypotheses is the way that science progresses. The problem was in the way the work was reported in the press."
To illustrate how the media fail in reporting medical research, the article describes the work of Francois Gonon of the University of Bordeaux, who studied press coverage of a number of studies related to ADHD (attention-deficit/hyperactivity disorder), a common diagnosis for children deemed to be having trouble paying attention or behaving in school.
"First, they studied subsequent scientific literature to see what had become of the claims reported in the top ten papers. Seven of these had reported research designed to test novel hypotheses. Though each concluded at the time that the hypothesis in question might be correct (ie, the data collected did not refute it), the conclusions of six were either completely refuted or substantially weakened by the subsequent investigations unearthed by Dr Gonon. The seventh has neither been confirmed nor rejected, but he and his colleagues, citing two independent experts on ADHD, say its hypothesis "appears unlikely"."
"The other three papers in the top ten were following up existing hypotheses rather than presenting novel ideas. Two of them were confirmed by the subsequent work Dr Gonon tracked down, whereas one was weakened."
So, of the ten most reported-upon medical findings, only two were supported by subsequent research, seven were refuted or substantially weakened by later studies, and one remains uncertain. How were these revelations treated by the press?
"In total, the original top ten papers received 223 write-ups in the news. But then the newspapers lost interest. Dr Gonon and his team found 67 further studies examining the conclusions of the original ten, but these subsequent investigations earned just 57 newspaper articles between them. Moreover, the bulk of this coverage concerned just two of the ten. Follow-ups to the other eight got almost no attention. Dr Gonon’s team do not pull their punches. There is, they say, an ‘almost complete amnesia in the newspaper coverage of biomedical findings’."
The article wisely pointed out that news reporters were not entirely to blame: part of the problem stems from the medical community's own publishing bias in favor of exciting new results.
"It would be easy to point the finger at lazy journalists for this state of affairs, and a journalistic version of attention-deficit disorder is, no doubt, partly to blame, for the press has a natural bias towards the new and exciting. But science itself must carry some responsibility, too. Eight of the ten articles whose fates Dr Gonon studied were published in respected outlets such as the New England Journal of Medicine and the Lancet. The deflating follow-ups, by contrast, languished in more obscure publications, which hard-pressed hacks and quacks alike are less likely to read."
And The Economist proves it is not above a bit of snark with this concluding comment:
"And, for what it is worth, as The Economist went to press, a search on Google News suggested that, a week after its publication, not a single newspaper had reported Dr Gonon’s paper."