Thursday, August 8, 2013

Can We Trust Clinical Trials? Size Matters—So Does Integrity

We will discuss two articles that call into question the accuracy of the current approach to determining the efficacy of drugs: clinical testing. The first has the eye-catching title Do Clinical Trials Work? It was written by Clifton Leaf and appeared in the New York Times. The second has an even more provocative title: Lies, Damned Lies, and Medical Science. It was written by David H. Freedman and appeared in The Atlantic.

Let’s begin with Leaf’s article.

Leaf describes a study of the drug Avastin in patients with a form of brain cancer.

"Mark R. Gilbert, a professor of neuro-oncology at the University of Texas M. D. Anderson Cancer Center in Houston, presented the results of a clinical trial testing the drug Avastin in patients newly diagnosed with glioblastoma multiforme, an aggressive brain cancer. In two earlier, smaller studies of patients with recurrent brain cancers, tumors shrank and the disease seemed to stall for several months when patients were given the drug, an antibody that targets the blood supply of these fast-growing masses of cancer cells."

"But to the surprise of many, Dr. Gilbert’s study found no difference in survival between those who were given Avastin and those who were given a placebo."

The smaller studies were performed without comparison groups, so any improvement could be attributed to the action of the drug whether or not the drug was actually responsible.

Leaf tells us that many physicians believe, from personal experience, that Avastin was helpful for some of their patients. One could discount the anecdotal data as unreliable and, based on the Gilbert results, conclude that the medication was of no value in this case.

Leaf seems to accept the anecdotal claims and draw a different conclusion: the drug works, but only on a small class of patients.

"Some patients did do better on the drug, and indeed, doctors and patients insist that some who take Avastin significantly beat the average. But the trial was unable to discover these ‘responders’ along the way, much less examine what might have accounted for the difference."

No numbers are presented to support Leaf’s conclusion. Nevertheless, let’s see where this leads. Leaf extrapolates that conclusion and applies it more broadly, suggesting that response to drugs is highly specific to the individual patient and that broad studies are not well designed to recognize these differing medical outcomes.

"Researchers are coming to understand just how individualized human physiology and human pathology really are. On a genetic level, the tumors in one person with pancreatic cancer almost surely won’t be identical to those of any other. Even in a more widespread condition like high cholesterol, the variability between individuals can be great, meaning that any two patients may have starkly different reactions to a drug."

"Which brings us to perhaps a more fundamental question, one that few people really want to ask: do clinical trials even work? Or are the diseases of individuals so particular that testing experimental medicines in broad groups is doomed to create more frustration than knowledge?"

Leaf then goes even further with this thought and attributes the withdrawal of a number of medications from the market to a lack of understanding of how they work on individuals.

"That’s one reason that, despite the rigorous monitoring of clinical trials, 16 novel medicines were withdrawn from the market from 2000 through 2010, a figure equal to 6 percent of the total approved during the period. The pharmacogenomics of each of us — the way our genes influence our response to drugs — is unique."

Readers who recall only the drugs removed from the market because they were found to be dangerous, drugs that had been approved on the basis of clinical testing that was often shoddy and purposely misleading, might wonder which medications Leaf is referring to. No information is provided to support the claim.

Taking Leaf’s reasoning to its logical conclusion, a drug that helps 10 people, kills 10 people, and has no effect on 1,000 others is a perfectly good drug. One merely needs to figure out which 10 people it will help and give the drug only to them.

Leaf seems to believe that a major fault in current clinical testing is that sample sizes are too small to detect effects that appear only in small subgroups of patients.
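To make that concern concrete, here is a minimal simulation sketch. The numbers are assumptions chosen for illustration, not figures from Leaf’s article: 300 patients per arm, a hidden 5% responder subgroup, and a six-month survival benefit for responders only. The question is how often an ordinary comparison of group averages would notice anything at all.

```python
# A minimal sketch, not from Leaf's article: all numbers below are assumptions
# chosen for illustration. A drug that greatly helps a small "responder"
# subgroup but does nothing for everyone else is nearly invisible when the
# trial only compares the average outcomes of the two arms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_PER_ARM = 300            # assumed patients per arm
RESPONDER_FRACTION = 0.05  # assumed: 5% of treated patients benefit
BENEFIT_MONTHS = 6.0       # assumed extra survival for responders
MEAN_MONTHS = 15.0         # assumed mean survival, months
SD_MONTHS = 12.0           # assumed patient-to-patient variability
N_SIMULATED_TRIALS = 2000

detections = 0
for _ in range(N_SIMULATED_TRIALS):
    placebo = rng.normal(MEAN_MONTHS, SD_MONTHS, N_PER_ARM)
    treated = rng.normal(MEAN_MONTHS, SD_MONTHS, N_PER_ARM)
    responders = rng.random(N_PER_ARM) < RESPONDER_FRACTION  # hidden subgroup
    treated = treated + responders * BENEFIT_MONTHS
    _, p_value = stats.ttest_ind(treated, placebo)
    detections += (p_value < 0.05)

print(f"Simulated trials detecting any benefit: {detections / N_SIMULATED_TRIALS:.0%}")
```

Under these assumed numbers, the average benefit across the whole treated arm is only about 0.3 months, and the simulated trial declares a significant benefit barely more often than the 5 percent it would by chance alone. The responders are real, but the aggregate design averages them away, which is the reading of Leaf’s complaint offered above.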

Let’s now look at Leaf’s claims from the point of view of a drug company. If it could prove that a given drug was very effective, but only for 5% of the population, and that it could detect that 5%, it would have a very strong case for having that drug approved for use. If this were the case, then a pharmaceutical industry struggling to maintain profit growth and to develop new products would suddenly have an endless supply of new drugs to pursue. If the drug applies only to 5% of the population for genetic reasons, then there is the potential for many more variations (20?) to address the remaining 95% of the population. Each would, of course, be expensive to develop and come with a hefty price tag, presumably much higher than the price of a single drug that applies to a large population.

Let’s hold that thought for a moment and consider the article by Freedman.

Freedman focuses on the work of Dr. John Ioannidis and his associates. Ioannidis has concluded that the data emerging from clinical trials cannot be assumed to be trustworthy.

"He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed."

That sounds like the opinion of some marginalized crank—but it isn’t.

"His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences. Given this exposure, and the fact that his work broadly targets everyone else’s work in medicine, as well as everything that physicians do and all the health advice we get, Ioannidis may be one of the most influential scientists alive."

He is most famous for an article published in the Journal of the American Medical Association.

"He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid."

What did he discover?

"Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable."

Ioannidis has arrived at a rather simple, although startling, explanation for why medical studies are so often wrong or misleading.

"’The studies were biased,’ he says. "’Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.’ Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. "’At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded," says Ioannidis. ‘There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded’."

Most of the initial data on drug efficacy is produced by the drug companies themselves. Consequently, the drug companies are the major source of the bias that Ioannidis sees being introduced. Ben Goldacre wrote in detail about how this is accomplished in his book Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients. In his chapter describing how to manipulate clinical trials and mislead the medical community, he lists 15 techniques that have been used. Misrepresentation of medical results is easy—and it is common.
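Goldacre’s 15 techniques are not reproduced here, but one generic distortion that Ioannidis describes, choosing what to conclude after the data are in, is easy to demonstrate. The sketch below is purely illustrative; the 20 endpoints and 100 patients per arm are assumptions, not details from Bad Pharma. It simulates a drug with no effect at all, measures many outcomes, and reports whichever one happens to clear p < 0.05.

```python
# A minimal sketch of outcome cherry-picking (illustrative assumptions only).
# The drug truly does nothing: both arms are drawn from the same distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

N_PER_ARM = 100      # assumed patients per arm
N_ENDPOINTS = 20     # assumed number of outcomes measured
N_SIMULATED_TRIALS = 2000

trials_with_a_finding = 0
for _ in range(N_SIMULATED_TRIALS):
    treated = rng.normal(0.0, 1.0, (N_ENDPOINTS, N_PER_ARM))
    placebo = rng.normal(0.0, 1.0, (N_ENDPOINTS, N_PER_ARM))
    p_values = stats.ttest_ind(treated, placebo, axis=1).pvalue
    trials_with_a_finding += (p_values.min() < 0.05)  # report only the "best" endpoint

print(f"Useless drug, yet trials with a 'significant' result: "
      f"{trials_with_a_finding / N_SIMULATED_TRIALS:.0%}")
```

With 20 independent endpoints and no real effect, roughly two simulated trials in three still yield at least one nominally significant result to headline, which is one concrete sense in which it is easy to manipulate results, even unintentionally.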

Bigger and more expensive testing will not help if the trials are improperly designed and analyzed.

The notion that medications can be tailored to the specific physical responses of the individual patient is exciting. It could be an incredible medical advance or it could prove to be impractical and end up a bust. In either event, to move forward intelligently, and to avoid being swindled, more control must be exerted over a testing process that has misled us so often in the past.

 

Clifton Leaf is the author of "The Truth in Small Doses: Why We’re Losing the War on Cancer — and How to Win It."
