
Saturday, 4 August 2012

ANTIPSYCHOTICS - Jonah Lehrer was also Wrong About Antipsychotics - COURTESY OF MAD IN AMERICA WEBSITE




Jonah Lehrer was also Wrong About Antipsychotics

We spend a lot of time writing about knowledge dissemination in mental health, and over time we have increasingly recognized the important role of science journalists in our society. Thus, we have watched the recent rise and fall of Jonah Lehrer with great interest. Mr. Lehrer wrote a piece for the New Yorker last year that addressed antipsychotics. However, his well-written and entertaining piece didn’t seem to reflect the data on how pharmaceutical companies have promoted antipsychotics – instead, it seemed as if he had jammed together two narratives that didn’t quite fit, at best not taking all the available data into account, at worst discarding the data that didn’t fit his story. At the time, we were a bit puzzled, and wrote a blog post about it (see below). As it turns out, this was not an anomaly but part of a pattern of behavior, as he has just resigned his position at the New Yorker for inventing quotes in a recent book.
We don’t mean to “pile on”, as this is a tragic circumstance, but we noticed how many people read Lehrer’s piece who do not read, for instance, MadinAmerica.com, and we feel it was a missed opportunity to inform the public of a very interesting story – the overestimation of the efficacy and safety of the newer antipsychotics due to enthusiastic and overwhelming promotion by pharmaceutical companies and their associates. Unfortunately, that story apparently didn’t fit the narrative Lehrer wanted to write.
We badly need good science writers, especially those who do rigorous work and consider all the available data in their stories.
Originally posted on Mad in America on December 13, 2011:
In a recent article in the New Yorker, titled “The Truth Wears Off,” science writer Jonah Lehrer discusses an intriguing problem in science. The problem is that scientific results which are confirmed at one point are sometimes overturned after further testing – today’s “facts” are tomorrow’s “fallacies.” The reason for this, as he sees it, is that subtle biases are at work that taint the scientific method, so that well-done experiments designed by well-meaning scientists are eventually shown to be problematic. To support this line of reasoning he provides an excellent example: an experiment by John Crabbe. Crabbe’s group attempted to do the exact same experiment in three different labs, each in a different part of the country. One would expect that if the experiments were well-controlled at all three research labs, each would reach similar results. His group did their absolute best to replicate all the variables in each of the labs as much as possible, and in spite of this, the results varied somewhat from lab to lab. The natural conclusion is that eliminating all bias in experimentation is probably impossible and that replicability is more difficult and complex than commonly considered.
In addition to Crabbe’s study, Lehrer also cites the clinical trials of antipsychotics to support his view. According to Lehrer, by 2007 scientists were scratching their heads in exasperation because several large studies began to show that the drugs were not as efficacious as was presumed in the 1990s, when the drugs were first introduced. In his words: “But the data presented at the Brussels meeting made it clear that something strange was happening: the therapeutic power of the drugs appeared to be steadily waning.” Having followed the atypicals for the last decade, this piqued our interest. We believe that the atypical antipsychotics are NOT an example of the type of subtle research bias that Crabbe is writing about, but rather an example of scientists getting closer and closer to the true efficacy of the drugs, as various types of overt bias (and even outright fraud) are observed, noted, and integrated into the literature. While Crabbe’s research was designed to explore the nuances of the scientific method, the story of clinical trials of antipsychotics does not belong in the same category. We are concerned that this mis-categorization may be confusing, and we explain our different interpretation of the same story below. We do not mean to single out Jonah Lehrer for criticism – he is undoubtedly one of our best science writers, with the challenging task of understanding and explaining multiple scientific sub-fields to the layperson. However, we do think it will be useful to understand how the atypical antipsychotic story differs from the subtle biases that complicate other kinds of research.
The Clinical Trial Process
Lehrer writes, “Before the effectiveness of a drug can be confirmed, it must be tested and tested again, different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research.” This portrays the clinical trial process in the best light possible; one might call it a scientific ideal that is seldom realized in psychiatric research. Under the current regulatory process, for a company to get its drug approved, it must submit two positive studies to the FDA. This would seem to support the importance of replicability; however, there is a major problem: although a company has to submit two positive studies to the FDA, the company can do as many studies as it wants. As Paul Leber of the FDA CNS division has said, “How do we interpret two positive results in the context of several more studies that fail to demonstrate that effect? …in a sense the sponsor could just do the studies until the cows come home until he gets two of them that are statistically significant by chance alone, walks them out and says he has met the criteria.” To get their two positive studies, they might have to do five.
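To make Leber’s point concrete, here is a minimal sketch of the arithmetic (our own illustration, not from Leber or Lehrer): if each trial of a completely ineffective drug has a 5% chance of coming up “positive” by chance, the probability of collecting the required two positive studies grows steadily with the number of trials a sponsor is willing to run.

```python
# A toy calculation, assuming each trial is independent and has a 5% chance
# of a false-positive result when the drug has no true effect at all.

def prob_at_least_two_positives(n_trials: int, alpha: float = 0.05) -> float:
    """Probability that at least 2 of n_trials trials of an ineffective drug
    come out 'positive' (p < alpha) purely by chance."""
    p_none = (1 - alpha) ** n_trials                          # zero positives
    p_one = n_trials * alpha * (1 - alpha) ** (n_trials - 1)  # exactly one positive
    return 1 - p_none - p_one

for n in (2, 5, 10, 20):
    print(f"{n:2d} trials, no true effect -> "
          f"{prob_at_least_two_positives(n):.1%} chance of two 'positive' studies")
```

Run as written, this gives roughly a 2% chance with five trials and over 25% with twenty. The chance per trial is small, but the sponsor only has to succeed once, and nothing requires it to stop trying.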
It is a mistake, therefore, to think of psychiatric research on patented, for-profit drugs as an unbiased scientific endeavor. To do so misses the primary purpose of these clinical trials, which are fundamentally designed as the centerpiece of a company’s marketing program. When the company eventually submits its data to the FDA, it must submit all of its data, even the negative data. But it is up to the company to publish whatever data it wants, and in most cases companies have published the positive trials and withheld the negative trial data. In other words, the published literature, especially the early literature, is not a true representation of all the available data on a drug. And in the case of psychiatric medications, when one examines all of the data that were collected, the drugs look less effective, and more harmful, than originally portrayed. This does not mean the drugs have slowly lost their efficacy – it means that when all the data are examined, or unbiased experiments finally take place, the scientific community is able to examine a less biased database regarding the true efficacy of the drug.
In psychiatric drug research, this process appears to take 10-20 years, as increasingly contradictory data slowly trickles in.
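As a toy illustration of this selective-publication effect (our own sketch, with made-up parameters, not data from any actual trial program), the simulation below assumes a drug with a modest true benefit, runs many small two-arm trials, and “publishes” only the ones that reach statistical significance. The published trials overstate the true effect considerably, which is exactly the kind of inflated early literature described above.

```python
# Selective publication, simulated: a drug with a small true benefit
# (0.15 standard deviations), two-arm trials of 50 patients per arm,
# and only the statistically significant trials get "published".

import random, statistics

random.seed(1)
TRUE_EFFECT, N_PER_ARM, N_TRIALS = 0.15, 50, 2000

all_effects, published = [], []
for _ in range(N_TRIALS):
    drug = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_ARM)]
    placebo = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    diff = statistics.mean(drug) - statistics.mean(placebo)
    # Approximate z-test: standard error of the difference in means
    se = (statistics.variance(drug) / N_PER_ARM +
          statistics.variance(placebo) / N_PER_ARM) ** 0.5
    all_effects.append(diff)
    if diff / se > 1.96:          # "positive" trial -> published
        published.append(diff)

print(f"True effect:                 {TRUE_EFFECT:.2f} SD")
print(f"Mean effect, all trials:     {statistics.mean(all_effects):.2f} SD")
print(f"Mean effect, published only: {statistics.mean(published):.2f} SD "
      f"({len(published)} of {N_TRIALS} trials)")
```

With these assumptions, the published subset reports an average effect roughly three times the true one, even though no single trial involved any fraud; withholding the negative trials is enough.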
But, Back to Fundamentals
But perhaps the more important point is the premise that atypical antipsychotics were shown to be highly effective in the 1990s. We understand that the drugs were marketed as highly superior to first-generation antipsychotics – one of us worked in a psychiatric hospital in the late 1990s and watched as clinicians told patients that the new antipsychotics were miracle drugs. So it is surely easy to find research psychiatrists who endorse their use – some with conflicts of interest with the makers of atypicals, others with no such conflicts but eager to find alternatives to first-generation drugs such as Thorazine and Haldol. However, it is just as easy to find evidence that these drugs never should have been considered a major improvement over the older drugs in the 1990s. For instance:
In 1992, the FDA wrote to Johnson and Johnson regarding their atypical antipsychotic, Risperdal: “We would consider any advertisement or promotion labeling for RISPERDAL false, misleading or lacking fair balance under section 502 of the Act if there is a presentation of data that conveys the impression that Risperidone is superior to haloperidol or any other marketed antipsychotic drug product with regard to safety or effectiveness.” It is hard to imagine a clearer statement from the FDA: they would not allow advertisements claiming that Risperdal was superior to the older antipsychotics. (Although the company couldn’t run advertisements making this claim, it did use the peer-reviewed scientific literature to convince clinicians of it.)
Ten years later, in 2002, psychiatric historian David Healy wrote the authoritative history of the antipsychotic drugs, “The Creation of Psychopharmacology.” Healy dedicated only a few pages to the atypicals, writing, “…they were not obviously more effective than haloperidol [haldol], except for their marginal benefits on negative symptoms.”
Professor David Cohen published an article in 2002 analyzing the methodology used in clinical trials of atypical antipsychotics, concluding that biased study designs were likely responsible for many of the purported “benefits” of the newer medications. The list of deliberate confounds is long. The studies he analyzed were almost exclusively funded by the makers of atypical antipsychotics.
Many more examples could be given, but the point should be clear – an examination of the available evidence from the 1990s forward surely calls into question the degree to which the atypicals were a major step forward; this issue is well covered in the critical literature.
More recently, there has been clear evidence that data on adverse effects were deliberately withheld by the makers of atypical antipsychotics. Almost every manufacturer of atypicals has been fined millions of dollars for illegal marketing. As time goes by, a flow of documents generated through legal discovery increases what is known about these drugs. Again, this just means that we uncover more (previously collected but hidden) data about these drugs, not that their efficacy is now waning.
Conclusion
The issue of replicability in research is an interesting one. However, it would be a mistake to buy into the premise that the efficacy of atypicals has been dropping dramatically since their introduction in the 1990s. Instead, what has been waning is the influence of bias and marketing on the perception of the atypical antipsychotics.
A more important scientific question, perhaps, is: how do we shorten the time (nearly 20 years?) it takes to overcome this bias and marketing and learn the true utility of a new psychiatric medication?

