Wednesday, 30 December 2015

Timberrr! Psychiatry’s Evidence Base For Antipsychotics Comes Crashing to the Ground - by Robert Whitaker Courtesy of the MadinAmerica Website.





http://www.madinamerica.com/2015/12/timberrr-psychiatrys-evidence-base-for-antipsychotics-comes-crashing-to-the-ground/

When I wrote Anatomy of an Epidemic, one of my foremost hopes was that it would prompt mainstream researchers to revisit the scientific literature. Was there evidence that any class of psychiatric medications—antipsychotics, antidepressants, stimulants, benzodiazepines, and so forth—provided a long-term benefit? Now epidemiologists at Columbia University and City College of New York have reported that they have done such an investigation about antipsychotics, and their bottom-line finding can be summed up in this way: Psychiatry’s “evidence base” for long-term use of these drugs does not exist.
This is a finding, published in the American Journal of Orthopsychiatry, that profoundly undercuts the societal narrative that has been driving psychiatric care in our society for the past sixty years.
In the conventional narrative of psychiatry’s history, Thorazine, which is remembered today as the first “antipsychotic,” is said to have kicked off a “psychopharmacological revolution” when it arrived in asylum medicine in 1954. Thorazine, wrote Edward Shorter, in his book A History of Psychiatry, “initiated a revolution in psychiatry, comparable to the introduction of penicillin in general medicine.” Soon psychiatry was touting that it had discovered “antidepressants” and “anti-anxiety” agents, and that narrative—of a medical specialty that had developed chemical antidotes to mental disorders—became fixed in the public mind, and drives psychiatric care today.
I once believed that narrative, but by the time I had finished researching Mad in America, which was published in 2002, I was convinced that it was out of sync with the scientific literature. It was, I wrote in one chapter, the “story we told ourselves.” Then, in 2004, I wrote a paper titled “The Case Against Psychiatric Drugs: A 50-year record of doing more harm than good,” which was published in Medical Hypotheses. In Anatomy of an Epidemic, which was published in 2010, I expanded on this argument, as I believe there is a long line of research, stretching across more than five decades, that reveals that antipsychotics worsen outcomes over the long term.
By writing that journal article and that book, I was challenging the conventional narrative, and the evidence for that “counter-narrative” is of many types: retrospective studies, a few randomized studies, cross-cultural studies, long-term naturalistic studies (such as Martin Harrow’s), MRI studies, and animal research into why antipsychotics “fail” over the long term. It is that collective body of evidence that I find convincing, and in 2015, I updated that argument once more for a new edition of Anatomy of an Epidemic. The case against antipsychotics grows stronger and stronger.
The paper published in the American Journal of Orthopsychiatry is titled “Weighing the Evidence for Harm from Long-term Treatment with Antipsychotic Medications: A Systematic Review.” Nancy Sohler, from City College of New York, and a team of five epidemiologists from Columbia University note that their study was occasioned by my writings on this topic. They wrote:
“Recently, Robert Whitaker advanced a troubling interpretation of the evidence base for long-term use of antipsychotic medication. He reviewed a number of epidemiological and clinical studies and concluded that antipsychotic medications are an iatrogenic cause of chronicity in schizophrenia, and that these medications may lead to the deterioration of patients’ health and well-being over time. His explanation rested on the notion that antipsychotic medication may induce a hypersensitivity to dopamine. We were concerned by Whitaker’s findings and wondered whether a systematic appraisal of published literature would produce the same results.”
So this was the very inquiry that I hoped Anatomy of an Epidemic would provoke. Let mainstream researchers take a trip through the scientific literature, and see what they would conclude.
In their “systematic appraisal of published literature,” Sohler and colleagues searched for studies that met two criteria: they had to be at least two years in length, and they needed to “permit a comparison of patients who were exposed to antipsychotic medications with patients who were not exposed to medications over the 2-year follow-up period.” They identified 18 studies that met these criteria, and then assessed, in a yes, maybe, or no fashion, whether the reported results supported the hypothesis that antipsychotics worsen long-term outcomes.
Now, in my opinion, the researchers were reluctant to conclude that a study showed harm, even when the study’s own authors drew such a conclusion. For instance, in their assessment of Martin Harrow’s long-term study of psychotic patients, they concluded that his findings were “mixed” in terms of whether they showed long-term harm from drug usage. But in Harrow’s report on his patients’ 20-year outcomes, he noted that in every subgroup, outcomes were worse for the medicated group, and when he compared those patients who took antipsychotics throughout this lengthy period with those who got off the drugs by year two and never took them again, it was the medication-compliant patients who had by far the worse outcomes, on every domain of functioning. As Harrow stated in 2008, at the American Psychiatric Association’s annual conference, “I conclude that patients with schizophrenia not on antipsychotic medication for a long period of time have significantly better global functioning than those on antipsychotics.” Yet these researchers did not count such results as a “yes” in terms of supporting the hypothesis that antipsychotics worsen long-term outcomes.
I also think the “evidence” that can be reviewed in regard to this question is much broader than the studies that Sohler and colleagues selected. Cross-cultural studies, MRI studies, animal models of psychosis, and reviews of serious adverse effects, such as tardive dyskinesia, are also relevant to the question of whether the drugs do more harm than good over the long term.
But that is not important here. The important result from this study was this: “We found the published data to be inadequate to test this hypothesis.”
This is a stunning admission. Even though psychiatrists have been prescribing these drugs for 60 years and have been telling their patients that they should stay on these medications indefinitely, the profession never spent the necessary effort to assess whether this drug treatment actually benefits patients over the long-term. This conclusion also reveals that psychiatry, when it boasts of its treatments being “evidence-based,” is making a rather hollow boast.
It is important to understand too that in the realm of evidence-based medicine, it is the obligation of the medical specialty to find evidence that its treatments are helpful, and not vice versa. In other words, it is not the responsibility of critics to find evidence showing harm done; the responsibility rests with the profession to show evidence of a treatment benefit.
In sum, this study shoots one more arrow into the conventional narrative that drives societal thinking today. In that narrative, the antipsychotics occupy a central role. These are the drugs that kicked off the psychopharmacological revolution and made it possible to empty the mental hospitals. These are the drugs that are presented to the public as an absolute necessity for psychotic patients. Read Jeffrey Lieberman’s book Shrinks, and you see this conventional narrative on display. But this study reveals that it’s a narrative woven from a profession’s desire to tell a story of progress, to itself and to the public, rather than one grounded in science.
As for a referendum on Anatomy of an Epidemic, I think this review helps advance the discussion. The researchers didn’t find, in their review of studies that met certain criteria, evidence that allowed any conclusion to be drawn about the long-term merits of antipsychotics. What is needed now is to broaden the evidence reviewed, so that it includes the MRI research (showing drug-induced brain shrinkage), the cross-cultural evidence, the animal evidence, and a full accounting of the adverse events from antipsychotics. At that point, researchers might conclude that the pieces of the evidence puzzle all fit together, and that they paint a consistent picture of harm done.
Robert Whitaker
In the News: A journalist’s review of reports in medical journals and the media on psychiatric disorders and treatments.

Tuesday, 8 December 2015

The Modern Day Witch-hunt - MadinAmerica Website - “Mental illness” is the scapegoat dependably relied upon by politicians and fearful citizens to blame for senseless violence and chaos.







Fear is a funny thing. When people are afraid, they need to feel a sense of control. Often, control may be perceived when blame is cast and scapegoats are named. If there is someone to blame, then there is something we can do. Fear can lead to irrational postulations of immense proportions; depending on one’s hierarchical position in the world, such postulations may be considered delusional or innovative.
The centuries-long persecution of witches was a powerful example of society and governments acting to combat social problems through the scapegoating of (mostly) innocent prey. Areas that had the greatest social and political turmoil were those that also persecuted the greatest number of witches. Most witch hunts were commanded by government authorities in response to chaos and death. Investigations frequently involved obtaining ‘testimony’ from subjective informants, including children, and confessions through torture.  Execution was generally the official punishment. In fact, well before the infamous Salem witch trials, Connecticut held witchcraft as one of 12 capital crimes punishable by death.
Similarly, “mental illness” is the scapegoat dependably relied upon by politicians and fearful citizens to blame for senseless violence and chaos. According to Arthur Colman, “The basis of the scapegoat myth is this: the group is not to blame for its problems, its bad feelings, its pain, its defeats. These are the responsibility of a particular individual or subgroup – the scapegoat – who is perceived as being fundamentally different from the rest of the group and must be excluded or sacrificed in order for the group to survive and remain whole”. Unlike other scapegoats, such as Jews, Muslims or African Americans, mental illness is more akin to witchery due to its elusive, subjective, and culturally defined nature.
Scapegoating the “mentally ill” every time violence or chaos breaks out allows us to absolve society of any blame. It allows us to ignore the problems that give rise to anger, distress, and violence (e.g., poverty, rejection, discrimination, oppression, injustice, and abuse) and instead focus on the one thing that can never be proven or defined and yet so easily can be identified in another. It provides relief without any reflection on how our society and way of life, and the inevitability of death, may be contributing to the terror that overwhelms us.
In the same way that the “mentally ill” are defined by highly educated, elite, usually white professionals, so too were witches. Identifying, classifying, and interrogating witches was a highly sophisticated endeavor, so much so that an official publication, commissioned by the pope, was printed and reprinted over 13 times and held sway for over 200 years. The Malleus Maleficarum was used by judges and prosecutors — among others — in the effort to officially condemn those marginalized offenders of witchcraft. Further, once defined as such, defendants would often admit to unfathomable behaviors, such as flying on poles and causing violent storms, and behaved in deranged and frightening ways in tandem with expectations. In much the same way, the DSM acts as a manual with a façade of sophistication that allows the elite class to identify and classify those who are different, abnormal, or deviant from social norms.
Methods for testing the validity of accusations of witchcraft likewise had a veneer of superiority and complexity that allowed professional witch hunters to feel justified in their authority, as is also the case with psychological testing. For instance, witches might be bound and thrown in the water to see if they would sink, or would be examined for a “witches’ teat,” an extra nipple through which to nourish the witch’s helper animals. If one wanted to discover signs of a person being a witch, he or she was sure to find something. While fortunately we have moved past the era of phrenology, which was much more closely akin to witch tests, mental health professionals continue to evaluate for mental illness with tautological questionnaires based on politically driven diagnostic categories, examining deviations from cultural norms that can subjectively be found anywhere one chooses to look. And, as with the hysteria of centuries before, the more chaos and violence within society, the more governments and frightened citizens will continue to look for something and someone to blame.
Interestingly, many scholars have suggested that many of the people deemed witches were, in fact, traumatized citizens who were suffering the ills of rape, child abuse, poverty, gender oppression, and other psychologically damaging events. Not coincidentally, these also tend to be the most common factors predicting a mental illness diagnosis. Trauma survivors and those with developmental disruptions tend to make perfect scapegoats; they spent their childhood learning how to be just that.
Are you upset and struggling with this thing called life? Mentally ill. Are you violent? Mentally ill. Are you passive and avoid conflict? Mentally ill. Are you angry? Mentally ill. Are you energetic and happy all the time? Mentally ill. Are you numb and repressing emotions? Mentally ill. Are you anti-authoritarian? Mentally ill. What an easy solution. If every person who acts ‘crazy’ and does bad things is, by definition, crazy, then I guess the witch-hunters, er, mental health professionals are right. All the bad things that happen are because of mental illness.
The Helping Families in Mental Health Crisis Act (H.R. 2646) is a prime example of hysteria reaching the Federal government, in much the same way fear of witches did 600 years ago. In the same vein as burning the witches in Salem, Murphy and others are suggesting that we essentially “round ‘em up, drug ‘em, and lock ‘em away” in an effort to ameliorate society’s fear of death and violence. Yes, there are likely many other political and financial reasons for this, but people are afraid, and Murphy has provided a scapegoat and a method to give the illusion of action and control. People seem to believe that persecuting, excluding, and taking away the rights of people already in distress will somehow result in American society becoming whole and safe. Witch hunts did nothing to increase security or safety; they bred fear and hatred. Perhaps by understanding the uncanny and disturbing similarities between the hysteria of the 16th and 17th centuries and our current culture, we might be able to heed the warning and save the lives of thousands of vulnerable and powerless individuals before it’s too late.
Noel Hunter
Madness and Meaning in the Human Experience: A clinical psychology doctoral student, Noel explores the link between trauma and various anomalous states and the need for recognition of states of extreme distress as meaningful responses to overwhelming life experiences.

Wednesday, 18 November 2015

“Let Food Be Thy Medicine” — So Let’s Teach Physicians How to Cook! By BONNIE KAPLAN, PHD & JULIA RUCKLIDGE, PHD - Courtesy of the MadinAmerica website


Most people reading this blog will have heard or read the quotation attributed to Hippocrates: “Let food be thy medicine, and medicine be thy food.” Whether or not this ancient Greek physician actually made that comment 2500 years ago is something that we cannot determine. But it certainly is a statement that is coming back into favor in the current era.
We know now that what we eat can heal both our bodies and our brains. And even though we have written in this blog about the fact that improved dietary intake does not seem to be sufficient for some people, and multinutrient ‘supplements’ are also needed, the fact is that everything starts with a healthy diet. And yes, we recognize that the definition of a healthy diet changes over time, but that is inevitable as more and more research emerges.
But underlying all of our comments and the work of many people who care about nutrition has been a very big frustration with the fact that the medical profession generally dismissed the importance of nutrition for physical and mental health until very recently. This attitude toward nutrition has often been attributed to the fact that physicians were not taught about nutrition when they were in training.
A group of nutrition researchers at the University of North Carolina has been doing repeated surveys of nutrition education in American medical schools, and although the results are surely better than a decade ago, they are still a bit discouraging. In 2010 they reported that not only was nutrition education not yet adequate, but also it seemed to have gotten worse compared to their previous survey in 2004 (Adams et al., 2010). Of the 105 schools answering questions about courses and contact hours, only 25% required a dedicated nutrition course (a decrease from 30% in 2004). In addition, medical students received 19.6 contact hours of nutrition instruction during their medical school careers (though it is noteworthy that the range of 0-70 hours still included zero) compared to 22.3 hours in 2004. And finally, the National Academy of Sciences had recommended a minimum of 25 required hours, and only 27% of the 105 schools met that minimum whereas 38% had met it in 2004.
But the news is not all bad. Today we want to tell you about a big change in medical school training that seems to have been initiated by Tulane University Medical School in New Orleans.
In 2012, Tulane began teaching its medical students how to cook.
Isn’t that an incredible statement, from so many perspectives? But that is the essence of a revolution in medical education that is emerging: teaching medical students how to cook good food from scratch. Then they are encouraged to interact with community members, teaching and learning about cooking, and ultimately it is expected that they will be able to use their own cooking knowledge to help their future patients make better food choices. Since 2012, nine other medical schools across the U.S. have purchased the license to use the same Tulane cooking curriculum. Tulane itself has a new, named facility for their work: the Goldring Center for Culinary Medicine, led by internist Dr. Timothy Harlan. You can watch a 3-minute video about the Center on the school’s website.
Another pioneering aspect of this program is that it is a partnership with a local culinary school, resulting in Tulane Medical School hiring the first chef to be on a medical school faculty anywhere in North America — or perhaps the world.
One final interesting note about the Goldring Center: if you watch the video, you will see that Dr. Harlan emphasizes that they are teaching about food, as opposed to nutrition. Although it is hard for some of us to distinguish between the two, this philosophy does reflect an important ‘back to basics’ approach. Though it is true, we believe, that there are some people who cannot achieve their optimal mental function without supplementation, and it is also true (and very worrisome!) that our food supply is not as nutritious as it was 50 years ago, it is important that we always take a ‘food first’ stance. Especially when focusing on population health, it is essential to promote dietary improvements as the first line of prevention and treatment. Unless one is reading a lot of science fiction, it is impossible to envision 7 billion people surviving on nutrient pills.

* * * * *

References:
Adams KM, Kohlmeier M, Zeisel SH. Nutrition education in U.S. medical schools: latest update of a national survey. Acad Med. 2010 Sep;85(9):1537-42. doi: 10.1097/ACM.0b013e3181eab71b.

Monday, 9 November 2015

Depression - It's not your Serotonin - by Kelly Brogan - Courtesy of the greenmedinfo.com website

Posted on Sunday, January 4th, 2015 at 4:45 am
Millions believe depression is caused by 'serotonin deficiency,' but where is the science in support of this theory?
"Depression is a serious medical condition that may be due to a chemical imbalance, and Zoloft works to correct this imbalance."
Herein lies the serotonin myth.
As one of only two countries in the world that permits direct-to-consumer advertising, you have undoubtedly been subjected to promotion of the "cause of depression." A cause that is not your fault, but rather a matter of too few little bubbles passing between the hubs in your brain! Don't add that to your list of worries, though, because there is a convenient solution awaiting you at your doctor's office...
What if I told you that, in 6 decades of research, the serotonin (or norepinephrine, or dopamine) theory of depression and anxiety has not achieved scientific credibility?
You'd want some supporting arguments for this shocking claim.
So, here you go:
The Science of Psychiatry is Myth
Rather than some embarrassingly reductionist, one-deficiency-one-illness-one-pill model of mental illness, contemporary exploration of human behavior has demonstrated that we may know less than we ever thought we did.  And that what we do know about root causes of mental illness seems to have more to do with the concept of evolutionary mismatch than with genes and chemical deficiencies.
In fact, after a meta-analysis of over 14,000 patients, Dr. Insel, head of the NIMH, had this to say:
"Despite high expectations, neither genomics nor imaging has yet impacted the diagnosis or treatment of the 45 million Americans with serious or moderate mental illness each year."
To understand what imbalance is, we must know what balance looks like, and neuroscience, to date, has not characterized the optimal brain state, nor how to even assess for it.
A New England Journal of Medicine review on Major Depression stated:
" ... numerous studies of norepinephrine and serotonin metabolites in plasma, urine, and cerebrospinal fluid as well as postmortem studies of the brains of patients with depression, have yet to identify the purported deficiency reliably."
The data has poked holes in the theory, and even the field of psychiatry itself is putting down its sword. One of my favorite essays, by Lacasse and Leo, has compiled sentiments from influential thinkers in the field – mind you, these are conventional clinicians and researchers in mainstream practice – who have broken rank, casting doubt on the entirety of what psychiatry has to offer around antidepressants.
Humble Origins of a Powerful Meme
In the 1950s, reserpine, initially introduced to the US market as an anti-seizure medication, was noted to deplete brain serotonin stores in subjects, with resultant lethargy and sedation. These observations colluded with the clinical note that an anti-tuberculosis medication, iproniazid, invoked mood changes after five months of treatment in 70% of a 17-patient cohort. Finally, Dr. Joseph Schildkraut threw fairy dust on these mumbles and grumbles in 1965 with his hypothetical manifesto entitled "The Catecholamine Hypothesis of Affective Disorders," stating:
"At best, drug-induced affective disturbances can only be considered models of the natural disorders, while it remains to be demonstrated that the behavioral changes produced by these drugs have any relation to naturally occurring biochemical abnormalities which might be associated with the illness."
Contextualized by the ripeness of a field struggling to establish biomedical legitimacy (beyond the therapeutic lobotomy!), psychiatry was ready for a rebranding, and the pharmaceutical industry was all too happy to partner in the effort.
Of course, the risk inherent in "working backwards" in this way (noting effects and presuming mechanisms) is that we tell ourselves that we have learned something about the body, when in fact, all we have learned is that patented synthesized chemicals have effects on our behavior. This is referred to as the drug-based model by Dr. Joanna Moncrieff. In this model, we acknowledge that antidepressants have effects, but that these effects in no way are curative or reparative.
The most applicable analogy is that of the woman with social phobia who finds that drinking two cocktails eases her symptoms. One could imagine how, in a 6 week randomized trial, this "treatment" could be found efficacious and recommended for daily use and even prevention of symptoms. How her withdrawal symptoms after 10 years of daily compliance could lead those around her to believe that she "needed" the alcohol to correct an imbalance. This analogy is all too close to the truth.
Running With Broken Legs
Psychiatrist Dr. Daniel Carlat has said:
"And where there is a scientific vacuum, drug companies are happy to insert a marketing message and call it science. As a result, psychiatry has become a proving ground for outrageous manipulations of science in the service of profit."
So, what happens when we let drug companies tell doctors what science is? We have an industry and a profession working together to maintain a house of cards theory in the face of contradictory evidence.
We have a global situation in which increases in prescribing are resulting in increases in severity of illness (including numbers and length of episodes) relative to those who have never been treated with medication.
To truly appreciate the breadth of evidence that states antidepressants are ineffective and unsafe, we have to get behind the walls that the pharmaceutical companies erect. We have to unearth unpublished data, data that they were hoping to keep in the dusty catacombs.
A now famous 2008 study in the New England Journal of Medicine by Turner et al sought to expose the extent of this data manipulation. They demonstrated that, from 1987 to 2004, 12 antidepressants were approved based on 74 studies. Thirty-eight were positive, and 37 of these were published. Thirty-six were negative (showing no benefit), and 3 of these were published as such, while 11 were published with a positive spin (always read the data, not the author's conclusion!), and 22 were unpublished.
In a 1998 tour de force, Dr. Irving Kirsch, an expert on the placebo effect, published a meta-analysis of 3,000 patients who were treated with antidepressants, psychotherapy, placebo, or no treatment and found that only 27% of the therapeutic response was attributable to the drug's action.
This was followed up by a 2008 review, which invoked the Freedom of Information Act to obtain access to unpublished studies, finding that, when these were included, antidepressants outperformed placebo in only 20 of 46 trials (less than half!), and that the overall difference between drugs and placebos was 1.7 points on the 52-point Hamilton Scale. This small increment is clinically insignificant, and likely accounted for by medication side effects strategically employed (sedation or activation).
When active placebos were used, the Cochrane database found that differences between drugs and placebos disappeared, giving credence to the assertion that inert placebos inflate perceived drug effects.
The finding of a tremendous placebo effect in the treatment groups was also echoed in two different meta-analyses by Khan et al, who found a 10% difference between placebo and antidepressant efficacy, and comparable suicide rates. The most recent trial examining the role of "expectancy," or belief in antidepressant effect, found that patients lost their perceived benefit if they believed that they might be getting a sugar pill, even if they were continued on their formerly effective treatment dose of Prozac.
The largest, non-industry-funded study, costing the public $35 million, followed 4000 patients treated with Celexa (not blinded, so they knew what they were getting), and found that half of them improved at 8 weeks. Those that didn't were switched to Wellbutrin, Effexor, or Zoloft OR "augmented" with Buspar or Wellbutrin.
Guess what? It didn't matter what was done, because they remitted at the same unimpressive rate of 18-30% regardless with only 3% of patients in remission at 12 months.
How could it be that medications like Wellbutrin, which purportedly primarily disrupt dopamine signaling, and medications like Stablon, which theoretically enhance the reuptake of serotonin, both work to resolve this underlying imbalance? Why would thyroid, benzodiazepines, beta blockers, and opiates also "work"? And what does depression have in common with panic disorder, phobias, OCD, eating disorders, and social anxiety that all of these diagnoses would warrant the same exact chemical fix?
Alternative options
As a holistic clinician, one of my bigger pet peeves is the use of amino acids and other nutraceuticals with  "serotonin-boosting" claims. These integrative practitioners have taken a page from the allopathic playbook and are seeking to copy-cat what they perceive antidepressants to be doing.
The foundational "data" for the modern serotonin theory of mood utilizes tryptophan depletion methods which involve feeding volunteers amino acid mixtures without tryptophan and are rife with complicated interpretations.
Simply put, there has never been a study that demonstrates that this intervention causes mood changes in any patients who have not been treated with antidepressants.
In an important paper entitled Mechanism of acute tryptophan depletion: Is it only serotonin?, van Donkelaar et al caution clinicians and researchers about the interpretation of tryptophan research. They clarify that there are many potential effects of this methodology, stating:
"In general, several findings support the fact that depression may not be caused solely by an abnormality of 5-HT function, but more likely by a dysfunction of other systems or brain regions modulated by 5-HT or interacting with its dietary precursor. Similarly, the ATD method does not seem to challenge the 5-HT system per se, but rather triggers 5HT-mediated adverse events."
So if we cannot confirm the role of serotonin in mood and we have good reason to believe that antidepressant effect is largely based on belief, then why are we trying to "boost serotonin"?
Causing imbalances
All you have to do is spend a few minutes on http://survivingantidepressants.org/ or http://beyondmeds.com/ to appreciate that we have created a monster. Millions of men, women, and children the world over are suffering, without clinical guidance (because this is NOT a part of medical training), to discontinue psychiatric meds. I have been humbled, as a clinician who seeks to help these patients, by what these medications are capable of. Psychotropic withdrawal can make alcohol and heroin detox look like a breeze.
An important analysis by the former director of the NIMH makes claims that antidepressants "create perturbations in neurotransmitter functions" causing the body to compensate through a series of adaptations which occur after "chronic administration" leading to brains that function, after a few weeks, in a way that is "qualitatively as well as quantitatively different from the normal state."
Changes in beta-adrenergic receptor density, serotonin autoreceptor sensitivity, and serotonin turnover all struggle to compensate for the assault of the medication.
Andrews et al. call this "oppositional tolerance," and demonstrate through a careful meta-analysis of 46 studies that a patient's risk of relapse is directly proportional to how "perturbing" the medication is, and is always higher than placebo (44.6% vs 24.7%). They challenge the notion that findings of decreased relapse on continued medication represent anything other than a drug-induced response to discontinuation of a substance to which the body has developed tolerance. They go a step further to add:
"For instance, in naturalistic studies, unmedicated patients have much shorter episodes, and better long-term prospects, than medicated patients. Several of these studies have found that the average duration of an untreated episode of major depression is 12–13 weeks."
Harvard researchers also concluded that at least fifty percent of drug-withdrawn patients relapsed within 14 months. In fact:
"Long-term antidepressant use may be depressogenic . . . it is possible that antidepressant agents modify the hardwiring of neuronal synapses (which) not only render antidepressants ineffective but also induce a resident, refractory depressive state."
So, when your doctor says, "You see, look how sick you are, you shouldn't have stopped that medication," you should know that the data suggests that your symptoms are withdrawal, not relapse.
Longitudinal studies demonstrate poor functional outcomes for those treated, with 60% of patients still meeting diagnostic criteria at one year (despite transient improvement within the first 3 months). When baseline severity is controlled for, two prospective studies support a worse outcome in those prescribed medication:
One in which the never-medicated group experienced a 62% improvement by six months, whereas the drug-treated patients experienced only a 33% reduction in symptoms; and another, a WHO study of depressed patients in 15 cities, which found that, at the end of one year, those who weren't exposed to psychotropic medications enjoyed much better "general health," that their depressive symptoms were much "milder," and that they were less likely to still be "mentally ill."
I'm not done yet. In a retrospective 10-year study in the Netherlands, 76% of those with unmedicated depression recovered without relapse relative to 50% of those treated.
Unlike the mess of contradictory studies around short-term effects, there are no comparable studies that show a better outcome in those prescribed antidepressants long term.
First Do No Harm
So, we have a half-baked theory in a vacuum of science that the pharmaceutical industry raced to fill. We have the illusion of short-term efficacy and assumptions about long-term safety. But are these medications actually killing people?
The answer is yes.
Unequivocally, antidepressants cause suicidal and homicidal behavior. The Russian roulette of which patients are vulnerable to these "side effects" is only beginning to be elucidated, and may have something to do with genetic variants affecting how these chemicals are metabolized. Dr. David Healy has worked tirelessly to expose the data that implicates antidepressants in suicidality and violence, maintaining a database for reporting, writing, and lecturing about cases of medication-induced death that could make your soul wince.
What about our most vulnerable?
I have countless patients in my practice who report new onset of suicidal ideation within weeks of starting an antidepressant. In a population where there are only 2 randomized trials, I have grave concerns about postpartum women who are treated with antidepressants before more benign and effective interventions such as dietary modification and thyroid treatment. Hold your heart as you read through these reports of women who took their own and their children's lives while treated with medications.
Then there is the use of these medications in children as young as 2 years old. How did we ever get the idea that this was a safe and effective treatment for this demographic? Look no further than data like Study 329, which cost GlaxoSmithKline 3 billion dollars for their efforts to promote antidepressants to children. These efforts relied on ghost-written and manipulated data that suppressed a signal of suicidality, falsely represented Paxil as outperforming placebo, and contributed to an irrepressible mountain of harm done to our children by the field of psychiatry.
RIP Monoamine Theory
As Moncrieff and Cohen so succinctly state:
"Our analysis indicates that there are no specific antidepressant drugs, that most of the short-term effects of antidepressants are shared by many other drugs, and that long-term drug treatment with antidepressants or any other drugs has not been shown to lead to long-term elevation of mood. We suggest that the term 'antidepressant' should be abandoned."
So, where do we turn?
The field of psychoneuroimmunology now dominates the research, an iconic example of how medicine must surpass its own simplistic boundaries if we are going to begin to chip away at this problem: some 50% of Americans will struggle with mood symptoms, and 11% will be medicated for them.
There are times in our evolution as a cultural species when we need to unlearn what we think we know. We have to move out of the comfort of certainty and into the freeing light of uncertainty. It is from this space of acknowledged unknowing that we can truly grow. From my vantage point, this growth will encompass a sense of wonder – both a curiosity about what symptoms of mental illness may be telling us about our physiology and spirit, as well as a sense of humbled awe at all that we do not yet have the tools to appreciate. For this reason, honoring our co-evolution with the natural world, and sending the body a signal of safety through movement, diet, meditation, and environmental detoxification represents our most primal and most powerful tool for healing.