Wednesday, 22 January 2014

Psychiatry Gone Astray - Myths about Psychiatry - Courtesy of David Healy's Website



Psychiatry Gone Astray - http://davidhealy.org/psychiatry-gone-astray/
January 21, 2014

Editorial note: We follow up the Guilty post last week with a piece written by Peter Gøtzsche that has caused a stir in Denmark and provoked some of the Danish professors he critiques to respond. 

At the Nordic Cochrane Centre, we have researched antidepressants for several years and I have long wondered why leading professors of psychiatry base their practice on a number of erroneous myths. These myths are harmful to patients. Many psychiatrists are well aware that the myths do not hold and have told me so, but they don’t dare deviate from the official positions because of career concerns.

Being a specialist in internal medicine, I don’t risk ruining my career by incurring the professors’ wrath, and I shall try here to come to the rescue of the many conscientious but oppressed psychiatrists and patients by listing the worst myths and explaining why they are harmful.

Myth 1: Your disease is caused by a chemical imbalance in the brain

Most patients are told this, but it is completely wrong. We have no idea which interplay of psychosocial conditions, biochemical processes, receptors and neural pathways leads to mental disorders, and the theories that patients with depression lack serotonin and that patients with schizophrenia have too much dopamine have long been refuted. The truth is just the opposite. There is no chemical imbalance to begin with, but when treating mental illness with drugs, we create a chemical imbalance, an artificial condition that the brain tries to counteract.

This means that you get worse when you try to stop the medication. An alcoholic also gets worse when there is no more alcohol, but this doesn’t mean that he lacked alcohol in the brain when he started drinking.

The vast majority of doctors harm their patients further by telling them that the withdrawal symptoms mean that they are still sick and still need the medication. In this way, the doctors turn people into chronic patients, including those who would have been fine even without any treatment at all. This is one of the main reasons why the number of patients with mental disorders is increasing, and why the number of patients who never return to the labour market is also rising. This is largely due to the drugs and not the disease.

Myth 2: It’s no problem to stop treatment with antidepressants

A Danish professor of psychiatry said this at a recent meeting for psychiatrists, just after I had explained that it was difficult for patients to quit. Fortunately, he was contradicted by two foreign professors also at the meeting. One of them had done a trial with patients suffering from panic disorder and agoraphobia, and half of the patients found it difficult to stop even though they were tapering off slowly. It cannot be because the depression came back, as the patients were not depressed to begin with. The withdrawal symptoms are primarily due to the antidepressants and not the disease.
 

Myth 3: Psychotropic drugs for mental illness are like insulin for diabetes

Most patients with depression or schizophrenia have heard this falsehood over and over again, almost like a mantra, on TV and radio and in newspapers. When you give insulin to a patient with diabetes, you give something the patient lacks, namely insulin. Since we’ve never been able to demonstrate that a patient with a mental disorder lacks something that people who are not sick don’t lack, it is wrong to use this analogy.

Patients with depression don’t lack serotonin, and there are actually drugs that work for depression although they lower serotonin. Moreover, in contrast to insulin, which just replaces what the patient is short of, and does nothing else, psychotropic drugs have a very wide range of effects throughout the body, many of which are harmful. So, also for this reason, the insulin analogy is extremely misleading.
 

Myth 4: Psychotropic drugs reduce the number of chronically ill patients

This is probably the worst myth of them all. US science journalist Robert Whitaker demonstrates convincingly in “Anatomy of an Epidemic” that the increasing use of drugs not only keeps patients stuck in the sick role, but also turns many problems that would have been transient into chronic diseases.

If there had been any truth in the insulin myth, we would have expected to see fewer patients who could not fend for themselves. However, the reverse has happened. The clearest evidence of this is also the most tragic, namely the fate of our children after we started treating them with drugs. In the United States, psychiatrists collect more money from drug makers than doctors in any other specialty, and those who take the most money tend to prescribe antipsychotics to children most often. This raises a suspicion of corruption of academic judgement.

The consequences are damning. In 1987, just before the newer antidepressants (SSRIs or happy pills) came on the market, very few children in the United States were mentally disabled. Twenty years later, it was over 500,000, which represents a 35-fold increase. The number of disabled mentally ill has exploded in all Western countries. One of the worst consequences is that the treatment with ADHD medications and happy pills has created an entirely new disease in about 10% of those treated – namely bipolar disorder – which we previously called manic depressive illness.

Leading psychiatrists have claimed that it is “very rare” that patients on antidepressants become bipolar. That’s not true. The number of children with bipolar disorder increased 35-fold in the United States, which is a serious development, as we use antipsychotic drugs for this disorder. Antipsychotic drugs are very dangerous and are one of the main reasons why patients with schizophrenia live 20 years less than others. I have estimated in my book, ‘Deadly Medicines and Organised Crime’, that just one of the many preparations, Zyprexa (olanzapine), has killed 200,000 patients worldwide.

Myth 5: Happy pills do not cause suicide in children and adolescents


Some professors are willing to admit that happy pills increase the incidence of suicidal behavior while denying that this necessarily leads to more suicides, although it is well documented that the two are closely related. Lundbeck’s CEO, Ulf Wiinberg, went even further in a radio programme in 2011 where he claimed that happy pills reduce the rate of suicide in children and adolescents. When the stunned reporter asked him why there then was a warning against this in the package inserts, he replied that he expected the leaflets would be changed by the authorities!

Suicides in healthy people, triggered by happy pills, have also been reported. The companies and the psychiatrists have consistently blamed the disease when patients commit suicide. It is true that depression increases the risk of suicide, but happy pills increase it even more, at least up to about age 40, according to a meta-analysis of 100,000 patients in randomized trials performed by the US Food and Drug Administration.
 

Myth 6: Happy pills have no side effects

At an international meeting on psychiatry in 2008, I criticized psychiatrists for wanting to screen many healthy people for depression. The recommended screening tests are so poor that one in three healthy people will be wrongly diagnosed as depressed. A professor replied that it didn’t matter that healthy people were treated as happy pills have no side effects!

Happy pills have many side effects. They remove both the top and the bottom of the emotions, which, according to some patients, feels like living under a cheese-dish cover. Patients care less about the consequences of their actions, lose empathy towards others, and can become very aggressive. In school shootings in the United States and elsewhere, a striking number of the people involved have been on antidepressants.

The companies tell us that only 5% get sexual problems with happy pills, but that’s not true. In a study designed to look at this problem, sexual disturbances developed in 59% of 1,022 patients who all had a normal sex life before they started an antidepressant. The symptoms include decreased libido, delayed or no orgasm or ejaculation, and erectile dysfunction, all at a high rate, and with a low tolerance among 40% of the patients. Happy pills should therefore not have been marketed for depression where the effect is rather small, but as pills that destroy your sex life.
 

Myth 7: Happy pills are not addictive

They surely are and it is no wonder because they are chemically related to and act like amphetamine. Happy pills are a kind of narcotic on prescription. The worst argument I have heard about the pills not causing dependency is that patients do not require higher doses. Shall we then also believe that cigarettes are not addictive? The vast majority of smokers consume the same number of cigarettes for years.
 

Myth 8: The prevalence of depression has increased a lot

A professor argued in a TV debate that the large consumption of happy pills wasn’t a problem because the incidence of depression had increased greatly in the last 50 years. I replied that it was impossible to say much about this because the criteria for making the diagnosis had been lowered markedly during this period. If you wish to count elephants in Africa, you don’t lower the criteria for what constitutes an elephant and count all the wildebeest, too.
 

Myth 9: The main problem is not overtreatment, but undertreatment

Again, leading psychiatrists are completely out of touch with reality. In a 2007 survey, 51% of 108 psychiatrists said that they used too much medicine and only 4% said they used too little. In 2001–2003, 20% of the US population aged 18–54 years received treatment for emotional problems, and sales of happy pills are so high in Denmark that every one of us could be in treatment for 6 years of our lives. That is sick.
 

Myth 10: Antipsychotics prevent brain damage

Some professors say that schizophrenia causes brain damage and that it is therefore important to use antipsychotics. However, antipsychotics lead to shrinkage of the brain, and this effect is directly related to the dose and duration of the treatment. There is other good evidence to suggest that one should use antipsychotics as little as possible, as the patients then fare better in the long term. Indeed, one may completely avoid using antipsychotics in most patients with schizophrenia, which would significantly increase the chances that they will become healthy, and also increase life expectancy, as antipsychotics kill many patients.

How should we use psychotropic drugs?

I am not against using drugs, provided we know what we are doing and only use them in situations where they do more good than harm. Psychiatric drugs can be useful sometimes for some patients, especially in short-term treatment, in acute situations. But my studies in this area lead me to a very uncomfortable conclusion:

THIS IS DANGEROUS: Our citizens would be far better off if we removed all the psychotropic drugs from the market, as doctors are unable to handle them. It is inescapable that their availability creates more harm than good. Psychiatrists should therefore do everything they can to treat as little as possible, for as short a time as possible, or not at all, with psychotropic drugs.
 


Monday, 20 January 2014

The Epidemic of Mental Illness: Why? By Marcia Angell - Courtesy of the New York Review of Books



http://www.nybooks.com/articles/archives/2011/jun/23/epidemic-mental-illness-why/?page=1

The Epidemic of Mental Illness: Why?
Marcia Angell



It seems that Americans are in the midst of a raging epidemic of mental illness, at least as judged by the increase in the numbers treated for it. The tally of those who are so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) increased nearly two and a half times between 1987 and 2007—from one in 184 Americans to one in seventy-six. For children, the rise is even more startling—a thirty-five-fold increase in the same two decades. Mental illness is now the leading cause of disability in children, well ahead of physical disabilities like cerebral palsy or Down syndrome, for which the federal programs were created.

A large survey of randomly selected adults, sponsored by the National Institute of Mental Health (NIMH) and conducted between 2001 and 2003, found that an astonishing 46 percent met criteria established by the American Psychiatric Association (APA) for having had at least one mental illness within four broad categories at some time in their lives. The categories were “anxiety disorders,” including, among other subcategories, phobias and post-traumatic stress disorder (PTSD); “mood disorders,” including major depression and bipolar disorders; “impulse-control disorders,” including various behavioral problems and attention-deficit/hyperactivity disorder (ADHD); and “substance use disorders,” including alcohol and drug abuse. Most met criteria for more than one diagnosis. Of a subgroup affected within the previous year, a third were under treatment—up from a fifth in a similar survey ten years earlier.

Nowadays treatment by medical doctors nearly always means psychoactive drugs, that is, drugs that affect the mental state. In fact, most psychiatrists treat only with drugs, and refer patients to psychologists or social workers if they believe psychotherapy is also warranted. The shift from “talk therapy” to drugs as the dominant mode of treatment coincides with the emergence over the past four decades of the theory that mental illness is caused primarily by chemical imbalances in the brain that can be corrected by specific drugs. That theory became broadly accepted, by the media and the public as well as by the medical profession, after Prozac came to market in 1987 and was intensively promoted as a corrective for a deficiency of serotonin in the brain. The number of people treated for depression tripled in the following ten years, and about 10 percent of Americans over age six now take antidepressants. The increased use of drugs to treat psychosis is even more dramatic. The new generation of antipsychotics, such as Risperdal, Zyprexa, and Seroquel, has replaced cholesterol-lowering agents as the top-selling class of drugs in the US.

What is going on here? Is the prevalence of mental illness really that high and still climbing? Particularly if these disorders are biologically determined and not a result of environmental influences, is it plausible to suppose that such an increase is real? Or are we learning to recognize and diagnose mental disorders that were always there? On the other hand, are we simply expanding the criteria for mental illness so that nearly everyone has one? And what about the drugs that are now the mainstay of treatment? Do they work? If they do, shouldn’t we expect the prevalence of mental illness to be declining, not rising?

These are the questions, among others, that concern the authors of the three provocative books under review here. They come at the questions from different backgrounds—Irving Kirsch is a psychologist at the University of Hull in the UK, Robert Whitaker a journalist and previously the author of a history of the treatment of mental illness called Mad in America (2001), and Daniel Carlat a psychiatrist who practices in a Boston suburb and publishes a newsletter and blog about his profession.

The authors emphasize different aspects of the epidemic of mental illness. Kirsch is concerned with whether antidepressants work. Whitaker, who has written an angrier book, takes on the entire spectrum of mental illness and asks whether psychoactive drugs create worse problems than they solve. Carlat, who writes more in sorrow than in anger, looks mainly at how his profession has allied itself with, and is manipulated by, the pharmaceutical industry. But despite their differences, all three are in remarkable agreement on some important matters, and they have documented their views well.

First, they agree on the disturbing extent to which the companies that sell psychoactive drugs—through various forms of marketing, both legal and illegal, and what many people would describe as bribery—have come to determine what constitutes a mental illness and how the disorders should be diagnosed and treated. This is a subject to which I’ll return.

Second, none of the three authors subscribes to the popular theory that mental illness is caused by a chemical imbalance in the brain. As Whitaker tells the story, that theory had its genesis shortly after psychoactive drugs were introduced in the 1950s. The first was Thorazine (chlorpromazine), which was launched in 1954 as a “major tranquilizer” and quickly found widespread use in mental hospitals to calm psychotic patients, mainly those with schizophrenia. Thorazine was followed the next year by Miltown (meprobamate), sold as a “minor tranquilizer” to treat anxiety in outpatients. And in 1957, Marsilid (iproniazid) came on the market as a “psychic energizer” to treat depression.

In the space of three short years, then, drugs had become available to treat what at that time were regarded as the three major categories of mental illness—psychosis, anxiety, and depression—and the face of psychiatry was totally transformed. These drugs, however, had not initially been developed to treat mental illness. They had been derived from drugs meant to treat infections, and were found only serendipitously to alter the mental state. At first, no one had any idea how they worked. They simply blunted disturbing mental symptoms. But over the next decade, researchers found that these drugs, and the newer psychoactive drugs that quickly followed, affected the levels of certain chemicals in the brain.

Some brief—and necessarily quite simplified—background: the brain contains billions of nerve cells, called neurons, arrayed in immensely complicated networks and communicating with one another constantly. The typical neuron has multiple filamentous extensions, one called an axon and the others called dendrites, through which it sends and receives signals from other neurons. For one neuron to communicate with another, however, the signal must be transmitted across the tiny space separating them, called a synapse. To accomplish that, the axon of the sending neuron releases a chemical, called a neurotransmitter, into the synapse. The neurotransmitter crosses the synapse and attaches to receptors on the second neuron, often a dendrite, thereby activating or inhibiting the receiving cell. Axons have multiple terminals, so each neuron has multiple synapses. Afterward, the neurotransmitter is either reabsorbed by the first neuron or metabolized by enzymes so that the status quo ante is restored. There are exceptions and variations to this story, but that is the usual way neurons communicate with one another.

When it was found that psychoactive drugs affect neurotransmitter levels in the brain, as evidenced mainly by the levels of their breakdown products in the spinal fluid, the theory arose that the cause of mental illness is an abnormality in the brain’s concentration of these chemicals that is specifically countered by the appropriate drug. For example, because Thorazine was found to lower dopamine levels in the brain, it was postulated that psychoses like schizophrenia are caused by too much dopamine. Or later, because certain antidepressants increase levels of the neurotransmitter serotonin in the brain, it was postulated that depression is caused by too little serotonin. (These antidepressants, like Prozac or Celexa, are called selective serotonin reuptake inhibitors (SSRIs) because they prevent the reabsorption of serotonin by the neurons that release it, so that more remains in the synapses to activate other neurons.) Thus, instead of developing a drug to treat an abnormality, an abnormality was postulated to fit a drug.

That was a great leap in logic, as all three authors point out. It was entirely possible that drugs that affected neurotransmitter levels could relieve symptoms even if neurotransmitters had nothing to do with the illness in the first place (and even possible that they relieved symptoms through some other mode of action entirely). As Carlat puts it, “By this same logic one could argue that the cause of all pain conditions is a deficiency of opiates, since narcotic pain medications activate opiate receptors in the brain.” Or similarly, one could argue that fevers are caused by too little aspirin.

But the main problem with the theory is that after decades of trying to prove it, researchers have still come up empty-handed. All three authors document the failure of scientists to find good evidence in its favor. Neurotransmitter function seems to be normal in people with mental illness before treatment. In Whitaker’s words:

    Prior to treatment, patients diagnosed with schizophrenia, depression, and other psychiatric disorders do not suffer from any known “chemical imbalance.” However, once a person is put on a psychiatric medication, which, in one manner or another, throws a wrench into the usual mechanics of a neuronal pathway, his or her brain begins to function…abnormally.

Carlat refers to the chemical imbalance theory as a “myth” (which he calls “convenient” because it destigmatizes mental illness), and Kirsch, whose book focuses on depression, sums up this way: “It now seems beyond question that the traditional account of depression as a chemical imbalance in the brain is simply wrong.” Why the theory persists despite the lack of evidence is a subject I’ll come to.

Do the drugs work? After all, regardless of the theory, that is the practical question. In his spare, remarkably engrossing book, The Emperor’s New Drugs, Kirsch describes his fifteen-year scientific quest to answer that question about antidepressants. When he began his work in 1995, his main interest was in the effects of placebos. To study them, he and a colleague reviewed thirty-eight published clinical trials that compared various treatments for depression with placebos, or compared psychotherapy with no treatment. Most such trials last for six to eight weeks, and during that time, patients tend to improve somewhat even without any treatment. But Kirsch found that placebos were three times as effective as no treatment. That didn’t particularly surprise him. What did surprise him was the fact that antidepressants were only marginally better than placebos. As judged by scales used to measure depression, placebos were 75 percent as effective as antidepressants. Kirsch then decided to repeat his study by examining a more complete and standardized data set.

The data he used were obtained from the US Food and Drug Administration (FDA) instead of the published literature. When drug companies seek approval from the FDA to market a new drug, they must submit to the agency all clinical trials they have sponsored. The trials are usually double-blind and placebo-controlled, that is, the participating patients are randomly assigned to either drug or placebo, and neither they nor their doctors know which they have been assigned. The patients are told only that they will receive an active drug or a placebo, and they are also told of any side effects they might experience. If two trials show that the drug is more effective than a placebo, the drug is generally approved. But companies may sponsor as many trials as they like, most of which could be negative—that is, fail to show effectiveness. All they need is two positive ones. (The results of trials of the same drug can differ for many reasons, including the way the trial is designed and conducted, its size, and the types of patients studied.)
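
To make the statistical point concrete, here is a minimal illustrative sketch in Python (my own, not taken from the article) of how often a drug with no real effect at all would still produce the two “positive” trials needed for approval if a sponsor simply keeps running trials. The trial size, the significance threshold and the numbers of sponsored trials are assumptions chosen purely for illustration.

# Illustrative sketch (assumed parameters, not data from the article): how often
# would a drug with zero true effect still yield the two "positive" trials
# (drug beats placebo at roughly p < 0.05) that suffice for approval?
import random
import statistics

def fake_trial(n_per_arm=100):
    """Simulate one placebo-controlled trial of a drug with no true effect.
    Returns True if the drug arm 'wins' at about the 5% level (one-sided z-test)."""
    drug = [random.gauss(0, 1) for _ in range(n_per_arm)]     # symptom change, drug arm
    placebo = [random.gauss(0, 1) for _ in range(n_per_arm)]  # symptom change, placebo arm
    diff = statistics.mean(drug) - statistics.mean(placebo)
    se = (statistics.variance(drug) / n_per_arm + statistics.variance(placebo) / n_per_arm) ** 0.5
    return diff / se > 1.645  # ~5% false-positive rate when the drug does nothing

def approval_chance(n_trials, n_sims=1000):
    """Fraction of simulated programmes with at least two 'positive' trials."""
    hits = 0
    for _ in range(n_sims):
        positives = sum(fake_trial() for _ in range(n_trials))
        if positives >= 2:
            hits += 1
    return hits / n_sims

for n_trials in (2, 6, 10, 20):
    print(n_trials, "trials sponsored ->", round(approval_chance(n_trials), 3))

Even under these deliberately simple assumptions, the chance of collecting two chance “positives” rises steadily with the number of trials sponsored, which is why the selective publication described in the next paragraph matters so much.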

For obvious reasons, drug companies make very sure that their positive studies are published in medical journals and doctors know about them, while the negative ones often languish unseen within the FDA, which regards them as proprietary and therefore confidential. This practice greatly biases the medical literature, medical education, and treatment decisions.

Kirsch and his colleagues used the Freedom of Information Act to obtain FDA reviews of all placebo-controlled clinical trials, whether positive or negative, submitted for the initial approval of the six most widely used antidepressant drugs approved between 1987 and 1999—Prozac, Paxil, Zoloft, Celexa, Serzone, and Effexor. This was a better data set than the one used in his previous study, not only because it included negative studies but because the FDA sets uniform quality standards for the trials it reviews and not all of the published research in Kirsch’s earlier study had been submitted to the FDA as part of a drug approval application.

Altogether, there were forty-two trials of the six drugs. Most of them were negative. Overall, placebos were 82 percent as effective as the drugs, as measured by the Hamilton Depression Scale (HAM-D), a widely used score of symptoms of depression. The average difference between drug and placebo was only 1.8 points on the HAM-D, a difference that, while statistically significant, was clinically meaningless. The results were much the same for all six drugs: they were all equally unimpressive. Yet because the positive studies were extensively publicized, while the negative ones were hidden, the public and the medical profession came to believe that these drugs were highly effective antidepressants.
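
As a quick consistency check on these two figures (my own back-of-the-envelope arithmetic, resting on the assumption that “82 percent as effective” refers to the ratio of mean HAM-D improvements), a 1.8-point drug-placebo gap combined with an 82 percent ratio implies an improvement of roughly 10 points on drug and 8.2 points on placebo; in other words, most of the measured improvement occurs in the placebo group as well.

# Back-of-the-envelope check (assumption: "82 percent as effective" means the
# ratio of mean HAM-D improvement, placebo divided by drug).
gap = 1.8                                       # reported drug-minus-placebo difference, HAM-D points
ratio = 0.82                                    # assumed placebo/drug improvement ratio
drug_improvement = gap / (1 - ratio)            # roughly 10 points
placebo_improvement = ratio * drug_improvement  # roughly 8.2 points
print(round(drug_improvement, 1), round(placebo_improvement, 1))  # -> 10.0 8.2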

Kirsch was also struck by another unexpected finding. In his earlier study and in work by others, he observed that even treatments that were not considered to be antidepressants—such as synthetic thyroid hormone, opiates, sedatives, stimulants, and some herbal remedies—were as effective as antidepressants in alleviating the symptoms of depression. Kirsch writes, “When administered as antidepressants, drugs that increase, decrease or have no effect on serotonin all relieve depression to about the same degree.” What all these “effective” drugs had in common was that they produced side effects, which participating patients had been told they might experience.

It is important that clinical trials, particularly those dealing with subjective conditions like depression, remain double-blind, with neither patients nor doctors knowing whether or not they are getting a placebo. That prevents both patients and doctors from imagining improvements that are not there, something that is more likely if they believe the agent being administered is an active drug instead of a placebo. Faced with his findings that nearly any pill with side effects was slightly more effective in treating depression than an inert placebo, Kirsch speculated that the presence of side effects in individuals receiving drugs enabled them to guess correctly that they were getting active treatment—and this was borne out by interviews with patients and doctors—which made them more likely to report improvement. He suggests that the reason antidepressants appear to work better in relieving severe depression than in less severe cases is that patients with severe symptoms are likely to be on higher doses and therefore experience more side effects.

To further investigate whether side effects bias responses, Kirsch looked at some trials that employed “active” placebos instead of inert ones. An active placebo is one that itself produces side effects, such as atropine—a drug that selectively blocks the action of certain types of nerve fibers. Although not an antidepressant, atropine causes, among other things, a noticeably dry mouth. In trials using atropine as the placebo, there was no difference between the antidepressant and the active placebo. Everyone had side effects of one type or another, and everyone reported the same level of improvement. Kirsch reported a number of other odd findings in clinical trials of antidepressants, including the fact that there is no dose-response curve—that is, high doses worked no better than low ones—which is extremely unlikely for truly effective drugs. “Putting all this together,” writes Kirsch,

    leads to the conclusion that the relatively small difference between drugs and placebos might not be a real drug effect at all. Instead, it might be an enhanced placebo effect, produced by the fact that some patients have broken [the] blind and have come to realize whether they were given drug or placebo. If this is the case, then there is no real antidepressant drug effect at all. Rather than comparing placebo to drug, we have been comparing “regular” placebos to “extra-strength” placebos.

That is a startling conclusion that flies in the face of widely accepted medical opinion, but Kirsch reaches it in a careful, logical way. Psychiatrists who use antidepressants—and that’s most of them—and patients who take them might insist that they know from clinical experience that the drugs work. But anecdotes are known to be a treacherous way to evaluate medical treatments, since they are so subject to bias; they can suggest hypotheses to be studied, but they cannot prove them. That is why the development of the double-blind, randomized, placebo-controlled clinical trial in the middle of the past century was such an important advance in medical science. Anecdotes about leeches or laetrile or megadoses of vitamin C, or any number of other popular treatments, could not stand up to the scrutiny of well-designed trials. Kirsch is a faithful proponent of the scientific method, and his voice therefore brings a welcome objectivity to a subject often swayed by anecdotes, emotions, or, as we will see, self-interest.

Whitaker’s book is broader and more polemical. He considers all mental illness, not just depression. Whereas Kirsch concludes that antidepressants are probably no more effective than placebos, Whitaker concludes that they and most of the other psychoactive drugs are not only ineffective but harmful. He begins by observing that even as drug treatment for mental illness has skyrocketed, so has the prevalence of the conditions treated:

    The number of disabled mentally ill has risen dramatically since 1955, and during the past two decades, a period when the prescribing of psychiatric medications has exploded, the number of adults and children disabled by mental illness has risen at a mind-boggling rate. Thus we arrive at an obvious question, even though it is heretical in kind: Could our drug-based paradigm of care, in some unforeseen way, be fueling this modern-day plague?

Moreover, Whitaker contends, the natural history of mental illness has changed. Whereas conditions such as schizophrenia and depression were once mainly self-limited or episodic, with each episode usually lasting no more than six months and interspersed with long periods of normalcy, the conditions are now chronic and lifelong. Whitaker believes that this might be because drugs, even those that relieve symptoms in the short term, cause long-term mental harms that continue after the underlying illness would have naturally resolved.

The evidence he marshals for this theory varies in quality. He doesn’t sufficiently acknowledge the difficulty of studying the natural history of any illness over a fifty-some-year time span during which many circumstances have changed, in addition to drug use. It is even more difficult to compare long-term outcomes in treated versus untreated patients, since treatment may be more likely in those with more severe disease at the outset. Nevertheless, Whitaker’s evidence is suggestive, if not conclusive.

If psychoactive drugs do cause harm, as Whitaker contends, what is the mechanism? The answer, he believes, lies in their effects on neurotransmitters. It is well understood that psychoactive drugs disturb neurotransmitter function, even if that was not the cause of the illness in the first place. Whitaker describes a chain of effects. When, for example, an SSRI antidepressant like Celexa increases serotonin levels in synapses, it stimulates compensatory changes through a process called negative feedback. In response to the high levels of serotonin, the neurons that secrete it (presynaptic neurons) release less of it, and the postsynaptic neurons become desensitized to it. In effect, the brain is trying to nullify the drug’s effects. The same is true for drugs that block neurotransmitters, except in reverse. For example, most antipsychotic drugs block dopamine, but the presynaptic neurons compensate by releasing more of it, and the postsynaptic neurons take it up more avidly. (This explanation is necessarily oversimplified, since many psychoactive drugs affect more than one of the many neurotransmitters.)

With long-term use of psychoactive drugs, the result is, in the words of Steve Hyman, a former director of the NIMH and until recently provost of Harvard University, “substantial and long-lasting alterations in neural function.” As quoted by Whitaker, the brain, Hyman wrote, begins to function in a manner “qualitatively as well as quantitatively different from the normal state.” After several weeks on psychoactive drugs, the brain’s compensatory efforts begin to fail, and side effects emerge that reflect the mechanism of action of the drugs. For example, the SSRIs may cause episodes of mania, because of the excess of serotonin. Antipsychotics cause side effects that resemble Parkinson’s disease, because of the depletion of dopamine (which is also depleted in Parkinson’s disease). As side effects emerge, they are often treated by other drugs, and many patients end up on a cocktail of psychoactive drugs prescribed for a cocktail of diagnoses. The episodes of mania caused by antidepressants may lead to a new diagnosis of “bipolar disorder” and treatment with a “mood stabilizer,” such as Depakote (an anticonvulsant) plus one of the newer antipsychotic drugs. And so on.

Some patients take as many as six psychoactive drugs daily. One well-respected researcher, Nancy Andreasen, and her colleagues published evidence that the use of antipsychotic drugs is associated with shrinkage of the brain, and that the effect is directly related to the dose and duration of treatment. As Andreasen explained to The New York Times, “The prefrontal cortex doesn’t get the input it needs and is being shut down by drugs. That reduces the psychotic symptoms. It also causes the prefrontal cortex to slowly atrophy.”*

Getting off the drugs is exceedingly difficult, according to Whitaker, because when they are withdrawn the compensatory mechanisms are left unopposed. When Celexa is withdrawn, serotonin levels fall precipitously because the presynaptic neurons are not releasing normal amounts and the postsynaptic neurons no longer have enough receptors for it. Similarly, when an antipsychotic is withdrawn, dopamine levels may skyrocket. The symptoms produced by withdrawing psychoactive drugs are often confused with relapses of the original disorder, which can lead psychiatrists to resume drug treatment, perhaps at higher doses.

Unlike the cool Kirsch, Whitaker is outraged by what he sees as an iatrogenic (i.e., inadvertent and medically introduced) epidemic of brain dysfunction, particularly that caused by the widespread use of the newer (“atypical”) antipsychotics, such as Zyprexa, which cause serious side effects. Here is what he calls his “quick thought experiment”:

    Imagine that a virus suddenly appears in our society that makes people sleep twelve, fourteen hours a day. Those infected with it move about somewhat slowly and seem emotionally disengaged. Many gain huge amounts of weight—twenty, forty, sixty, and even one hundred pounds. Often their blood sugar levels soar, and so do their cholesterol levels. A number of those struck by the mysterious illness—including young children and teenagers—become diabetic in fairly short order…. The federal government gives hundreds of millions of dollars to scientists at the best universities to decipher the inner workings of this virus, and they report that the reason it causes such global dysfunction is that it blocks a multitude of neurotransmitter receptors in the brain—dopaminergic, serotonergic, muscarinic, adrenergic, and histaminergic. All of those neuronal pathways in the brain are compromised. Meanwhile, MRI studies find that over a period of several years, the virus shrinks the cerebral cortex, and this shrinkage is tied to cognitive decline. A terrified public clamors for a cure.

    Now such an illness has in fact hit millions of American children and adults. We have just described the effects of Eli Lilly’s best-selling antipsychotic, Zyprexa.

If psychoactive drugs are useless, as Kirsch believes about antidepressants, or worse than useless, as Whitaker believes, why are they so widely prescribed by psychiatrists and regarded by the public and the profession as something akin to wonder drugs? Why is the current against which Kirsch and Whitaker and, as we will see, Carlat are swimming so powerful? I discuss these questions in Part II of this review.

—This is the first part of a two-part article.

Sunday, 19 January 2014

"Psychiatric diagnosis as a political device," Excellent article as always by Dr Joanna Moncrieff - Courtesy of Palgrave Journals - Read Full article via link




http://www.palgrave-journals.com/sth/journal/v8/n4/full/sth200911a.html

Psychiatric diagnosis as a political device

Joanna Moncrieff
Department of Mental Health Sciences, University College London, Gower Street, London, W1W 7EJ, UK. E-mail: j.moncrieff@ucl.ac.uk

Abstract

Diagnosis in psychiatry is portrayed as the same type of activity as diagnosis in other areas of medicine. However, the notion that psychiatric conditions are equivalent to physical diseases has been contested for several decades. In this paper, I use the work of Jeff Coulter and David Ingelby to explore the role of diagnosis in routine psychiatric practice. Coulter examined the process of identification of mental disturbance and suggested that it was quite different from the process of identifying a physical disease, as it was dependent on social norms and circumstances. Ingelby pointed out that it was the apparent medical nature of the process that enabled it to act as a justification for the actions that followed. I describe the stories of two patients, which illustrate the themes Ingelby and Coulter identified. In particular they demonstrate that, in contrast to the idea that diagnosis should determine treatment, diagnoses in psychiatry are applied to justify predetermined social responses, designed to control and contain disturbed behaviour and provide care for dependents. Hence psychiatric diagnosis functions as a political device employed to legitimate activities that might otherwise be contested.

Introduction

Modern diagnostic systems in psychiatry, like the Diagnostic and Statistical Manual (DSM), now in its fourth version and soon to be updated, have been enormously influential. Many formal concepts like ‘clinical depression’, attention deficit hyperactivity disorder (ADHD) and more recently bipolar disorder have been incorporated into lay language and understandings, helping to shape the way ordinary people view themselves and their situations (Healy, 2004; Rose, 2004). These systems also form the basis of a vast research effort aimed at mapping the prevalence, aetiology, outcome and treatment response of the entities defined. They are also used in pharmaceutical marketing, which often starts with raising awareness of a particular diagnostic category, before going on to promote a drug for its treatment (Koerner, 2002).

The basis of modern diagnostic systems, the idea that psychiatric disorders can be conceptualized in the same terms as medical diseases, has been challenged for decades now. Antipsychiatrists such as Laing and Szasz, and sociologists such as Conrad, stressed the differences between medical diseases and psychiatric conditions and pointed out the social control function served by dressing up normative judgements about behaviour as medical facts. Although their work provided an important conceptual analysis, it often relied on extreme and exceptional examples of the use of psychiatric diagnosis, such as the incarceration of dissidents in the old Soviet Union. Less attention has been paid to the nature of routine psychiatric practice. More recent analyses have highlighted the tautological and redundant nature of psychiatric diagnoses (Bentall, 1990). A diagnosis is applied on the basis of observations of an individual's behaviour, but diagnostic categories are defined by collections of typical behaviours.

Champions of the idea that psychiatric disorders are like other medical diseases have continued to assert their position (Craddock et al, 2008), but have not answered the basic arguments posed by their challengers. David Pilgrim recently argued that the debate had been rehearsed so many times that the question that remained was not about the validity of psychiatric diagnosis, but why it has survived, and what interests it serves (Pilgrim, 2007).

In this paper I examine the gulf between what psychiatric diagnosis purports to be and how it functions in everyday practice. I have returned to the analyses of Jeff Coulter, a sociologist with an ethnomethodological orientation, and David Ingelby, a psychologist and philosopher, whose work examines the differences between psychiatric diagnosis and diagnosis in the rest of medicine. In particular, it suggests that contrary to other areas of medicine, where diagnosis determines the appropriate treatment to be given, in psychiatry diagnosis is merely a ‘signal’ for the application of pre-existing institutional arrangements. I shall present the stories of two real psychiatric patients, who are reasonably typical of people with severe and long-standing psychiatric problems. These stories illustrate how psychiatric diagnosis can be understood as functioning as a political device, in the sense that it legitimates a particular social response to aberrant behaviour of various sorts, but protects that response from any democratic challenge.

What is Psychiatric Diagnosis?

First however, it is necessary to clarify the meaning of the term ‘diagnosis’. Diagnosis is a medical concept which covers both the process of identifying a disease, and the designation of that disease. Reaching a ‘diagnosis’ involves investigations and observations that help to identify the nature of the underlying disease that is thought to be causing the individual's symptoms. Having a diagnosis indicates that the nature of the underlying disease has been certainly or probably ascertained. Everyone with the same diagnosis is assumed to have the same disease, the same biological abnormality. This means that their outcomes are determined by the nature of the disease, within the range of outcomes associated with that disease. They can also be predicted to respond to a given set of treatments that are known or thought to modify the particular disease process. Indeed, establishing the correct treatment is the main practical function of diagnosis in medicine.

The use of the concept of diagnosis in psychiatry implies an equivalence between psychiatric classification and the process of medical diagnosis with the implication that psychiatric problems are caused by a bodily dysfunction. Therefore diagnosis in psychiatry should determine the nature of treatment in the same way that it does in medicine.

Some early psychiatric classification systems used the term diagnosis loosely and metaphorically. The first versions of the DSM were influenced by psychoanalytical concepts of the nature and causation of psychiatric conditions. In contrast, the development of DSM III has been seen as a deliberate remedicalization of psychiatric classification. References to psychoanalytical concepts and sociological thinking were expunged, in favour of brief lists of supposedly objective empirical criteria for the application of different diagnoses. The manual is reported to have contained the statement that ‘mental disorders are a subset of medical disorders’ in an early draft, which was taken out after complaints by the American Psychological Association (Kutchins and Kirk, 1997). In addition, a huge research effort was put into demonstrating the reliability, or reproducibility, of DSM III categories, with almost no attention paid to their validity. This was necessary so that the concepts outlined in the manual could legitimately be employed in medical research designs such as epidemiological studies and clinical trials.

The construction of medical-like diagnostic systems can be seen as one of a number of ways in which psychiatry increased its medical orientation from the 1970s onwards, in the face of attacks from the antipsychiatry movement and elsewhere (Wilson, 1993). The influence of psychoanalysis and sociological theories of mental illness were seen as making psychiatry vulnerable to competition and economic pressures and the reassertion of the ‘psychiatry is like any other medical speciality’ argument was the line taken to safeguard psychiatry's dominant role in the management of madness and mental distress (Wilson, 1993). As well as detailed diagnostic manuals like DSM III, the medical credentials of psychiatry were promoted through an increasing focus on high-tech biological research and greater links with the pharmaceutical industry.

Before I go any further it is necessary to assess whether the view of psychiatric disorders as manifestations of bodily diseases is justified. In contrast to most medical conditions like diabetes, tuberculosis and heart disease, no psychiatric condition can be traced to a specific dysfunctional bodily process, excepting dementia, and the occasional neurological conditions that present to psychiatrists. In other words, there is no agreed physical aetiology for psychiatric disorders, although there are numerous and ongoing speculations about physical processes that might be involved.

In addition, despite claims to the contrary, there is no evidence that psychiatric conditions respond to physical interventions in a specific manner, as would be expected on the basis of a disease model. The effects of psychiatric drugs can be explained by the fact that they are psychoactive substances that produce altered, drug-induced states. These altered states may effectively suppress psychiatric ‘symptoms’. There is no evidence that any class of psychiatric drug acts by reversing or partially reversing an underlying physical process that is responsible for producing symptoms (Moncrieff and Cohen, 2005). Therefore the idea that the behaviours seen by psychiatrists are indicative of an underlying disease is simply an assumption. Diagnostic labels embody this assumption, but also obscure the fact that it remains an assumption by glossing over the complex subjective judgements involved in the process of applying the label.

Coulter and Ingelby on the Nature of Psychiatric Diagnosis

Coulter's analysis, based on an ethnomethodological paradigm and strongly influenced by the later Wittgenstein, suggests that psychiatric diagnosis is a quite different sort of activity from the medical process of diagnosis (Coulter, 1979). ‘Psychiatric practices are not poor cousins of physical diagnoses, for they do not belong to that family of practices, however medical are some of the consequences’ (p. 149). In contrast to medical diagnoses, which result from the application of biological knowledge, Coulter argues that the designation of insanity or mental illness is not made on the basis of scientific methods, and cannot claim to be objective and independent of context, as scientific judgements are meant to be. Instead someone is said to be mad or mentally ill when their behaviour infringes social norms of intelligibility. What counts as intelligible, reasonable or rational is determined by unwritten rules of conduct that are constituted by social groups. In contrast to scientific laws, which are universal, judgements about conduct are always dependent on context. Different rules apply in different situations and what counts as infringement of those rules also varies: ‘psychiatric diagnoses are predicated on social and moral contingencies relating both to a person's conduct and to the context within which the diagnosis is being made’ (p. 147).

Although they are rarely explicit, rules of conduct are constituted and understood by most members of the social group they operate within. Again this contrasts with scientific laws which emanate from the physical world and require specialist scientific knowledge to be understood and applied. Coulter suggests that ascriptions of mental illness or insanity are first made by the individual themselves or by family members, social workers or general practitioners, before the prospective patient reaches a psychiatrist. The diagnosis is merely a formal sounding label given to behaviour that has already been identified as problematic. It is a ‘response to mundane social and moral requirements, and not to the development of some esoteric branch of knowledge’ (p. 147).

Rules of conduct are bound up with the practical arrangements that exist for enforcing the rules, such as arrangements or sanctions for dealing with people who do not abide by them. Implicit in Coulter's analysis is the idea that psychiatric diagnosis serves the pragmatic function of enabling the appropriate application of these arrangements. In Coulter's words diagnoses are ‘devices for pragmatic use in ward or treatment allocation’ (p. 149).

Ingelby accepts Coulter's emphasis on intelligibility and how it is understood with reference to rules of conduct (Ingelby, 1982). Ingelby's interest, however, is in the contradiction between the way Coulter demonstrates that psychiatric diagnosis is applied and the way it is portrayed. What Ingelby points out is that psychiatric diagnosis can only function in the way outlined by Coulter because it presents itself as an activity other than that which it actually consists of. In other words, it is only because psychiatry presents its activities as essentially medical in nature, that it is able to fulfil the function of social control that Coulter identifies, in the sense of enforcing certain rules of conduct. A psychiatric diagnosis brings with it consequences that follow directly from the implication that it designates a medical disease.

A psychiatric diagnosis therefore allows a situation to be construed within a medical framework and this framework obscures the values and judgements embedded in psychiatric activity. The framework allows interventions designed to curb or control unwanted behaviour to be conceptualized as medical treatments – in other words, as treatments that modify the underlying disease process and thereby help restore normal functioning. Psychiatric treatments are presented as capable of changing the course of the condition, not merely suppressing or ameliorating its symptoms or manifestations. From this it follows that the problem or disorder can be regarded as temporary in nature and can be expected to abate as soon as effective treatment is administered. If it does not, then a search for more effective treatments commences.

Mental health legislation is also premised on the notions of the treatability of mental disorders. Involuntary commitment to hospital is presented as serving the best interests of the patient because treatment will restore them to normal functioning. It is only their temporarily disordered state of mind that prevents them from perceiving their need for treatment. Mental Health legislation cannot be used simply to incarcerate someone whose behaviour is odd, antisocial, violent or dangerous. It has to be justified on the basis of providing ‘treatment’ that will benefit the individual by alleviating their illness or disorder. Although the ‘treatability’ criterion has arguably been weakened in the latest Mental Health Act of England and Wales passed in 2008, and this may reflect a desire in parts of government that the Act be used to detain people on the grounds of dangerousness alone, it remains the case that ‘appropriate medical treatment’ has to be available to justify the use of coercive measures (UK Parliament, 2007).

Apart from disguising control, psychiatric diagnosis also enables the provision of care for adults. In modern societies the presence of physical illness or disability entails an unquestioned entitlement to state-funded care and support, and other prerogatives of the sick role. The implication provided by a psychiatric diagnosis that disturbed behaviour originates from a bodily disease, enables these prerogatives to be extended to cover numerous situations in which people find it difficult to care for themselves. Temporary sick notes for people in distress can be justified by a diagnosis of ‘depression’ and long-term care can be provided for people with more severe behavioural problems. Psychiatric diagnosis therefore authorizes the allocation of state funds, but forecloses any debate about this area of policy. In addition, by eradicating culpability for the actions concerned, since these are the result of disease rather than intention, a psychiatric diagnosis can exempt people from the usual sanctions of the criminal justice system.

Coulter examines several empirical sociological investigations of the way in which psychiatric diagnoses are applied in practice. The most well-known is perhaps the Rosenhan experiment, which was published in 1973 in the leading scientific journal Science (Rosenhan, 1973). In this experiment eight volunteers contrived to be admitted to psychiatric hospitals by presenting at appointments saying they were hearing a voice saying ‘empty’, ‘hollow’ or ‘thud’. After admission all behaved entirely normally and all were discharged with a diagnosis of ‘schizophrenia in remission’. This is especially remarkable since the ‘symptom’ at presentation, that of hearing a voice speaking a single word, is quite unlike typical auditory hallucinations that occur in people with psychotic disorders like schizophrenia. Rosenhan concluded that the psychiatric system could not distinguish between the sane and the insane. What Coulter is interested in is the fact that the context of being in a mental hospital determined the way a diagnosis was applied. In this context, the psychiatric system assumes that people are mentally ill, and observations of their behaviour are shaped to fit this assumption. In other contexts like the military, or assessments for welfare benefits, the assumption that operates is that people are well, unless proven otherwise (p. 146).

What the Rosenhan experiment tells us, and why it caused the consternation it did, is that in ordinary practice, psychiatric diagnoses are applied to whoever presents themselves or is presented to psychiatric services, unless a good case can be made that they should be dealt with by another institution. Psychiatric services simply apply a diagnosis to whoever they are asked to deal with. The diagnosis signals that the situation can be re-interpreted according to a medical framework. This framework obliterates the memory that what psychiatric ‘treatment’ consists of is a particular social response to certain problematic behaviours. It conceals the fact that the response could be different. As Ingelby points out: ‘If it were accepted that the meaning of the label were simply to signal a certain organisational response, then questions would immediately arise about the propriety of those responses’ (Ingelby, 1982, p. 137).

Case Studies

The stories of two individual mental health patients are presented below. Both stories are fairly typical of people with more severe forms of psychiatric disorder. The patients are described using pseudonyms, and factual details have been changed to preserve their anonymity. Bill was chosen as he represents a group of patients whose behaviour clearly does not conform with the criteria for the diagnosis of any specific mental illness such as schizophrenia or manic depression. He appears to represent an example of people who end up in the psychiatric system because no other institution appears able or willing to deal with them, but the sort of problems he presents is not uncommon. I could have chosen a number of other patients who raise similar issues. Tanya was chosen because her problems conform more nearly to the picture of a severe psychiatric disorder, namely schizophrenia, although, like many such patients, she does not exhibit the typical ‘textbook’ symptoms of the disorder.
 

Bill

Bill was first admitted to psychiatric hospital at the age of 29. Over the next few years he went in and out of hospital several times, and in his late 30s, when his elderly father could no longer cope with caring for him, he was admitted as a long-stay patient. He has spent the last 15 years in hospital. At school he was described as a ‘loner’; he worked very little and was dependent on his parents until his admission. The ‘symptoms’ that led to his admission to hospital, and that he continued to display intermittently over the following years, consisted of periodic outbursts of violence, lack of activity, little communication, rigidity and resistance to change. He usually became violent or abusive if he was asked to do something he did not want to do, or if his wishes were challenged in some way. Throughout the early years of his stay in hospital he was diagnosed with schizophrenia and treated with injectable and oral antipsychotic medication. He spent most of his time sitting in the same chair smoking cigarettes. He engaged in little conversation and gave mostly one-word answers in response to questions. He rarely took part in any of the organized activities provided, nor did he engage in informal contact with staff. He did communicate with some of the other patients, among whom he had great authority; he was able to make them lend him money and run errands for him.

A few years ago a new psychiatric team took over Bill's care and decided that there was no basis for the diagnosis of schizophrenia. The team started to reduce his considerable dose of antipsychotic medications, at which point he became more talkative and animated. However, his violent outbursts also became more frequent, and so his medication was increased again. About a year later he made a violent and premeditated attack on another patient, which resulted in serious injuries. Although he was arrested for this attack, no charges have been pressed and the police have shown no further interest in the case, despite repeated requests from the psychiatric team and hospital management. Shortly after this incident, Bill attacked a member of staff in a similar fashion, although the injuries were less severe on this occasion. At this point he was transferred to a locked ward, for which he needed to be detained under a section of the Mental Health Act 1983 (although he would have gone quite willingly). In the section papers he was given a diagnosis of psychopathic disorder and he was detained on the legal grounds that treatment might alleviate his condition or prevent deterioration. The Mental Health Trust management were unhappy about the use of the clause for psychopathic disorder and queried the section. In the locked ward environment he was again labelled as having schizophrenia and his drug treatment was increased in response to further violent attacks. The discharge summary from this unit refers to him as having a ‘well documented diagnosis of paranoid schizophrenia’ (discharge summary, dated 17 May 2007). His refusal to take outside exercise and his concerns about being harmed by local ‘yardie’ gangs were interpreted as evidence of psychotic symptoms, and he was referred to as having ‘persistent persecutory thoughts’ (ibid). However, he was never known to take much exercise, and had long expressed racist views. His concerns about ‘yardie gangs’ are also understandable as a reaction to being moved from a hospital situated in a largely white, middle-class area, in which he had resided for over 10 years, to a unit located in an inner-city area with an ethnically more diverse population. At this stage an application was made for a placement in a long-term secure unit, but funding for this was turned down on the grounds that there was a lack of consensus about his diagnosis.

Bill's story illustrates some of the functions of psychiatric diagnosis. Firstly, his long-term hospitalization and drugging were justified by giving him a diagnosis of schizophrenia. No one questioned this diagnosis for many years, despite the fact that he never displayed any characteristic symptoms of schizophrenia. Secondly, the changing of his diagnosis from schizophrenia to psychopathic disorder caused discomfort within the system, and it was soon changed back to schizophrenia by the staff of the psychiatric secure unit. In the secure unit, actions and utterances that appear quite understandable were interpreted as psychotic symptoms. Concerns about these symptoms were used to justify increasing the amount of medication he was prescribed, but his medication was also clearly increased in response to his continued violent outbursts and unpredictable behaviour. Thirdly, the police and criminal justice system took the fact of Bill's status as a psychiatric patient as a cue not to pursue criminal charges against him for a serious offence which could have sustained a charge of grievous bodily harm. The police indicated that they were unlikely to persuade the office of the Director of Public Prosecutions to take the case up, because the likelihood of obtaining a conviction against someone with a history of long-term contact with psychiatric services was so low. Lastly, the local funding body used the lack of consensus over Bill's diagnosis to justify refusing to fund an expensive placement. The implication was that if the diagnosis had been maintained as schizophrenia, the funding would have been awarded, although it is likely that the funding body was looking for reasons to make savings.
 

Tanya

Tanya is a 19-year-old girl who has been under the care of psychiatric services since she was 12. She has also been diagnosed as having schizophrenia. She has been an inpatient in various psychiatric facilities continuously for 3 years now, since her mother could no longer cope with her at home. She spends most of her time alone and talks little to staff or other patients. Occasionally she listens to music but she shows little interest in anything else. She does ask to spend time at home with her mother, but when she does, she spends most of her time in bed. Occasionally her speech is bizarre and she speaks about childhood friends and experiences in a disjointed way that is difficult to follow. Sometimes she expresses fears that people are trying to harm her, and this fear prevents her from going out alone, she says. She has never said that she hears voices, but it is inferred that she does because she laughs and sometimes talks to herself when she is alone.

There is no doubt that something unusual is occurring in Tanya's inner mental world, which she cannot share with other people and that prevents her from communicating and otherwise functioning in a normal way. The nature of her inner experiences is unclear, as is often the case. Since she has been in hospital she has been treated with five different ‘antipsychotic’ drugs. These have been changed because of adverse effects and because of a lack of improvement. She has just started taking clozapine, a drug that is thought to produce some improvement when other drugs have failed. She also periodically receives other sedative drugs when she is agitated or distressed and she has a ‘rehabilitation’ programme consisting of occupational therapy and other supervised activities to try and help her function more independently. So far she has made little progress.

In contrast to Bill, Tanya had problems that could be more closely identified with the pattern of behaviour characterized as schizophrenia, but even in her case she did not explicitly describe the typical symptoms, such as auditory hallucinations. These were instead inferred from her bizarre behaviour, and all that was directly observable was that she seemed preoccupied with a largely inaccessible internal world. In common with around 30 per cent or more of people given this diagnosis, she did not recover, but has remained severely impaired for many years now. The diagnosis firstly enabled her to be removed from her mother, who was finding it difficult to cope with her, and admitted to hospital. Tanya was detained using the Mental Health Act on several occasions during her adolescence, under which she was classified as having a ‘mental illness’. Again, the diagnosis entails that she is cared for in a state-funded institution, without any questions asked about her ability to work or provide for herself.

In terms of her day-to-day care, the diagnosis of schizophrenia allows interventions such as the provision of medication and support to be presented not simply as sedative measures that may help suppress her mental preoccupations, but as treatments for a specific disease designed to produce a recovery. This has the advantage that if she does not show any improvement with the medications, as she clearly has not, different medications can be tried, doses can be increased, or additional drugs added. There is always something that can be done, and staff involved in her day-to-day care can feel that they are able to make a fundamental difference to her outcome through their specialist training in the application of medical treatments. When it was recently decided to start prescribing clozapine, one staff member commented, ‘isn’t it exciting’. The alternative position is to admit that the most that can be achieved is a modest improvement in her functioning, and that this could be done by anyone who could provide a modicum of containment and encouragement. What is, in reality, a difficult task of trying to help someone whose focus is an internal world that she will not reveal, whose ability to relate to others is severely compromised, and who is likely to remain in this state for many years to come, can be presented instead as a series of specific medical interventions, each bringing with it the expectation of a breakthrough. When the time comes for her to be discharged from hospital, once the list of drugs and other interventions has been exhausted, the state will fund her ongoing care in a staffed care home.

Discussion

The idea that psychiatry is an institution of social control is of course a familiar one (Foucault, 1965; Szasz, 1994; Conrad, 2009). However, there has been little examination of the actual mechanisms whereby the conceptual basis of psychiatry enables this control to be exercised. There has also been little attention paid to the other social function of psychiatry: the provision of care. Coulter and Ingelby took up the theme by examining the role of diagnosis in facilitating social control, and particularly the implications of its medical nature. Rosenhan's experiment revealed the process of initial application of a psychiatric label, but few people have yet examined the way that a psychiatric diagnosis functions, once applied, over the course of patients’ lives in actual psychiatric settings.

The patients’ stories presented here demonstrate how psychiatric diagnosis facilitates the control of people who exhibit violent and antisocial behaviours, whom the criminal justice system does not want to entertain. It is also employed to legitimate the provision of long-term care and to motivate and sustain the morale of the professionals who provide this care. There is no doubt that the behaviour of both the individuals described was, by most standards, highly abnormal and dysfunctional, but this is not in itself evidence of a specific physical disease. In both cases, the diagnosis they were given served clear social rather than medical functions.

Therefore, as Coulter and Ingelby suggest, psychiatric diagnosis appears to act as a political device to enable the application of various social arrangements for the care and control of people whose behaviour presents problems to themselves or to the people around them. Far from determining what sort of ‘treatment’ will be given, the diagnosis is invoked post hoc because the irreducible medical meaning of a diagnosis allows for those responses to be construed in a certain light. It allows behavioural control to be presented as treatment and it sanctions the release of state funds for support, the desirability of which might otherwise be challenged. It conceals the coercive aspects of psychiatric care but it also maintains hope and morale among staff, by encouraging the belief that the interventions that they are specially trained to apply make a fundamental difference to the outcome of psychiatric problems.

Over recent decades, western society has accepted the mass use of prescribed psychotropic drugs for everyday problems. This phenomenon is presented as the treatment of previously unrecognized psychiatric disorders, which clinicians are now encouraged to diagnose as depression, anxiety, attention deficit disorder or bipolar disorder. Another interpretation is that diagnostic fashions follow marketing imperatives. David Healy has shown how everyday nerves were transformed from anxiety to depression in order to market SSRI antidepressants and that the diagnosis of bipolar disorder was promoted to sell antipsychotics (Healy, 2004; Healy, 2006). These examples illustrate how the concept of psychiatric diagnosis can be exploited for profit and how particular diagnoses are employed by leading drug companies to expand their markets.

Pilgrim rightly concludes that the interests of the psychiatric profession and the pharmaceutical industry have helped to sustain the practice of psychiatric diagnosis (Pilgrim, 2007). However, the current analysis suggests there are more fundamental reasons for its survival, as highlighted by theories of medicalization and social control (Conrad, 2009). Psychiatric diagnosis forms a key part of the framework that supports the existing social response to certain problematic behaviours. It is a vital step in the medicalization of social problems. By purporting to indicate the presence of an objectively identifiable bodily disease, psychiatric diagnosis is able to re-designate social problems as medical ones, and the social responses to those problems as medical treatment.

By concealing the political nature of the responses to the situations that are labelled as ‘mental illness’, psychiatric diagnosis prevents these responses from being questioned and scrutinized. It allows the state to delegate a difficult area of social policy to supposed technical experts, and thus to remove it from the political and democratic arena. An effective challenge to the concept of diagnosis would entail a challenge to an entire body of flexible institutional arrangements designed to deal with those who infringe certain social norms. It would open up complex and thorny questions about how society should respond to those whose behaviour is disruptive or dangerous to other people, and under what circumstances the state should provide ongoing care and financial support. Psychiatry would be revealed, as Coulter suggests it should be, as a ‘practical moral enterprise’ (Coulter, 1979, p. 151) that requires democratic participation and control.

References

    Bentall, R.P. (1990) Reconstructing Schizophrenia. London: Routledge.
    Conrad, P. (2009) Medicalization and social control. Annual Review of Sociology 18: 209–232.
    Coulter, J. (1979) The Social Construction of Mind. London: Macmillan.
    Craddock, N. et al (2008) Wake-up call for British psychiatry. British Journal of Psychiatry 193(1): 6–9.
    Foucault, M. (1965) Madness and Civilisation. London: Tavistock.
    Healy, D. (2004) Shaping the intimate: Influences on the experience of everyday nerves. Social Studies of Science 34(2): 219–245.
    Healy, D. (2006) The latest mania: Selling bipolar disorder. PLoS Medicine 3(4): e185.
    Ingelby, D. (1982) The social construction of mental illness. In: P. Wright and A. Treacher (eds.) The Problem of Medical Knowledge. Edinburgh, UK: Edinburgh University Press, pp. 123–143.
    Koerner, B. (2002) Disorders made to order. Mother Jones 27 (July/August): 58–63.
    Kutchins, H. and Kirk, S.A. (1997) Making us Crazy. DSM – the Psychiatric Bible and the Creation of Mental Disorders. New York: The Free Press.
    Moncrieff, J. and Cohen, D. (2005) Rethinking models of psychotropic drug action. Psychotherapy and Psychosomatics 74(3): 145–153.
    Pilgrim, D. (2007) The survival of psychiatric diagnosis. Social Science & Medicine 65(3): 536–547.
    Rose, N. (2004) Becoming neurochemical selves. In: N. Stehr (ed.) Biotechnology, Commerce and Civil Society. New Brunswick, NJ: Transaction Publishers, pp. 89–128.
    Rosenhan, D.L. (1973) On being sane in insane places. Science 179(4070): 250–258.
    Szasz, T. (1994) Cruel Compassion. Psychiatric Control of Society's Unwanted. New York: John Wiley & Sons.
    UK Parliament. (2007) Mental Health Act 2007.
    Wilson, M. (1993) DSM-III and the transformation of American psychiatry: A history. American Journal of Psychiatry 150(3): 399–410.


Acknowledgements

I would like to thank Graham Scambler and Paul Higgs for commenting on drafts of this manuscript, and Phil Thomas for setting up the symposium on the philosophy of psychiatric diagnosis at the Royal College of Psychiatrists' annual meeting, at which the initial ideas for this paper were presented and discussed.

Wednesday, 15 January 2014

Conners Rating Scale - Subjective or objective tool? - "ADHD is a national disaster of dangerous proportions," says Dr Keith Conners, speaking about how his checklist has been used - Courtesy of Ritalindeath.com

Conners Rating Scale - Subjective or objective tool?
 

This has been around for years. It was developed by a psychiatrist. The revised edition claims to identify childhood and adolescent behavioural 'disorders' such as ADHD and ODD.

This “test” has been around for years.  It was developed by Keith Conners, a psychiatrist.  Today the revised edition claims to “identify childhood and adolescent ADHD behavioral problems and psychopathology.”  Sadly, these claims fail to mention the early invalidation of this testing method.  In the *Right to Privacy Hearings on the Federal Role in the Use of Behavior Modification Drugs Used on School Age Children (September 29, 1970), it was noted that this type of screening method was used to obtain children for drug research.  Dr. Keith Conners had a financial stake in the outcome of this research.  It was established in these hearings that parents/guardians were not required to participate.  It was classified as research into the effects of psychotropic drugs on children.  The Conners Rating Scale was never validated as a standardized test.  In today’s psychiatric propaganda marketed to the public regarding the Conners Rating Scale, one critical element is clearly missing: the fact that dozens of parents came forward to protest forced participation in such a risky human experiment on their children (1970). Dr. Keith Conners and his rating scale were at the forefront of obtaining millions of dollars in *blanket mental health grants from the federal government to supply children to the pharmaceutical industry for experimentation.  This experimentation was conducted to determine the effects of different types of drugs.  These blanket grants are still active today, many of which serve as a blank check to the entire psychiatric industry for drug research on children.

The bottom line is that for over three decades Keith Conners has failed to validate his rating/testing scale.  Simply put, the Conners Rating Scale is nothing more than a subjective survey used to obtain innocent children for experimental drug research.

A member of our organization took the time to look into the matter of the Conners Rating Scale further and requested a copy of any certificate validating it as an accurate measure for diagnosing ADD/ADHD.  He requested this validation/certification from:

1. Centers for Disease Control

2. Food & Drug Administration

3. Drug Enforcement Administration

4. National Institutes of Health

5. National Institute of Mental Health

6. Department of Health, Education & Welfare

7. Kansas State Board of Education

8. Kansas Blue Valley School District

NOT one of the eight agencies listed above could provide any type of certification for this “test”.  Digging deeper, this member went further and sent a letter, certified with return receipt requested, to C. Keith Conners himself.  The letter was received.  Not surprisingly, Dr. Conners failed to produce an answer or a certificate.  The letter is reproduced below.

Pertaining to all other “tests” being used in the determination of ADD/ADHD, it has been proven that NO ADD/ADHD rating scale, checklist, survey, or questionnaire has ever been validated, endorsed, or recommended by ANY local government, state government, or even the federal government itself!

*Reference: Right to Privacy Hearings on the Federal Role in the Use of Behavior Modification Drugs Used on School Age Children (September 29, 1970)

*Blanket grants: large sums of money given out for up to 25 years without any checks and balances or oversight.

Dr. C. Keith Conners Ph.D.
4-26-02
Box 3431
Duke Medical Center
718 Rutherford Street
Durham , NC 27705
1-919-416-2430

Dr. Conners:

It is my understanding you are the author of the Conner’s Performance Test  “CPT” used for determining the disease ADHD in individuals.

As such, may I please request you provide me with a copy of the certification validating your test (and/or its procedures) as to the “CPT” being an accurate, efficient standard capable of determining the presence of the disease ADHD?

Specifically ALL I’m looking for is a copy of a certificate issued by any of the following:  The CDC, FDA, DEA, NIH, NIMH, DHE&W and/or the Kansas State Board of Education in Kansas, and the Kansas Blue Valley School districts certification underwriting, certifying, validating, approving your CPT as a validated  test standard that will accurately, efficiently ascertain the presence of ADHD, most specifically in minor children.

I have contacted the above entities and they have not been able to provide me with a copy of this validation, so I respectfully look to you for a certification of same.

Should you not be able to provide this validation, I will take that to mean the CPT has not obtained validation by any of the above Federal, State and Local agencies.

Sincerely,
G.B.

Tuesday, 14 January 2014

Defining Normal: How Psychiatry Has Lost Its Way Could one person's "treatable symptoms" be another's formula for success? - Courtesy of the PsychologyToday Website February 2012



Defining Normal: How Psychiatry Has Lost Its Way
Could one person's "treatable symptoms" be another's formula for success?
Published on February 24, 2012 by Dale Archer, M.D. in Reading Between the (Head)Lines


by Dr. Dale Archer

One day when I was in the fifth grade, we had a substitute teacher, and I could see upon her arrival that she was shy and demure. I immediately set about crafting a series of experiments to test her limits. My most creative tactic was the repetitive zinging of spitballs and rubber bands at other kids while she wrote on the blackboard. With every shot, my classmates squealed with delight, but she maintained her calm demeanor. So I upped the ante and brought in air support with a deftly crafted paper Messerschmitt.

I didn't aim for her. Instead, I winged the paper airplane at another kid. But it was more poorly crafted than I expected and it flew by the kid, soared over my classmates' heads, and landed squarely on the teacher's knee. The class erupted in laughter. That was it! Red-faced, the poor substitute marched me to the principal's office. Once there, I was threatened with the dreaded "call to the parents" and sent back to the classroom. Sufficiently shamed, I apologized to the teacher, plopped in my chair, and promptly fell asleep.




   

Back then, kids like me weren't called ADHD; they were called "brats." It took a very long time for the public to understand that ADHD was a real condition and that medication could work. And as a psychiatrist, I have often promoted and prescribed these meds. But over the years my thinking began to change as to who really needed them.

It began when I noticed traits in my own children that I would have medicated in others. My son was classic ADHD. Yes, he was high-strung; yes, he was "all boy." But from a psychiatric standpoint, I could see he suffered from severe ADHD: poor attention span, easily bored, and unable to sit still or focus on his homework for more than 10 minutes at a time.

Trey Archer, Adventurer
When he found something of interest, though, he could focus for hours, even days at a time, like when he was 8 and decided to learn the name, location, and capital city of every country in the world (there were 156). He did so in a couple of weeks. Still, I struggled with the temptation to put him on medication, yet something kept me from it.

After some tough adolescent and teen years, he found his stride. In college, and as an adult, he thrives in an atmosphere in which he juggles many tasks and wears multiple hats. He is proficient in five languages, and has travelled the globe (his goal now being to travel to every country in the world he learned about when he was 8). He teaches English along the way as a means of support. He's three-quarters of the way there. How is it he went from a struggling adolescent barely passing certain subjects, to a university honor student and successful adult? Miraculous!

Adri Archer, Perfectionist
Then there was my daughter. Even as a 7-year-old, she made her bed so perfectly you could bounce a quarter on it, and look out if you dared touch it before she was ready to go to sleep! She was methodical, kept herself on a budget, was always on time, and paid attention to EVERY detail. Her perfectionism was her defining character trait; she could be very obsessive, and her obsession with order and neatness did cause problems. I realized that if she had been brought to me as a patient, I would have diagnosed her with OCD, but she was a good student, never in trouble (except when pointing out the teachers' organizational shortcomings), and whenever a project came along needing attention to detail, she was the one the whole family turned to. Most importantly, she seemed happy in her own skin.

My real breakthrough in thinking came, ironically, from the World Series of Poker. Let me explain: A few years earlier, my house and office had been destroyed by Hurricane Rita and my family life demolished by divorce. My children were grown and living away from home. I knew I had to re-invent myself but I didn't know how. I had taken up playing poker as an escape and I was pretty good; I had an uncanny knack for reading people, being able to tell when they had a hand or were bluffing. In fact, at times I could name the exact two cards my opponents were holding. It was a combination of intuition, gut feeling, and a bit of magic, so I called it magical thinking. I quickly progressed and began playing in national tournaments.

I had a lot of hypo-manic energy and could easily meet the demands of a poker tourney where you sit and play for 12 to 14 hours a day, for a week straight. I loved the spotlight of being a big time player. I was interviewed on TV, appeared on ESPN and felt like a star. I won more and more until eventually I found myself at the Holy Grail: The Main Event of The World Series of Poker in Las Vegas. Imagine that: A shrink playing professional poker in Vegas!

2004 World Series of Poker (Center)
I came in 11th in the world in 2004. After that, I felt that perhaps I had found my niche. I resolved to start playing more big-name, multi-million dollar events, while limiting my psychiatric practice. But, trouble quickly surfaced. It was one thing to play an occasional poker tourney for fun, but quite another to play six to seven days a week in event after event—I was bored silly! By 2008 I knew the life wasn't for me, yet decided to play the Main Event one more time before figuring out where to go from there.

I busted out quickly (my heart just wasn't in it) and went to my favorite piano bar in Vegas, ordered a vodka martini and started to think. Why was I a good poker player? Because I was a magical thinker, I had lots of energy, I enjoyed the TV spotlight, and when I sat at the table, it was all about me and my decisions—nothing else mattered.

Hmmmm... I thought: These traits were like mild psychiatric diagnoses. Magical thinking: schizophrenia. High energy: hypomania. Loved the attention: histrionic. Self-focused: narcissistic. I then considered my other traits, and there were some biggies: bored easily, short attention span, and the love of new adventures. ADHD! Yikes! You couldn't be a successful long-term poker player with that... UNLESS you were medicated; that could work. But did I really want to take a psychostimulant just to play professional poker?

Then I started thinking about my kids: a son who was clearly adventurous, and clearly ADHD, traveling the world; and a daughter who was a perfectionist, borderline OCD, who became a successful event planner. Not to mention my sister, who is very shy, perhaps to the point of social anxiety disorder. She was able to forge a career from home as an attorney, doing appeals via the internet with almost no client contact.

Hah! Almost everyone I knew had certain traits that could in some cases represent a diagnosis... So then I wondered: on a scale of 1 to 10, what separated an ADHD 8 from an ADHD 10, and who got medicated, and why? How could one person use a set of "symptoms" as a springboard for success while another with the exact same symptoms needs meds and therapy?

It was a eureka moment. The very conditions that I had treated for the last 20 years, when not causing significant psychological trouble, were the basis for so many individual success stories. How could I have only just figured that out?

I realized that my profession had not only redefined mental health by over-diagnosing, over-treating, and over-medicating; we were also taking away the hope of human nature by telling our patients that they were inherently "abnormal" and needed to be fixed.

The psychiatrist's office had gone from being the place no one would be caught dead visiting... to the place where a pill could fix anything... and psychiatry itself had gone from being stigmatized to being glamorized. The solution was simple: to embrace the idea that unless your ailments are seriously impeding your quality of life, they can be used to your advantage.

If my son or daughter or sister or I had walked into a doctor's office and presented our "problem," how radically different might our lives be today? The mental health profession must change its entire approach to what defines mental health and mental illness. We must stop thinking about how to give patients what they think they want and start from the premise of looking at what is good about what they already have.

We must empower individuals to think it's OK to be "not normal," to change the mindset that everything can be "fixed" with a pill or a few therapy sessions, and to understand that what they perceive as their worst trait may in reality be their best. It's time for a new order of things in mental health, based on the premise that when you try to conform to a perceived "normal," you lose your uniqueness, which is the foundation of your potential greatness.

ETHICALLY CHALLENGING MEDICAL COLLEAGUES TO SAFEGUARD CHILDREN'S WELLBEING - TOP TEN QUESTIONS FOR WORKERS





TOP TEN QUESTIONS FOR CHILD MENTAL HEALTH WORKERS TO CONSIDER FOR THEIR CLIENT GROUP REGARDING PSYCHOTROPIC DRUGS

1) How can I best challenge the practice of prescribing psychotropic drugs to children with whom I co-work and about whom I have ethical concerns?

2) Do the Safeguarding Guidelines of my workplace or professional body apply equally to this area of prescribed psychotropic drugs for children? If yes, do you feel you now have the ‘ethical legitimacy’ to challenge the practice of medical colleagues who are prescribing psychotropics to children with whom you work, where you fear harm will be done?

3) Has the informed consent of the young person and their parents been obtained prior to prescribing the psychotropic drugs?

4) Has the data used to make a ‘diagnosis’ been triangulated to maximise its reliability, i.e. evidence-based data obtained from the family, school and community as well as the medical practitioner?

5) Do you ever fulfil ‘the sleepless night criteria’, where your levels of anxiety regarding a child’s psychotropic medication interfere with your quality of sleep? What should you do the next working day if that is the case?

6) Do you think the right balance has been achieved between the Medical and Social models of understanding the many factors that contribute to a child’s situation in the case about which you have ethical concerns?

7) Have you ever asked whether the child experiences any short-, medium- or long-term side effects of the psychotropic drugs that cause you concern, or investigated the possible consequences?

8) How often do you have appropriately challenging conversations with medical professionals about your professional concerns regarding the psychotropic drugs they are prescribing for children on your caseload?

9) Was the response of your medical colleague to the professional challenge you made positive or negative, or what would you predict it to be?

10) Have you, as many Directors of Children’s Services demand, executed your paramount responsibility for Safeguarding Children in your current casework practice?

Please post your thoughts as comments or ask for clarification of any issues arising.
