Cute Psychiatric Word Games
Those of us who seek to expose the fraudulent bases of psychiatry
would probably be assisted by a good understanding of some
of their terminology. I don't think it's worth the trouble
to try to grasp every specialized term, since new terminology
is invented daily, and each of the many versions of psychiatry
and psychology has its own buzzwords.
But certain terms recur again and again in the literature
(both pro and anti-psychiatry) that we encounter when we read
about the subject. Also, if we try to tell people the truth
about psychiatry, we are likely to get some of those terms
thrown back at us, usually in large, rapid streams of big
words, with the intention of overwhelming all possible opposition.
Such onslaughts are easier to withstand if we know what the
words mean. This is especially true of the terms used by psychiatrists
and pharmaceutical representatives to suggest that their products
are useful or are not harmful.
In this essay, I define a few of the more basic terms, say
as clearly as I can (without big words or jargon) what they
mean to psychiatrists, and, in most cases, why they don't
mean what the psychiatrists say they mean. Here goes:
STUDY (noun): When a psychiatrist refers to a "study",
he usually means an experiment, a test run to see if some
drug or other treatment is "effective". Typical
study: To see if a new drug is effective in knocking out depression,
the psychiatrist advertises for depressed people to participate
in a study.
He rounds up a bunch of people who consider themselves depressed.
(You can find ads in magazines and newspapers, inviting people
to participate in experiments, get free medication, etc.)
Usually he administers some test to see if they are "really"
depressed: Asks them a set of questions and, according to
their answers, decides how they rate on some scale that is
supposed to show whether or not someone is depressed. Maybe
he also observes them. (Some studies are more intelligently
done than others; occasionally some observation occurs.) He
picks people for his study, maybe eliminating some where there's
legal risk if they do poorly. For example, studies usually
don't include pregnant women or kids. (Big inconsistency:
After not testing something on pregnant women, they will still
recommend it for them once it's approved by the FDA.)
Then he divides the people into groups: Those who receive
the drug being tested and those who receive some sort of pill
that is supposed to do nothing (a "placebo" [pluh-SEE-boh],
Latin for "I shall please", meaning a pill given
a patient just to humor him, typically a pill containing nothing
but sugar). Sometimes there are more than two groups. For
example, a third group might receive some other anti-depressant
that's already on the market, in hopes of showing that the
new one is better.
Then over a period of time (usually a few weeks) the people
in each group are given their pills or placebos, theoretically
not knowing which they are receiving. During this period,
they are observed (maybe) and questioned, and at the end of
this period, they are questioned and given tests. Then the people
running the study go over all the data collected and work
out whether the new drug is "significantly more effective"
than the placebo (or other drugs given) or not.
Now, when a psychiatrist says that "studies have shown...",
there are many things you should know that the psychiatrist
is hoping no one will notice. The general concept of "a
study" sounds so scientific that mentions of studies
are almost hypnotic. Here are a few of the things that the
psychiatrist usually isn't saying:
1. You can find studies that prove almost anything. The reason
is that MANY studies are done for each drug (or therapy),
and, by the law of averages, some of them are likely to show
that the drug works. The drug companies usually publish only
the studies that make their drugs look good. Also, a drug
can be made to seem effective when, for example, three small
studies show that it is effective, while one huge study shows
that it's not. When a "meta-study" is done (one
that considers all the data from all available studies and
considers their relative reliability -- whether or not they
were done right), often it shows that a supposedly effective
drug is NOT effective.
Suppose in each of three studies there were 10 subjects (people
taking the pills), 5 getting the real pill and 5 getting the
placebo. Suppose in each of these studies, 4 taking the real
drug said they felt better, while only 2 taking placebos said
they felt better. But then there's a fourth study, with 100
subjects, 50 of whom got the real drug. In that study, 20 of
the people on the real drug felt better, but 45 of those on
the placebo felt better. So the psychiatrist says, "3 of the
4 studies showed our drug effective." But the meta-study,
combining the results of all 4 studies, shows that 32 taking
the drug said they felt better, while 51 taking the placebo
felt better, so the drug isn't effective. More people got
better by taking NO drug than by taking the new drug.
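To make the arithmetic concrete, here is a short script that pools the hypothetical counts from the example. (These are the essay's made-up figures, and simple pooling is a crude stand-in for a real meta-analysis, which weights studies by size and quality.)

```python
# Hypothetical study results from the example above:
# (improved on drug, drug-group size, improved on placebo, placebo-group size)
studies = [
    (4, 5, 2, 5),      # small study 1
    (4, 5, 2, 5),      # small study 2
    (4, 5, 2, 5),      # small study 3
    (20, 50, 45, 50),  # the one large study
]

# Crude pooling: add up responders in each arm across all four studies.
drug_total = sum(s[0] for s in studies)
placebo_total = sum(s[2] for s in studies)

print(f"felt better on the drug:    {drug_total} of {sum(s[1] for s in studies)}")
print(f"felt better on the placebo: {placebo_total} of {sum(s[3] for s in studies)}")
# The placebo arm comes out ahead (51 vs. 32), even though 3 of the 4
# individual studies, taken one at a time, favored the drug.
```

This is exactly how a few small positive studies can coexist with an overall negative result: the one large study outweighs them.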
2. Most of the studies are paid for by the pharmaceutical
companies, who have already invested millions in the drug
and want to see positive results. The guys doing the study
make their living getting paid by the pharmaceutical companies
to get good results for them. Thus, there's a huge bias in
favor of finding the drug effective. The studies are not done
to TEST the drugs, but to prove they work. That's bad science.
Often ineffectiveness is exposed when someone NOT on the pharmaceutical
company payroll does a new study.
3. Most of the studies last for a few weeks. This means no
test is done of long term effects of the drugs. Yet the drugs
are designed and marketed and prescribed to be used indefinitely.
They are known to cure nothing, only to suppress symptoms,
and it is known that if the user stops taking the drug, the
symptoms return, often worse than they were in the first place.
Therefore, these drugs are intended (by their makers) to be
taken for life. So when, after several 3-week or 6-week tests,
the drug makers claim that these drugs have only mild side
effects, actually, they have no idea what the long-range side
effects are. That means they do not know that the drugs will
be safe when "taken as directed". Also, they don't
test the drugs on children or pregnant women or various other
groups who might be particularly vulnerable. If, for example,
a pregnant woman is harmed by an experimental drug, she might
sue. But once the drug is approved by the FDA, it can be prescribed
to all these groups on whom it has never been tested.
4. The FDA theory is that if there are bad effects, they'll
be reported to the pharmaceutical company by doctors,
and the company will report these to the FDA. In other words,
if a doctor prescribes a drug for you and you come in a day
later with a whopping migraine or a sudden suicide obsession,
the doctor is supposed to report this. However, various studies
(interviews with thousands of doctors) have shown that most
doctors don't know this, don't know how to report, and simply
have never done it. A small percentage of doctors (under 10%)
do occasionally report to the pharmaceutical company.
But the pharmaceutical companies don't relay most of these
reports to the FDA. They look at the reports and, in many
cases, say, "There's no way to prove that our drug caused
that" or "that was obviously caused by the original
condition, not our drug." For example, a guy is depressed
after his girlfriend leaves him, goes to a doctor, gets put
on an anti-depressant, says he feels suicidal. The drug company
gets the report, dismisses it by saying, "How do we know
that's the drug? He probably feels suicidal because his girl
friend left him." (This ignores the fact that he DIDN'T
feel suicidal before being given the drug.)
Using this justification, the company doesn't report the
data to the FDA. Thus, for every thousand bad side effects
observed by doctors, a small percentage get reported to the
pharmaceutical company (maybe 10, maybe fewer), and for every
thousand reports that get to the pharmaceutical company, only
a small percentage gets recorded as a side effect and reported
to the FDA. So we end up with maybe 10% of 2% (that is, 0.2%)
of observed bad effects getting reported, or some such figure.
So we never do find out what the effects of long-term use are.
When a drug used by millions accumulates 2,000 negative reports,
that's a huge number, because it represents a far larger number
of other incidents that weren't reported (hundreds of thousands).
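The reporting funnel described above can be sketched in a few lines. (The two rates here are rough illustrative guesses in the spirit of the essay's figures, not measured values.)

```python
# Assumed, illustrative rates for the under-reporting funnel.
doctor_report_rate = 0.02    # fraction of observed bad effects doctors report
company_forward_rate = 0.10  # fraction of those the company passes to the FDA

observed = 1_000_000                                   # bad effects doctors see
reaches_company = observed * doctor_report_rate        # reports the company gets
reaches_fda = reaches_company * company_forward_rate   # reports the FDA gets

print(f"observed by doctors: {observed:,}")
print(f"reach the company:   {reaches_company:,.0f}")
print(f"reach the FDA:       {reaches_fda:,.0f}")
# Working backwards: 2,000 reports on file at the FDA would, under these
# assumed rates, imply on the order of a million incidents actually observed.
```
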
5. Often people are put on several different drugs at a time
(psychiatric and medical). But no one has tested these combinations
of drugs or shown that they are safe in combination. There
are millions of possible combinations, none of which have
been tested and found safe.
6. Studies can show invalid results for all sorts of reasons.
It's not hard to mess up the statistics and cover up the mess
with jargon. Here are some examples:
Sometimes the psychiatrists give people tests to find out
how suggestible they are, then remove the most suggestible
from the study. (Suggestible: Easily influenced by others,
good subjects for hypnotism, etc.) The idea is that suggestible
people are more likely to get better if given the placebo.
If lots of people get better on the placebo, then the drug
won't look as good. The drug looks good if lots of people
get better on the drug, and few get better on the placebo.
So by removing the suggestible people ahead of time, the researchers
are trying to make the study favor the drug.
In one case (and I've simplified the numbers here, but the
principle is the same), before the study began (that is, before
the people were given the drugs and placebos), one of the
subjects had "suicidal ideation" (thoughts about
committing suicide). At that time, subjects had not been assigned
to groups (placebo group, drug group). Shortly after the study
was over, one of the people who'd received the drug got suicidal.
DURING the study, 2 people on the placebo felt suicidal and
3 people on the drug felt suicidal. The psychiatrists reported
that the drug came out ahead, because (they said) 4 people
felt suicidal on the placebo, and only 3 felt suicidal on
the drug. How did they come up with this? After the fact,
they assigned the 2 guys who'd had suicidal ideation just
before and just after the study to the placebo group. How
did they reason? These two had suicidal ideation when not
receiving the drug, and that's the same as being on the placebo.
They ignored the fact that (1) those two weren't on the placebo
OR the drug at the time; (2) the post-study guy had been receiving
the drug during the study, and was probably feeling suicidal
as a symptom of withdrawal from the drug.
In another study, they claimed (with similar faulty reasoning)
that more in the placebo group had suicidal ideation than
in the drug group. In this case, besides the juggled numbers,
there was this distortion: They did not report that in the
placebo group, there was ONLY suicidal ideation, while in
the drug group, two of those suicidal people had actually
committed suicide!
In some studies, results are reported only for subjects who
complete the study. If several people drop out of the study,
they are simply not counted. But when such studies are looked
into, it is usually discovered that the people who dropped
out did so because the drug made them ill or suicidal. In
other words, vital evidence of the drug being dangerous is
not reported in the study, because the subjects "dropped
out of the study."
A much bigger factor is the use of INACTIVE PLACEBOS. As I
said, a placebo is usually a sugar pill or salt or something
not likely to have any strong, observable effect. That is
what's meant by an "inactive placebo." The problem
with this is that the subjects are not supposed to know whether
they are getting the placebo or the drug. Some careful studies
have shown that when a placebo doesn't do anything, a high
percentage of those receiving the placebo realize that they
aren't getting the real drug. Since the whole point of a placebo
is to see whether people are getting better because the drug
helps them or because they just expect to get better when
they are given medication, an inactive placebo destroys the
study's validity. Subjects won't get better on a placebo when
they know it is a placebo. A placebo is not a placebo unless
the person receiving it thinks he is getting medicine.
The whole point is, a placebo is something that doesn't do
ANYTHING -- doesn't help. So if the subjects know they're
getting a placebo, they know that it isn't helping them. The
solution to this difficulty is to use an ACTIVE PLACEBO. This
is something not thought to help with depression (or anxiety
or whatever is supposed to be handled by the drug being tested),
but that creates some obvious side effects. Niacin, for example,
will create a flush, and many other substances will create
side effects (a dry mouth, for example). Studies have shown
that if an active placebo is used, people feel some side effects,
so think they are getting the real drug, so get better in
far greater numbers. (This is like a child knowing he's being
helped because the medicine tastes awful.)
Most of the studies favorable to psych drugs used inactive
placebos. Most such studies, if favorable, were not highly
favorable. The difference between the drug and the placebo
is more than wiped out by the difference between the results
gotten by an inactive placebo and an active placebo. For example,
if, with an inactive placebo, 40 out of 50 said they felt
better on the drug, and 35 out of 50 said they felt better
on the inactive placebo, then something like 45 would have
felt better on an active placebo, making the placebo better
than the drug.
Not all drugs are tested with placebos. For example, long
ago (1974) Ritalin was tested and said to be effective. Those
studies were lost by the FDA long ago (some administrative
or logistic mess). Thus, we have no way to look at the data
and how it was obtained. However, because Ritalin has been
declared effective, many studies since then have not tested
the new drug against a placebo. They've tested it against
Ritalin (one group gets the new drug, the other group gets
Ritalin), the idea being that if Ritalin is effective, they
just have to show that the new drug does better than Ritalin
or does as well, without some bad side effect. So here's
a drug tested with inactive placebos decades ago, and all
the new drugs are compared only to it.
The first study presented to the public showing Magnetic Resonance
Imaging (MRI) of brains of people with ADHD (Attention Deficit
Hyperactivity Disorder) claimed to show that the brains of
ADHD people were abnormal, smaller than the brains of "normal"
people. When the psychiatrists did press conferences on this
discovery, they failed to mention that the ADHD people with
the smaller brains had already been on psych. drugs before
these images were made, and those drugs are known to have
that effect on brains. In other words, they attributed to
the made-up mental illness (ADHD) effects that were created
by the psych. drugs.
After this was exposed (but the exposure given very little
press), a second study came out that claimed to have used
ADHD people who had never been given psych. drugs. When MRIs
of their brains were compared with "normal" subjects,
the so-called ADHD brains were smaller, even though they hadn't
been given drugs. What the researchers failed to mention was
that the "normal" subjects averaged 2 years older
than the ADHD subjects. All of the subjects were children,
at ages where the 2-year difference accounted for the difference
in brain size. (I guess they had trouble finding older "ADHD"
kids who hadn't already been drugged.) Since then, there have
been more MRI studies, but none are considered valid science
by MRI experts, for reasons that are beyond my technical scope.
There's a great deal more you can find (in various books)
on this subject. The point is, when a psychiatrist mentions
"a study", don't back down or assume that it means
anything. There are all sorts of questions you can ask, and
chances are, he won't have the answers. ("Who paid for
the study?" "Did it use an inactive placebo?"
Etc.)
Note: Of course, all of the points above are minor compared
to some more basic points: How can you test people suffering
from a mental illness called "depression" when there
is no such illness? Also, the scientific method, properly
applied, does not set out to prove that one's product works.
What a good scientist, having a theory, does is try his best
to DISPROVE his theory. If the theory stands up to such an
assault, it is considered likely to be useful. It's bad science
to hire people to prove that your drugs work or to have the
testing of these drugs paid for by the people who need and
want them to work. Also, how do psychiatrists decide who is
depressed? And how do they decide who has improved? Can they
spot someone who is no longer in grief, but is in apathy,
no longer ABLE to grieve? Do they depend on what the subjects
tell them? (You can bet that the method used is the one most
likely to make the drugs look good.)
TRIAL: The studies described above are also called
"trials" (tests) or "drug trials." Actually,
"study" is a broader term. Not all studies are drug trials.
For example, someone could do a study of the medical records
of people on a particular drug or a study of people who go
to church and people who don't go to church to see which live
longer. But usually psychiatrists who speak of studies in
the current debates are referring to drug trials or to later
studies that attempt to verify ("replicate") the
results of the trials.
DOUBLE BLIND: Psychiatrists like to refer to "double
blind studies". This means that not only do the subjects
not know whether they are getting the placebo or the drug;
also the people running the study don't know. So the researchers
set up elaborate precautions to make sure of this. That way,
supposedly, the psychiatrists will be objective, not try to
weigh the results in favor of the drug. There are several
holes in this:
1. Since most of the studies have used inactive placebos,
many of the people on placebos KNEW they were on placebos,
and those on the drugs (getting side effects from using them)
knew they were on the drugs.
2. Even if the study itself is done objectively, when the
time comes to present the results to the world in an article,
the researchers know exactly who got what, and can now do
their best to juggle the statistics to make the drug look
good. The effectiveness of these drugs, in most cases, has
more to do with how the results are analyzed and presented
than with the way the studies are conducted. Researchers can
even reinterpret actions or responses of participants in the
experiment, and reclassify people who got better on the placebo
as having gotten worse or people who got worse on the drug
as having gotten better. After all, there are numerous psychiatric
labels for any conceivable sort of behavior, and these labels
can be used to reclassify someone as having improved or gotten
worse.
3. A true double-blind study is elaborate and requires people
who know what they're doing. It's easy to see how this would
be complicated: 50 or 60 people have to be given their drugs
and placebos every day. The right people have to get the right
thing (drug or placebo) every time. The people running the
experiment must not know which are getting which. The people
receiving their pills must not know which they are getting.
That's not so easy to pull off. A few years ago a big Wall
Street Journal article revealed that many of the companies
doing these tests for drug companies were extremely unprofessional,
with studies being run by untrained secretaries, insufficient
supervision, etc.
RANDOMIZED: To randomize things is to arrange them
in no particular order or in such a way that it is impossible
to predict which thing goes where. For example, suppose a
drug trial is going to involve 100 subjects, and these subjects
are to be divided up into two groups of 50. One group is to
receive the drug being tested. The other group is to receive
a placebo. One could bias the results in favor of the drug
by putting in the placebo group all the people in the worst
shape and putting in the drug group all the people who seem
most likely to be able to improve. And there are many other
ways of grouping subjects to get a desired result. A fair
test requires some more random way of selecting the two groups.
Usually all the subjects are first divided up into groups
according to such factors as age and sex, and then an attempt
is made to divide up each of these groups randomly. That way,
the groups that result will be similar (for example, have
the same proportion of males and females, the same age distribution).
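That stratify-then-randomize step can be sketched in a few lines. (A minimal sketch: the subject records, field names, and group labels here are invented for illustration, not taken from any actual trial protocol.)

```python
import random
from collections import defaultdict

def stratified_assignment(subjects, seed=0):
    """Split subjects into drug and placebo arms, randomizing within each
    (sex, age-band) stratum so the two arms end up with similar makeup."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in subjects:
        strata[(s["sex"], s["age"] // 10)].append(s)  # crude 10-year age bands

    arms = {"drug": [], "placebo": []}
    for members in strata.values():
        rng.shuffle(members)          # the random part, within each stratum
        half = len(members) // 2
        arms["drug"].extend(members[:half])
        arms["placebo"].extend(members[half:])
    return arms

# 100 invented subjects: alternating sex, ages spread from 20 to 59.
subjects = [{"id": i, "sex": "F" if i % 2 else "M", "age": 20 + (i % 40)}
            for i in range(100)]
arms = stratified_assignment(subjects)
print(len(arms["drug"]), len(arms["placebo"]))
```

Because each stratum is shuffled before being split, neither the subjects' condition nor the researchers' preferences can steer who lands in which arm.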
The current psychiatric buzzword is that some drug's effectiveness
was "shown in large randomized double-blind trials".
A large trial might have several thousand subjects. This sounds
very impressive. However, you've seen how easily the results
from smaller trials can be distorted by statistical finagling.
The larger the trial, the easier it is to distort the results.
Here's how it works: With a small trial (for example, 20
subjects), it's more difficult to hide a bad result, because
the data tables aren't huge and overwhelming. It's easy to
see, for example, that 5 of the 10 people on the placebo got
better. But the results of small trials aren't considered
reliable: Too small a sampling. The argument is that different
people respond differently to any drug, and that to get a
good overview of the drug's effects, one needs to study a
large sample. The larger the sample, the more likely the results
will apply to the whole population. Thus, if 40% of the people
in the small study (4 people) get better on the drug, that
isn't enough data to predict that 40% of the population will
get better on the drug. But if 40% of 2000 people get better
on the drug, one can be more certain that 40% of the general
population will also get better on the drug. The researchers
will express this as a probability, and say that the 40% effectiveness
is 95% likely to apply to the general population -- or 80%
or whatever. They have mathematical formulas for determining
this probability that the results of the trial will apply
to everyone. One of the factors these formulas take into consideration
is the size of the trial: How many people were studied.
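The formulas behind those probability claims are standard statistics. Here is a rough sketch using the normal-approximation confidence interval for a proportion (a textbook formula, not necessarily the exact method any given trial used):

```python
import math

def response_ci(responders, n, z=1.96):
    """Approximate 95% confidence interval for a response rate
    (normal approximation: p +/- z * sqrt(p * (1 - p) / n))."""
    p = responders / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# The same observed 40% response rate carries very different certainty
# depending on how many subjects were studied:
for n in (10, 2000):
    lo, hi = response_ci(int(0.4 * n), n)
    print(f"n = {n:4d}: 40% responded, true rate plausibly {lo:.0%} to {hi:.0%}")
```

A bigger sample shrinks the interval, which is why "large trials" sound so impressive.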
The trouble with that theory is that it assumes the researchers
analyze the results honestly and correctly. In fact, the larger
the study, the more difficult it is for those who review the
study to spot cheating or errors. A study with thousands of
subjects will produce many pages of tables of numbers. Typically,
those who review the studies only review summaries of the
studies (called "abstracts"). They seldom have the
time or desire to go through all the data. When people critical
of large studies DO go through all the data -- and track down
more data when things don't make sense -- they often find that
the studies did NOT produce the claimed results. In the case
of psychiatric drug trials, most of the original trials are
flawed. I've given examples of the sorts of distortion found
above. The additional point I'm making here is that, the bigger
the study, the easier it is for such distortions to go unnoticed.
The small studies aren't considered reliable, and the big
studies shouldn't be considered reliable. As long as studies
are done with the intention of getting the results the pharmaceutical
companies want to get, size and degree of randomization are
no assurance of reliability. (Of course, if the drug simply
kills most of the users within 24 hours, that won't be hidden,
and the drug will be abandoned. Some things are too hard to
hide.)
PEER-REVIEWED: Psychiatrists like to refer to "peer-reviewed
studies", as if this were a guarantee of validity. A
"peer" is an equal. So a psychiatrist's peer is
another psychiatrist of comparable status in the psychiatric
world. Peer-reviewed means that some psychiatrists did a study,
and some other psychiatrists looked it over to see if it was
properly done. It does NOT say how closely these reviewers
looked at the findings, whether the reviewers were independent
of the pharmaceutical company that funded the study or were
also getting funding from it or from other similar companies,
etc. It MAY mean that the study got a good going over. But
it may also mean that the study got a pat on the back from
members of the good-old-boy network. Also there's the back-scratching
phenomenon: "You give my study good marks, and I'll give
your study good marks." Also, when the peer review is
ordered by the publisher of a big-name medical journal as
a requirement before the article is approved for publication,
the editors may assign as reviewer someone sure to give the
article a favorable review, since these publications usually
depend on money from pharmaceutical company ads.
MENTAL ILLNESS, MENTAL DISEASE, PHOBIA, MANIA, DISORDER,
DYSFUNCTION, SYNDROME, etc.: All words used to describe
sets of conditions and imply that they are illnesses with
biological causes (such as "chemical imbalances in the
brain"). When we say there are no mental illnesses, we
don't mean that the conditions don't exist, just that they
aren't illnesses. Why aren't they illnesses? The idea of "illness"
comes from the field of medicine. In medicine, an illness
is established to be an illness when some cause for it is
found (by experiment). For example, if a certain germ is always
present when the symptoms (signs of illness) are present and
if when that germ is eliminated, the illness always goes away,
that indicates that that illness is caused by that germ. (Of
course, there may be other causes as well, but at least there's
some use to going after the germ.) If a certain set of symptoms
is always caused by salt deficiency and remedied by taking
salt, we have an illness (a deficiency that is making someone
ill).
But suppose someone has a headache. That is not an illness.
Why not? Because there are many things that can cause a headache.
Suppose we assume that it's being caused by a salt deficiency
and give the guy lots of salt, but never test to see if he
has a brain tumor -- and he does! The salt won't handle
it. Suppose we assume he has a brain tumor and open him up --
and find there's no tumor. Then we find out he was
deficient in salt! That's why "headache" is not
an illness. It's not specific enough to be tied to a single
cause. Similarly, if a kid isn't paying attention in school,
there could be hundreds of different causes (sugary diet,
boredom, trouble at home, allergy, vitamin deficiency, misunderstood
words, pain...). Since the symptoms of ADHD could be caused
by any of these and many other things, ADHD is not an illness
(mental or otherwise). It's a variety of conditions with a
variety of causes, all lumped together by psychiatry -- a
good way to make sure no one ever finds the right cause in
a particular case.
But a psychiatrist will say, it doesn't matter, because the
drug works on many people (helps many people). Apart from
the doubtfulness of this statement (many of the people who
say it works do not seem to be better off, and certainly it
can't help their livers to be taking toxic substances every
day), there's the implied fallacy that, if a drug helps a
condition, that must mean that drug is good for that condition.
In fact, ANY drug, in small enough doses, is a stimulant,
and will likely perk someone up who is depressed -- and also
will perk up someone who is NOT depressed; and ANY drug, in
large enough doses, is a narcotic and will tend to sedate
someone who is anxious, manic, etc. Here's a simple analogy:
Let's say you want to handle people who are terribly anxious
and upset. You can do this, in most cases, by giving them
a lot of whisky to drink. Eventually they'll be sleepy and
not moving much. Does this mean that they all suffered from
the same "illness"? In other words, does the fact
that whisky calms down a lot of people mean that the source
of their anxiety was the same in each case? Hitting them on
the head hard with a baseball bat would also calm them down.
Does that mean that all anxiety comes from a single cause?
(Being alive?) You're tired: Putting you in a cage with a
tiger will probably wake you up. This will occur whether your
tiredness is from lack of sleep, from being told that a project
you've worked on for months has to be done over, from a vitamin
deficiency -- whatever. The tiger's growl will have you
wide awake. Does this mean that all tiredness is from a single
cause?
The thing to remember about all the "illness" and
"disorder" jargon is that it all describes conditions.
The fact that the conditions exist does not indicate that
any of them are illnesses.
Some of the terms have additional meanings. For example, a
phobia is a fear. A dysfunction is an inability to do something
-- for example, a sexual dysfunction might be the inability
of a man to get an erection. But basically they are all conditions,
and few, if any, are illnesses. Some of them are sometimes
caused by regular medical (non-psychiatric) illnesses. In
other words, some of the fake psych. illnesses are caused
by actual medical illnesses. Physical pain from an injury
is making someone act crazy, or some vitamin deficiency is
making someone depressed. So a mental illness is NOT an illness;
it's a list of conditions or symptoms, but some of the people
who have those symptoms have them because of a real physical
illness.
All this confuses people. We say "There are no mental
illnesses" and someone thinks we are saying the conditions
don't exist. But the conditions DO exist. Some people are
depressed (or sad, as we used to say). Some are anxious. But,
we explain, the conditions are not illnesses. Then we say,
look at all the possible causes of these conditions, including
this or that illness. But didn't we say they aren't illnesses?
That's right, they aren't MENTAL illnesses, but sometimes
they are caused by regular medical illness. That's a complicated
line of reasoning, and psychiatrists muddle it up on purpose.
STIGMATIZE: To mark or brand a person, usually with
a mark indicating disgrace. The mark, scar, burn or whatever
is called a stigma. (There are other related definitions.
For example, Christ's wounds are called "the Stigmata".)
A simple definition is "a mark of shame". Psychiatrists
use the word in connection with the following argument: It
is important to recognize that mental conditions are just
illnesses, like tuberculosis or diabetes, something the ill
person can't help. And the best way to treat illnesses (they
argue) is with the appropriate miracle drug. When someone
opposes this view, the psychiatrist says that those who deny
the validity of mental illnesses are "stigmatizing the
mentally ill". How are they doing this? By denying that
these people are simply ill, implying that there's something
wrong with them, that they are nuts. And this is a bad thing,
because it means that people who feel mentally ill will be
afraid to get treatment, lest they be "stigmatized"
for admitting they are depressed, anxious or whatever.
That, of course, makes little sense, since labeling someone
mentally ill is a far worse stigma than saying that someone
is sad or anxious. Would you rather be called "sad"
or "clinically depressed"? People aren't refused
for service in the army because they've been sad. They are,
often, refused for service because they've been on psychiatric
medications. Which is more likely to "brand" or
"stigmatize" you: The fact that you feel bad for
a long time after a big loss, or the psychiatric label that
says you have a brain condition that makes your brain abnormal
and that may be genetic in origin and for which there is no
cure, only relief if you keep taking your medication?
In Europe, for centuries, being (and appearing to be) a Jew
was a stigma, and Jews were often massacred for their alleged
beliefs and customs. But usually Jews would be spared if they
converted to Christianity. In the 20th Century, German and
other psychiatrists (often called "eugenicists,"
but usually with their degrees in psychiatry) labeled Jews
a distinct genetic group, physically different from "real"
Aryans (Germans and other northern White groups). This meant
that Jews couldn't help but be Jews, that they had, in a sense,
an incurable illness (Jewishness). Now conversion did them
no good. Under Nazi laws the only "solution" was
to sterilize them or kill them.
My question is, which was the greater stigma: Holding Jewish
beliefs and attitudes and behaving in certain ways and having
some choice about being or not being Jewish? Or being permanently
Jewish by genetic heritage, having no choice in the matter,
having "Jewishness" the way psychiatrists say that
sad people have "chemically imbalanced brains"?
Of course, the psychiatric approach these days is to drug
the people they thus stigmatize, not to sterilize or gas them.
And instead of forcing this treatment on people, sometimes
they use lies to persuade people to ask for it. (Though often
they do force medication on people, calling this "an intervention.")
But otherwise the principle is the same. (Note: It wasn't
long ago that psychiatrists in this country considered sterilization
of the insane the best course of action. Many of their drugs
-- for example, the anti-depressants -- do tend to make people
lose interest in sex.)
So whenever a psychiatrist accuses opponents of "stigmatizing"
the "unfortunate sufferers afflicted with mental illness",
realize that he means, "don't suggest that the person
himself can do anything to help himself; and don't do anything
to make a person less willing to come to us for drugs."
CHEMICAL IMBALANCE: If, for example, you have too much
sugar in your blood or too much salt and not enough potassium
in your system or too much calcium and not enough magnesium,
these are all chemical imbalances. There are many "balanced"
sets of chemicals that work together in the body. For example,
if you take huge amounts of B Complex, you need to balance
that with some form of calcium that the body can absorb, because
B1 leaches calcium from the body. In other words, it's not
enough just to take "all the right vitamins and minerals".
You must also take them in the right amounts to keep them
in proper balance. If one chemical relaxes muscles
and another tenses them, too much of one gives you muscle
cramps, and too much of the other makes you limp. You have
to achieve a balance.
The brain has tens or hundreds of thousands of chemicals
interacting, and some of the most complex and least understood
chemicals are the neuro-transmitters, chemicals that carry
impulses from one nerve to another. The famous serotonin is
one neuro-transmitter chemical. Epinephrine is another. There
are thousands of other known neuro-transmitters, and probably
many that have not yet been discovered.
The clever thing about attributing "mental illnesses"
to chemical imbalances in the brain is that it's almost impossible
to prove or disprove that such things exist, but it all sounds
very scientific. I mean, I wouldn't recognize one of these
molecules if I saw it walking down the street. The pharmaceutical
companies have chemists who can actually find these things
in brains -- that alone takes all sorts of science and
lab equipment. None of that science is psychiatry. It's physics
and chemistry -- neuro-physics and neuro-chemistry. But
when psychiatrists toss around the chemical words, people
get the impression that psychiatry is a science.
The idea that some sort of chemical imbalance is the source
of mental illness is based on unscientific assumptions and
studies. For example, someone classifies a bunch of people
as depressed, studies their brains, and finds them deficient
in serotonin. So that means depression is caused by a deficiency
of serotonin? What's wrong with this?
First of all, it was done backwards. In other words, someone
at Eli Lilly took a drug being researched for some other purpose
(I think to handle ulcers), noticed that it didn't do that,
but some of the subjects looked happier, so Lilly decided
to make it an anti-depressant, and then chemists found that
it increased the amount of serotonin available to the nerve
endings in the brain, so some marketing genius said, ah, the
lack of serotonin must be the cause of depression, so this
new drug (Prozac) must fix that. Then someone did research
to see if he could find a serotonin deficiency in depressed
people, and claimed to have found one. That's backwards science.
It's finding what you're looking for because you want to find
it, not discovery.
Second, it failed to account for the fact that some depressed
people had plenty of serotonin.
Third, even if it were true that depressed people had lower
serotonin, that wouldn't establish a CAUSE. Suppose someone
attacks you at work, invalidates all your efforts, and you
feel depressed about that. It may be that part of feeling
depressed is that serotonin levels go down. Does that mean
that the CAUSE of the depression is the serotonin levels going
down? Suppose we find that all tired people have trouble keeping
their eyes open. Does that mean that the cause of tiredness
is weak eye muscles? Suppose that all people hit hard on the
head with a baseball bat have dented skulls and head pain.
Does that mean that the dented skull is the cause of the pain? (You could
remove dents day after day and not help the person if someone
continued to hit him with a baseball bat.)
Fourth, again, there are thousands of neuro-transmitters in
the brain, all involved in fantastically complex and interdependent
chemistry. No one has figured out what Prozac (or any of the
other psych. drugs) actually does in the brain. Each of them
affects thousands of different chemical reactions in the brain
and elsewhere in the body. Lilly hit on the serotonin rationale,
because it was the first effect they documented. The scientific
reasoning here is along these lines: You throw a grenade into
a group of people. It explodes. They're dead meat. Now you
examine them and notice that in each case, their shoes have
blown off their feet. You conclude that what a grenade does
is blow someone's shoes off his feet. So you get a bright
idea: Let's market grenades to people who have arthritic hands
and have difficulty taking their shoes off! You haven't yet
figured out that grenades also kill people.
Note: Another anti-depressant, Effexor, was marketed as better
than Prozac because it increased the supply of TWO neuro-transmitters,
not just one. This was hype, since all of these chemicals
influence hundreds or thousands of neuro-transmitters. It's
just that no one knows which they influence or in what ways,
or whether any of the influences is actually a good thing.
Fifth, the correct or optimal balance of brain chemicals
(if there is one) is not known. Chemical reactions happen
rapidly, thousands of them every second. The words "chemical
imbalance" imply that some exact balance is known and
can be attained.
Sixth, the best evidence is that the psychiatric drugs in
use create obvious chemical imbalances -- that is, leave
people with huge obvious deficiencies or excesses. While we
may not know some exact ideal balance, we know when things
are far from normal. For example, the current anti-depressants
are SSRIs: Selective Serotonin Reuptake Inhibitors. What this
means is: Serotonin is supposed to carry certain signals
from nerve-ending to nerve-ending. For this to happen, serotonin
has to be available between the nerve endings. When a serotonin
molecule has carried a message from one nerve to another,
the molecule is "taken up" into the nerve, pulled
out of circulation. Prozac, Zoloft, etc., are believed to
inhibit (stop or slow) this "reuptake" so that the
serotonin stays available between nerve endings to carry more
messages. This is believed to increase the flood of messages
(those particular messages associated with serotonin) in the
brain, and that's supposed to stimulate the depressed person,
make him less depressed.
But what happens is that when a person has been on the drug
for months, it ceases to work for him, so the dose has to
be increased (and the "side effects" also increase,
of course). And if the person comes off the drug, having,
for months or years, more or less force-fed serotonin into
the brain by preventing the brain from taking up the serotonin
-- now, without the drug, he can't produce serotonin,
so is truly deficient in a vital neuro-transmitter. Often,
when some outside mechanism by-passes the body to create some
effect, over time the body itself loses the ability to create
that effect.
Notice that the first S in SSRI is an out-and-out lie. There's
scientific evidence that these drugs inhibit reuptake of serotonin.
But there's no evidence that this is selective, that
only serotonin is affected, that no deficiencies or surpluses
are created in other brain chemicals. The word "selective"
is simply pseudo-scientific jargon. Serotonin is real stuff.
And thousands of other chemicals are known and real. The reuptake
mechanism is pretty well-known. But the "selective"
part is just wishful thinking. It's like saying that when
you throw that hand grenade into a crowd, it "selectively"
blows their shoes off.
EFFECTIVE: Psychiatrists like to use this word. But
they seldom define it or explain (in their studies) what their
criteria were for judging a drug effective. Often they seem
to be relying on what their test subjects tell them, what
they say in answer to questions. Perhaps they have other means
(some actual observation), but you'd have to do a close reading
of the studies to figure this out. Most people just assume
that when psychiatrists say a drug was effective, they mean
something reasonable by "effective".
With ADHD, we know psychiatrists (and teachers and parents)
often consider the drugs effective if the child sits still
and appears to listen. Though there's no evidence that the
child's understanding or grades (except in behavior) have
improved, they assume that this stillness proves effectiveness.
It may simply indicate apathy, not-there-ness, deadness.
Relying on what test subjects say (for example, that they've
been helped) raises many questions:
Do they look at the person's actual productivity before and
after being drugged? What statistics do they look at? Do they
question the person's family and friends to see what they
think? Do they test the person's intelligence or creativity
or zest for life before and after treatment?
Since psychiatrists often refer to people who say they've
been helped by the drugs, let's take a look at psychiatrists'
own view of the reliability of what people say: If a psychiatrist
says you have some disorder or illness (one of the ones listed
in their Diagnostic and Statistical Manual or DSM), and you
say, "No, I'm fine. I feel great," the psychiatrist
has a label for that (I've forgotten the exact jargon). Basically,
the current psychiatric argument is that one of the symptoms
of mental illness is that the mentally ill person thinks he's
fine, doesn't know he's ill. Of course, this doesn't stop
them from calling people mentally ill who SAY they are mentally
ill. They've got you either way. The point is, they do argue
(and mostly agree) that someone who is nuts will think and
say that he is sane. But when they drug someone, and that
person says he/she has been helped and is now quite well on
the drug (for example Brooke Shields), the psychiatrist doesn't
say "This person must be nuts, because she says she's
well." Instead they cite him/her as an example of effective
treatment.
In fact, there are other conditions (known to psychiatrists)
that might better explain why many psychiatric patients say
they've been helped. One is the placebo factor -- they've
been told they've been given medicine and that it will help.
Mama has said "I'll kiss it and make it well", so
now the child feels better. Similarly, the doctor says, "Take
this, and you'll feel better," so they do.
Another is the Stockholm Syndrome: That's supposed to describe
how, when a person has been taken hostage by kidnappers long
enough, that person will begin to identify with the kidnappers,
feel he/she is part of their family, sympathize with them,
hope they don't get caught, etc. This phenomenon (it does
happen) became big news decades ago when Patty Hearst was
kidnapped by a radical group, and, after first being a weepy
victim, became the lover of one of her kidnappers and participated
with the group in one or more bank robberies. I think psychiatric
patients are also psychiatric hostages, and tend to go into
sympathy with their kidnappers. We tend to become what overwhelms
us.
Another reason is that many people are pleased to accept the
psychiatric idea that their problems are caused by an illness
of the brain, because this means that they have no personal
responsibility for their condition and don't have to confront
the things in their lives they don't want to confront. This
is, at least initially, a relief for them, so they feel better.
For example, someone may be deeply depressed partly because
of the things he/she has done to others. If so, a good way
to avoid looking at that is to say, "I couldn't help
it -- I'm sick" and take a pill.
Another reason people say the drugs help them is because in
some ways the drugs do seem to help them. Heroin helps a drug
user get high. So does cocaine. All drugs suppress some unpleasantness,
decrease awareness (including awareness of unpleasant things), etc.
The big fallacy in effectiveness claims is that they disregard
the long-term trade-offs: The toxicity of any drug (very hard
on the liver, for example), the increasing lack or woodenness
of emotion, the withdrawal difficulties, the monetary expense
(for individuals, governments, insurance companies)
and, of course, all the really weird and violent side effects
that come up in many cases. For some reason, psychiatrists,
by and large, are not defending drunkenness, though millions
of drunks will attest that alcohol helps them, is necessary
to them.
This brings up another aspect of the "science" of
psychiatry (really of chemistry): Why is Prozac or Ritalin
considered medicine, while heroin or cocaine is considered
a dangerous drug? It's partly dosage and the way it's administered.
Really, Ritalin is pretty much the same as cocaine (probably
stronger), but supposedly the dosage and the way it's usually
taken (as a capsule) eliminate some of the obvious evils of
cocaine. For example, you don't destroy the inside of your
nose from sniffing it. (On the street, however, Ritalin is
sniffed). Sometimes there really is little difference between
an "evil" street drug and a "good" medical
drug. Ritalin is one example. And, actually, heroin and most
other street drugs are examples, because most of them (heroin,
for one) were first marketed as miracle psychiatric or medical
drugs, before it was decided they were actually dangerous,
deadly drugs. Then they became street drugs. Today, many current
psychiatric drugs are also street drugs. (The psychiatrists
would say that they are "abused" as street drugs.)
But the big difference is that most of the street drugs (for
example, heroin and crack) create sudden and obvious effects
that are evident to others. For example, a drug user often
can't work, can't hold a job, etc. As long as he's on the
drug, he feels good (even terrific), but is often "out
of it" -- in another world. Much of the chemical
science behind psych drugs deals with the question, "How
can we make someone feel good in such a way that he appears
normal?" Some drug makes people feel good, but they seem
a bit crazy. So chemists experiment and find a variation of
the drug that makes them feel just as good, but no obvious
craziness appears to most observers. So we get Prozac. Or a drug like
Methadone, more addictive than heroin, but with most of the
"high" eliminated, so the guy can be on it, but
hold a menial job -- or at least not upset passersby by acting
up.
What's wrong with that? The fact that no obvious craziness
appears is not necessarily a good thing, if the drug is eating
away inside, suppressing feelings, putting the person out
of touch with others, hollowing out the being. Isn't that
how people always describe the latest serial killer: "He
was such a quiet, polite person! I can't believe he could
have killed those people!"
What psychiatrists really mean by "effective" is
that the drug suppresses certain symptoms, behaviors, thoughts,
etc., without making the person appear to be nuts. There's
a lot of chemical science in the cosmetics of tinkering with
the drugs to eliminate unwanted effects. But what does this
do? You give a guy a drug to suppress his feelings. His feelings
show up in some non-optimal way, so you come up with an "improved"
drug that suppresses that non-optimal "side effect."
Some other "side effect" comes up, so we "improve"
the drug again to suppress that. Each step is sweeping something
else under the rug. Stuff is really piling up under there.
But the users look more and more normal, "just like the
rest of us". (Remember Invasion of the Body Snatchers?)
It's important to remember, when we criticize drugs for producing
all sorts of bad side effects, that the worst effects created
by these drugs are the intended effects, the things they do
to people when they are really "effective." These
people who find the drugs effective are the ones who are
being most thoroughly removed from their own feelings and
who will have the most difficult time ever getting back to
and confronting the things in their lives that caused their
problems in the first place.
A TV special about kids on Ritalin talked about side effects
(feeling like a zombie, stunted growth, suicide, etc.), but
the main thing the kids interviewed stressed was that even
when other people thought they were doing better, what felt
worst was that they weren't themselves. The drug had substituted
something for them. They felt they weren't themselves when
on the drug. That is not a side effect. That is the intended
effect. Some have more difficulty recognizing it than others.
SIGNIFICANT: Psychiatrists always stress this word.
A drug has been effective in a significant percentage of cases.
One drug is significantly more effective than another. In
everyday use, "significant" is a broad term, meaning
"to a great extent" or "to a noticeable extent."
Psychiatrists use it (although often sloppily) in a technical
sense. "Significance" is a statistical concept.
In any study, finding "significant" results means
that the result found (for example, that the drug is more
effective than the placebo) is strong enough that it is unlikely
to have occurred by chance. And that significance is determined
by mathematical formulas. There are MANY different mathematical
tests for significance. And some tests are more appropriate
to a particular data set (the set of numbers, the results
of the study) than others. For example, some tests give accurate
results when used on small amounts of data, but are less useful
for large amounts of data and vice versa. Some tests are more
useful for data that is bunched into one area (no one did
great or terribly bad; everyone did pretty well), while other
tests are more reliable for widely distributed data. And so
forth. And by applying the wrong test to the data, one can
get the desired results. You'd need to spend some time studying
math and statistics to understand specific cases, but you'll
avoid misunderstanding psychiatrists if you understand that
when a psychiatrist uses the word "significant",
he's usually talking about a mathematical concept that is
complex enough to be subverted.
Here's a very simple example of significance: If your study
includes only two people, any result you get won't be significant:
Not enough subjects, not a large enough sample. Of course,
if both of them are given the drug, and both commit suicide
within 24 hours, you'd think that would be useful data to
have. But it could easily be suppressed, because it isn't
"significant."
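The two-subject example can be made concrete with a small exact permutation test (a standard significance test; the numbers below are purely hypothetical). With two subjects per group, there are only six ways to split four scores into two pairs, so the smallest two-sided p-value any outcome can achieve is 2/6, about 0.33 -- never below the conventional 0.05 cutoff, no matter how extreme the result:

```python
from itertools import combinations

def permutation_pvalue(group_a, group_b):
    """Exact two-sided permutation test on the difference of group means."""
    pooled = group_a + group_b
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    hits = total = 0
    # Enumerate every way the pooled scores could have been split into groups.
    for idx in combinations(range(len(pooled)), n_a):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed - 1e-12:
            hits += 1
        total += 1
    return hits / total

# Two subjects per arm, wildly different outcomes -- still only p = 1/3:
print(permutation_pvalue([1, 2], [99, 100]))  # 0.3333...
```

So even if both drugged subjects had the most extreme possible outcome, the result would be dismissed as "not significant" -- which is exactly the point about small samples above.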
Or a particular test of significance requires dropping out
(not counting) the extreme cases -- for example, those who
score very low or very high on some scale. But that test is
only supposed to be applied to certain arrangements of data.
The researcher may apply this test inappropriately to drop
out of his calculations most of the people who got worse on
the drug or got better on the placebo.
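A tiny numerical sketch (the change scores here are hypothetical, not from any real trial) shows how much "dropping the extremes" can move a result -- trimming the few people who got much worse inflates the apparent average benefit:

```python
import statistics

# Hypothetical change scores for one drug arm (positive = improvement).
# Two subjects got much worse; one got much better.
scores = [-20, -15, 2, 3, 3, 4, 4, 5, 25]
print(statistics.mean(scores))  # about 1.22: barely any average benefit

# Inappropriately drop the "extreme cases" at both ends:
trimmed = sorted(scores)[2:-1]  # discards -20, -15, and 25
print(statistics.mean(trimmed))  # 3.5: the drug now looks clearly helpful
```

The trimming rule sounds symmetric, but because more subjects got worse than got better, it quietly removes most of the bad outcomes from the calculation.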
Basically, "significance" as a statistical concept
is another of the tools used by psychiatrists to bend the
results of studies to suit the pharmaceutical companies. It's
a concept from mathematics and statistics, intended to make
science rigorous, but is used by psychiatrists to make science
all too flexible.
METHODOLOGY: A big word that means, simply, how a study
is done, how it is organized (for example, double blind),
how the subjects were selected, what tests were used to establish
significance -- and so on. Most of the big "breakthrough"
studies that supposedly validate psychiatric theories about
chemical imbalance, the existence of genes for mental illnesses,
etc., fall apart when their methodology is examined by someone
who knows what to look for. Some psychiatrists understand
methodology, but many are simply bluffing, so that if you
say to them, "Yes, but I understand that study was later
found to have faulty methodology", they'll simply back
down.
Of course, it's even better to know your studies and know
your methodology, but most likely you won't ever be debating
specific studies with psychiatrists, since the basic flaws
are so apparent: The studies are paid for by those who want
a favorable result, and they claim to show the effectiveness
of various treatments for illnesses that don't exist. That
alone makes psychiatry a pseudo-science. Still, if you understand
the points above, you're less likely to be daunted when a
psychiatrist starts talking about significant results, effectiveness,
peer-reviewed double-blind studies, etc. At least you won't
go blank from all the strange words and strange uses of familiar
words.
There are many more psychiatric terms, but I think those above
are the key ones for the current controversies. Many psychiatric
terms are simply jargon -- difficult words that have simple
meanings, but are preferred for their impressive sound. For
example "modalities of treatment" just means types
of treatment. Thus, talk therapy is one modality, drug therapy
is another. Much of the "science" of psychiatry
consists of such pompous uses of words.
I hope this clarifies some of the most common areas of confusion
in psychiatric pronouncements. The next time a defender of
psychiatry talks about the many peer-reviewed, double-blind
studies that validate psychiatric claims, I hope you'll have
a better idea of what's being said...and what's NOT being
said.